SAP IQ 16.1 SP 03
Document Version: 1.0.0 – 2018-11-20
This book provides SAP IQ users with reference material for SQL statements, language elements, data types, functions, system procedures, system tables, and database options.
Other books provide more context on how to perform particular tasks. Use this book to get information about
SQL syntax, parameters, and options. For command line utility start-up parameters, see the SAP IQ Utility
Reference.
These topics provide detailed descriptions of the language elements and conventions of SAP IQ SQL.
In this section:
SQL is not case-sensitive to keywords, but throughout the SAP IQ documentation, keywords are indicated in
uppercase. For example, in this statement, SELECT and FROM are keywords:
SELECT *
FROM Employees
Because keywords are not case-sensitive, each of the following statements is equivalent to the one above:

Select *
From Employees

select * from Employees

sELECT * FRoM Employees
In this section:
To use a reserved word in a SQL statement as an identifier, you enclose the word in double quotes. Many, but
not all, of the keywords that appear in SQL statements are reserved words. For example, you must use the
following syntax to retrieve the contents of a table named SELECT:
SELECT *
FROM "SELECT"
If you are using Embedded SQL, you can use the database library function sql_needs_quotes to determine
whether a string requires quotation marks. A string requires quotes if it is a reserved word or if it contains a
character not ordinarily allowed in an identifier.
This table lists the SQL reserved words in SAP IQ. Because SQL is not case-sensitive with respect to keywords,
each of the words in this table may appear in uppercase, lowercase, or any combination of the two. All strings
that differ only in capitalization from these words are reserved words.
Related Information
2.2 Identifiers
Identifiers are names of objects in the database, such as user IDs, tables, and columns.
Identifiers have a maximum length of 128 bytes. They must be enclosed in double quotes or square brackets if
any of these conditions are true:
You can represent an apostrophe (single quote) inside an identifier by following it with another apostrophe.
If the QUOTED_IDENTIFIER database option is set to OFF, double quotes are used to delimit SQL strings and
cannot be used for identifiers. However, you can always use square brackets to delimit identifiers, regardless of
the setting of QUOTED_IDENTIFIER.
The default setting for the QUOTED_IDENTIFIER option is OFF for Open Client and jConnect connections;
otherwise the default is ON.
Limitations
Examples
Surname
"Surname"
[Surname]
SomeBigName
"Client Number"
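For instance, an identifier that contains a space must be delimited whenever it is referenced. A sketch, in which the table name Clients is illustrative:

```sql
-- Both forms reference the same column; the Clients table is hypothetical.
SELECT "Client Number" FROM Clients;
SELECT [Client Number] FROM Clients;
```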
In this section:
Related Information
If you use the -n switch in start_iq [ <server-options> ], certain naming restrictions apply.
No character set conversion is performed on the server name. If the client character set and the database server character set differ, using extended characters in the server name can cause the server not to be found.
If clients and servers run on different operating systems or locales, use 7-bit ASCII characters in the server
name.
Database server names must be valid identifiers. Long database server names are truncated to different
lengths depending on the protocol. Database server names cannot have the following properties:
The server name specifies the name to be used on client application connection strings or profiles. Running
multiple database servers with the same name is not recommended.
Mutexes and semaphores are locking and signaling mechanisms that control the availability or use of a shared
resource such as an external library or a procedure. You can include mutexes and semaphores to achieve the
type of locking behavior your application requires. Choosing whether to use mutexes or semaphores depends
on the requirements of your application.
Mutexes provide the application with a concurrency control mechanism; for example, they can be used to allow
only one connection at a time to execute a critical section in a stored procedure, user-defined function, trigger,
or event. Mutexes can also lock an application resource that does not directly correspond to a database object.
Semaphores provide support for producer/consumer application logic in the database or for access to limited
application resources.
Mutexes and semaphores benefit from the same deadlock detection as database row and table locks.
UPDATE ANY MUTEX SEMAPHORE allows locking/releasing of mutexes and notifying/waiting for semaphores,
CREATE ANY MUTEX SEMAPHORE is necessary to create/replace, and DROP ANY MUTEX SEMAPHORE is
necessary to drop/replace. To have a finer level of control on who can update a mutex or semaphore, you can
grant privileges on the objects they are used in instead. For example, you can grant EXECUTE privilege on a
system procedure that contains a mutex.
A mutex is a lock and release mechanism that limits the availability of a critical section of a shared resource
such as an external library or a stored procedure. Locking and unlocking a mutex is achieved by executing
LOCK MUTEX and RELEASE MUTEX statements, respectively.
The scope of a mutex can be either transaction or connection. In transaction-scope mutexes, the lock is held
until the end of the transaction that has locked the mutex. In connection-scope mutexes, the lock is held until a
RELEASE MUTEX statement is executed by the connection or until the connection terminates.
The mode of a mutex can be either exclusive or shared. In exclusive mode, only the transaction or connection
holding the lock can use the resource. In shared mode, multiple transactions or connections can lock the
mutex.
You can recursively lock a mutex (that is, you can nest LOCK MUTEX statements for the same mutex inside
your code). However, with connection-scope mutexes, an equal number of RELEASE MUTEX statements are
required to release the mutex.
If a connection locks a mutex in shared mode, and then (recursively) locks it again in exclusive mode, then the
lock remains held in exclusive mode until it is released twice, or until the end of the transaction.
Here is a simple scenario showing how you can use a mutex to protect a critical section of a stored procedure.
In this scenario, the critical section can only be executed by one connection at a time (but can span multiple
transactions):
1. The following statement creates a new mutex to protect the critical section:
2. At the start of the critical section, the procedure acquires the mutex with a LOCK MUTEX statement.
3. At the end of the critical section, the procedure releases the mutex with a RELEASE MUTEX statement.
4. The following statement removes the mutex when the critical section no longer needs protection:
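Expressed as SQL, the scenario might look like the following sketch. LOCK MUTEX and RELEASE MUTEX are the statements described above; the mutex name and the CREATE MUTEX/DROP MUTEX clause spellings are inferred from the privilege names mentioned earlier and may differ from the exact syntax:

```sql
-- 1. Create the mutex (the connection scope shown here is an illustrative choice).
CREATE MUTEX protect_cs SCOPE CONNECTION;

-- 2./3. Inside the stored procedure, bracket the critical section.
LOCK MUTEX protect_cs IN EXCLUSIVE MODE;
-- ... critical section: only one connection at a time executes this ...
RELEASE MUTEX protect_cs;

-- 4. Remove the mutex when protection is no longer needed.
DROP MUTEX protect_cs;
```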
A semaphore is a signaling mechanism that uses a counter to communicate the availability of a resource.
Incrementing and decrementing the semaphore counter is achieved by executing NOTIFY SEMAPHORE and
WAITFOR SEMAPHORE statements, respectively. Use semaphores in a resource availability model or in a producer-consumer model. Regardless of the model, the semaphore counter cannot go below 0; in this way, the counter limits the availability of the resource (for example, a license).
The resource availability model is when a counter is used to limit the availability of a resource. For example,
suppose you have a license that restricts application use to 10 users at a time. You set the semaphore counter
to 10 at create time using the START WITH clause. When a user logs in, a WAITFOR SEMAPHORE statement is
executed, and the count is decremented by one. If the count is 0, then the user waits for up to the specified
timeout period. If the counter goes above 0 before the timeout, the user logs in; if not, the user's login
attempt times out. When the user logs out, a NOTIFY SEMAPHORE statement is executed, incrementing the
count by one. Each time a user logs in, the count is decremented; each time they log out, the count is
incremented.
The producer-consumer model is when a counter is used to signal the availability of a resource. For example,
suppose there is a procedure that consumes what another procedure produces. The consumer executes a
WAITFOR SEMAPHORE statement and waits for something to process. When the producer has created output,
it executes a NOTIFY SEMAPHORE statement to signal that work is available. This statement increments the
counter associated with the semaphore. When the waiting consumer gets the work, the counter is
decremented. In the producer-consumer model, the counter cannot go below 0, but it can go as high as the
producers increment the counter.
Here is a simple scenario showing how you can use a semaphore to control the number of licenses for an
application. The scenario assumes there is a total of three licenses available, and that each successful log in to
the application consumes one license:
1. The following statement creates a new semaphore with the number of licenses specified as the initial
count:
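The statements in this scenario might look like the following sketch. The semaphore name is illustrative, and the exact clause spellings are assumptions; the START WITH clause is the one described above:

```sql
-- 1. Three licenses are available initially.
CREATE SEMAPHORE license_sem START WITH 3;

-- On login: consume a license (decrements the counter, or waits if it is 0).
WAITFOR SEMAPHORE license_sem;

-- On logout: return the license (increments the counter).
NOTIFY SEMAPHORE license_sem;
```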
A common way to use semaphores in a producer-consumer model might look something like this:
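A sketch of that pattern, using the MyProducer/MyConsumer names and counters from the discussion below (exact clause spellings are assumptions):

```sql
-- consumer_counter starts at 100: MyProducer may run up to 100 items ahead.
CREATE SEMAPHORE producer_counter START WITH 0;
CREATE SEMAPHORE consumer_counter START WITH 100;

-- MyProducer (runs in one connection):
WAITFOR SEMAPHORE consumer_counter;   -- wait for room in the buffer
-- ... fetch/produce one item ...
NOTIFY SEMAPHORE producer_counter;    -- signal that one item is ready

-- MyConsumer (runs in a different connection):
WAITFOR SEMAPHORE producer_counter;   -- wait for an item to process
-- ... consume one item ...
NOTIFY SEMAPHORE consumer_counter;    -- signal that one slot is free
```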
In this example, MyProducer and MyConsumer run in different connections. MyProducer just fetches data and
can get at most 100 iterations ahead of MyConsumer. If MyConsumer goes faster than MyProducer,
producer_counter will eventually reach 0. At that point, MyConsumer will block until MyProducer can make
more data. If MyProducer goes faster than MyConsumer, consumer_counter will eventually reach 0. At that
point, MyProducer will block until MyConsumer can consume some data.
2.4 Strings
Strings are either literal strings, or expressions with CHAR or VARCHAR data types.
A literal string is any sequence of characters enclosed in apostrophes ('single quotes'). A SQL variable of
character data type can hold a string. This is a simple example of a literal string:
'This is a string.'
An expression with a CHAR data type might be a built-in or user-defined function, or one of the many other
kinds of expressions available.
● To represent an apostrophe inside a string, use two apostrophes ('') in a row. For example:
'John''s database'
● To represent a backslash character, use two backslashes in a row (\\). For example:
'c:\\temp'
● Hexadecimal escape sequences can be used for any character, printable or not. A hexadecimal escape
sequence is a backslash followed by an x followed by two hexadecimal digits (for example, \x6d represents
the letter m). For example:
'\x00\x01\x02\x03'
Compatibility
For compatibility with SAP Adaptive Server Enterprise, you can set the QUOTED_IDENTIFIER database option
to OFF. With this setting, you can also use double quotes to mark the beginning and end of strings. The option
is ON by default.
Related Information
Expressions are formed from several different kinds of elements, such as constants, column names, SQL
operators, and subqueries.
Syntax
expression:
<case-expression>
| <constant>
| [ <correlation-name>. ] <column-name> [ <java-ref> ]
| - <expression>
| <expression operator> <expression>
| ( <expression> )
| <function-name> ( <expression>, … )
| <if-expression>
| [ <java-package-name>. ] <java-class-name> <java-ref>
| ( <subquery> )
| <variable-name> [ <java-ref> ]
<case-expression> ::=
{ CASE <search-condition>
... WHEN <expression> THEN <expression> [ , … ]
... [ ELSE <expression> ]
END
| CASE
... WHEN <search-condition> THEN <expression> [ , … ]
... [ ELSE <expression> ]
END }
<constant> ::=
{ <integer> | <number> | '<string>'
| <special-constant> | <host-variable> }
<special-constant> ::=
{ CURRENT { DATE | TIME | TIMESTAMP | USER }
| LAST USER
| NULL
| SQLCODE
| SQLSTATE }
<if-expression> ::=
IF <condition>
... THEN <expression>
... [ ELSE <expression> ]
ENDIF
<java-ref> ::=
{ .<field-name> [ <java-ref> ]
| >> <field-name> [ <java-ref> ]
| .<method-name> ( [ <expression> ] [ , … ] ) [ <java-ref> ]
| >> <method-name> ( [ <expression> ] [ , … ] ) [ <java-ref> ] }
<operator> ::=
{ + | - | * | / | || | % }
Anywhere
Authorization
Side Effects
None.
Compatibility
In this section:
String constants are enclosed in apostrophes. An apostrophe is represented inside the string by two
apostrophes in a row.
Related Information
A column name is an identifier preceded by an optional correlation name. A correlation name is usually a table
name.
If a column name has characters other than letters, digits, and underscores, the name must be surrounded by
quotation marks (“”). For example, the following are valid column names:
Employees.Surname
City
"StartDate"
Related Information
A subquery is a SELECT statement enclosed in parentheses. The SELECT statement can contain one and only
one select list item. When used as an expression, a scalar subquery is allowed to return only zero or one value.
Within the SELECT list of the top level SELECT, or in the SET clause of an UPDATE statement, you can use a
scalar subquery anywhere that you can use a column name. However, the subquery cannot appear inside a
conditional expression:
● CASE
● IF
● NULLIF
● ARGN
● COALESCE
● ISNULL
For example, the following statement returns the number of employees in each department, grouped by
department name:
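One way to express this is with a correlated scalar subquery in the select list, which yields one count per department row. A sketch; the Departments table and the column names follow the SAP demo schema and are assumptions:

```sql
-- The scalar subquery is evaluated once per Departments row.
SELECT DepartmentName,
       ( SELECT COUNT(*)
           FROM Employees
          WHERE Employees.DepartmentID = Departments.DepartmentID ) AS EmployeeCount
  FROM Departments;
```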
These topics describe the arithmetic, string, and bitwise operators available in SAP IQ.
The normal precedence of operations applies. Expressions in parentheses are evaluated first; then
multiplication and division before addition and subtraction. String concatenation occurs after addition and
subtraction.
In this section:
Related Information
Operator                        Description
<expression> + <expression>     Addition. If either expression is the NULL value, the result is the NULL value.
<expression> - <expression>     Subtraction. If either expression is the NULL value, the result is the NULL value.
- <expression>                  Negation. If the expression is the NULL value, the result is the NULL value.
<expression> * <expression>     Multiplication. If either expression is the NULL value, the result is the NULL value.
<expression> / <expression>     Division. If either expression is the NULL value or if the second expression is 0, the result is the NULL value.
<expression> % <expression>     Modulo finds the integer remainder after a division involving two whole numbers. For example, 21 % 11 = 10 because 21 divided by 11 equals 1 with a remainder of 10.
Related Information
Operator                        Description
<expression> || <expression>    String concatenation (two vertical bars). If either string is the NULL value, it is treated as the empty string for concatenation.
<expression> + <expression>     Alternative string concatenation. When using the + concatenation operator, ensure the operands are explicitly set to character data types rather than relying on implicit data conversion.
The result data type of a string concatenation operator is a LONG VARCHAR. If you use string concatenation
operators in a SELECT INTO statement, you must have an Unstructured Data Analytics Option license or use
CAST and set LEFT to the correct data type and size.
● SQL – ISO/ANSI SQL compliant. The || operator is the ISO/ANSI SQL string concatenation operator.
● SAP Database Products – The + operator is supported by SAP Adaptive Server Enterprise.
Related Information
You can use these bitwise operators on all unscaled integer data types, in both SAP IQ and SAP Adaptive Server
Enterprise.
Operator Description
& AND
| OR
^ EXCLUSIVE OR
~ NOT
In this section:
Related Information
The AND operator compares two bits. If both bits are 1, the result is 1.

Bit 1   Bit 2   Bit 1 & Bit 2
0       0       0
0       1       0
1       0       0
1       1       1
2.5.4.3.2 Bitwise OR ( | )
The OR operator compares two bits. If either bit is 1, the result is 1.

Bit 1   Bit 2   Bit 1 | Bit 2
0       0       0
0       1       1
1       0       1
1       1       1
Related Information
The EXCLUSIVE OR operator results in 1 when either, but not both, of its two operands is 1.

Bit 1   Bit 2   Bit 1 ^ Bit 2
0       0       0
0       1       1
1       0       1
1       1       0
Related Information
The NOT operator is a unary operator that returns the inverse of its operand.
Bit ~ Bit
1 0
0 1
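As a sketch, the operators can be combined in a single query. With the operands 12 (binary 1100) and 10 (binary 1010):

```sql
SELECT 12 & 10,   -- 8  (binary 1000)
       12 | 10,   -- 14 (binary 1110)
       12 ^ 10;   -- 6  (binary 0110)
```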
Related Information
The Transact-SQL outer join operators *= and =* are supported in SAP IQ, in addition to the ISO/ANSI SQL join
syntax using a table expression in the FROM clause.
Compatibility
The following query, on the other hand, returns the character string 123456:
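The query itself is not preserved here; a minimal sketch that returns the character string 123456, assuming string-literal operands:

```sql
-- With character operands, + concatenates rather than adds.
SELECT '123' + '456';
```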
You can use the CAST or CONVERT function to explicitly convert data types.
Note
When used with BINARY or VARBINARY data types, the + operator is concatenation, not addition.
Related Information
When you are using more than one operator in an expression, use parentheses to make the order of operation
explicit, rather than relying on an identical operator precedence between SAP Adaptive Server Enterprise and
SAP IQ.
Related Information
2.5.5 IF Expressions
IF <condition>
THEN <expression1>
[ ELSE <expression2> ]
ENDIF
Note
Do not confuse the syntax of the IF expression with that of the IF statement.
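A sketch of an IF expression used in a query; the Employees column names follow the SAP demo schema and are assumptions:

```sql
SELECT Surname,
       IF Salary > 50000
          THEN 'above target'
          ELSE 'at or below target'
       ENDIF AS SalaryBand
  FROM Employees;
```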
Related Information
You can use case expressions anywhere you can use an expression. The syntax of the CASE expression is as
follows:
CASE <expression>
WHEN <expression> THEN <expression> [, …]
[ ELSE <expression> ] END
If the expression following the CASE statement is equal to the expression following the WHEN statement, then
the expression following the THEN statement is returned. Otherwise, the expression following the ELSE
statement is returned, if it exists.
For example, the following code uses a case expression as the second clause in a SELECT statement:
SELECT ID,
(CASE name
WHEN 'Tee Shirt' THEN 'Shirt'
WHEN 'Sweatshirt' THEN 'Shirt'
WHEN 'Baseball Cap' THEN 'Hat'
ELSE 'Unknown'
END) as Type
FROM "GROUPO".Products
The syntax for the second form of the CASE expression is as follows:

CASE
WHEN <search-condition> THEN <expression> [, …]
[ ELSE <expression> ] END
If the search condition following the WHEN statement is satisfied, the expression following the THEN statement
is returned. Otherwise the expression following the ELSE statement is returned, if it exists.
The following example uses a case expression as the third clause of a SELECT statement to associate a string
with a search condition:
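The example itself is not preserved above; a sketch of a searched CASE expression in the same style as the earlier Products query (the UnitPrice column is an assumption):

```sql
SELECT ID, name,
   (CASE
      WHEN UnitPrice < 10 THEN 'Budget'
      WHEN UnitPrice < 20 THEN 'Mid-range'
      ELSE 'Premium'
    END) AS PriceBand
FROM "GROUPO".Products
```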
In this section:
The NULLIF function provides a way to write some CASE statements in short form.
NULLIF compares the values of the two expressions. If the first expression equals the second expression,
NULLIF returns NULL. If the first expression does not equal the second expression, NULLIF returns the first
expression.
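For example, a sketch:

```sql
SELECT NULLIF( 'a', 'a' ),   -- NULL: the expressions are equal
       NULLIF( 'a', 'b' );   -- 'a': the expressions differ
```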
Related Information
These topics describe the compatibility of expressions and constants between SAP Adaptive Server Enterprise
and SAP IQ.
In this section:
Related Information
This table describes the compatibility of expressions between SAP Adaptive Server Enterprise and SAP IQ.
This table is a guide only, and a marking of Both may not mean that the expression performs in an identical
manner for all purposes under all circumstances. For detailed descriptions, see the SAP ASE documentation
and the SAP IQ documentation on the individual expression.
Expression Supported By
constant Both
- expr Both
( expr ) Both
( subquery ) Both
This table describes the compatibility of constants between SAP Adaptive Server Enterprise and SAP IQ.
Constant Supported By
integer Both
number Both
'string' Both
special-constant Both
host-variable SAP IQ
This table is a guide only, and a marking of Both may not mean that the expression performs in an identical
manner for all purposes under all circumstances. For detailed descriptions, see the SAP ASE documentation
and the SAP IQ documentation on the individual expression.
In this section:
By default, SAP Adaptive Server Enterprise and SAP IQ give different meanings to delimited strings — strings
enclosed in apostrophes (single quotes) and in quotation marks (double quotes).
SAP IQ employs the SQL92 convention, in which strings enclosed in apostrophes are constant expressions, and
strings enclosed in quotation marks (double quotes) are delimited identifiers (names for database objects).
SAP ASE employs the convention that strings enclosed in quotation marks are constants, whereas delimited
identifiers are not allowed by default and are treated as strings.
Both SAP Adaptive Server Enterprise and SAP IQ provide a quoted_identifier option that allows the
interpretation of delimited strings to be changed. By default, the quoted_identifier option is set to OFF in
SAP ASE, and to ON in SAP IQ.
You cannot use SQL reserved words as identifiers if the quoted_identifier option is off.
The following statement in either SAP IQ or SAP ASE changes the setting of the quoted_identifier option
to ON:
SET quoted_identifier ON
With the quoted_identifier option set to ON, SAP ASE allows table, view, and column names to be
delimited by quotes. Other object names cannot be delimited in SAP ASE.
The following statement in SAP IQ or SAP ASE changes the setting of the quoted_identifier option to OFF:
You can choose to use either the SQL92 or the default Transact-SQL convention in both SAP ASE and SAP IQ as
long as the quoted_identifier option is set to the same value in each DBMS.
Examples
If you operate with the quoted_identifier option ON (the default SAP IQ setting), the following statements
involving the SQL keyword user are valid for both types of DBMS:
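The statements are not preserved above; a sketch of what they might look like. With quoted_identifier ON, "user" is a delimited identifier, so the reserved word can name a table (the table and column are hypothetical):

```sql
CREATE TABLE "user" ( col1 CHAR(5) );
SELECT col1 FROM "user";
```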
If you operate with the quoted_identifier option OFF (the default SAP ASE setting), the following
statements are valid for both types of DBMS:
SELECT *
FROM Employees
WHERE Surname = "Chin"
Related Information
Conditions are used to choose a subset of the rows from a table, or in a control statement such as an IF
statement to determine control of flow.
SQL conditions do not follow Boolean logic, where conditions are either true or false. In SQL, every condition
evaluates as one of TRUE, FALSE, or UNKNOWN. This is called three-valued logic. The result of a comparison is
UNKNOWN if either value being compared is the NULL value.
Rows satisfy a search condition if and only if the result of the condition is TRUE. Rows for which the condition is
UNKNOWN do not satisfy the search condition.
Subqueries form an important class of expression that is used in many search conditions.
The different types of search conditions are discussed in the following sections.
You specify a search condition for a WHERE clause, a HAVING clause, a CHECK clause, a JOIN clause, or an IF
expression.
Syntax
<compare> ::=
{ = | > | < | >= | <= | <> | != | !< | !> }
Remarks
Anywhere
Authorization
None
Example
The following query retrieves the names and birth years of the oldest employees:
The subqueries that provide comparison values for quantified comparison predicates might retrieve multiple
rows but can have only one column.
In this section:
Related Information
The syntax for comparison conditions is as follows, where <compare> is a comparison operator:

<expression> <compare> <expression>

Operator   Description
=          Equal to
>          Greater than
<          Less than
>=         Greater than or equal to
<=         Less than or equal to
<>         Not equal to
!=         Not equal to
!<         Not less than
!>         Not greater than
For example, the following query retrieves the names and birth years of the oldest employees:
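The query is not preserved above; a sketch using a quantified comparison, with column names assumed from the SAP demo schema:

```sql
-- BirthDate <= ALL is TRUE only for the earliest birth date(s).
SELECT Surname, BirthDate
  FROM Employees
 WHERE BirthDate <= ALL ( SELECT BirthDate FROM Employees );
```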
The subqueries that provide comparison values for quantified comparison predicates, as in the preceding
example, might retrieve multiple rows but can only have one column.
Note
Compatibility
● Trailing blanks – any trailing blanks in character data are ignored for comparison purposes by SAP
Adaptive Server Enterprise. The behavior of SAP IQ when comparing strings is controlled by the Ignore
Trailing Blanks in String Comparisons database creation option.
● Case sensitivity – by default, SAP IQ databases, like SAP ASE databases, are created as case-sensitive.
Comparisons are carried out with the same attention to case as the database they are operating on. You
can control the case sensitivity of SAP IQ databases when creating the database.
Related Information
The AND, OR, NOT, and IS logical operators of SQL work in three-valued logic.
OR Operator
NOT Operator
IS Operator
A subquery is a SELECT statement enclosed in parentheses. Such a SELECT statement must contain one and
only one select list item.
A column can be compared to a subquery in a comparison condition (for example, >,<, or !=) as long as the
subquery returns no more than one row. If the subquery (which must have one column) returns one row, the
value of that row is compared to the expression. If a subquery returns no rows, its value is NULL.
Subqueries that return exactly one column and any number of rows can be used in IN conditions, ANY
conditions, ALL conditions, or EXISTS conditions. These conditions are discussed in the following sections.
SAP IQ supports UNION only in uncorrelated subquery predicates, not in scalar value subqueries or correlated
subquery predicates.
SAP IQ does not support multiple subqueries in a single OR clause. For example, the following query has two
subqueries joined by an OR:
In this section:
Related Information
Each subquery can appear within the WHERE or HAVING clause with other predicates, and can be combined
using the AND or OR operators. SAP IQ supports these subqueries, which can be correlated (contain references to a table that appears in the outer query and cannot be evaluated independently) or uncorrelated (contain no references to tables in the outer query and can be evaluated independently).
● IN predicates:
The IN subquery predicate returns a list of values or a single value. This type is also called a quantified subquery predicate.
● Existence predicates:
The EXISTS predicate represents the existence of the subquery. The expression EXISTS <subquery>
evaluates to true only if the subquery result is not empty. The EXISTS predicate does not compare results
with any column or expressions in the outer query block, and is typically used with correlated subqueries.
● Quantified comparison predicates:
A quantified comparison predicate compares a value with a collection of values returned from a subquery.
● Disjunction of uncorrelated scalar subqueries or IN subqueries that cannot be executed vertically within
the WHERE or HAVING clause.
The SUBQUERY_CACHING_PREFERENCE option lets experienced DBAs choose which subquery caching method
to use.
Examples
Example 1
SELECT COUNT(*)
FROM supplier
WHERE s_suppkey IN (SELECT MAX(l_suppkey)
FROM lineitem
GROUP BY l_linenumber)
OR EXISTS (SELECT p_brand
FROM part
WHERE p_brand = 'Brand#43');
Example 2
SELECT COUNT(*)
FROM supplier
WHERE EXISTS (SELECT l_suppkey
FROM lineitem
WHERE l_suppkey = 12345)
OR EXISTS (SELECT p_brand
FROM part
WHERE p_brand = 'Brand#43');
Example 3
SELECT COUNT(*)
FROM supplier
WHERE s_acctbal*10 > (SELECT MAX(o_totalprice)
FROM orders
WHERE o_custkey = 12345)
OR substring(s_name, 1, 6) IN (SELECT c_name
FROM Customers
WHERE c_nationkey = 10);
Example 4

SELECT COUNT(*)
FROM lineitem
WHERE l_suppkey > ANY (SELECT MAX(s_suppkey)
FROM supplier
WHERE s_acctbal >100
GROUP BY s_nationkey)
OR l_partkey >= ANY (SELECT MAX(p_partkey)
FROM part
GROUP BY p_mfgr);
Example 5
Disjunction of any correlated subquery predicates:
SELECT COUNT(*)
FROM supplier S
WHERE EXISTS (SELECT l_suppkey
FROM lineitem
WHERE l_suppkey = S.s_suppkey)
OR EXISTS (SELECT p_brand FROM part
WHERE p_brand = 'Brand#43'
AND p_partkey > S.s_suppkey);
Example 6
Before support for disjunction of subqueries, users were required to write queries in two parts, and then use
UNION to merge the final results.
The following query illustrates a merged query that gets the same results as the example for disjunction of any
correlated subquery predicates:
SELECT COUNT(*)
FROM (SELECT s_suppkey FROM supplier S
WHERE EXISTS (SELECT l_suppkey
FROM lineitem
WHERE l_suppkey = S.s_suppkey)
UNION
SELECT s_suppkey
FROM supplier S
WHERE EXISTS (SELECT p_brand
FROM part
WHERE p_brand = 'Brand#43'
AND p_partkey > S.s_suppkey)) as UD;
Performance of the merged query is suboptimal because it scans the supplier table twice and then merges the
results from each UNION to return the final result.
Syntax
The syntax for ALL conditions is as follows, where <compare> is a comparison operator:

<expression> <compare> ALL ( <subquery> )

The syntax for ANY conditions is as follows, where <compare> is a comparison operator:

<expression> <compare> ANY ( <subquery> )
For example, an ANY condition with an equality operator is TRUE if <expression> is equal to any of the values
in the result of the subquery, and FALSE if the expression is not NULL and does not equal any of the columns of
the subquery:
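A sketch of such a condition; the table and column names are illustrative:

```sql
-- TRUE for any employee whose ID matches at least one sales representative.
SELECT Surname
  FROM Employees
 WHERE EmployeeID = ANY ( SELECT SalesRepresentative FROM SalesOrders );
```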
The ANY condition is UNKNOWN if <expression> is the NULL value, unless the result of the subquery has no
rows, in which case the condition is always FALSE.
Restrictions
If there is more than one expression on either side of a quantified comparison predicate, an error message is
returned. For example:
Queries of this type can always be expressed in terms of IN subqueries or scalar subqueries using MIN and MAX
set functions.
Compatibility
ANY and ALL subqueries are compatible between SAP Adaptive Server Enterprise and SAP IQ. Only SAP IQ
supports SOME as a synonym for ANY.
The BETWEEN condition can evaluate as TRUE, FALSE, or UNKNOWN. Without the NOT keyword, the condition
evaluates as TRUE if <expr> is between <start-expr> and <end-expr>. The NOT keyword reverses the
meaning of the condition but leaves UNKNOWN unchanged.
A BETWEEN predicate is of the form “A between B and C.” Either “B” or “C” or both “B” and “C” can be
subqueries. “A” must be a value expression or column.
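For instance, a sketch with an illustrative salary range:

```sql
-- TRUE when Salary is within the range, inclusive of both endpoints.
SELECT Surname, Salary
  FROM Employees
 WHERE Salary BETWEEN 40000 AND 60000;
```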
Compatibility
The BETWEEN condition is compatible between SAP IQ and SAP Adaptive Server Enterprise.
<condition1> AND <condition2>

If both conditions are TRUE, the combined condition is TRUE. If either condition is FALSE, the combined condition is FALSE. Otherwise, the combined condition is UNKNOWN.
<condition1> OR <condition2>
If either condition is TRUE, the combined condition is TRUE. If both conditions are FALSE, the combined condition is FALSE. Otherwise, the combined condition is UNKNOWN. There is no guaranteed order as to which condition, <condition1> or <condition2>, is evaluated first.
Compatibility
The AND and OR operators are compatible between SAP IQ and SAP Adaptive Server Enterprise.
The syntax for CONTAINS conditions for a column with a WD index is as follows:
The <column-name> must be a CHAR, VARCHAR, or LONG VARCHAR (CLOB) column in a base table, and must
have a WD index. The <word1>, <word2> and <word3> expressions must be string constants no longer than
255 bytes, each containing exactly one word. The length of that word cannot exceed the maximum permitted
word length of the word index of the column.
Without the NOT keyword, the CONTAINS condition is TRUE if <column-name> contains each of the words,
UNKNOWN if <column-name> is the NULL value, and FALSE otherwise. The NOT keyword reverses these
values but leaves UNKNOWN unchanged.
For example, the following search condition is TRUE if the value of <varchar_col> is The cat is on the
mat:
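The condition itself is not preserved above; given the description, it would resemble the following sketch (the exact CONTAINS argument syntax is an assumption):

```sql
-- TRUE for 'The cat is on the mat'; FALSE for 'The cat chased the mouse',
-- which contains 'cat' but not 'mat'.
... WHERE CONTAINS ( varchar_col, 'cat', 'mat' )
```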
This condition is FALSE, however, if the value of <varchar_col> is The cat chased the mouse.
When SAP IQ executes a statement containing both LIKE and CONTAINS, the CONTAINS condition takes
precedence.
Avoid using the CONTAINS predicate in a view that has a user-defined function, because the CONTAINS criteria
are ignored. Use the LIKE predicate with wildcards instead, or issue the query outside of a view.
For information on using CONTAINS conditions with TEXT indexes, see SAP IQ Administration: Unstructured
Data Analytics.
EXISTS( <subquery> )
The EXISTS condition is TRUE if the subquery result contains at least one row, and FALSE if the subquery
result does not contain any rows. The EXISTS condition cannot be UNKNOWN.
Compatibility
The EXISTS condition is compatible between SAP Adaptive Server Enterprise and SAP IQ.
Without the NOT keyword, the IN condition is TRUE if <expression> equals any of the listed values,
UNKNOWN if <expression> is the NULL value, and FALSE otherwise. The NOT keyword reverses the meaning
of the condition but leaves UNKNOWN unchanged.
Compatibility
IN conditions are compatible between SAP Adaptive Server Enterprise and SAP IQ.
Use the IS DISTINCT FROM and IS NOT DISTINCT FROM search conditions as comparison operators.
Syntax

<expression1> IS [ NOT ] DISTINCT FROM <expression2>

Remarks
The IS DISTINCT FROM and IS NOT DISTINCT FROM search conditions are sargable and evaluate to TRUE or
FALSE.
The IS NOT DISTINCT FROM search condition evaluates to TRUE if <expression1> is equal to
<expression2>, or if both expressions are NULL. This is equivalent to a combination of two search conditions,
as follows:
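The combination is reconstructed here from the standard semantics of the predicate:

```sql
( <expression1> = <expression2> )
OR ( <expression1> IS NULL AND <expression2> IS NULL )
```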
SQL/2008 – The IS [NOT] DISTINCT FROM predicate is defined in the SQL/2008 standard. The IS DISTINCT
FROM predicate is Feature T151, "DISTINCT predicate", of the SQL/2008 standard. The IS NOT DISTINCT
FROM predicate is Feature T152, "DISTINCT predicate with negation", of the SQL/2008 standard.
Use IS NULL conditions to test for NULL values, which represent missing or unknown data.
Without the NOT keyword, the IS NULL condition is TRUE if the expression is the NULL value, and FALSE
otherwise. The NOT keyword reverses the meaning of the condition.
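For example, the following sketch (the Phone column of the sample Customers table is an assumption) selects rows for which no phone number has been recorded:

```sql
SELECT *
FROM Customers
WHERE Phone IS NULL
```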
Compatibility
The IS NULL condition is compatible between SAP Adaptive Server Enterprise and SAP IQ.
Use LIKE conditions with wildcards in the WHERE clause to perform pattern matching.
The LIKE condition can evaluate as TRUE, FALSE, or UNKNOWN. You can use LIKE only on string data.
LIKE predicates that start with characters other than wildcard characters may execute faster if an HG index is
available. A LIKE predicate may also execute faster if either of the following is true:
● A WD index is available, provided the LIKE pattern contains at least one word bounded on the left end by
whitespace or the end of the pattern
● An NGRAM TEXT index is available, provided the LIKE pattern contains at least <N> contiguous
non-wildcard characters
Without the NOT keyword, the condition evaluates as TRUE if <expression> matches the <pattern>. If either
<expression> or <pattern> is the NULL value, this condition is UNKNOWN. The NOT keyword reverses the
meaning of the condition but leaves UNKNOWN unchanged.
The pattern may contain any number of wildcard characters. The wildcard characters are:
Wildcard Matches
% Any string of zero or more characters
_ Any single character
[] Any single character in the specified set or range
[^] Any single character not in the specified set or range
For example, the following search condition is TRUE for any row where name starts with the letter a and has the
letter b as its second-to-last character:
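A pattern matching that description is the following (a reconstructed sketch):

```sql
name LIKE 'a%b_'
```

The % matches any run of characters after the leading a, and the trailing _ matches exactly one character, which places b in the second-to-last position.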
If you specify an <escape-expr>, it must evaluate to a single character. The character can precede a percent,
an underscore, a left square bracket, or another escape character in the <pattern> to prevent the special
character from having its special meaning. When escaped in this manner, a percent matches a percent, and an
underscore matches an underscore.
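As a sketch, using an assumed escape character of ! and an assumed Description column on the sample Products table, the following condition matches values beginning with the literal string 50% off:

```sql
SELECT *
FROM Products
WHERE Description LIKE '50!% off%' ESCAPE '!'
```

The escaped !% matches a literal percent sign, while the final unescaped % still acts as a wildcard.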
Supported Patterns
Some patterns between 127 and 254 characters are supported, but only under certain circumstances. See the
following subsections for examples.
Example 1
Under specific circumstances where adjacent constant characters exist in your pattern, patterns of length
between 127 and 254 characters are supported. Each constant character in the string pattern requires two
bytes.
SAP IQ collapses adjacent constant characters into a single character. For example, consider the following
LIKE predicate with a string length of 130 characters:
'12345678901234567890123456789012345678901234567890123456789012345678901234567890
1234567890123456789012345678901234567890123456%%%%' ;
SAP IQ collapses the four adjacent constant characters %%%% at the end of the string into one % character,
thereby reducing the length of the string from 130 characters to 127. This is less than the maximum of 256
bytes (or 255/2 characters), and no error is generated.
Therefore, if your LIKE predicate contains adjacent constants in the string, patterns of length between 127 and
254 characters are supported as long as the total length of the collapsed string is less than 256 bytes (or 255/2
characters).
Example 2
In this example, the constant characters 7890 replace the four adjacent constant characters %%%% at the end of
the 130-character LIKE predicate:
'12345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890' ;
In this case, no characters are collapsed. The character string length remains at 130 characters and SAP IQ
generates an error.
Example 3
In this example, four adjacent underscores ____ (special characters) replace the four constant characters %%%
% at the end of the 130-character LIKE predicate:
'12345678901234567890123456789012345678901234567890123456789012345678901234567890
1234567890123456789012345678901234567890123456____' ;
SAP IQ does not collapse adjacent special characters. The string length remains at 130 characters and SAP IQ
generates an error.
Example 4
In this example, the range [1-3] replaces the four constant characters %%%% at the end of the 130-character
LIKE predicate:
'12345678901234567890123456789012345678901234567890123456789012345678901234567890
1234567890123456789012345678901234567890123456[1-3]' ;
You can specify a set of characters to look for by listing the characters inside square brackets. For example, the
following condition finds the strings smith and smyth:
LIKE 'sm[iy]th'
Specify a range of characters to look for by listing the ends of the range inside square brackets, separated by a
hyphen. For example, the following condition finds the strings bough and rough, but not tough:
LIKE '[a-r]ough'
The range of characters [a-z] is interpreted as “greater than or equal to a, and less than or equal to z,” where
the greater than and less than operations are carried out within the collation of the database. For information
on ordering of characters within a collation, see How the Collation Sequence Sorts Characters in SAP IQ
Administration: Globalization.
The lower end of the range must precede the higher end of the range. For example, a LIKE condition containing
the expression [z-a] returns no rows, because no character matches the [z-a] range.
Unless the database is created as case-sensitive, the range of characters is case-insensitive. For example, the
following condition finds the strings Bough, rough, and TOUGH:
LIKE '[a-z]ough'
If the database is created as a case-sensitive database, the search condition is case-sensitive also.
You can combine ranges and sets within square brackets. For example, the following condition finds the strings
bough, rough, and tough:
LIKE '[a-rt]ough'
The bracket [a-mpqs-z] is interpreted as “exactly one character that is either in the range a to m inclusive, or
is p, or is q, or is in the range s to z inclusive.”
Use the caret character (^) to specify a range of characters that is excluded from a search. For example, the
following condition finds the string tough, but not the strings rough or bough:
LIKE '[^a-r]ough'
The caret negates the entire contents of the brackets. For example, the bracket [^a-mpqs-z] is interpreted as
“exactly one character that is not in the range a to m inclusive, is not p, is not q, and is not in the range s to z
inclusive.”
Any single character in square brackets indicates that character. For example, [a] matches just the character
a. [^] matches just the caret character, [%] matches only the percent character (the percent character does
not act as a wildcard character in this context), and [_] matches just the underscore character. Also, [[]
matches only the character [.
Compatibility
Note
For information on support of the LIKE predicate with large object data and variables, see Unstructured
Data Queries in SAP IQ Administration: Unstructured Data Analytics.
Users must be specifically licensed to use the large object data types LONG BINARY and LONG VARCHAR.
For details on the Unstructured Data Analytics Option, see SAP IQ Administration: Unstructured Data
Analytics.
Related Information
NOT <condition1>
The NOT condition is TRUE if <condition1> is FALSE, FALSE if <condition1> is TRUE, and UNKNOWN if
<condition1> is UNKNOWN.
IS [ NOT ] <truth-value>
Without the NOT keyword, the condition is TRUE if the <condition> evaluates to the supplied <truth-
value>, which must be one of TRUE, FALSE, or UNKNOWN. Otherwise, the value is FALSE. The NOT keyword
reverses the meaning of the condition but leaves UNKNOWN unchanged.
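For example, the following sketch (the Salary column of the sample Employees table is an assumption) returns the rows for which the comparison cannot be decided, that is, rows where Salary is NULL:

```sql
SELECT *
FROM Employees
WHERE ( Salary > 50000 ) IS UNKNOWN
```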
Compatibility
The selectivity of a condition is the fraction of the table’s rows that satisfy that condition.
The SAP IQ query optimizer uses information from available indexes to select an appropriate strategy for
executing a query. For each condition in the query, the optimizer decides whether the condition can be
executed using indexes, and if so, the optimizer chooses which index and in what order with respect to the
other conditions on that table. The most important factor in these decisions is the selectivity of the condition;
that is, the fraction of the table’s rows that satisfy that condition.
The optimizer normally decides without user intervention, and it generally makes optimal decisions. In some
situations, however, the optimizer might not be able to accurately determine the selectivity of a condition
before it has been executed. These situations normally occur only where either the condition is on a column
If you have a query that is run frequently, then you may want to experiment to see whether you can improve the
performance of that query by supplying the optimizer with additional information to aid it in selecting the
optimal execution strategy.
In this section:
The simplest form of condition hint is to supply a selectivity value that will be used instead of the value the
optimizer would have computed.
Selectivity hints are supplied within the text of the query by wrapping the condition within parentheses. Then
within the parentheses, after the condition, you add a comma and a numeric value to be used as the selectivity.
This selectivity value is expressed as the percentage of the table's rows that satisfy the condition. Possible
numeric values for selectivity thus range from 0.0 to 100.0.
Note
Examples
● The following query provides an estimate that one and one half percent of the ship_date values are
earlier than 2001/06/30:
SELECT ShipDate
FROM SalesOrderItems
WHERE ( ShipDate < '2001/06/30', 1.5 )
ORDER BY ShipDate DESC
● The following query estimates that half a percent of the rows satisfy the condition:
SELECT *
FROM Customers c, SalesOrders o
WHERE ( o.SalesRepresentative > 1000.0, 0.5 )
AND c.ID = o.CustomerID
Fractional percentages enable more precise user estimates to be specified and can be particularly important
for large tables.
Compatibility
SAP Adaptive Server Enterprise does not support user-supplied selectivity estimates.
Related Information
You can supply additional hint information to the optimizer through a condition hint string.
These per-condition hint strings let users specify additional execution preferences for a condition, which the
optimizer follows, if possible. These preferences include which index to use for the condition, the selectivity of
the condition, the phase of execution when the condition is executed, and the usefulness of the condition,
which affects its ordering among the set of conditions executed within one phase of execution.
Condition hint strings, like the user-supplied selectivity estimates, are supplied within the text of the query by
wrapping the condition within parentheses. Then, within the parentheses and after the condition, you add a
comma and a quoted string containing the desired hints. Within that quoted string, each hint appears as a hint
type identifier, followed by a colon and the value for that hint type. Multiple hints within the same hint string
are separated from each other by a comma, and multiple hints can appear in any order. White space is allowed
between any two elements within a hint string.
In this section:
The first hint type that can appear within a hint string is a selectivity hint. A selectivity hint is identified by a hint
type identifier of either “S” or “s”.
Like user-supplied selectivity estimates, the selectivity value is always expressed as a percentage of the table’s
rows, which satisfy the condition.
Example
The following example is exactly equivalent to the second user-supplied condition selectivity example:
SELECT *
FROM Customers c, SalesOrders o
WHERE ( o.SalesRepresentative > 1000.0, 's: 0.5' )
AND c.ID = o.CustomerID
Related Information
The second supported hint type is an index preference hint, which is identified by a hint type identifier of either
“I” or “i”.
The value for an index preference hint can be any integer between -10 and 10. The meaning of each positive
integer value is to prefer a specific index type, while negative values indicate that the specific index type is to be
avoided.
The effect of an index preference hint is the same as that of the INDEX_PREFERENCE option, except that the
preference applies only to the condition it is associated with rather than all conditions within the query. An
index preference can only affect the execution of a condition if the specified index type exists on that column
and that index type is valid for use when evaluating the associated condition; not all index types are valid for
use with all conditions.
The following example specifies a 3 percent selectivity and indicates that, if possible, the condition should
be evaluated using an HG index:
SELECT *
FROM Customers c, SalesOrders o
WHERE (o.SalesRepresentative > 1000.0, 'S:3.00, I:+2')
AND c.ID = o.CustomerID
Example
The next example specifies a 37.5 percent selectivity and indicates that if possible the condition should not
be evaluated using an HG index:
SELECT *
FROM Customers c, SalesOrders o
WHERE (o.SalesRepresentative > 1000.0, 'i:-2, s:37.500')
AND c.ID = o.CustomerID
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The SAP IQ optimizer normally chooses the best index available to process local WHERE clause predicates and
other operations that can be done within an IQ index. INDEX_PREFERENCE is used to override the optimizer
choice for testing purposes; under most circumstances, it should not be changed.
The third supported hint type is the execution phase hint, which is identified with a hint type identifier of either
"E" or "e".
Within the SAP IQ query engine, there are four distinct phases of execution where conditions can be evaluated.
By default, the optimizer chooses to evaluate each condition within the earliest phase of execution where all the
information needed to evaluate that condition is available. Every condition, therefore, has a default execution
phase where it is evaluated.
Because no condition can be evaluated before the information it needs is available, the execution phase hint
can only be used to delay the execution of a condition to a phase after its default phase. It cannot be used to
force a condition to be evaluated within any phase earlier than its default phase.
The four phases of condition execution, from earliest to latest, are as follows:
● Invariant – a condition that refers to only one column (or two columns from the same table) and that can
be evaluated using an index is generally referred to as a simple invariant condition. Simple invariant
An execution phase hint accepts a value that identifies in which execution phase the user wants the condition
to be evaluated. Each value is a case-insensitive single character:
● D – Delayed
● B – Bound
● H – Horizontal
Example
The following example shows a condition hint string which indicates that the condition should be moved into
the “Delayed” phase of execution, and it indicates that if possible the condition should be evaluated using an
HG index:
SELECT *
FROM Customers c, SalesOrders o
WHERE (o.SalesRepresentative > 10000.0, 'E:D, I:2')
AND c.id = o.CustomerID
The final supported hint type is the usefulness hint, which is identified by a hint type identifier of either “U” or
“u”.
The value for a usefulness hint can be any numeric value between 0.0 and 10.0. Within the optimizer a
usefulness value is computed for every condition, and the usefulness value is then used to determine the order
of evaluation among the set of conditions to be evaluated within the same phase of execution. The higher the
usefulness value, the earlier it appears in the order of evaluation. Supplying a usefulness hint lets users place a
condition at a particular point within the order of evaluation, but it cannot change the execution phase within
which the condition is evaluated.
The following example shows a condition hint string which indicates that the condition should be moved into
the “Delayed” phase of execution, and that its usefulness should be set to 3.25 within that “Delayed” phase:
SELECT *
FROM Customers c, SalesOrders o
WHERE ( o.SalesRepresentative > 10000.0, 'U: 3.25, E: D' )
AND c.id = o.CustomerID
Compatibility
SAP SQL Anywhere does not support user-supplied condition hint strings.
SAP Adaptive Server Enterprise does not support user-supplied condition hint strings.
Users can specify a join algorithm preference that does not affect every join in the query.
Simple equality join predicates can be tagged with a predicate hint that allows a join preference to be specified
for just that one join. If the same join has more than one join condition with a local join preference, and if those
hints are not the same value, then all local preferences are ignored for that join. Local join preferences do not
affect the join order chosen by the optimizer.
Value Action
1 Prefer sort-merge
2 Prefer nested-loop
4 Prefer hash
9 Prefer partitioned hash join if the join keys include all the partition keys of a hash partitioned table
10 Prefer partitioned hash push-down join if the join keys include all the partition keys of a hash partitioned table
11 Prefer partitioned sort-merge join if the join keys include all the partition keys of a hash partitioned table
12 Prefer partitioned sort-merge push-down join if the join keys include all the partition keys of a hash partitioned table
-1 Avoid sort-merge
-2 Avoid nested-loop
-4 Avoid hash
-9 Avoid partitioned hash join if the join keys include all the partition keys of a hash partitioned table
-10 Avoid partitioned hash push-down join if the join keys include all the partition keys of a hash partitioned table
-11 Avoid partitioned sort-merge join if the join keys include all the partition keys of a hash partitioned table
-12 Avoid partitioned sort-merge push-down join if the join keys include all the partition keys of a hash partitioned table
Example
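As a sketch only: the hint type identifier 'J' for a local join preference is an assumption here, chosen by analogy with the S, I, E, and U identifiers described earlier; verify the identifier against the SAP IQ documentation before relying on it. The following tags a single equality join predicate with a preference for a hash join (value 4):

```sql
SELECT *
FROM Customers c, SalesOrders o
WHERE ( c.ID = o.CustomerID, 'J:4' )
```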
Related Information
Condition hints are generally appropriate only within frequently run queries.
Only advanced users should experiment with condition hints. The optimizer generally makes optimal decisions,
except where it cannot infer accurate information about a condition from the available indexes.
The optimizer often rewrites or simplifies the original conditions, and it also infers new conditions from the
original conditions. Condition hints are not carried through to new conditions inferred by the optimizer, nor are
they carried through to simplified conditions.
Special values can be used in expressions, and as column defaults when creating tables.
In this section:
Related Information
Data Type
STRING
Data Type
DATE
Related Information
CURRENT PUBLISHER returns a string that contains the publisher user ID of the database for SQL Remote
replications.
Data Type
STRING
CURRENT PUBLISHER can be used as a default value in columns with character data types.
CURRENT TIME returns the current hour, minute, second, and fraction of a second.
Data Type
TIME
Description
The fraction of a second is stored to 6 decimal places, but the accuracy of the current time is limited by the
accuracy of the system clock.
Related Information
CURRENT TIMESTAMP combines CURRENT DATE and CURRENT TIME to form a TIMESTAMP value containing
the year, month, day, hour, minute, second, and fraction of a second.
As with CURRENT TIME, the accuracy of the fraction of a second is limited by the system clock.
Data Type
TIMESTAMP
Related Information
CURRENT USER returns a string that contains the user ID of the current connection.
On UPDATE, columns with a default value of CURRENT USER are not changed.
Data Type
STRING
CURRENT USER can be used as a default value in columns with character data types.
Data type
STRING
Remarks
Use EXECUTING USER, INVOKING USER, SESSION USER, and PROCEDURE OWNER to determine which users
can execute, and are executing, procedures and user-defined functions. Depending on how many layers of
nesting a particular procedure call has, and based on whether the previous and current procedure are SQL
SECURITY DEFINER or SQL SECURITY INVOKER, the EXECUTING USER, and INVOKING USER can and do
change.
Standards
SQL special value that returns the user that invoked the current procedure, or returns the current logged in
user if no procedure is executing.
Data type
STRING
Remarks
Use INVOKING USER, SESSION USER, EXECUTING USER, and PROCEDURE OWNER to determine which users
can execute, and are executing, procedures and user-defined functions. Depending on how many layers of
nesting a particular procedure call has, and based on whether the previous and current procedure are SQL
SECURITY DEFINER or SQL SECURITY INVOKER, the INVOKING USER and EXECUTING USER can and do
change.
Standards
LAST USER returns the name of the user who last modified the row.
On INSERT and LOAD, this constant has the same effect as CURRENT USER. On UPDATE, if a column with a
default value of LAST USER is not explicitly modified, it is changed to the name of the current user.
When combined with the DEFAULT TIMESTAMP, a default value of LAST USER can be used to record (in
separate columns) both the user and the date and time a row was last changed.
Data Type
STRING
Related Information
SQL special value that returns the owner of the current procedure, or NULL if queried outside of a procedure
context.
Data type
STRING
Remarks
Use PROCEDURE OWNER, INVOKING USER, SESSION USER, and EXECUTING USER to determine which users
can execute, and are executing, procedures and user-defined functions. Depending on how many layers of
nesting a particular procedure call has, and based on whether the previous and current procedure are SQL
SECURITY DEFINER or SQL SECURITY INVOKER, the EXECUTING USER and INVOKING USER can and do
change.
Standards
SQL special value that stores the user that is currently logged in.
Data type
STRING
Remarks
Use SESSION USER, INVOKING USER, EXECUTING USER, and PROCEDURE OWNER to determine which users
can execute, and are executing, procedures and user-defined functions. Depending on how many layers of
nesting a particular procedure call has, and based on whether the previous and current procedure are SQL
SECURITY DEFINER or SQL SECURITY INVOKER, the INVOKING USER, and EXECUTING USER can and do
change. However, SESSION USER always remains the logged in user.
Standards
The SQLCODE value is set after each statement. You can check the SQLCODE to see whether or not the
statement succeeded.
Data Type
STRING
The SQLSTATE value is set after each statement. You can check the SQLSTATE to see whether or not the
statement succeeded.
Data Type
STRING
TIMESTAMP indicates when each row in the table was last modified.
When a column is declared with DEFAULT TIMESTAMP, a default value is provided for insert and load
operations. The value is updated with the current date and time whenever the row is updated.
On INSERT and LOAD, DEFAULT TIMESTAMP has the same effect as CURRENT TIMESTAMP. On UPDATE, if a
column with a default value of TIMESTAMP is not explicitly modified, the value of the column is changed to the
current date and time.
Note
SAP IQ does not support DEFAULT values of UTC TIMESTAMP or CURRENT UTC TIMESTAMP, nor does it
support the database option DEFAULT_TIMESTAMP_INCREMENT. SAP IQ generates an error every time an
attempt is made to insert or update the DEFAULT value of a column of type UTC TIMESTAMP or CURRENT
UTC TIMESTAMP.
Data Type
TIMESTAMP
Related Information
USER returns a string that contains the user ID of the current connection.
Data Type
STRING
USER can be used as a default value in columns with character data types.
Related Information
In addition to explicitly setting the data type for an object, you can also set the data type by specifying the
%TYPE and %ROWTYPE attributes.
Use the %TYPE and %ROWTYPE attributes when creating or declaring variables, converting values, creating or
altering tables, and creating procedures, to define the data type(s) based on the data type of a column or row in
a table, view, or cursor. The %TYPE attribute sets the data type to that of a column in the specified object,
while the %ROWTYPE attribute sets the data types to those of a row in the specified object.
When %TYPE or %ROWTYPE is specified for a schema object, the database server derives the actual data type
information from system tables. For example, if a %TYPE attribute specifies a table column, the data type is
retrieved from the ISYSTABCOL system table.
Once the data types have been derived and the object (variable, column, and so on) is created, there is no
further link or dependency to the object referenced in the %TYPE and %ROWTYPE attribute. However, in the
case of procedures that use %TYPE and %ROWTYPE to define parameters and return types, the procedure
can return different results if the underlying referenced objects change. This is because %TYPE and
%ROWTYPE are evaluated when the procedure is executed, not when it is created.
Specify the %TYPE attribute to set the data type of a column to the data type of a column in another table or
view. For example:
● myColumnName <other-table-name>.<column-name>%TYPE
The second statement in the following example creates a table, myT2, and sets the data type of its column,
myColumn, to the data type of the last_name column in myT1. Since additional attributes such as nullability are
not applied, myT2.myColumn will not have the same NOT NULL restriction that myT1.last_name does.
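A sketch of the example being described (the CHAR length on last_name is an assumption):

```sql
CREATE TABLE myT1 ( last_name CHAR(30) NOT NULL );
CREATE TABLE myT2 ( myColumn myT1.last_name%TYPE );
```

myT2.myColumn inherits only the CHAR(30) data type; the NOT NULL constraint on myT1.last_name is not carried over.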
Specify the %TYPE or %ROWTYPE attribute to set the data type(s) of the parameters to the data type(s) of a
column or row in a specified table or view.
The following statement creates a function called fullname and sets the data types of the firstname and
lastname parameters to the data types of the GivenName and Surname columns of the Employees table:
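A sketch of such a function (the RETURNS type and the body are assumptions; the %TYPE parameter declarations are the point):

```sql
CREATE FUNCTION fullname( firstname Employees.GivenName%TYPE,
                          lastname  Employees.Surname%TYPE )
RETURNS VARCHAR(128)
BEGIN
    RETURN firstname || ' ' || lastname;
END
```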
Specify the %TYPE attribute to cast or convert a value to the data type of another database object.
The following statement casts a value to the data type defined for the BirthDate column (DATE data type) of
the Employees table:
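A sketch of the statement being described (the literal date value is an assumption):

```sql
SELECT CAST( '1990-03-21' AS Employees.BirthDate%TYPE );
```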
Domains
Specify the %TYPE attribute to set the domain data type to the data type of a column in a specified table or
view.
In the following example, the second two CREATE DOMAIN statements create domains based on the data
types of the Surname and GivenName columns of the Customers table.
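A sketch of the two domain definitions being described (the domain names are assumptions):

```sql
CREATE DOMAIN surname_domain   Customers.Surname%TYPE;
CREATE DOMAIN givenname_domain Customers.GivenName%TYPE;
```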
Variables
Specify the %TYPE attribute to set the data type of a variable to the data type of a column in a specified table,
view, or cursor. When %TYPE is used, only the data type is derived from the referenced object. Other column
attributes such as default values, constraints, and whether NULLs are allowed, are not included and must be
specified separately. Use the %TYPE attribute to declare a variable with the same type as column data when
you want your application to be able to adjust to changes to an underlying table schema.
The following example creates a new variable, ProductID, and uses the %TYPE attribute to set its data type to
the data type of the ID column in the Products table:
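A sketch of the statement being described:

```sql
CREATE VARIABLE ProductID Products.ID%TYPE;
```

If the data type of Products.ID later changes, re-creating the variable picks up the new type; as noted above, no ongoing dependency is kept once the variable exists.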
Specify the %ROWTYPE attribute to set the data type of a set of columns to the data types of a row in a
specified table, view, or cursor. For example, use the %ROWTYPE attribute to define a variable that can store
row or array values.
When %ROWTYPE is specified, other column attributes such as default values, constraints, and whether
NULLs are allowed, are not included in the derivation.
You can also use the %TYPE attribute to set the data type of a variable to the type of another variable, as shown
in the second DECLARE statement in this example:
In this section:
Sets the data type to that of a column in a specified object or variable when creating or declaring variables, or
creating or altering tables, views, procedures, and functions. It can also be used for casting data from one type
to another.
Syntax
<type-source>%TYPE
| TYPE OF ( <type-source> )
<type-source> :
[ <owner>. ]{ <table-name>
| <view-name> }.<column-name>
| <variable-name>
| <variable-name>.<field-name>
Parameters
table-name
Remarks
When creating or altering procedures (parameters and return types), tables, views, and domains, an object
that is referenced in a %TYPE specification must be a permanent object. A reference to a temporary object,
such as a variable, cursor, or temporary table returns an error.
When %TYPE is specified in an IS OF search expression, a WITH <hint> expression in a FROM clause, or a
CAST or CONVERT function, the referenced item must be a permanent object. Specifying a correlation name
or a derived table returns an error.
When %TYPE is specified, other attributes, such as default values, constraints, and whether NULLs are
allowed, are not part of the definition that is inherited and must be specified separately.
When defining or declaring a variable, if the identifier portion of <type-source> is one of the following, then
the identifier portion must be quoted:
● IN
● OUT
● INOUT
● DYNAMIC
● SCROLL
● NO
● INSENSITIVE
● SENSITIVE
● TIMESTAMP
● a name that starts with #
For example, the statement below declares a variable called DYNAMIC. It then declares another variable called
var1, and sets its data type to that of DYNAMIC (INT). Since DYNAMIC is one of the keywords that must be
quoted, quotes are placed around it:
BEGIN
DECLARE dynamic INT;
DECLARE var1 "DYNAMIC"%TYPE;
SET var1 = 1;
MESSAGE var1;
END
Privileges
None.
Side effects
None.
Standards
Example
In addition to the following examples, there are examples in the documentation for the SQL statements and
functions that support specifying the %TYPE attribute.
The following statement casts a value to the data type defined for the BirthDate column (DATE data type)
of the Employees table:
Sets the data type to the composite data type of a row in a specified table, view, table reference variable, or
cursor.
Syntax
<rowtype-source>%ROWTYPE
| ROWTYPE OF ( <rowtype-source> )
<rowtype-source> :
[ <owner>. ]{ <table-name> | <view-name> }
Parameters
table-name
When specifying <table-name>, the data type of the %ROWTYPE variable is comprised of the data types
of the columns in <table-name>.
view-name
The name of an enabled view (including materialized views). Materialized views must be initialized as well.
When specifying <view-name>, the data type of the %ROWTYPE variable is comprised of the data types
of the columns in <view-name>.
cursor-name
When specifying <cursor-name>, the data type of the %ROWTYPE variable is comprised of the data
types of the select items for the cursor.
table-reference-variable
When specifying a table reference variable, the data type of the %ROWTYPE variable is comprised of the
data types of the columns in the table referenced in <table-reference-variable>.
Remarks
When creating a %ROWTYPE variable, other attributes, such as default values, constraints, and whether
NULLs are allowed, are not part of the definition that is inherited, and must be specified separately.
When creating or altering procedures, views, and domains, an object referenced in a %ROWTYPE
specification must be a permanent object. A reference to a temporary table returns an error.
If you declare a row variable and the argument to the %ROWTYPE construct is a cursor which is not yet
opened, it is possible that the schema of the cursor will be different at open time if any of the underlying
objects have changed. It is safer to declare row variables based on cursors that are already open.
When <rowtype-source> references a cursor, the names of the items in the cursor (for example, column
names) must be simple names, or an alias. If the select list item names in the cursor cannot be
successfully derived, then an error is returned.
When defining or declaring a variable, if the identifier portion of <rowtype-source> is one of the following,
then the identifier portion must be quoted:
● IN
● OUT
● INOUT
● DYNAMIC
● SCROLL
● NO
● INSENSITIVE
● SENSITIVE
● TIMESTAMP
● identifiers that start with #
Restrictions when specifying %ROWTYPE with a table reference variable (TABLE REF (table-
reference-variable) %ROWTYPE):
Specifying %ROWTYPE with a table reference variable is not supported when creating or altering
procedures, views, and domains. Similarly, specifying %ROWTYPE with a table reference variable is not
supported in an IS OF search expression, in a WITH <hint> expression in a FROM clause, or in a CAST
or CONVERT function.
When <rowtype-source> references a table reference variable, the table reference variable must already
be initialized when the %ROWTYPE is processed. If TABLE REF (<table-reference-variable>)
%ROWTYPE is used in a statement that is in a batch or procedure, the statement must be nested inside another
BEGIN...END block after the table reference variable has been assigned a value or passed as a parameter.
Privileges
None.
Side effects
None.
Example
In addition to the following examples, there are examples in the documentation for the SQL statements and
functions that support specifying the %ROWTYPE attribute.
The following statement declares a variable, cust_rec, and sets its data type to the composite data type of a
row in the Customers table:
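The statement itself is missing from this extract; a minimal sketch, assuming the %ROWTYPE attribute syntax described above and an ID column in the Customers table:

```sql
BEGIN
    -- Declare a row variable whose composite type mirrors the Customers columns
    DECLARE cust_rec Customers%ROWTYPE;
    -- Populate the row variable from a single row (ID = 101 is illustrative)
    SELECT * INTO cust_rec FROM Customers WHERE ID = 101;
END
```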
When a variable is created, the initial value is set to NULL unless a default is specified. The value can
subsequently be changed by using the SET statement, the UPDATE statement, or a SELECT statement with an
INTO clause.
Connection-scope variables
Connection-scope variables are set and used in the context of a connection. They are not available to other
connections. There are two types of connection-scope variables: connection-level and local (also referred
to as declared). You can also create connection-scope variables of type TABLE REF to hold references to
tables; these are called table reference variables.
Connection-level variables
Connection-level variables are created by using the CREATE VARIABLE statement and are typically
used to make values available to any procedure executed by the connection.
Connection-level variables persist only for the duration of the connection or until the variable is
explicitly dropped by using the DROP VARIABLE statement.
Local variables
Local variables are created by using the DECLARE statement inside of a BEGIN...END block, and are
typically used to store and modify values within the same compound statement that the local variable
is declared in. Local variable values are not available for use outside of the context of the BEGIN...END
block.
Database-scope variables
Database-scope variables are used in the context of the database (instead of connection), and are a great
way to share values across connections. Their intended use is to store small, infrequently changing, shared
values. Storing large or frequently changing values may affect the performance of your application, and is
not recommended. The initial values of database-scope variables persist across database restarts, but
changes made to their values do not; after a restart, each variable reverts to its initial value. Database-scope variables can
be used in the same manner as connection-scope and global variables, but they cannot be defined with the
data type ROW, ARRAY, or TABLE REF.
Database-scope variables owned by a user
When a database-scope variable is owned by a user, only that user can select from, and update, that
variable, and can do so regardless of the connection.
Database-scope variables owned by a role
Database-scope variables can also be owned by a role. However, the only access to a database-scope
variable owned by a role is through the stored procedures, user-defined functions, and events owned
by that role.
Database-scope variables owned by PUBLIC
Database variables owned by PUBLIC are available to all users and connections provided the users
have the right system privileges.
Access to, and administration of, database-scope variables requires system privileges that vary depending
on who owns the variable (self, another user, or PUBLIC). The following table summarizes the privileges
required to access and administer database-scope variables:
Global variables
Global variables are visually distinguished from other variables by having two @ signs preceding their
names. For example, @@error and @@rowcount are global variables.
It is possible to have a statement that has aliases and variables with identical names. This is the sequence the
database server follows when processing an identifier to help you know how the reference is resolved:
Standards
Variables declared within SQL stored procedures or functions by using the DECLARE statement are
supported in the ANSI/ISO SQL Standard as SQL Language Feature P002, "Computational
completeness". CREATE VARIABLE, DROP VARIABLE, and global variables are not in the ANSI/ISO SQL
Standard.
2.10 Variables
All global variables have names beginning with two @ signs. For example, the global variable <@@version> has
a value that is the current version number of the database server. Users cannot define global variables.
In this section:
Local variables are declared by the user, and can be used in procedures or in batches of SQL statements to hold
information.
Local variables are declared using the DECLARE statement, which can be used only within a compound
statement (that is, bracketed by the BEGIN and END keywords). The variable is initially set as NULL. You can set
the value of the variable using the SET statement, or you can assign the value using a SELECT statement with
an INTO clause.
You can pass local variables as arguments to procedures, as long as the procedure is called from within the
compound statement.
Examples
BEGIN
DECLARE local_var INT ;
SET local_var = 10 ;
MESSAGE 'local_var = ', local_var ;
END
Running this batch from ISQL displays this message on the server window:
local_var = 10
● The variable local_var does not exist outside the compound statement in which it is declared. The
following batch is invalid, and displays a "column not found" error:
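The invalid batch itself is missing from this extract; a sketch of the kind of batch that fails this way:

```sql
BEGIN
    DECLARE local_var INT;
    SET local_var = 10;
END;
-- local_var no longer exists here, so this raises "column not found"
MESSAGE 'local_var = ', local_var;
```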
● The following example illustrates the use of SELECT with an INTO clause to set the value of a local variable:
BEGIN
DECLARE local_var INT ;
SELECT 10 INTO local_var ;
MESSAGE 'local_var = ', local_var ;
END
local_var = 10
Compatibility
● Names – Both SAP Adaptive Server Enterprise and SAP IQ support local variables. In SAP ASE, all
variables require an @ sign as their prefix. In SAP IQ, the @ prefix is optional. To write compatible SQL,
ensure all your variables have the @ prefix.
● Scope – The scope of local variables differs between SAP IQ and SAP ASE. SAP IQ supports the use of the
DECLARE statement to declare local variables within a batch. However, if the DECLARE is executed within a
compound statement, the scope is limited to the compound statement.
● Declaration – Only one variable can be declared for each DECLARE statement in SAP IQ. In SAP ASE, more
than one variable can be declared in a single statement.
Connection-level variables are declared by the user, and can be used in procedures or in batches of SQL
statements to hold information.
Connection-level variables are declared with the CREATE VARIABLE statement. The CREATE VARIABLE
statement can be used anywhere except inside compound statements. Connection-level variables can be
passed as parameters to procedures.
When a variable is created, it is initially set to NULL. You can set the value of connection-level variables in the
same way as local variables, using the SET statement or using a SELECT statement with an INTO clause.
Connection-level variables exist until the connection is terminated, or until you explicitly drop the variable using
the DROP VARIABLE statement. The following statement drops the variable <con_var>:
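The statement itself is missing from this extract; reconstructed from the surrounding sentence:

```sql
DROP VARIABLE con_var;
```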
Example
The following batch of SQL statements illustrates the use of connection-level variables:
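The batch is missing from this extract; a minimal reconstruction, consistent with the output line that follows:

```sql
CREATE VARIABLE con_var INT;
SET con_var = 10;
MESSAGE 'con_var = ', con_var;
```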
con_var = 10
Compatibility
SAP IQ sets the values of global variables. For example, the global variable <@@version> has a value that is the
current version number of the database server.
Global variables are distinguished from local and connection-level variables by two @ signs preceding their
names. For example, <@@error> is a global variable. Users cannot create global variables, and cannot update
the value of global variables directly.
Some global variables, such as <@@spid>, hold connection-specific information and therefore have
connection-specific values. Other variables, such as <@@connections>, have values that are common to all
connections.
The special constants such as CURRENT DATE, CURRENT TIME, USER, SQLSTATE, and so on, are similar to
global variables.
The following statement retrieves the value of the version global variable:
SELECT @@version
In procedures, global variables can be selected into a variable list. The following procedure returns the server
version number in the <ver> parameter:
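The procedure body is missing from this extract; a sketch (the procedure name and parameter length are assumptions):

```sql
CREATE PROCEDURE VersionProc( OUT ver VARCHAR(100) )
BEGIN
    SELECT @@version INTO ver;
END
```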
In Embedded SQL, global variables can be selected into a host variable list.
<@@error> Commonly used to check the error status (succeeded or failed) of the most recently executed
statement. Contains 0 if the previous transaction succeeded; otherwise, contains the
last error number generated by the system. A statement such as if <@@error> != 0
return causes an exit if an error occurs. Every SQL statement resets <@@error>, so the status
check must immediately follow the statement whose success is in question.
<@@fetch_status> Contains status information resulting from the last fetch statement. <@@fetch_status>
may contain the following values:
This feature is the same as <@@sqlstatus>, except that it returns different values. It is for
Microsoft SQL Server compatibility.
<@@identity> The last value inserted into an Identity/Autoincrement column by an insert, load, or update
statement. <@@identity> is reset each time a row is inserted into a table. If a statement
inserts multiple rows, <@@identity> reflects the Identity/Autoincrement value for the last
row inserted. If the affected table does not contain an Identity/Autoincrement column,
<@@identity> is set to 0.
The value of <@@identity> is not affected by the failure of an insert, load, or update statement,
or the rollback of the transaction that contained the failed statement. <@@identity>
retains the last value inserted into an Identity/Autoincrement column, even if the statement
that inserted that value fails to commit.
<@@isolation> Current isolation level. <@@isolation> takes the value of the active level.
<@@rowcount> Number of rows affected by the last statement. The value of <@@rowcount> should be
checked immediately after the statement. Inserts, updates, and deletes set <@@rowcount>
to the number of rows affected.
With cursors, <@@rowcount> represents the cumulative number of rows returned from the
cursor result set to the client, up to the last fetch request. <@@rowcount> is not reset
to zero by any statement that does not affect rows, such as an IF statement.
<@@sqlstatus> Contains status information resulting from the last FETCH statement.
In this section:
This table includes all SAP Adaptive Server Enterprise global variables that are supported in SAP IQ. SAP
Adaptive Server Enterprise global variables that are not supported by SAP IQ are not included in the list.
This list includes all global variables that return a value, including those for which the value is fixed at NULL, 1,
-1, or 0, and might not be meaningful.
@@char_convert
Returns 0.
@@client_csname
● In SAP ASE – the client's character set name. Set to NULL if client character set has never been
initialized; otherwise, contains the name of the most recently used character set.
● In SAP IQ – returns NULL.
@@client_csid
● In SAP ASE – the client's character set ID. Set to -1 if client character set has never been initialized;
otherwise, contains the most recently used client character set ID from syscharsets.
● In SAP IQ – returns -1.
@@connections
@@cpu_busy
● In SAP ASE – the amount of time, in ticks, that the CPU has spent performing SAP ASE work since the
last time SAP ASE was started.
● In SAP IQ – returns 0.
@@error
Commonly used to check the error status (succeeded or failed) of the most recently executed statement.
Contains 0 if the previous transaction succeeded; otherwise, contains the last error number generated by
the system. A statement such as the following causes an exit if an error occurs:
if @@error != 0 return
Every statement resets <@@error>, including PRINT statements or IF tests, so the status check must
immediately follow the statement whose success is in question.
@@identity
In SAP ASE – the last value inserted into an IDENTITY column by an INSERT, LOAD, or SELECT INTO
statement. <@@identity> is reset each time a row is inserted into a table. If a statement inserts multiple
rows, <@@identity> reflects the IDENTITY value for the last row inserted. If the affected table does not
contain an IDENTITY column, <@@identity> is set to 0. The value of <@@identity> is not affected by
the failure of an INSERT or SELECT INTO statement, or the rollback of the transaction that contained the
failed statement.
@@idle
● In SAP ASE – the amount of time, in ticks, that SAP ASE has been idle since the server was last started.
● In SAP IQ – returns 0.
@@io_busy
● In SAP ASE – the amount of time, in ticks, that SAP ASE has spent performing input and output
operations since the server was last started.
● In SAP IQ – returns 0.
@@isolation
@@langid
● In SAP ASE – defines the local language ID of the language currently in use.
● In SAP IQ – returns 0.
@@language
@@maxcharlen
● In SAP ASE – maximum length, in bytes, of a character in the SAP ASE default character set.
● In SAP IQ – returns 1.
@@max_connections
For the network server, the maximum number of active clients (not database connections, as each client
can support multiple connections).
@@nestlevel
● In SAP ASE – nesting level of current execution (initially 0). Each time a stored procedure or trigger
calls another stored procedure or trigger, the nesting level is incremented.
● In SAP IQ – returns -1.
@@pack_received
● In SAP ASE – number of input packets read by SAP ASE since the server was last started.
● In SAP IQ – returns 0.
@@pack_sent
● In SAP ASE – number of output packets written by SAP ASE since the server was last started.
● In SAP IQ – returns 0.
@@packet_errors
● In SAP ASE – number of errors that have occurred while SAP ASE was sending and receiving packets.
@@sqlstatus
Contains status information resulting from the last FETCH statement. <@@sqlstatus> may contain the
following values:
@@timeticks
● In SAP ASE – number of microseconds per tick. The amount of time per tick is machine-dependent.
● In SAP IQ – returns 0.
@@total_errors
● In SAP ASE – number of errors that have occurred while SAP ASE was reading or writing.
● In SAP IQ – returns 0.
@@total_read
● In SAP ASE – number of disk reads by SAP ASE since the server was last started.
● In SAP IQ – returns 0.
@@total_write
● In SAP ASE – number of disk writes by SAP ASE since the server was last started.
● In SAP IQ – returns 0.
@@tranchained
Current transaction mode of the Transact-SQL program. <@@tranchained> returns 0 for unchained or 1
for chained.
@@trancount
Nesting level of transactions. Each BEGIN TRANSACTION in a batch increments the transaction count.
@@transtate
Use comments to attach explanatory text to SQL statements or statement blocks. The database server does
not execute comments.
-- (Double hyphen) The database server ignores any remaining characters on the line. This is the SQL92 comment
indicator.
// (Double slash) The double slash has the same meaning as the double hyphen.
/* … */ (Slash-asterisk) Any characters between the two comment markers are ignored. The two comment markers
might be on the same or different lines. Comments indicated in this style can be nested.
This style of commenting is also called C-style comments.
% (Percent sign) The percent sign has the same meaning as the double hyphen. You should not use % as a
comment indicator.
Note
The double-hyphen and the slash-asterisk comment styles are compatible with SAP Adaptive Server
Enterprise.
Examples
/*
Lists the names and employee IDs of employees
who work in the sales department.
*/
CREATE VIEW SalesEmployee AS
SELECT emp_id, emp_lname, emp_fname
FROM "GROUPO".Employees
WHERE DepartmentID = 200
The NULL value is a special value that is different from any valid value for any data type. However, the NULL
value is a legal value in any data type. There are two separate and distinct cases where NULL is used:
Situation Description
missing The field does have a value, but that value is unknown.
inapplicable The field does not apply for this particular row.
SQL allows columns to be created with the NOT NULL restriction. This means that those particular columns
cannot contain the NULL value.
The NULL value introduces the concept of three valued logic to SQL. The NULL value compared using any
comparison operator with any value including the NULL value is UNKNOWN. The only search condition that
returns TRUE is the IS NULL predicate. In SQL, rows are selected only if the search condition in the WHERE
clause evaluates to TRUE; rows that evaluate to UNKNOWN or FALSE are not selected.
You can also use the IS [ NOT ] <truth-value> clause, where <truth-value> is one of TRUE, FALSE or
UNKNOWN, to select rows where the NULL value is involved.
In the following examples, the column Salary contains the NULL value.
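The examples themselves are missing from this extract; sketches of the comparisons described (the literal 1000 is illustrative):

```sql
-- Salary = 1000 evaluates to UNKNOWN when Salary is NULL: the row is not selected
SELECT * FROM Employees WHERE Salary = 1000;
-- Salary <> 1000 also evaluates to UNKNOWN: the row is not selected
SELECT * FROM Employees WHERE Salary <> 1000;
-- IS NULL evaluates to TRUE: the row is selected
SELECT * FROM Employees WHERE Salary IS NULL;
-- IS UNKNOWN selects the rows for which the comparison cannot be decided
SELECT * FROM Employees WHERE ( Salary = 1000 ) IS UNKNOWN;
```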
The same rules apply when comparing columns from two different tables. Therefore, joining two tables
together does not select rows where any of the columns compared contain the NULL value.
The NULL value also has an interesting property when used in numeric expressions. The result of any numeric
expression involving the NULL value is the NULL value. This means that if the NULL value is added to a number,
the result is the NULL value—not a number. If you want the NULL value to be treated as 0, you must use the
ISNULL( expression, 0 ) function.
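For instance (a sketch using the Salary column from the earlier examples):

```sql
-- Returns NULL for rows where Salary is NULL
SELECT Salary + 500 FROM Employees;
-- Treats the NULL value as 0, so these rows return 500
SELECT ISNULL( Salary, 0 ) + 500 FROM Employees;
```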
Syntax
NULL
Remarks
Anywhere
Permissions
Side Effects
None
Example
The following INSERT statement inserts a NULL into the date_returned column of the Borrowed_book
table:
INSERT
INTO Borrowed_book
( date_borrowed, date_returned, book )
VALUES ( CURRENT DATE, NULL, '1234' )
Related Information
SQL data types define the type of data to be stored, such as character strings, numbers, and dates.
In this section:
Use character data types for storing strings of letters, numbers and symbols.
● CHAR [ ( <max-length> ) ]
Character data of maximum length <max-length> bytes. If <max-length> is omitted, the default is 1.
The maximum size allowed is 32KB – 1. See Notes for restrictions on CHAR data greater than 255 bytes.
See the notes below on character data representation in the database, and on storage of long strings.
● CHARACTER [ ( <max-length> ) ]
Same as CHAR.
● CHARACTER VARYING [ ( <max-length> ) ]
Same as VARCHAR.
● TEXT
Arbitrary length character data. The maximum size is limited by the maximum size of the database file
(currently 2 gigabytes).
● VARCHAR [ ( <max-length> ) ]
VARCHAR is the same as CHAR, except that no blank padding is added to the storage of these strings, and
VARCHAR strings can have a maximum length of (32 KB – 1). See Notes for restrictions on VARCHAR data
greater than 255 bytes.
● UNIQUEIDENTIFIERSTR
Domain implemented as CHAR( 36 ). This data type is used for remote data access, when mapping
Microsoft SQL Server uniqueidentifier columns.
Note
As a separately licensed option, SAP IQ supports character large object (CLOB) data with a length ranging
from zero (0) to 512 TB (terabytes) for an SAP IQ page size of 128 KB or 2 PB (petabytes) for an SAP IQ
page size of 512 KB. The maximum length is equal to 4 GB multiplied by the database page size. See SAP IQ
Administration: Unstructured Data Analytics.
In this section:
Restriction on CHAR and VARCHAR Data Over 255 Bytes [page 119]
Only the default index, WD, TEXT, and CMP index types are supported for CHAR and VARCHAR columns
over 255 bytes.
Related Information
The storage size of character data, given column definition size and input data size.
Character data is placed in the database using the exact binary representation that is passed from the
application.
This usually means that character data is stored in the database with the binary representation of the
character set used by your system. You can find documentation about character sets in the documentation for
your operating system.
On Windows, code pages are the same for the first 128 characters. If you use special characters from the top
half of the code page (accented international language characters), you must be careful with your databases. In
particular, if you copy the database to a different machine using a different code page, those special characters
are retrieved from the database using the original code page representation. With the new code page, they
appear on the window to be the wrong characters.
This problem also appears if you have two clients using the same multiuser server, but running with different
code pages. Data inserted or updated by one client might appear incorrect to another.
This problem is quite complex. If any of your applications use the extended characters in the upper half of the
code page, make sure that all clients and all machines using the database use the same or a compatible code
page.
3.1.3 Indexes
All index types, except DATE, TIME, and DTTM, are supported for CHAR data and VARCHAR data less than or
equal to 255 bytes in length.
For a column of data type VARCHAR, trailing blanks within the data being inserted are handled differently
depending on whether or not the data is enclosed in quotes:
● Enclosed in quotes
● Not enclosed in quotes
● Binary
When you write your applications, do not depend on the existence of trailing blanks in VARCHAR columns. If an
application relies on trailing blanks, use a CHAR column instead of a VARCHAR column.
Only the default index, WD, TEXT, and CMP index types are supported for CHAR and VARCHAR columns over 255
bytes.
You cannot create an HG, HNG, DATE, TIME, or DTTM index for these columns.
Values up to 254 characters are stored as short strings, with a preceding length byte. Any values that are longer
than 255 bytes are considered long strings. Characters after the 255th are stored separately from the row
containing the long string value.
SAP SQL Anywhere treats CHAR, VARCHAR, and LONG VARCHAR columns all as the same type.
There are several functions that ignore the part of any string past the 255th character: soundex,
similar, and all of the date functions. Also, any arithmetic involving the conversion of a long string to a
number works on only the first 255 characters. It would be extremely unusual to run into one of these
limitations.
All other functions and all other operators work with the full length of long strings.
Syntax
[ UNSIGNED ] BIGINT
[ UNSIGNED ] { INT | INTEGER }
SMALLINT
TINYINT
{ DECIMAL | NUMERIC } [ ( <precision> [ , <scale> ] ) ]
DOUBLE
FLOAT [ ( <precision> ) ]
REAL
In this section:
● The INTEGER, NUMERIC, and DECIMAL data types are sometimes called exact numeric data types, in
contrast to the approximate numeric data types FLOAT, DOUBLE, and REAL. Only exact numeric data is
guaranteed to be accurate to the least significant digit specified after arithmetic operations.
● Do not fetch TINYINT columns into Embedded SQL variables defined as CHAR or UNSIGNED CHAR, since
the result is an attempt to convert the value of the column to a string and then assign the first byte to the
variable in the program.
● A period is the only decimal separator (decimal point); comma is not supported as a decimal separator.
BIGINT A signed 64-bit integer requiring 8 bytes of storage. You can specify integers as UNSIGNED. By
default the data type is signed. Its range is between -9223372036854775808 and
9223372036854775807 (signed) or from 0 to 18446744073709551615 (unsigned).
INT or INTEGER A signed 32-bit integer with a range of values between -2147483648 and 2147483647, requiring
4 bytes of storage.
The INTEGER data type is an exact numeric data type; its accuracy is preserved after arithmetic
operations.
You can specify integers as UNSIGNED; by default the data type is signed. The range of values
for an unsigned integer is between 0 and 4294967295.
SMALLINT A signed 16-bit integer with a range between -32768 and 32767, requiring 2 bytes of storage.
The SMALLINT data type is an exact numeric data type; its accuracy is preserved after
arithmetic operations.
TINYINT An unsigned 8-bit integer with a range between 0 and 255, requiring 1 byte of storage.
The TINYINT data type is an exact numeric data type; its accuracy is preserved after arithmetic
operations.
DECIMAL A signed decimal number with <precision> total digits and with <scale> of the digits after
the decimal point. The precision can equal 1 to 126, and the scale can equal 0 up to the
precision value. The defaults are precision = 126 and scale = 38. Results are calculated based on
the actual data type of the column to ensure accuracy, but you can set the maximum scale
of the result returned to the application using the MAX_CLIENT_NUMERIC_SCALE option.
DOUBLE A signed double-precision floating-point number stored in 8 bytes. The range of absolute,
nonzero values is between 2.2250738585072014e-308 and 1.797693134862315708e+308.
Values held as DOUBLE are accurate to 15 significant digits, but might be subject to rounding
errors beyond the 15th digit.
The DOUBLE data type is an approximate numeric data type; it is subject to rounding errors
after arithmetic operations.
FLOAT If <precision> is not supplied, the FLOAT data type is the same as the REAL data type. If
<precision> is supplied, then the FLOAT data type is the same as the REAL or DOUBLE
data type, depending on the value of the precision. The cutoff between REAL and DOUBLE
is platform-dependent; it is the number of bits used in the mantissa of a single-precision
floating-point number on the platform.
When a column is created using the FLOAT data type, columns on all platforms are guaranteed
to hold the values to at least the specified minimum precision. In contrast, REAL and
DOUBLE do not guarantee a platform-independent minimum precision.
The FLOAT data type is an approximate numeric data type; it is subject to rounding errors
after arithmetic operations.
REAL A signed single-precision floating-point number stored in 4 bytes. The range of absolute,
nonzero values is 1.175494351e-38 to 3.402823466e+38. Values held as REAL are accurate
to 6 significant digits, but might be subject to rounding errors beyond the sixth digit.
The REAL data type is an approximate numeric data type; it is subject to rounding errors
after arithmetic operations.
Precision Storage
1 to 4 2 bytes
5 to 9 4 bytes
10 to 18 8 bytes
19 to 126 4 + 2 * (int(((prec - scale) + 3) / 4) + int((scale + 3) / 4) + 1) bytes
The storage used by a column is based upon the precision and scale of the column. Each cell in the column has
enough space to hold the largest value of that precision and scale. For example:
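A worked example of the formula above for a hypothetical NUMERIC(20,5) column, where int() truncates toward zero:

```sql
-- storage = 4 + 2 * ( int(((20 - 5) + 3) / 4) + int((5 + 3) / 4) + 1 )
--         = 4 + 2 * ( int(18 / 4) + int(8 / 4) + 1 )
--         = 4 + 2 * ( 4 + 2 + 1 )
--         = 18 bytes per cell
```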
The DECIMAL data type is an exact numeric data type; its accuracy is preserved to the least significant digit
after arithmetic operations. Its maximum absolute value is the number of nines defined by [<precision> -
<scale>], followed by the decimal point, and then followed by the number of nines defined by <scale>. The
minimum absolute nonzero value is the decimal point, followed by the number of zeros defined by [<scale> -
1], then followed by a single one. For example:
NUMERIC (3,2) Max positive = 9.99 Min non-zero = 0.01 Max negative = -9.99
If neither precision nor scale is specified for the explicit conversion of NULL to NUMERIC, the default is
NUMERIC(1,0). For example,
is described as:
A NUMERIC(1,0)
B NUMERIC(15,2)
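The example statement referred to above does not appear in this extract; a sketch of a query whose result would be described that way:

```sql
SELECT CAST( NULL AS NUMERIC )       AS A,
       CAST( NULL AS NUMERIC(15,2) ) AS B;
```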
Note
The maximum numeric precision supported in SAP SQL Anywhere is 255. If the result precision of a
numeric function exceeds this maximum, the following error occurs:
"The result datatype for function '_funcname' exceeds the maximum
supported numeric precision of 255. Please set the proper value for
precision in numeric function, 'location'"
In this section:
Numeric data compatibility differences exist between SAP IQ and SAP Adaptive Server Enterprise and SAP
SQL Anywhere.
● In embedded SQL, fetch TINYINT columns into 2-byte or 4-byte integer columns. Also, to send a TINYINT
value to a database, the C variable should be an integer.
● You should avoid default precision and scale settings for NUMERIC and DECIMAL data types, as these differ
by product:
Product Precision Scale
SAP IQ 126 38
SAP ASE 18 0
● The FLOAT ( <p >) data type is a synonym for REAL or DOUBLE, depending on the value of <p>. For SAP
ASE, REAL is used for <p> less than or equal to 15, and DOUBLE for <p> greater than 15. For SAP IQ, the
cutoff is platform-dependent, but on all platforms, the cutoff value is greater than 22.
● SAP IQ includes two user-defined data types, MONEY and SMALLMONEY, which are implemented as
NUMERIC(19,4) and NUMERIC(10,4), respectively. They are provided primarily for compatibility with SAP
ASE.
3.2.1.2 Indexes
This section describes the relationship between index types and numeric data types.
● The CMP and HNG index types do not support the FLOAT, DOUBLE, and REAL data types, and the HG index
type is not recommended.
● The WD, DATE, TIME, and DTTM index types do not support the numeric data types.
Use binary data types for storing raw binary data, such as pictures, in a hexadecimal-like notation, up to a
length of (32 K – 1) bytes.
Syntax
BINARY [ ( <length> ) ]
VARBINARY [ ( <max-length> ) ]
UNIQUEIDENTIFIER
Related Information
Binary data begins with the characters “0x” or “0X” and can include any combination of digits and the
uppercase and lowercase letters A through F.
You can specify the column length in bytes, or use the default length of 1 byte. Each byte stores 2 hexadecimal
digits. Even though the default length is 1 byte, it is recommended that you always specify an even number of
characters for BINARY and VARBINARY column length. If you enter a value longer than the specified column
length, SAP IQ truncates the entry to the specified length without warning or error.
BINARY Binary data of length <length> bytes. If <length> is omitted, the default is 1 byte. The maximum
size allowed is 32767 bytes. Use the fixed-length binary type BINARY for data in which all
entries are expected to be approximately equal in length. Because entries in BINARY columns
are zero-padded to the column length <length>, they might require more storage space than
entries in VARBINARY columns.
VARBINARY Binary data up to a length of <max-length> bytes. If <max-length> is omitted, the default is 1
byte. The maximum size allowed is (32K – 1) bytes. Use the variable-length binary type
VARBINARY for data that is expected to vary greatly in length.
UNIQUEIDENTIFIER The UNIQUEIDENTIFIER data type is used for storage of UUID (also known as GUID) values.
In this section:
All BINARY columns are padded with zeros to the full width of the column. Trailing zeros are truncated in all
VARBINARY columns.
The following example creates a table with all four variations of BINARY and VARBINARY data types defined
with NULL and NOT NULL. The same data is inserted in all four columns and is padded or truncated according
to the data type of the column:
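The example is missing from this extract; a sketch (table name, column names, and lengths are assumptions):

```sql
CREATE TABLE bin_pad (
    b_nn  BINARY(8)    NOT NULL,
    b_n   BINARY(8)    NULL,
    vb_nn VARBINARY(8) NOT NULL,
    vb_n  VARBINARY(8) NULL
);
-- The same value in every column: BINARY columns are zero-padded to 8 bytes,
-- VARBINARY columns store 0x1234 with trailing zeros truncated
INSERT INTO bin_pad VALUES ( 0x1234, 0x1234, 0x1234, 0x1234 );
```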
Because each byte of storage holds 2 hexadecimal digits, SAP IQ expects binary entries to consist of the
characters "0x" followed by an even number of digits. When the "0x" is followed by an odd number of digits,
SAP IQ assumes that you omitted the leading 0 and adds it for you.
If the input value does not include "0x", SAP IQ assumes that the value is an ASCII value and converts it. For
example:
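The statements are missing from this extract; a sketch consistent with the result shown below (the table name is an assumption):

```sql
CREATE TABLE bin_demo ( col_bin BINARY(8) );
-- No "0x" prefix: each character is stored as its ASCII byte value
-- ('0' = 0x30, '2' = 0x32, '7' = 0x37, '1' = 0x31)
INSERT INTO bin_demo VALUES ( '00271000' );
SELECT col_bin FROM bin_demo;
```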
col_bin
0x3030323731303030
Note
In the above example, ensure you set the string_rtruncation option to "off".
When you select a BINARY value, specify the value with the padded zeros or use the CAST function, as shown in
these examples:
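Sketches of the two approaches, assuming a BINARY(8) column holding 0x1234 (zero-padded to eight bytes):

```sql
-- Specify the value with the padded zeros
SELECT * FROM bin_demo WHERE col_bin = 0x1234000000000000;
-- Or cast to VARBINARY so the trailing zeros are truncated before comparing
SELECT * FROM bin_demo WHERE CAST( col_bin AS VARBINARY(8) ) = 0x1234;
```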
Any ASCII data loaded from a flat file into a binary type column (BINARY or VARBINARY) is stored as nibbles.
For example, if 0x1234 or 1234 is read from a flat file into a binary column, SAP IQ stores the value as
hexadecimal 1234. SAP IQ ignores the "0x" prefix. The data is rejected if the input data contains any characters
outside the ranges 0 – 9, a – f, and A – F.
The exact form in which you enter a particular value depends on the platform you are using. Therefore,
calculations involving binary data might produce different results on different machines.
For platform-independent conversions between hexadecimal strings and integers, use the INTTOHEX and
HEXTOINT functions rather than the platform-specific CONVERT function.
Related Information
The concatenation string operators || and + both support binary type data.
Explicit conversion of binary operands to character data types is not necessary with the || operator. Explicit and
implicit data conversion produces different results, however.
● You cannot use the aggregate functions SUM, AVG, STDDEV, or VARIANCE with the binary data types. The
aggregate functions MIN, MAX, and COUNT do support the binary data types BINARY and VARBINARY.
● HNG, WD, DATE, TIME, and DTTM indexes do not support BINARY or VARBINARY data.
● Only the default index, CMP index, and TEXT index types are supported for BINARY and VARBINARY data
greater than 255 bytes in length.
● Bit operations are supported on BINARY and VARBINARY data that is 8 bytes or less in length.
The treatment of trailing zeros in binary data differs between SAP IQ, SAP SQL Anywhere, and SAP Adaptive
Server Enterprise.
Data type            SAP IQ                  SAP SQL Anywhere        SAP ASE
VARBINARY NOT NULL   Truncated, not padded   Truncated, not padded   Truncated, not padded
VARBINARY NULL       Truncated, not padded   Truncated, not padded   Truncated, not padded
SAP ASE, SAP SQL Anywhere, and SAP IQ all support the STRING_RTRUNCATION database option, which
affects error message reporting when an INSERT or UPDATE string is truncated. For Transact-SQL compatible
string comparisons, set the STRING_RTRUNCATION option to the same value in both databases.
You can also set the STRING_RTRUNCATION option ON when loading data into a table, to alert you that the data
is too large to load into the field. The default value is ON.
Bit operations on binary type data are not supported by SAP ASE. SAP SQL Anywhere only supports bit
operations against the first four bytes of binary type data. SAP IQ supports bit operations against the first eight
bytes of binary type data.
3.3.1.7 UNIQUEIDENTIFIER
The UNIQUEIDENTIFIER data type is used for storage of UUID (also known as GUID) values.
The UNIQUEIDENTIFIER data type is often used for a primary key or other unique column to hold UUID
(Universally Unique Identifier) values that can be used to uniquely identify rows. The NEWID function generates
UUID values in such a way that a value produced on one computer does not match a UUID produced on
another computer. UNIQUEIDENTIFIER values generated using NEWID can therefore be used as keys in a
synchronization environment.
For example, the following statement updates the table mytab and sets the value of the column uid_col to a
unique identifier generated by the NEWID function, if the current value of the column is NULL:
UPDATE mytab
SET uid_col = NEWID()
WHERE uid_col IS NULL
If you execute the following statement, the unique identifier is returned as a BINARY(16):
SELECT NEWID()
For example, the value might be 0xd3749fe09cf446e399913bc6434f1f08. You can convert this string into a
readable format using the UUIDTOSTR() function.
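For instance, a conversion along these lines could be used (the formatted value shown is derived from the binary value above; NEWID itself returns a different value on each call):

```sql
SELECT UUIDTOSTR( NEWID() );
-- returns a readable value such as d3749fe0-9cf4-46e3-9991-3bc6434f1f08
```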
Because UNIQUEIDENTIFIER values are large, using UNSIGNED BIGINT or UNSIGNED INT identity columns
instead of UNIQUEIDENTIFIER is more efficient if you do not need cross-database unique identifiers.
In this section:
As a separately licensed option, SAP IQ supports binary large object (BLOB) data with a length ranging from
zero (0) to 512 TB (terabytes) for a page size of 128 KB or 2 PB (petabytes) for a page size of 512 KB.
The maximum length is equal to 4 GB multiplied by the database page size. See SAP IQ Administration:
Unstructured Data Analytics.
Inserting any nonzero value into a BIT column stores a 1 in the column. Inserting any zero value into a BIT
column stores a 0.
Use date and time data types for storing dates and times.
Syntax
DATE
DATETIME
SMALLDATETIME
TIME
TIMESTAMP
In this section:
Related Information
Familiarize yourself with these usage considerations before using date and time data types.
DATE A calendar date containing year, month, and day. The year can be from 0001 to 9999. The day
must be a nonzero value, so the minimum date is 0001-01-01. A DATE value requires 4
bytes of storage.
TIME Time of day, containing hour, minute, second, and fraction of a second. The fraction is stored to
6 decimal places. A TIME value requires 8 bytes of storage. (ODBC standards restrict TIME
data type to an accuracy of seconds. For this reason, do not use TIME data types in WHERE
clause comparisons that rely on a higher accuracy than seconds.)
TIMESTAMP Point in time, containing year, month, day, hour, minute, second, and fraction of a second. The
fraction is stored to 6 decimal places. The day must be a nonzero value. A TIMESTAMP value
requires 8 bytes of storage.
The valid range of the TIMESTAMP data type is from 0001-01-01 00:00:00.000000 to 9999-12-31
23:59:59.999999. The display of TIMESTAMP data outside the range of 1600-02-28 23:59:59 to 7911-01-01
00:00:00 might be incomplete, but the complete datetime value is stored in the database; you can see the
complete value by first converting it to a character string. You can use the CAST() function to do this, as in the
following example, which first creates a table with DATETIME and TIMESTAMP columns, then inserts values
where the date is greater than 7911-01-01:
When you select without using CAST, hours and minutes are set to 00:00:
When you select using CAST, you see the complete timestamp:
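The example code itself is missing from this extract; a sketch of the sequence described (table and column names are hypothetical) might be:

```sql
CREATE TABLE far_future ( dt DATETIME, ts TIMESTAMP );
INSERT INTO far_future VALUES
    ( '7999-12-31 14:57:52.722222', '7999-12-31 14:57:52.722222' );
SELECT ts FROM far_future;                         -- hours and minutes may display as 00:00
SELECT CAST( ts AS VARCHAR(30) ) FROM far_future;  -- shows the complete stored value
```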
In this section:
Related Information
● All date and time data types support the CMP, HG, and HNG index types; the WD index type is not supported.
● DATE data supports the DATE index.
● TIME data supports the TIME index.
● DATETIME and TIMESTAMP data support the DTTM index.
When you send a time to the database as a string (for the TIME data type) or as part of a string (for TIMESTAMP
or DATE data types), hours, minutes, and seconds must be separated by colons in the format
<hh>:<mm>:<ss>:<sss>, but can appear anywhere in the string. As an option, a period can separate the
seconds from fractions of a second, as in <hh>:<mm>:<ss>.<sss>. The following are valid and unambiguous
strings for specifying times:
Date format strings cannot contain any multibyte characters. Only single-byte characters are allowed in a
date/time/datetime format string, even when the collation order of the database is a multibyte collation order
like 932JPN.
There are three ways in which you can retrieve dates and times from the database.
When a date or time is retrieved as a string, it is retrieved in the format specified by the database options
DATE_FORMAT, TIME_FORMAT, and TIMESTAMP_FORMAT.
Operator Description
timestamp - integer Subtract the specified number of days from a date or time
stamp.
date - date Compute the number of days between two dates or time
stamps.
date + time Create a timestamp combining the given date and time.
Related Information
To compare a date to a string as a string, use the DATEFORMAT function or CAST function to convert the date to
a string before comparing.
DATEFORMAT(invoice_date,'yyyy/mm/dd') = '1992/05/23'
You can use any allowable date format for the DATEFORMAT string expression.
Date format strings must not contain any multibyte characters. Only single-byte characters are allowed in a
date/time/datetime format string, even when the collation order of the database is a multibyte collation order
like 932JPN.
Instead, move the multibyte character outside of the date format string using the concatenation operator:
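For example, a multibyte character (here the Japanese character 日, chosen for illustration; the table name is hypothetical) can be concatenated to the formatted result instead of being embedded in the format string:

```sql
-- Invalid: a multibyte character inside the format string
-- SELECT DATEFORMAT( invoice_date, 'yyyy/mm/dd日' ) FROM Invoices;

-- Instead, concatenate the character outside the format string:
SELECT DATEFORMAT( invoice_date, 'yyyy/mm/dd' ) || '日' FROM Invoices;
```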
Using the unambiguous date format prevents misinterpretation of dates according to the user's DATE_ORDER
setting.
Dates in the format <yyyy>/<mm>/<dd> or <yyyy>-<mm>-<dd> are always recognized as dates regardless of
the DATE_ORDER setting. You can use other characters as separators; for example, a question mark, a space
character, or a comma. Use this format in any context where different users might be employing different
DATE_ORDER settings. For example, in stored procedures, use of the unambiguous date format prevents
misinterpretation of dates according to the user's DATE_ORDER setting.
For combinations of dates and times, any unambiguous date and any unambiguous time yield an unambiguous
date-time value. Also, the following form is an unambiguous date-time value:
YYYY-MM-DD HH.MM.SS.SSSSSS
You can use periods in the time only in combination with a date.
In other contexts, you can use a more flexible date format. SAP IQ can interpret a wide range of strings as
formats. The interpretation depends on the setting of the DATE_ORDER database option. The DATE_ORDER
database option can have the value 'MDY', 'YMD', or 'DMY'. For example, to set the DATE_ORDER option to
'DMY' enter:
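The statement itself is missing from this extract; using the SET OPTION statement, it would take a form like the following (the PUBLIC qualifier sets the option database-wide; omit it to set the option for the current user only):

```sql
SET OPTION PUBLIC.DATE_ORDER = 'DMY';
```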
The default DATE_ORDER setting is 'YMD'. The ODBC driver sets the DATE_ORDER option to 'YMD' whenever a
connection is made. Use the SET OPTION statement to change the value.
You can supply the year as either two or four digits. The value of the NEAREST_CENTURY option [TSQL] affects
the interpretation of two-digit years: 2000 is added to values less than NEAREST_CENTURY, and 1900 is added
to all other values. The default value of this option is 50. Thus, by default, 50 is interpreted as 1950, and 49 is
interpreted as 2049.
The month can be the name or number of the month. The hours and minutes are separated by a colon, but can
appear anywhere in the string.
With an appropriate setting of DATE_ORDER, the following strings are all valid dates:
99-05-23 21:35
99/5/23
1999/05/23
May 23 1999
23-May-1999
Tuesday May 23, 1999 10:00pm
If a string contains only a partial date specification, default values are used to fill out the date. The following
defaults are used:
● Year – 1900
● Month – no default
● Day – 1 (useful for month fields; for example, 'May 1999' is the date '1999-05-01 00:00')
● Hour, minute, second, fraction – 0
Syntax
Parameters
Remarks
Table reference variables (variables of type TABLE REF) allow procedures and functions to be defined even
though the names of the tables they operate on change or have not yet been defined.
When referencing a variable of TABLE REF type in a DML statement, you must specify a correlation name for
results.
When you specify a table reference variable in a statement, the table is looked up immediately before the
statement is executed.
Creating a table reference variable does not create a dependence between the variable and the underlying
table, and DDL statements can still be performed on tables referenced by a table reference variable.
If a table is dropped, then any table reference variables that refer to it are invalidated; an attempt to use an
invalid table reference variable returns an error.
When executing a statement that acts on a table specified by using a table reference variable, you need the
appropriate privileges on the underlying table referenced by the variable.
● Table reference variables cannot be used in a SELECT or DML statement if the variable resolves to the
NULL value.
● Table reference variables cannot be used to specify tables in DDL statements.
● Table reference variables cannot be used as columns in base tables, temporary tables, or views.
● Table reference variables cannot be used in a top-level SELECT block or query expression that is returned
to a client.
● Table reference variables cannot be combined with other types of variables in built-in functions that require
a common super-type for the parameters.
● Table reference variables cannot be ordered or used as part of calculations or comparisons except for
equality and inequality.
The table reference variable functionality overlaps with indirect identifier functionality; both are ways of
indirectly referring to a table. However, a table reference is resolved at creation time and remains a valid
reference, whereas an indirect reference is resolved when the statement that references it is executed and
therefore may not be a valid reference.
Table 2: Result

v1     v2
apple  100
pear   300
The myTab table in PROC2 shadows (hides) myTab that was created in PROC1, so the only table accessible by
using the name myTab in PROC2 would be the locally declared myTab. Using a table reference (TABLE
REF( @tab_ref )) more precisely identifies the object being joined to (in this example, the table created in
PROC1).
Example
The following example declares a table reference variable, @ref, sets it to the GROUPO.Employees table
reference, and then queries the table using the table reference variable:
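The example code does not appear in this extract; a sketch based on the TABLE REF( @var ) syntax used elsewhere in this topic (assuming a batch context) might be:

```sql
DECLARE @ref TABLE REF;
SET @ref = TABLE REF( GROUPO.Employees );
-- A correlation name (here T) is required when specifying a table
-- using a table reference variable in a DML statement
SELECT T.Surname, T.GivenName FROM TABLE REF( @ref ) AS T;
```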
The following example creates a table reference variable called @tableDefinition and sets it to the
GROUPO.Employees table reference, and then selects from the table using the table reference variable:
The following code snippet declares a variable named @myTableRefVariable1 with the TABLE REF data type
and sets it to the GROUPO.Employees table reference:
Table 3: Results
surname givenname birthdate
The following example shows a table reference variable (@myTableRefVariable3) being used in several
statements to update the GROUPO.Employees table. Notice that a correlation name (T, in this example) is
required when specifying a table using a table reference variable in a DML statement:
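The statements themselves are missing from this extract; a hedged sketch of one such update (the SET and WHERE clauses are invented for illustration) might be:

```sql
SET @myTableRefVariable3 = TABLE REF( GROUPO.Employees );
-- The correlation name T is required in the DML statement
UPDATE TABLE REF( @myTableRefVariable3 ) AS T
SET T.Surname = UPPER( T.Surname )
WHERE T.Surname = 'Whitney';
```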
The following example shows how you can use table reference variables in a procedure:
3.7 Domains
Domains are aliases for built-in data types, including precision and scale values where applicable.
Domains, also called user-defined data types, allow columns throughout a database to be defined
automatically on the same data type, with the same NULL or NOT NULL condition. This encourages
consistency throughout the database. Domain names are case-insensitive. SAP IQ returns an error if you
attempt to create a domain whose name differs from that of an existing domain only in case.
In this section:
The following statement creates a data type named street_address, which is a 35-character string:
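The statement itself is missing from this extract; for a 35-character string it would be:

```sql
CREATE DOMAIN street_address CHAR( 35 );
```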
Although you can use CREATE DATATYPE as an alternative to CREATE DOMAIN, use CREATE DOMAIN, since it
is the ISO/ANSI SQL standard syntax.
Requires CREATE DATATYPE system privilege. Once a data type is created, the user ID that executed the
CREATE DOMAIN statement is the owner of that data type. Any user can use the data type, and unlike other
database objects, the owner name is never used to prefix the data type name.
The street_address data type may be used in exactly the same way as any other data type when defining
columns. For example, the following table with two columns has the second column as a street_address
column:
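The table definition is not shown in this extract; a sketch (the table and column names are hypothetical) might be:

```sql
-- The second column uses the street_address domain like any built-in type
CREATE TABLE customer (
    id     INT NOT NULL,
    street street_address
);
```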
Owners or DBAs can drop domains by issuing a COMMIT and then using the DROP DOMAIN statement:
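The statements are missing from this extract; the sequence described would be:

```sql
COMMIT;
DROP DOMAIN street_address;
```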
You can carry out this statement only if no tables in the database are using the data type.
Many of the attributes associated with columns, such as allowing NULL values, having a DEFAULT value, and so
on, can be built into a user-defined data type. Any column that is defined on the data type automatically
inherits the NULL setting, CHECK condition, and DEFAULT values. This allows uniformity to be built into
columns with a similar meaning throughout a database.
For example, many primary key columns in the demo database are integer columns holding ID numbers. The
following statement creates a data type that may be useful for such columns:
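The statement itself is not shown here; a reconstruction consistent with the description that follows (NOT NULL, an autoincrement default, and a positive-value CHECK using the @col placeholder) is:

```sql
CREATE DOMAIN id INT
NOT NULL
DEFAULT AUTOINCREMENT
CHECK ( @col > 0 );
```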
Any column created using the data type id is not allowed to hold NULLs, defaults to an autoincremented value,
and must hold a positive number. Any identifier could be used instead of <col> in the <@col> variable.
The attributes of the data type can be overridden if needed by explicitly providing attributes for the column. A
column created on data type id with NULL values explicitly allowed does allow NULLs, regardless of the setting
in the id data type.
Syntax
<default-value> ::=
<special-value>
| <string>
| <global variable>
| [ - ] <number>
| ( <constant-expression> )
| <built-in-function>( <constant-expression> )
| AUTOINCREMENT
| CURRENT DATABASE
| CURRENT REMOTE USER
| NULL
| TIMESTAMP
| LAST USER
<special-value> ::=
CURRENT
{ DATE
| TIME
| TIMESTAMP
| USER
| PUBLISHER }
| USER
data-type
You can also specify a %TYPE or %ROWTYPE attribute to set the data type to the data type of a column or
row in a table or view. However, specifying a table reference variable for the %ROWTYPE (TABLE REF
(table-reference-variable) %ROWTYPE) is not allowed.
Remarks
User-defined data types are aliases for built-in data types, including precision and scale values, where
applicable. They improve convenience and encourage consistency in the database.
Note
Use CREATE DOMAIN, rather than CREATE DATATYPE, as CREATE DOMAIN is the ANSI/ISO SQL3 term.
The user who creates a data type is automatically made the owner of that data type. No owner can be specified
in the CREATE DATATYPE statement. The user-defined data type name must be unique, and all users can
access the data type without using the owner as prefix.
User-defined data types are objects within the database. Their names must conform to the rules for identifiers.
User-defined data type names are always case-insensitive, as are built-in data type names.
By default, user-defined data types allow NULLs unless the allow_nulls_by_default database option is set
to OFF. In this case, new user-defined data types by default do not allow NULLs. The nullability of a column
created on a user-defined data type depends on the setting of the definition of the user-defined data type, not
on the setting of the allow_nulls_by_default option when the column is referenced. Any explicit setting of
NULL or NOT NULL in the column definition overrides the user-defined data type setting.
The CREATE DOMAIN statement allows you to specify DEFAULT values on user-defined data types. The
DEFAULT value specification is inherited by any column defined on the data type. Any DEFAULT value explicitly
specified on the column overrides that specified for the data type.
The CREATE DOMAIN statement lets you incorporate a rule, called a CHECK condition, into the definition of a
user-defined data type.
SAP IQ enforces CHECK constraints for base, global temporary, and local temporary tables, and for user-defined
data types.
To drop the data type from the database, use the DROP statement. You must be either the owner of the data
type or have the CREATE DATATYPE or CREATE ANY OBJECT system privilege in order to drop a user-defined
data type.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
Standards
Examples
The following example creates a data type named address, which holds a 35-character string, and which may
be NULL:
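The example statement is missing from this extract; it would take a form like:

```sql
CREATE DATATYPE address CHAR( 35 ) NULL;
```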
Domain compatibility differences exist between SAP IQ and SAP Adaptive Server Enterprise and SAP SQL
Anywhere.
● Named constraints and defaults – in SAP IQ, user-defined data types are created with a base data type,
and optionally, a NULL or NOT NULL condition. Named constraints and named defaults are not supported.
Type conversions happen automatically, or you can explicitly request them using the CAST or CONVERT
function.
If a string is used in a numeric expression or as an argument to a function expecting a numeric argument, the
string is converted to a number before use.
If a number is used in a string expression or as a string function argument, then the number is converted to a
string before use.
All date constants are specified as strings. The string is automatically converted to a date before use.
There are certain cases where the automatic data type conversions are not appropriate:
You can use the CAST or CONVERT function to force type conversions.
You can also use the following functions to force type conversions:
● DATE( expression ) – converts the expression into a date, and removes any hours, minutes or seconds.
Conversion errors might be reported.
● DATETIME( expression ) – converts the expression into a timestamp. Conversion errors might be
reported.
● STRING( expression ) – similar to CAST(value AS CHAR), except that string(NULL) is the empty
string (''), whereas CAST(NULL AS CHAR) is the NULL value.
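For instance, sketches of these conversion functions (the behavior noted in the comments follows the descriptions above):

```sql
SELECT DATE( '2001-09-24 10:30:15' );  -- the time portion is removed
SELECT DATETIME( '2001-09-24' );       -- converted to a timestamp
SELECT STRING( NULL );                 -- empty string '', not the NULL value
```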
Note
SAP IQ does not silently truncate the conversion result of NUMERIC and DATE data types to CHAR and
VARCHAR. A conversion error is generated when the following data types are converted to a string whose
length is less than the column width:
The CONVERSION_ERROR option controls SAP IQ behavior in cases of conversion error. If you set the
CONVERSION_ERROR option to:
Related Information
There are some differences in behavior between SAP IQ and SAP Adaptive Server Enterprise when converting
strings to date and time data types.
If you convert a string containing only a time value (no date) to a date/time data type, SAP IQ and SAP ASE
both use a default date of January 1, 1900, while SAP SQL Anywhere uses the current date.
If the milliseconds portion of a time is less than three digits, SAP ASE interprets the value differently depending
on whether it is preceded by a period or a colon:

12:34:56.7 to 12:34:56.700
12.34.56.78 to 12:34:56.780
12:34:56.789 to 12:34:56.789
12:34:56:7 to 12:34:56.007
12.34.56:78 to 12:34:56.078
12:34:56:789 to 12:34:56.789

SAP IQ and SAP SQL Anywhere interpret the value the same way regardless of the separator: both convert the
milliseconds value in the manner that SAP ASE does for values preceded by a period, in both cases:

12:34:56.7 to 12:34:56.700
12.34.56.78 to 12:34:56.780
12:34:56.789 to 12:34:56.789
12:34:56:7 to 12:34:56.700
12.34.56:78 to 12:34:56.780
12:34:56:789 to 12:34:56.789
In this section:
Related Information
For dates in the first 9 days of a month and hours less than 10, SAP Adaptive Server Enterprise supports a
blank for the first digit; SAP IQ supports a zero or a blank.
For details on supported and unsupported SAP ASE data types, see SAP IQ Administration: Load Management.
SAP IQ supports BIT to BINARY and BIT to VARBINARY implicit and explicit conversion and is compatible with
SAP Adaptive Server Enterprise support of these conversions.
SAP IQ implicitly converts BIT to BINARY and BIT to VARBINARY data types for comparison operators,
arithmetic operations, and INSERT and UPDATE statements.
For BIT to BINARY conversion, bit value ‘b’ is copied to the first byte of the binary string and the rest of the
bytes are filled with zeros. For example, BIT value 1 is converted to BINARY(n) string 0x0100...00 having 2n
nibbles. BIT value 0 is converted to BINARY string 0x00...00.
For BIT to VARBINARY conversion, BIT value ‘b’ is copied to the first byte of the BINARY string and the
remaining bytes are not used; that is, only one byte is used. For example, BIT value 1 is converted to
VARBINARY(n) string 0x01 having 2 nibbles.
The result of both implicit and explicit conversions of BIT to BINARY and BIT to VARBINARY data types is the
same. The following table contains examples of BIT to BINARY and VARBINARY conversions.
Target data type     Result for BIT value 1
BINARY(3)            0x010000
VARBINARY(3)         0x01
BINARY(8)            0x0100000000000000
VARBINARY(8)         0x01
These examples illustrate both implicit and explicit conversion of BIT to BINARY and BIT to VARBINARY data
types.
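The examples themselves are not included in this extract; a hedged sketch (table and column names are hypothetical, and the result comments follow the conversion table above) might be:

```sql
CREATE TABLE bit_demo ( b BIT );
INSERT INTO bit_demo VALUES ( 1 );
-- Explicit conversions; implicit conversion occurs in comparisons,
-- arithmetic operations, and INSERT and UPDATE statements
SELECT CAST( b AS BINARY(3) ),     -- 0x010000
       CAST( b AS VARBINARY(3) )   -- 0x01
FROM bit_demo;
```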
SAP IQ supports implicit conversion between BIT and CHAR, and BIT and VARCHAR data types for comparison
operators, arithmetic operations, and INSERT and UPDATE statements.
Using the following tables and data, these examples illustrate both implicit and explicit conversions between
BIT and CHAR, and BIT and VARCHAR data types:
● Implicit conversion of BIT to VARCHAR / VARCHAR to BIT:
● Explicit conversion of BIT to CHAR / CHAR to BIT:
● Explicit conversion of BIT to VARCHAR / VARCHAR to BIT:
SAP IQ conforms to the ANSI SQL89 standard, but has many additional features that are defined in the IBM
DB2 and SAA specifications, as well as in the ANSI SQL92 standard.
Certain SAP IQ features are not found in many other SQL implementations.
In this section:
SAP IQ has date, time, and timestamp types that include year, month, day, hour, minutes, seconds, and fraction
of a second. For insertions or updates to date fields, or comparisons with date fields, a free-format date is
supported.
Also, many functions are provided for manipulating dates and times.
4.2 Integrity
This has been implemented via the following two extensions to the CREATE TABLE and ALTER TABLE
statements:
The PRIMARY KEY clause declares the primary key for the relation. SAP IQ will then enforce the uniqueness of
the primary key, and ensure that no column in the primary key contains the NULL value.
The FOREIGN KEY clause defines a relationship between this table and another table. This relationship is
represented by a column (or columns) in this table, which must contain values in the primary key of another
table. The system then ensures referential integrity for these columns; whenever these columns are modified
or a row is inserted into this table, these columns are checked to ensure that either one or more is NULL or the
values match the corresponding columns for some row in the primary key of the other table.
In addition to the NATURAL and OUTER join operators supported in other implementations, SAP IQ allows KEY
joins between tables based on foreign-key relationships. This reduces the complexity of the WHERE clause when
performing joins.
4.4 Updates
Views defined on more than one table can also be updated. Many SQL implementations do not allow updates
on joined tables.
In addition to changes for entity and referential integrity, the following types of alterations are allowed:
DELETE column
RENAME new-table-name
RENAME old-column TO new-column
Tip
After you create a column, you cannot modify the column data type. To change a data type, drop the
column and re-create it with the correct data type.
Unlike SAP SQL Anywhere, SAP IQ does not allow subqueries to appear wherever expressions are allowed.
SAP IQ supports subqueries only as allowed in the SQL-1989 grammar, plus in the SELECT list of the top level
query block or in the SET clause of an UPDATE statement. SAP IQ does not support SAP SQL Anywhere
extensions.
Many SQL implementations allow subqueries only on the right side of a comparison operator. For example, the
following command is valid in SAP IQ, but is not valid in most other SQL implementations:
SELECT SurName,
Related Information
4.8 Cursors
When using Embedded SQL, cursor positions can be moved arbitrarily on the FETCH statement. Cursors can
be moved forward or backward relative to the current position or a given number of records from the beginning
or end of the cursor.
Use the topics in this section to simplify migration to SAP IQ from other SAP database products, and to serve
as a guide for creating SAP IQ applications that are compatible with SAP Adaptive Server Enterprise or SAP
SQL Anywhere.
Compatibility features are addressed in each new version of SAP IQ. This section compares SAP IQ with SAP
ASE and SAP SQL Anywhere.
In this section:
SAP ASE, SAP SQL Anywhere, and SAP IQ Architectures [page 155]
SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ are complementary products, with
architectures designed to suit their distinct purposes.
SAP SQL Anywhere and SAP IQ Differences and Shared Functionality [page 188]
SAP IQ and SAP SQL Anywhere have differences in starting and managing databases and servers,
database option support, DDL support, and DML support.
In most cases, SQL syntax, functions, options, utilities, procedures, and other features are common to both
products. There are, however, important differences. Do not assume that all features described in SAP SQL
Anywhere documentation are supported for SAP IQ. Use the SAP IQ documentation.
SAP IQ, like SAP SQL Anywhere, supports a large subset of Transact-SQL, which is the dialect of SQL
supported by SAP Adaptive Server Enterprise.
The goal of Transact-SQL support in SAP IQ is to provide application portability. Many applications, stored
procedures, and batch files can be written for use with both SAP ASE and SAP IQ databases.
The aim is to write applications to work with both SAP ASE and SAP IQ. Existing SAP ASE applications
generally require some changes to run on SAP SQL Anywhere or SAP IQ databases.
● Most SQL statements are compatible between SAP IQ and SAP ASE.
● For some statements, particularly in the procedure language used in procedures and batches, a separate
Transact-SQL statement is supported along with the syntax supported in earlier versions of SAP IQ. For
these statements, SAP SQL Anywhere and SAP IQ support two dialects of SQL, which we refer to here as
Transact-SQL and Watcom-SQL.
● A procedure or batch is executed in either the Transact-SQL or Watcom-SQL dialect. Use only the control
statements from one dialect throughout the batch or procedure. For example, each dialect has different
flow control statements.
SAP IQ supports a high percentage of Transact-SQL language elements, functions, and statements for working
with existing data.
Further, SAP IQ supports a very high percentage of the Transact-SQL stored procedure language (CREATE
PROCEDURE syntax, control statements, and so on), and many — but not all — aspects of Transact-SQL data
definition language statements.
There are design differences in the architectural and configuration facilities supported by each product. Device
management, user management, and maintenance tasks such as backups tend to be system-specific. Even
here, however, SAP IQ provides Transact-SQL system tables as views, where the tables that are not meaningful
in SAP IQ have no rows. Also, SAP IQ provides a set of system procedures for some of the more common
administrative tasks.
SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ are complementary products, with
architectures designed to suit their distinct purposes.
SAP IQ is a high-performance, decision-support server designed specifically for data warehousing and analytic
processing. SAP SQL Anywhere works well as a workgroup or departmental server requiring little
administration, and as a personal database. SAP ASE works well as an enterprise-level server for large
databases, with a focus on transaction processing.
This section describes architectural differences among the three products. It also describes the SAP ASE-like
tools that SAP IQ and SAP SQL Anywhere include for compatible database management.
In this section:
The relationship between servers and databases is different in SAP Adaptive Server Enterprise from SAP IQ
and SAP SQL Anywhere.
In SAP ASE, each database exists inside a server, and each server can contain several databases. Users can
have login rights to the server, and can connect to the server. They can then connect to any of the databases on
that server, provided that they have permissions. System-wide system tables, held in a master database,
contain information common to all databases on the server.
In SAP IQ, there is nothing equivalent to the SAP ASE master database. Instead, each database is an
independent entity, containing all of its system tables. Users can have connection rights to a database, rather
than to the server. When a user connects, he or she connects to an individual database. There is no system-
wide set of system tables maintained at a master database level. Each SAP IQ database server can dynamically
start and stop a database, to which users can maintain independent connections. SAP strongly recommends
that you run only one SAP IQ database per server.
SAP SQL Anywhere and SAP IQ provide tools in their Transact-SQL support and Open Server support to allow
some tasks to be carried out in a manner similar to SAP ASE. There are differences, however, in exactly how
these tools are implemented.
SAP Adaptive Server Enterprise, SAP SQL Anywhere and SAP IQ use different models for managing devices
and allocating disk space initially and later, reflecting the different uses for the products.
For example:
● In SAP ASE, you allocate space in database devices initially using DISK INIT and then create a database
on one or more database devices. You can add more space using ALTER DATABASE or automatically, using
thresholds.
● In SAP IQ, you initially allocate space by listing raw devices in the CREATE DATABASE statement. You can
add more space manually using CREATE DBSPACE. Although you cannot add space automatically, you can
create events to warn the DBA before space is actually needed. SAP IQ can also use file system space. SAP
IQ does not support Transact-SQL DISK statements, such as DISK INIT, DISK MIRROR, DISK REFIT,
DISK REINIT, DISK REMIRROR, and DISK UNMIRROR.
● SAP SQL Anywhere is similar to SAP IQ, except that the initial CREATE DATABASE statement takes a single
file system file instead of a list of raw devices. SAP SQL Anywhere lets you initialize its databases using a
command utility named dbinit. SAP IQ provides an expanded version of this utility called iqinit for
initializing SAP IQ databases.
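To make the space-management contrast concrete, here is a hedged sketch of the SAP IQ side. The device paths, names, and sizes are illustrative only, and the exact clause set varies by release, so check the CREATE DATABASE and CREATE DBSPACE reference pages before relying on this:

```sql
-- Initial allocation: raw devices (or file-system files) listed at creation
-- (illustrative paths and sizes)
CREATE DATABASE 'sales.db'
    IQ PATH '/dev/rdsk/c1t0d0s1'
    IQ SIZE 10000;

-- Later, add space manually; there is no automatic growth, so pair this
-- with an event that warns the DBA as free space runs low
CREATE DBSPACE DspHist
    USING FILE DspHistFile '/dev/rdsk/c1t1d0s1'
    SIZE 5000;
```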
● The catalog store includes system tables and stored procedures, and resides in a set of tables that are
compatible with SAP SQL Anywhere.
● The permanent IQ main store is the set of SAP IQ tables. Table data is stored in indexes.
● The temporary store consists of a set of temporary tables, which the database server uses for sorting and
other temporary processing.
● SAP SQL Anywhere and SAP IQ use a different schema from SAP Adaptive Server Enterprise for the
catalog (tables, columns, and so on).
● SAP SQL Anywhere and SAP IQ provide compatibility views that mimic relevant parts of the SAP ASE
system tables, although there are performance implications when using them.
● In SAP ASE, the database owner (user ID dbo) owns the catalog objects.
● In SAP SQL Anywhere and SAP IQ, the system owner (user ID SYS) owns the catalog objects.
Note
A dbo user ID owns the SAP ASE-compatible system views provided by SAP IQ.
SAP Adaptive Server Enterprise, SAP SQL Anywhere and SAP IQ treat data types differently.
Note
Data types that are not included in this section are supported by all three products.
In this section:
SAP Adaptive Server Enterprise, SAP SQL Anywhere and SAP IQ support the BIT data type, with differences.
SAP IQ, SAP SQL Anywhere and SAP Adaptive Server Enterprise permit CHAR and VARCHAR data, but each
product treats these types differently.
● SAP SQL Anywhere permits inserting integral data types into CHAR or VARCHAR (implicit conversion).
● SAP ASE and SAP IQ require explicit conversion.
● SAP ASE CHAR and VARCHAR depend on the logical page size, which can be 2K, 4K, 8K, and 16K. For
example:
○ 2K page size allows a column as large as a single row, about 1962 bytes.
○ 4K page size allows a column as large as about 4010 bytes.
● Both SAP IQ and SAP SQL Anywhere support up to 32 KB - 1 bytes with CHAR and VARCHAR. SAP SQL Anywhere
supports up to 2 GB with LONG VARCHAR.
● SAP SQL Anywhere supports the name LONG VARCHAR and its synonym TEXT, while SAP ASE supports
only the name TEXT, not the name LONG VARCHAR.
● SAP IQ supports a longer LONG VARCHAR data type than SAP SQL Anywhere: up to 512 TB (with an SAP IQ
page size of 128 KB) and 2 PB (with an SAP IQ page size of 512 KB). See SAP IQ Administration:
Unstructured Data Analytics.
● SAP ASE supports the multibyte character set NCHAR and NVARCHAR data types, as well as the Unicode
UNICHAR and UNIVARCHAR data types.
● SAP SQL Anywhere and SAP IQ support Unicode in the CHAR and VARCHAR data types, rather than as a
separate data type.
● For compatibility between SAP IQ and SAP ASE, always specify a length for character data types.
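For instance, a portable table definition spells out the length of every character column rather than relying on a product-specific default length (the table and column names here are illustrative):

```sql
-- Explicit lengths behave the same in SAP ASE, SAP SQL Anywhere, and SAP IQ;
-- a bare CHAR or VARCHAR may default to different lengths per product
CREATE TABLE Customers (
    id        INT          NOT NULL,
    last_name VARCHAR(40)  NOT NULL,
    initial   CHAR(1)      NULL
);
```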
Related Information
Binary data type support differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere and SAP IQ.
LONG BINARY* support:
● SAP ASE – not supported
● SAP SQL Anywhere – up to 2 GB - 1
● SAP IQ – up to 512 TB (IQ page size 128 KB) or 2 PB (IQ page size 512 KB)
*For information on the LONG BINARY data type in SAP IQ, see Unstructured Data Analytics. This feature
requires a separate license.
SAP ASE and SAP SQL Anywhere display binary data differently when projected:
● SAP IQ supports both SAP ASE and SAP SQL Anywhere display formats.
● If 'abc' is entered in a BINARY column, the SAP SQL Anywhere display format is by bytes, as 'abc'; the SAP
ASE display format is by nibbles, as '0x616263'.
Related Information
Although SAP Adaptive Server Enterprise, SAP SQL Anywhere and SAP IQ all support some form of date and
time data, there are some differences.
● SAP SQL Anywhere and SAP IQ support the 4-byte date and time data types.
● SAP ASE supports an 8-byte datetime type, and timestamp as a user-defined data type (domain)
implemented as binary (8).
● SAP SQL Anywhere and SAP IQ support an 8-byte timestamp type, and an 8-byte datetime domain
implemented as timestamp. The millisecond precision of the SAP SQL Anywhere/SAP IQ datetime data
type differs from that of SAP ASE.
● SAP ASE defaults to displaying dates in the format "MMM-DD-YYYY"; you can change the format by setting
an option.
● SAP SQL Anywhere and SAP IQ default to the ISO "YYYY-MM-DD" format; you can change the format by
setting an option.
● SAP ASE varies the way it converts time stored in a string to an internal time, depending on whether the
fraction part of the second was delimited by a colon or a period.
● SAP SQL Anywhere and SAP IQ convert times in the same way, regardless of the delimiter.
TIME and DATETIME values retrieved from an SAP ASE database change when inserted into an SAP IQ table
with a DATETIME column using INSERT…LOCATION. The INSERT…LOCATION statement uses Open Client,
which has a DATETIME precision of 1/300 of a second.
For example, assume that the following value is stored in a table column in an SAP ASE database:
2004-11-08 10:37:22.823
When you retrieve and store it in an SAP IQ table using INSERT...LOCATION, the value becomes:
2004-11-08 10:37:22.823333
In this section:
Compatibility of Datetime and Time Values from SAP ASE [page 160]
A DATETIME or TIME value retrieved from an SAP Adaptive Server Enterprise database using
INSERT...LOCATION can have a different value due to the datetime precision of Open Client.
A DATETIME or TIME value retrieved from an SAP Adaptive Server Enterprise database using
INSERT...LOCATION can have a different value due to the datetime precision of Open Client.
For example, the DATETIME value in the SAP ASE database is ‘2012-11-08 10:37:22.823’. When you retrieve it
and store it in SAP IQ using INSERT...LOCATION, the value becomes ‘2012-11-08 10:37:22.823333’.
SAP IQ supports the SAP Adaptive Server Enterprise data types BIGTIME and BIGDATETIME for Component
Integration Services (CIS) and INSERT...LOCATION.
● Component Integration Services with SAP ASE – aseodbc server class proxy tables mapped to SAP ASE
tables that contain columns of data type BIGTIME and BIGDATETIME.
When you create a proxy table mapped to an SAP ASE table, a BIGDATETIME column is mapped to a
TIMESTAMP column by default, if no mapping is specified. A BIGTIME column is mapped to a TIME column
by default.
● INSERT...LOCATION – the INSERT...LOCATION command to load data into SAP IQ tables from SAP ASE
tables that contain columns of data type BIGTIME and BIGDATETIME.
SAP IQ inserts the SAP ASE data type BIGTIME into the SAP IQ data type TIME.
SAP IQ inserts the SAP ASE data type BIGDATETIME into the SAP IQ data types DATETIME, DATE, TIME,
and TIMESTAMP.
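A hedged sketch of such a load follows; the server, database, table, and column names are hypothetical, and the remote SAP ASE server must already be defined for INSERT...LOCATION to resolve it:

```sql
-- Pull BIGDATETIME data from an SAP ASE table into an SAP IQ TIMESTAMP column
INSERT INTO iq_events (event_ts)
LOCATION 'ase_server.ase_db'
{ SELECT big_dt_col FROM dbo.ase_events };
```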
SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ have different default precision and scale.
Support for TEXT data differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
● SAP ASE supports up to 2 GB with LONG VARBINARY (LONG BINARY in SAP SQL Anywhere) and TEXT.
SAP SQL Anywhere does not support LONG VARBINARY as a column type, but uses LONG BINARY for the
same purpose. SAP SQL Anywhere supports up to 2 GB with LONG BINARY and TEXT.
● SAP IQ supports up to 32 KB - 1 with VARCHAR. SAP IQ also supports up to 512 TB (with an IQ page size of
128 KB) and 2 PB (with an IQ page size of 512 KB) with LONG VARCHAR. For information on the LONG
VARCHAR data type in SAP IQ, see SAP IQ Administration: Unstructured Data Analytics.
Support for IMAGE data differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
SAP Adaptive Server Enterprise allows Java data types in the database. SAP SQL Anywhere and SAP IQ do not.
Differences exist between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ in how you create
databases and database objects.
In this section:
Creating a Transact-SQL Compatible Database Using the CREATE DATABASE statement [page 163]
Use Interactive SQL to create a Transact-SQL compatible database.
CREATE DEFAULT, CREATE RULE, and CREATE DOMAIN Statements Usage Considerations [page 168]
SAP IQ provides an alternative means of incorporating rules.
5.5.2 Case-Sensitivity
In this section:
The case-sensitivity of the data is reflected in indexes, in the results of queries, and so on.
You decide the case-sensitivity of SAP IQ data in comparisons when you create the database. By default, SAP
IQ databases are case-sensitive in comparisons, although data is always held in the case in which you enter it.
SAP Adaptive Server Enterprise sensitivity to case depends on the sort order installed on the SAP ASE system.
You can change case-sensitivity for single-byte character sets by reconfiguring the SAP ASE sort order.
Identifiers include table names, column names, user IDs, and so on.
SAP IQ does not support case-sensitive identifiers. In SAP Adaptive Server Enterprise, the case-sensitivity of
identifiers follows the case-sensitivity of the data.
All passwords in newly created databases are case-sensitive, regardless of the case-sensitivity of the database.
When you rebuild an existing database, SAP IQ determines the case-sensitivity of the password as follows:
● If the password was originally entered in a case-insensitive database, the password remains case-
insensitive.
● If the password was originally entered in a case-sensitive database, uppercase and mixed-case passwords
remain case-sensitive. If the password was entered in all lowercase, then the password becomes case-
insensitive.
● Changes to both existing passwords and new passwords are case-sensitive.
Each database object must have a unique name within a certain name space.
Outside this name space, duplicate names are allowed. Some database objects occupy different name spaces
in SAP Adaptive Server Enterprise as compared to SAP SQL Anywhere and SAP IQ.
● For SAP IQ and SAP SQL Anywhere, table names must be unique within a database for a given owner. For
example, both user1 and user2 can create a table called employee; uniqueness is provided by the fully
qualified names, user1.employee and user2.employee.
● For SAP ASE, table names must be unique within the database and to the owner.
Index name uniqueness requirements apply within a table. In all three products, indexes are owned by the
owner of the table on which they are created. Index names must be unique on a given table, but any two tables
can have an index of the same name, even for the same owner. For example, in all three products, tables t1 and
t2 can have indexes of the same name, whether they are owned by the same or different users.
SAP IQ allows you to rename explicitly created indexes, foreign key role names of indexes, and foreign keys,
using the ALTER INDEX statement. SAP SQL Anywhere allows you to rename indexes, foreign key role names,
and foreign keys, using the ALTER INDEX statement. SAP ASE does not allow you to rename these objects.
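As a sketch (the index and table names are illustrative; see the ALTER INDEX reference page for the exact clause set in your release):

```sql
-- Rename an explicitly created index in SAP IQ or SAP SQL Anywhere;
-- SAP ASE has no equivalent, so the index must be dropped and re-created there
ALTER INDEX idx_sales_date ON Sales
    RENAME TO idx_sales_trans_date;
```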
When creating tables for compatibility, be aware of the following compatibility considerations for NULL
treatment, check constraints, referential integrity, default values, identity columns, computed columns,
temporary tables, and table location.
NULL in Columns
● SAP SQL Anywhere and SAP IQ assume that columns can be null unless NOT NULL is stated in the column
definition. You can change this behavior by setting the database option ALLOW_NULLS_BY_DEFAULT to the
Transact-SQL compatible setting of OFF.
● SAP SQL Anywhere and SAP IQ assume that BIT columns cannot be NULL.
● SAP Adaptive Server Enterprise assumes that columns cannot be null unless NULL is stated.
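The difference is easy to see in a sketch; the option name comes from the text above, but the table and column names are illustrative:

```sql
-- Transact-SQL compatible nullability: columns are NOT NULL unless NULL is stated
SET OPTION PUBLIC.ALLOW_NULLS_BY_DEFAULT = 'OFF';

CREATE TABLE t (
    a INT,        -- NOT NULL under the OFF setting, matching SAP ASE
    b INT NULL    -- explicitly nullable in all three products
);
```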
Check Constraints
SAP IQ enforces check constraints on base, global temporary, and local temporary tables, and on user-defined
data types. Users can log check integrity constraint violations and specify the number of violations that can
occur before a LOAD statement rolls back.
SAP IQ does not allow the creation of a check constraint that it cannot evaluate, such as those composed of
user-defined functions, proxy tables, or non-SAP IQ tables. Constraints that cannot be evaluated are detected
the first time the table on which the check constraint is defined is used in a LOAD, INSERT, or UPDATE
statement. SAP IQ does not allow check constraints containing the following:
● Subqueries
● Expressions specifying a host language parameter, a SQL parameter, or a column as the target for a data
value
● Set functions
● Invocations of nondeterministic functions or functions that modify data
SAP ASE and SAP SQL Anywhere enforce CHECK constraints. SAP SQL Anywhere allows subqueries in check
constraints.
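A simple, evaluable check constraint that is valid in all three products can be sketched as follows (names are illustrative):

```sql
CREATE TABLE Orders (
    order_id INT NOT NULL,
    qty      INT NOT NULL CHECK ( qty > 0 ),
    -- a subquery in a CHECK, e.g. CHECK ( qty < (SELECT ...) ), is
    -- accepted only by SAP SQL Anywhere
    status   CHAR(1) CHECK ( status IN ('O', 'S', 'C') )
);
```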
SAP IQ supports user-defined data types that allow constraints to be encapsulated in the data type definition.
Referential Integrity
● SAP SQL Anywhere supports all ANSI actions: SET NULL, CASCADE, DEFAULT, RESTRICT.
● SAP ASE supports two of these actions: SET NULL, DEFAULT.
Note
You can achieve CASCADE in SAP ASE by using triggers instead of referential integrity.
Default Values
● SAP ASE and SAP SQL Anywhere support specifying a default value for a column.
● Only SAP SQL Anywhere supports DEFAULT UTC TIMESTAMP.
● SAP IQ supports specifying a default value for a column, except for the special values DEFAULT UTC
TIMESTAMP and DEFAULT CURRENT UTC TIMESTAMP. SAP IQ also ignores settings for the
DEFAULT_TIMESTAMP_INCREMENT database option.
Identity Columns
● SAP SQL Anywhere supports the AUTOINCREMENT default value. SAP SQL Anywhere supports identity
columns of any numeric type with any allowable scale and precision. The identity column value can be
positive, negative, or zero, limited by the range of the data type. SAP SQL Anywhere supports any number
of identity columns per table, and does not require identity_insert for explicit inserts, drops, and updates.
The table must be empty when adding identity columns. SAP SQL Anywhere identity columns can be
altered to be nonidentity columns, and vice versa. You can add or drop AUTOINCREMENT columns from SAP
SQL Anywhere views.
● SAP ASE supports a single identity column per table. SAP ASE identity columns are restricted to only
numeric data type scale 0, maximum precision 38. They must be positive, are limited by the range of the
data type, and cannot be null. SAP ASE requires identity_insert for explicit inserts and drops, but not for
updates to the identity column. The table can contain data when you add an identity column. SAP ASE
users cannot explicitly set the next value chosen for an identity column. SAP ASE views cannot contain
IDENTITY/AUTOINCREMENT columns. When using SELECT INTO under certain conditions, SAP ASE
allows Identity/Autoincrement columns in the result table if they were in the table being selected from.
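The two approaches can be sketched side by side (table names illustrative):

```sql
-- SAP SQL Anywhere / SAP IQ: any numeric type, no identity_insert needed
CREATE TABLE sa_items (
    id  BIGINT DEFAULT AUTOINCREMENT,
    val VARCHAR(20)
);

-- SAP ASE: one identity column per table, numeric type with scale 0,
-- positive values only
CREATE TABLE ase_items (
    id  NUMERIC(10,0) IDENTITY,
    val VARCHAR(20)
);
```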
Temporary Tables
You can create a temporary table by placing a pound sign (#) without an owner specification in front of the
table name in a CREATE TABLE statement. These temporary tables are SAP IQ-declared temporary tables and
are available only in the current connection.
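For example (names illustrative):

```sql
-- The leading # makes this a declared temporary table, visible only to the
-- current connection and dropped when the connection ends
CREATE TABLE #scratch (
    id  INT,
    amt NUMERIC(12,2)
);
```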
Locating Tables
Physical placement of a table is carried out differently in SAP ASE and SAP IQ. SAP IQ supports the ON
<segment-name> clause, but <segment-name> refers to an SAP IQ dbspace.
● SAP Adaptive Server Enterprise supports the Create Default and Create Rule statements to create
named defaults.
● SAP SQL Anywhere and SAP IQ support the CREATE DOMAIN statement to achieve the same objective.
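A sketch of the SAP SQL Anywhere/SAP IQ style, where the constraint travels with the domain; the CHECK clause conventionally uses a variable placeholder for the column value, and all names here are illustrative:

```sql
-- Equivalent to an SAP ASE named rule bound to a column, but encapsulated
-- in the data type definition
CREATE DOMAIN positive_amount NUMERIC(12,2)
    DEFAULT 0
    CHECK ( @amt > 0 );
```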
Support for triggers differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
Note
A trigger is effectively a stored procedure that is run automatically either immediately before or
immediately after an INSERT, UPDATE, or DELETE as part of the same transaction that can be used to
cause a dependent change (for example, to automatically update the name of an employee’s manager
CREATE INDEX syntax differs slightly between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP
IQ.
● SAP ASE and SAP SQL Anywhere support clustered or nonclustered indexes. Both products also allow the
NONCLUSTERED keyword; the default is NONCLUSTERED for both products.
● SAP ASE CREATE INDEX statements work in SAP SQL Anywhere because SAP SQL Anywhere allows, but
ignores, the keywords FILLFACTOR, IGNORE_DUP_KEY, SORTED_DATA, IGNORE_DUP_ROW, and
ALLOW_DUP_ROW.
● SAP SQL Anywhere CREATE INDEX syntax supports the VIRTUAL keyword for use by its Index Consultant,
but not for actual query executions.
● SAP IQ supports seven specialized index types: HG, HNG, LF, DATE, TIME, DTTM, and WD. SAP IQ also supports a
CMP index on the relationship between two columns of identical data type, precision, and scale. SAP IQ
defaults to creating an HG index unless the index type is specified in the CREATE INDEX statement.
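For illustration (the table and column names are hypothetical):

```sql
-- Without an index type, SAP IQ creates an HG (High_Group) index
CREATE INDEX idx_cust ON Sales ( cust_id );

-- An explicit type overrides the default, here HNG (High_Non_Group)
CREATE HNG INDEX idx_amount ON Sales ( amount );
```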
There are some differences between the SAP Adaptive Server Enterprise and SAP SQL Anywhere and SAP IQ
models of users and roles/groups.
In SAP ASE, users connect to a server, and each user requires a login ID and password to the server as well as a
user ID for each database they want to access on that server.
SAP SQL Anywhere and SAP IQ users do not require a server login ID. All SAP SQL Anywhere and SAP IQ users
receive a user ID and password for a database.
To allow you to grant permissions to many users at one time, SAP SQL Anywhere and SAP IQ support user
roles, while SAP ASE supports user groups. Though roles and groups are basically equivalent, there are some
behavioral differences.
All three products have a public role or group, for defining default permissions. Every user automatically
becomes a member of the public role or group.
GRANT and REVOKE statements for granting permissions on individual database objects are very similar in all
three products.
● All three products allow SELECT, INSERT, DELETE, UPDATE, and REFERENCES permissions on database
tables and views, and UPDATE permissions on selected columns of database tables. SAP SQL Anywhere
and SAP IQ also allow LOAD and TRUNCATE permissions on database tables and views.
For example, the following statement is valid in all three products:
GRANT INSERT, DELETE
ON TITLES
TO MARY, SALES
This statement grants permission to use the INSERT and DELETE statements on the TITLES table to user
MARY and to the SALES role or group.
● All three products allow EXECUTE permissions to be granted on stored procedures.
● SAP ASE also supports GRANT and REVOKE on additional items:
○ Objects: columns within tables, columns within views, and stored procedures
○ User abilities: CREATE DATABASE, CREATE DEFAULT, CREATE PROCEDURE, CREATE RULE, CREATE
TABLE, CREATE VIEW
● SAP SQL Anywhere and SAP IQ require a user to have the MANAGE ANY OBJECT PRIVILEGE system
privilege to grant database objects permissions. (A closely corresponding SAP ASE permission is GRANT
ALL, used by a Database Owner.)
● All three products support the WITH GRANT OPTION clause, allowing the recipient of permissions to grant
them in turn, although SAP IQ and SAP SQL Anywhere do not permit WITH GRANT OPTION to be used on
a GRANT EXECUTE statement.
Database-wide Permissions
Adding Users
SAP ASE requires a two-step process to add a user: sp_addlogin followed by sp_adduser.
SAP IQ Login Management stored procedures, although not required to add or drop users, allow users with
applicable system privileges to add or drop SAP IQ user accounts. When SAP IQ User Administration is
enabled, these stored procedures allow control of user connections and password expiration for SAP IQ user
accounts.
Although SAP SQL Anywhere and SAP IQ allow SAP ASE system procedures for managing users and groups,
the exact syntax and function of these procedures differs in some cases.
Related Information
Load format support differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
Note
The syntax of the SAP IQ and SAP SQL Anywhere LOAD statement is based on BCP and designed to offer
exactly the same functionality.
Query requirements differ between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
In this section:
Even if more than one server supports a given SQL statement, it might be a mistake to assume that default
behavior is the same on each system.
● When writing SQL for use on more than one database management system, make your SQL statements as
explicit as possible.
● Spell out all of the available options, rather than using default behavior.
● Use parentheses to make the order of execution within statements explicit, rather than assuming identical
default order of precedence for operators.
● Use the Transact-SQL convention of an @ sign preceding variable names for SAP Adaptive Server
Enterprise portability.
● Declare variables and cursors in procedures and batches immediately following a BEGIN statement. SAP IQ
requires this, although SAP ASE allows declarations to be made anywhere in a procedure or batch.
● Do not use reserved words from either SAP ASE or SAP IQ as identifiers in your databases.
There are two criteria for writing a query that runs on both SAP IQ and SAP Adaptive Server Enterprise
databases.
● The data types, expressions, and search conditions in the query must be compatible.
● The syntax of the SELECT statement itself must be compatible.
Syntax
Parameters
select-list:
{ <table-name>.* }…
{ * }…
{ <expression> }…
{ <alias-name> = <expression> }…
{ <expression> AS <identifier> }…
{ <expression> AS <T_string> }…
table-spec:
[ <owner>. ]<table-name>
… [ [ AS ] <correlation-name> ]
…
alias-name:
The sections that follow provide details on several items to be aware of when writing compatible queries.
Related Information
SAP IQ currently provides support for subqueries that is somewhat different from that provided by SAP
Adaptive Server Enterprise and SAP SQL Anywhere.
SAP ASE and SAP SQL Anywhere support subqueries in the ON clause; SAP IQ does not currently support this.
● SAP SQL Anywhere supports UNION in both correlated and uncorrelated subqueries.
● SAP IQ supports UNION only in uncorrelated subqueries.
● SAP ASE does not support UNION in any subqueries.
SAP SQL Anywhere supports subqueries in many additional places that a scalar value might appear in the
grammar. SAP ASE and SAP IQ follow the ANSI standard as to where subqueries can be specified.
GROUP BY ALL support differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
● SAP ASE supports GROUP BY ALL, which returns all possible groups including those eliminated by the
WHERE clause and HAVING clause. These have the NULL value for all aggregates.
● SAP SQL Anywhere does not support the GROUP BY ALL Transact-SQL extension.
● SAP IQ and SAP SQL Anywhere support ROLLUP and CUBE in the GROUP BY clause.
● SAP ASE does not currently support ROLLUP and CUBE.
SAP ASE supports projecting non-grouped columns in the SELECT clause. This is known as extended GROUP BY
semantics and returns a set of values. SAP IQ and SAP SQL Anywhere do not support extended GROUP BY
semantics. Only SAP SQL Anywhere supports the List() aggregate to return a list of values.
COMPUTE support differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
The WHERE clause differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ in
support for the Contains() predicate, and treatment of trailing white space in the Like() predicate.
● SAP IQ supports the Contains() predicate for word searches in character data (similar to Contains in MS
SQL Server and Verity). SAP IQ uses WORD indexes and TEXT indexes to optimize these, if possible.
● SAP ASE does not support Contains().
Supported syntax for outer joins differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP
IQ.
● SAP ASE fully supports *= and =* Transact-SQL syntax for outer joins.
● SAP SQL Anywhere and SAP IQ support Transact-SQL outer joins, but reject some complex Transact-SQL
outer joins that are potentially ambiguous.
● SAP IQ does not support chained (nested) Transact-SQL outer joins. Use ANSI syntax for this type of
multiple outer join.
Note
Transact-SQL outer join syntax is deprecated in SAP SQL Anywhere and SAP IQ.
For detailed information on Transact-SQL outer joins, including ANSI syntax alternatives, see the white paper
Semantics and Compatibility of Transact-SQL Outer Joins, on the SAP Community Network. Although
written for SAP SQL Anywhere, the information in the document also applies to SAP IQ.
Support for ANSI join syntax differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
SAP Adaptive Server Enterprise has Transact-SQL extensions that permit predicates to compare the null value.
SAP SQL Anywhere and SAP IQ use ANSI semantics for null comparisons unless the ANSINULL option is set to
OFF, in which case such comparisons are SAP ASE-compatible.
Note
SAP SQL Anywhere 8.0 and later adds support for the TDS_EMPTY_STRING_AS_NULL option to offer SAP ASE
compatibility in mapping empty strings to the null value.
Zero-length strings are treated differently in SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
● SAP ASE treats zero-length strings as the null value. To store a blank string, SAP ASE users store a single space.
● SAP SQL Anywhere and SAP IQ follow ANSI semantics for zero-length strings, that is, a zero-length string
is a real value; it is not null.
HOLDLOCK, SHARED, and FOR BROWSE syntax differs between SAP Adaptive Server Enterprise, SAP SQL
Anywhere, and SAP IQ.
SAP IQ supports most of the same functions as SAP SQL Anywhere and SAP Adaptive Server Enterprise, with
some differences.
● SAP ASE supports the USING CHARACTERS | USING BYTES syntax in PatIndex(); SAP SQL
Anywhere and SAP IQ do not.
● SAP ASE supports the Reverse() function; SAP SQL Anywhere and SAP IQ do not.
● SAP ASE supports Len() as alternative syntax for Length(); SAP SQL Anywhere does not support this
alternative.
● SAP SQL Anywhere and SAP IQ support Lcase() and Ucase() as synonyms of Lower() and Upper();
SAP ASE does not.
● SAP SQL Anywhere and SAP IQ support the Locate() string function; SAP ASE does not.
● SAP SQL Anywhere supports the IsDate() and IsNumeric() function to test the ability to convert a
string to the respective data type; SAP ASE does not. SAP IQ supports IsDate(). You can use IsNumeric
in SAP IQ, but CIS functional compensation performance considerations apply.
● SAP SQL Anywhere supports the NEWID, STRTOUID, and UUIDTOSTR functions; SAP ASE does not. These
are native functions in SAP IQ, so CIS functional compensation performance considerations do not apply.
Note
Some SQL functions, including SOUNDEX and DIFFERENCE string functions, and some date functions
operate differently in SAP IQ and SAP SQL Anywhere. The SAP IQ database option
ASE_FUNCTION_BEHAVIOR specifies that output of some of the SAP IQ data type conversion functions,
including HEXTOINT and INTTOHEX, is consistent with the output of SAP ASE functions.
Currently, SAP Adaptive Server Enterprise does not support OLAP functions. SAP IQ and SAP SQL Anywhere
do.
● Corr()
● Covar_Pop()
● Covar_Samp()
● Cume_Dist()
● Dense_Rank()
● Exp_Weighted_Avg()
● First_Value()
● Last_Value()
● Median()
● Ntile()
Note
Support for OLAP functions is a rapidly evolving area of SAP product development.
SAP IQ and SAP SQL Anywhere do not support certain SAP Adaptive Server Enterprise system functions.
These SAP ASE system functions are not supported by SAP SQL Anywhere and SAP IQ:
User-defined function (UDF) support differs between SAP Adaptive Server Enterprise, SAP SQL Anywhere, and
SAP IQ.
SAP SQL Anywhere and SAP IQ interpret arithmetic expressions on dates as shorthand notation for various
date functions. SAP Adaptive Server Enterprise does not.
There are differences in the types of tables permitted in SELECT INTO statements in SAP Adaptive Server
Enterprise, SAP SQL Anywhere, and SAP IQ.
● SAP ASE permits <table1> to be permanent, temporary, or a proxy table. SAP ASE also supports SELECT
INTO EXISTING TABLE.
● SAP SQL Anywhere and SAP IQ permit <table1> to be a permanent or a temporary table. A permanent
table is created only when you select into <table> and specify more than one column. SELECT INTO
<#table>, without an owner specification, always creates a temporary table, regardless of the number of
columns specified. SELECT INTO table with just one column selects into a host variable.
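For instance (names illustrative):

```sql
-- Creates a connection-local temporary table in all three products,
-- regardless of column count, because of the leading #
SELECT l_orderkey, l_discount
INTO #cheap_items
FROM lineitem
WHERE l_discount < 0.05;
```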
SAP Adaptive Server Enterprise and SAP SQL Anywhere are more liberal than ANSI permits on the view
definitions that are updatable when the WITH CHECK option is not requested.
SAP SQL Anywhere offers the ANSI_UPDATE_CONSTRAINTS option to control whether updates are restricted
to those supported by SQL92, or a more liberal set of rules.
SAP IQ permits UPDATE only on single-table views that can be flattened. SAP IQ does not support WITH
CHECK.
SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ all support the FROM clause with multiple
tables in UPDATE and DELETE.
The stored procedure language is the part of SQL used in stored procedures and batches.
SAP SQL Anywhere and SAP IQ support a large part of the Transact-SQL stored procedure language in addition
to the Watcom-SQL dialect based on SQL92.
In this section:
Because it is based on the ISO/ANSI draft standard, the SAP SQL Anywhere and SAP IQ stored procedure
language differs from the Transact-SQL dialect in many ways.
Many of the concepts and features are similar, but the syntax is different. SAP SQL Anywhere and SAP IQ
support for Transact-SQL takes advantage of the similar concepts by providing automatic translation between
dialects. However, you must write a procedure exclusively in one of the two dialects, not in a mixture of the two.
● Passing parameters
● Returning result sets
● Returning status information
● Providing default values for parameters
● Control statements
● Error handling
Batches can be stored in command files. The ISQL utility in SAP SQL Anywhere and SAP IQ and the isql utility
in SAP Adaptive Server Enterprise provide similar capabilities for executing batches interactively.
The control statements used in procedures can also be used in batches. SAP SQL Anywhere and SAP IQ
support the use of control statements in batches and the Transact-SQL-like use of nondelimited groups of
statements terminated with a GO statement to signify the end of a batch.
For batches stored in command files, SAP SQL Anywhere and SAP IQ support the use of parameters in
command files. SAP ASE does not support parameters.
You cannot mix the two dialects within a procedure or batch. This means that:
● You can include Transact-SQL-only statements with statements that are part of both dialects in a batch or
procedure.
● You can include statements not supported by SAP Adaptive Server Enterprise with statements that are
supported by both servers in a batch or procedure.
● You cannot include Transact-SQL-only statements with SAP IQ-only statements in a batch or procedure.
SQL statements not separated by semicolons are part of a Transact-SQL procedure or batch. See SAP IQ SQL
Reference for details of individual statements.
Transact-SQL compatibility has improved; incorrect SQL syntax that was previously accepted now fails with an
error.
In this section:
SAP Adaptive Server Enterprise and SAP SQL Anywhere support comparisons between a variable and a scalar
value returned by an expression subquery.
For example:
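The original listing is not reproduced here; a comparison of this kind might look like the following Transact-SQL fragment (a sketch; the table and variable names are illustrative):

```sql
-- Compare a variable against the scalar value of an expression subquery
IF @price < ( SELECT MAX( UnitPrice ) FROM Products )
    PRINT 'Not the highest price'
```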
Permitted usage of the CASE statement differs in SAP IQ and SAP SQL Anywhere.
The CASE statement is not supported in SAP Adaptive Server Enterprise, which supports case expressions
only.
Related Information
SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ support the use of cursors with UPDATE and
DELETE.
In SAP IQ, updatable cursors are asensitive only, for one table only, and chained only. Updatable hold cursors
are not permitted. Updatable cursors in SAP IQ get a table lock.
Support for PRINT differs in SAP Adaptive Server Enterprise, SAP SQL Anywhere, and SAP IQ.
Note
In addition to supporting Transact-SQL alternative syntax, SAP SQL Anywhere and SAP IQ provide aids for
translating statements between the Watcom-SQL and Transact-SQL dialects.
Functions returning information about SQL statements and enabling automatic translation of SQL statements
include:
These are functions and thus can be accessed using a SELECT statement from ISQL. For example, the following
statement returns the value Watcom-SQL:
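For instance, assuming the SQLDIALECT function (one of the dialect-translation aids in SAP SQL Anywhere and SAP IQ), a query of this shape reports which dialect a statement is written in:

```sql
-- Returns the dialect of the statement passed as a string,
-- in this case the value Watcom-SQL
SELECT SQLDIALECT( 'SELECT * FROM Employees' )
```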
SAP SQL Anywhere and SAP IQ procedures return result sets differently than Transact-SQL procedures.
SAP SQL Anywhere and SAP IQ use a RESULT clause to specify returned result sets.
The following Transact-SQL procedure illustrates how Transact-SQL stored procedures return result sets:
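The original listings are not reproduced here; the following pair is a sketch of the contrast (table, column, and procedure names are illustrative). In Transact-SQL, the final SELECT in the body is returned to the caller; in the Watcom-SQL dialect, a RESULT clause declares the returned columns:

```sql
-- Transact-SQL dialect: the final SELECT is returned to the caller
CREATE PROCEDURE ListEmployees
AS
    SELECT Surname, GivenName FROM Employees

-- Watcom-SQL dialect: the RESULT clause declares the returned columns
CREATE PROCEDURE ListEmployees()
RESULT ( Surname CHAR(40), GivenName CHAR(40) )
BEGIN
    SELECT Surname, GivenName FROM Employees;
END
```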
There are minor differences in the way the client tools present multiple results to the client:
SAP SQL Anywhere and SAP IQ assign values to variables in procedures differently than Transact-SQL.
SAP SQL Anywhere and SAP IQ use the SET statement to assign values to variables in a procedure.
In Transact-SQL, values are assigned using the SELECT statement with an empty table list. The following
simple procedure illustrates how the Transact-SQL syntax works:
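A sketch of the two assignment styles (procedure and parameter names are illustrative):

```sql
-- Transact-SQL dialect: SELECT with an empty table list assigns a value
CREATE PROCEDURE double_it @in_value INT, @out_value INT OUTPUT
AS
    SELECT @out_value = 2 * @in_value

-- Watcom-SQL dialect: the SET statement performs the same assignment
CREATE PROCEDURE double_it( IN in_value INT, OUT out_value INT )
BEGIN
    SET out_value = 2 * in_value;
END
```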
Related Information
Default procedure error handling is different in the Watcom-SQL and Transact-SQL dialects.
By default, Watcom-SQL dialect procedures exit when they encounter an error, returning SQLSTATE and
SQLCODE values to the calling environment.
You can build explicit error handling into Watcom-SQL stored procedures using the EXCEPTION statement, or
you can instruct the procedure to continue execution at the next statement when it encounters an error, using
the ON EXCEPTION RESUME statement.
When a Transact-SQL dialect procedure encounters an error, execution continues at the following statement.
The global variable @@error holds the error status of the most recently executed statement. You can check
this variable following a statement to force return from a procedure. For example, the following statement
causes an exit if an error occurs:
IF @@error != 0 RETURN
When the procedure completes execution, a return value indicates the success or failure of the procedure. This
return status is an integer, and can be accessed as follows:
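In Transact-SQL the return status can be captured into a variable on the EXECUTE statement, roughly as follows (my_proc is a placeholder name):

```sql
-- Capture the integer return status of a procedure call
DECLARE @status INT
EXECUTE @status = my_proc
IF @status <> 0
    PRINT 'Procedure reported failure'
```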
This table describes the built-in procedure return values and their meanings:
Value Meaning
-1 Missing object
-4 Permission error
-5 Syntax error
The RETURN statement can be used to return other integers, with their own user-defined meanings.
In this section:
By itself, RAISERROR does not cause an exit from the procedure, but it can be combined with a RETURN
statement or a test of the @@error global variable to control execution following a user-defined error.
If you set the ON_TSQL_ERROR database option to CONTINUE, RAISERROR no longer signals an execution-
ending error. Instead, the procedure completes and stores the RAISERROR status code and message, and
returns the most recent RAISERROR. If the procedure causing the RAISERROR was called from another
procedure, RAISERROR returns after the outermost calling procedure terminates.
You lose intermediate RAISERROR statuses and codes when the procedure terminates. If, at return time, an error occurs along with RAISERROR, the error information is returned and you lose the RAISERROR information.
You can make a Watcom-SQL dialect procedure handle errors in a Transact-SQL-like manner.
The presence of an ON EXCEPTION RESUME clause prevents explicit exception handling code from being
executed, so avoid using these two clauses together.
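A minimal sketch of the ON EXCEPTION RESUME clause on a Watcom-SQL dialect procedure (table and procedure names are illustrative). With this clause present, execution continues at the next statement after an error instead of exiting:

```sql
CREATE PROCEDURE update_run_log()
ON EXCEPTION RESUME
BEGIN
    -- If this statement fails, execution resumes at the next statement
    UPDATE RunLog SET last_run = CURRENT TIMESTAMP;
    MESSAGE 'update_run_log completed';
END
```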
SAP IQ and SAP SQL Anywhere have differences in starting and managing databases and servers, database
option support, DDL support, and DML support.
For additional information, always refer to the SAP IQ documentation set when using the product. Refer to the
SAP SQL Anywhere documentation set when using SAP SQL Anywhere, or when the SAP IQ documentation
refers to SAP SQL Anywhere documentation for specific functionality only.
In this section:
SAP SQL Anywhere Server and Database Startup and Administration
Starting and managing databases and servers differs between SAP IQ and SAP SQL Anywhere.
SAP SQL Anywhere Data Definition Language (DDL) Differences
SAP SQL Anywhere and SAP IQ have differences in DDL behavior.
SAP SQL Anywhere Data Manipulation Language (DML) Differences
Not all SAP SQL Anywhere DML objects and syntax are supported by SAP IQ.
Starting and managing databases and servers differs between SAP IQ and SAP SQL Anywhere.
● SAP IQ uses the server startup command start_iq, instead of the SAP SQL Anywhere network server
startup command.
● SAP IQ does not support personal servers.
● SAP IQ supports many SAP SQL Anywhere server command line options, but not all. Other server options
are supported for SAP IQ but not for SAP SQL Anywhere.
● SAP IQ provides the stop_iq utility (UNIX) to shut down servers.
● Clauses permitted in the BACKUP DATABASE and RESTORE DATABASE statements differ in SAP IQ and
SAP SQL Anywhere.
● SQL Remote is supported in SAP IQ only for multiplex operations.
SAP IQ supports many SAP SQL Anywhere database administration utilities, but not all:
● The following SAP SQL Anywhere utilities are not supported by SAP IQ:
○ backup
○ compression
○ console
○ initialization
○ license
○ log transfer
○ log translation
○ rebuild
○ spawn
○ some transaction log options (-g, -il, -ir, -n, -x, -z)
○ uncompression
○ unload
○ upgrade
○ write file
● SAP IQ supports the SAP SQL Anywhere validation utility only on the catalog store. To validate the IQ
main store, use sp_iqcheckdb.
● In a DELETE/DROP or PRIMARY KEY clause of an ALTER TABLE statement, SAP IQ takes the RESTRICT
action (reports an error if there are associated foreign keys). SAP SQL Anywhere always takes the
CASCADE action.
● Similarly, the DROP TABLE statement reports an error in SAP IQ if there are associated foreign-key constraints.
Not all SAP SQL Anywhere DML objects and syntax are supported by SAP IQ.
Note
● SAP IQ supports the INSERT...LOCATION syntax; SAP SQL Anywhere does not.
● LOAD TABLE options differ in SAP IQ and SAP SQL Anywhere.
● OPEN statement in SAP IQ does not support BLOCK and ISOLATION LEVEL clauses.
● SAP IQ does not support triggers.
● Use of transactions, isolation levels, checkpoints, and automatically generated COMMITs, as well as cursor
support, is different in SAP IQ and SAP SQL Anywhere.
● When you SELECT from a stored procedure in SAP IQ, CIS functional compensation performance
considerations apply.
● SAP IQ ignores the database name qualifier in fully qualified names in SAP Adaptive Server Enterprise
SELECT statements, such as a FROM clause with <database name>.<owner>.<table name>. For
example, SAP IQ interprets the query SELECT * FROM XXX..TEST as SELECT * FROM TEST.
SAP IQ and SAP Adaptive Server Enterprise have differences in stored procedure support and views support.
For additional information, always refer to the SAP IQ documentation set when using the product. Refer to the
SAP ASE documentation set when using SAP ASE, or when the SAP IQ documentation refers to SAP ASE
documentation for specific functionality only.
In this section:
SAP IQ does not support these SAP Adaptive Server Enterprise stored procedures:
● sp_addserver
● sp_configure
● sp_estspace
● sp_help
● sp_helpuser
● sp_who
● sp_column_privileges
● sp_databases
● sp_datatype_info
● sp_server_info
SAP IQ does not support these SAP Adaptive Server Enterprise views:
● sysalternates
● sysaudits
The column name used in the SAP ASE view SYSTYPES is “allownulls”. The column name used in the SAP IQ
view SYSTYPES is “allowsnulls”.
Functions return information from the database and are allowed anywhere an expression is allowed.
When using functions with SAP IQ, unless otherwise stated, any function that receives the NULL value as a
parameter returns a NULL value.
If you omit the FROM clause, or if all tables in the query are in the SYSTEM dbspace, SAP SQL Anywhere
processes the query, instead of SAP IQ, and might behave differently, especially with regard to syntactic and
semantic restrictions and the effects of option settings.
If you have a query that does not require a FROM clause, you can force SAP IQ to process the query by adding
the clause “FROM iq_dummy,” where iq_dummy is a one-row, one-column table that you create in your
database.
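A one-time setup for such a table might look like this (iq_dummy is the conventional name used throughout this book; the column name is arbitrary):

```sql
-- Create a one-row, one-column table to force SAP IQ query processing
CREATE TABLE iq_dummy ( dummy_col INT NOT NULL );
INSERT INTO iq_dummy VALUES ( 1 );
COMMIT;

-- A FROM-less query rewritten so SAP IQ processes it
SELECT NOW() FROM iq_dummy;
```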
In this section:
Related Information
Aggregate functions summarize data over a group of rows from the database. The groups are formed using the
GROUP BY clause of the SELECT statement.
Simple aggregate functions, such as SUM(), MIN(), MAX(), AVG(), and COUNT(), are allowed only in the select list and in the HAVING and ORDER BY clauses of a SELECT statement. These functions summarize data over a group of rows from the database. Groups are formed using the GROUP BY clause of the SELECT statement.
A class of aggregate functions, called window functions, provides moving averages and cumulative measures
that compute answers to queries such as, “What is the quarterly moving average of the Dow Jones Industrial
average?” or “List all employees and their cumulative salaries for each department.”
● Simple aggregate functions, such as AVG(), COUNT(), MAX(), MIN(), and SUM() summarize data over a
group of rows from the database. The groups are formed using the GROUP BY clause of the SELECT
statement.
● Newer statistical aggregate functions that take one argument include STDDEV(), STDDEV_SAMP(),
STDDEV_POP(), VARIANCE(), VAR_SAMP(), and VAR_POP().
Both the simple and newer categories of aggregates can be used as a windowing function that incorporates a
window clause in a SQL query specification (a window) that conceptually creates a moving window over a
result set as it is processed.
Another class of window aggregate functions supports analysis of time series data. Like the simple aggregate
and statistical aggregate functions, you can use these window aggregates with a SQL query specification (or
<window-spec>). The time series window aggregate functions calculate correlation, linear regression,
ranking, and weighted average results:
● ISO/ANSI SQL:2008 OLAP functions for time series analysis include: CORR(), COVAR_POP(),
COVAR_SAMP(), CUME_DIST(), FIRST_VALUE(), LAST_VALUE(), REGR_AVGX(), REGR_AVGY(),
REGR_COUNT(), REGR_INTERCEPT(), REGR_R2(), REGR_SLOPE(), REGR_SXX(), REGR_SXY(), and
REGR_SYY().
● Non-ISO/ANSI SQL:2008 OLAP aggregate function extensions used in the database industry include
FIRST_VALUE(), MEDIAN(), and LAST_VALUE().
● Weighted OLAP aggregate functions that calculate weighted moving averages include
EXP_WEIGHTED_AVG() and WEIGHTED_AVG().
Time series functions designed exclusively for financial time series forecasting and analysis have names
beginning with “TS_”.
For information on aggregate function support of the LONG BINARY and LONG VARCHAR data types, see SAP
IQ Administration: Unstructured Data Analytics.
The aggregate functions AVG, SUM, STDDEV, and VARIANCE do not support the binary data types (BINARY and
VARBINARY).
Related Information
Analytical functions include simple aggregates, window functions, and numeric functions.
● Simple aggregates – AVG, COUNT, MAX, MIN, SUM, STDDEV, and VARIANCE
Note
You can use all simple aggregates, except the Grouping() function, with an OLAP windowed function.
● Window functions:
○ Windowing aggregates – AVG, COUNT, MAX, MIN, and SUM.
○ Ranking functions – RANK, DENSE_RANK, PERCENT_RANK, ROW_NUMBER, and NTILE.
○ Statistical functions – STDDEV, STDDEV_SAMP, STDDEV_POP, VARIANCE, VAR_SAMP, and VAR_POP.
○ Distribution functions – PERCENTILE_CONT and PERCENTILE_DISC.
○ Interrow functions – LAG and LEAD.
● Numeric functions – WIDTH_BUCKET, CEIL, LN, EXP, POWER, SQRT, and FLOOR.
Note
The ranking and inverse distribution analytical functions are not supported by SAP Adaptive Server
Enterprise.
Unlike some aggregate functions, you cannot specify DISTINCT in window functions.
* The OLAP SQL standard allows Grouping() in GROUP BY CUBE or GROUP BY ROLLUP operations only.
In this section:
A major feature of the ISO/ANSI SQL extensions for OLAP is a construct called a window.
This windowing extension lets users divide the result set of a query (or a logical partition of a query) into groups of rows called partitions and determine subsets of rows to aggregate with respect to the current row.
You can use three classes of window functions with a window: ranking functions, the row numbering function,
and window aggregate functions.
Windowing extensions specify a window function type over a window name or specification and are applied to
partitioned result sets within the scope of a single query expression.
Windowing operations let you establish information such as the ranking of each row within its partition, the
distribution of values in rows within a partition, and similar operations. Windowing also lets you compute
moving averages and sums on your data, enhancing the ability to evaluate your data and its impact on your
operations.
A window partition is a subset of rows returned by a query, as defined by one or more columns in a special
OVER() clause:
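For example, a partition defined over a department column might be written as follows (the table and column names follow a typical sample database and are assumptions here):

```sql
-- Each employee's salary alongside the average for their department
SELECT DepartmentID, Surname, Salary,
       AVG( Salary ) OVER ( PARTITION BY DepartmentID ) AS dept_avg_salary
FROM Employees
```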
Related Information
The OLAP ranking functions let application developers compose single-statement SQL queries that answer
questions such as "Name the top 10 products shipped this year by total sales," or "Give the top 5% of
salespeople who sold orders to at least 15 different companies."
These functions include the ranking functions, RANK(), DENSE_RANK(), PERCENT_RANK(), ROW_NUMBER(),
and NTILE().
The ORDER BY clause specifies the parameter on which ranking is performed and the order in which the rows are sorted in each group. This ORDER BY clause is used only within the OVER clause and is not an ORDER BY for the SELECT. No aggregate functions in the rank query are allowed to specify DISTINCT.
Note
The OVER (ORDER BY) clause of the ROW_NUMBER() function cannot contain a ROWS or RANGE clause.
The OVER clause indicates that the function operates on a query result set. The result set is the rows that are
returned after the FROM, WHERE, GROUP BY, and HAVING clauses have all been evaluated. The OVER clause
defines the data set of the rows to include in the computation of the rank analytical function.
The value <expression> is a sort specification that can be any valid expression involving a column reference,
aggregates, or expressions invoking these items.
The ASC or DESC parameter specifies the ordering sequence as ascending or descending. Ascending order is
the default.
Rank analytical functions are only allowed in the select list of a SELECT or INSERT statement or in the ORDER
BY clause of the SELECT statement. Rank functions can be in a view or a union. You cannot use rank functions
in a subquery, a HAVING clause, or in the select list of an UPDATE or DELETE statement. More than one rank
analytical function is allowed per query in SAP IQ 16.1.
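A single-statement ranking query of the kind described above might be sketched as follows (table and column names are illustrative):

```sql
-- Rank salespeople by total sales, highest first
SELECT SalesRepID, SUM( Amount ) AS total_sales,
       RANK() OVER ( ORDER BY SUM( Amount ) DESC ) AS sales_rank
FROM Orders
GROUP BY SalesRepID
```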
Statistical aggregate analytic functions summarize data over a group of rows from the database.
The groups are formed using the GROUP BY clause of the SELECT statement. Aggregate functions are allowed
only in the select list and in the HAVING and ORDER BY clauses of a SELECT statement. These functions include
STDDEV, STDDEV_POP, STDDEV_SAMP, VARIANCE, VAR_POP, and VAR_SAMP.
The OLAP functions can be used as a window function with an OVER() clause in a SQL query specification that
conceptually creates a moving window over a result set as it is processed.
The inverse distribution analytical functions PERCENTILE_CONT and PERCENTILE_DISC take a percentile
value as the function argument and operate on a group of data specified in the WITHIN GROUP clause, or
operate on the entire data set.
These functions return one value per group. For PERCENTILE_DISC, the data type of the result is the same as the data type of its ORDER BY item specified in the WITHIN GROUP clause. For PERCENTILE_CONT, the result is an interpolated value and its data type is numeric.
The inverse distribution analytical functions require a WITHIN GROUP (ORDER BY) clause. For example:
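A median calculation of this form might look like the following sketch (column and table names are illustrative):

```sql
-- Median salary: the 0.5 percentile over salaries in descending order
SELECT PERCENTILE_CONT( 0.5 ) WITHIN GROUP ( ORDER BY Salary DESC )
       AS median_salary
FROM Employees
```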
The value of <expression1> must be a constant of numeric data type and range from 0 to 1 (inclusive). If the
argument is NULL, then a "wrong argument for percentile" error is returned. If the argument value is less than
0, or greater than 1, then a "data value out of range" error is returned.
The ORDER BY clause, which must be present, specifies the expression on which the percentile function is
performed and the order in which the rows are sorted in each group. This ORDER BY clause is used only within
the WITHIN GROUP clause and is not an ORDER BY for the SELECT.
The WITHIN GROUP clause distributes the query result into an ordered data set from which the function
calculates a result.
The value <expression2> is a sort specification that must be a single expression involving a column
reference. Multiple expressions are not allowed and no rank analytical functions, set functions, or subqueries
are allowed in this sort expression.
The ASC or DESC parameter specifies the ordering sequence as ascending or descending. Ascending order is
the default.
Inverse distribution analytical functions are allowed in a subquery, a HAVING clause, a view, or a union. The
inverse distribution functions can be used anywhere the simple non analytical aggregate functions are used.
The inverse distribution functions ignore the NULL value in the data set.
The interrow functions LAG and LEAD enable access to previous values or subsequent values in a data series.
These functions provide access to more than one row of a table or partition simultaneously without a self join.
The LAG function provides access to a row at a given physical offset prior to the CURRENT ROW in the table or
partition. The LEAD function provides access to a row at a given physical offset after the CURRENT ROW in the
table or partition. Use the LAG and LEAD functions to create queries such as, "What was the stock price two
intervals before the current row?" and "What was the stock price one interval after the current row?"
In this section:
Interrow functions provide access to more than one row of a table or partition simultaneously without a self-join. LAG provides access to a row at a given physical offset prior to the CURRENT ROW in the table or partition. LEAD provides access to a row at a given physical offset after the CURRENT ROW in the table or partition.
LAG and LEAD syntax is identical. Both functions require an OVER (ORDER BY) window specification.
● LAG syntax:
● LEAD syntax:
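The two syntax forms can be sketched as follows (the angle-bracketed placeholders are discussed below):

```sql
LAG( <value_expr> [, <offset> [, <default> ]] )
    OVER ( [ PARTITION BY ... ] ORDER BY ... )

LEAD( <value_expr> [, <offset> [, <default> ]] )
    OVER ( [ PARTITION BY ... ] ORDER BY ... )
```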
The PARTITION BY clause in the OVER (ORDER BY) clause is optional. The OVER (ORDER BY) clause cannot contain a window frame ROWS/RANGE specification.
<value_expr> is a table column or expression that defines the offset data to return from the table. You can define other functions in the <value_expr>, with the exception of analytic functions.
For both functions, specify the target row by entering a physical offset. The <offset> value is the number of rows above or below the current row. Enter a non-negative numeric data type (entering a negative value generates an error). If you enter 0, SAP IQ returns the current row.
The optional <default> value defines the value to return if the <offset> value goes beyond the scope of the table. The default value of <default> is NULL. The data type of <default> must be implicitly convertible to the data type of the <value_expr> value, or SAP IQ generates a conversion error.
The interrow functions are useful in financial services applications that perform calculations on data streams,
such as stock transactions. The following example uses the LAG function to calculate the percentage change in
the trading price of a particular stock. Consider the following trading data from a fictional table called
stock_trades:
Note
The query partitions the trades by stock symbol, orders them by time of trade, and uses the LAG function to
calculate the percentage increase or decrease in trade price between the current trade and the previous trade:
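The original listing is not reproduced here; such a query might be written as follows (the stock_trades column names symbol, trade_time, and price are assumptions):

```sql
-- Percentage change between the current trade price and the previous one
SELECT symbol, trade_time, price,
       ( price - LAG( price, 1 )
                 OVER ( PARTITION BY symbol ORDER BY trade_time ) ) * 100.0
       / LAG( price, 1 ) OVER ( PARTITION BY symbol ORDER BY trade_time )
       AS percent_change
FROM stock_trades
```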
The NULL result in the first and fourth output rows indicates that the LAG function is out of scope for the first
row in each of the two partitions. Since there is no previous row to compare to, SAP IQ returns NULL as
specified by the <default> variable.
Data type conversion functions convert arguments from one data type to another.
The database server carries out many data type conversions automatically. For example, if a string is supplied
where a numerical expression is required, the string is automatically converted to a number.
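When an explicit conversion is wanted, CAST makes the target type visible; for example (using the iq_dummy table described earlier in this book):

```sql
-- Explicit conversion of a string to a numeric value
SELECT CAST( '3.14' AS NUMERIC(5,2) ) FROM iq_dummy
```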
Related Information
Date and time functions perform conversion, extraction, or manipulation operations on date and time data
types and can return date and time information.
The date and time functions allow manipulation of time units. Most time units (such as MONTH) have four
functions for time manipulation, although only two names are used (such as MONTH and MONTHS).
These functions are Transact-SQL date and time functions. They allow an alternative way of accessing and
manipulating date and time functions:
● DATEADD
● DATEDIFF
● DATENAME
● DATEPART
● GETDATE
You should convert arguments to date functions to dates before using them. For example:
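A sketch of the recommended pattern, using CAST to make each argument an explicit DATE before passing it to a date function (iq_dummy as described earlier):

```sql
-- Days between two dates, with arguments explicitly converted to DATE
SELECT DATEDIFF( day,
                 CAST( '2018-01-01' AS DATE ),
                 CAST( '2018-11-20' AS DATE ) )
FROM iq_dummy
```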
SAP IQ does not have the same constants or data type promotions as SAP SQL Anywhere, with which it shares
a common user interface. If you issue a SELECT statement without a FROM clause, the statement is passed to
SAP SQL Anywhere. The following statement is handled exclusively by SAP SQL Anywhere:
SELECT WEEKS('1998/11/01');
The following statement, processed by SAP IQ, uses a different starting point for the WEEKS function and
returns a different result than the statement above:
Consider another example. The MONTHS function returns the number of months since an “arbitrary starting
date.” The “arbitrary starting date” of SAP IQ, the imaginary date 0000-01-01, is chosen to produce the most
efficient date calculations and is consistent across various data parts. SAP SQL Anywhere does not have a
single starting date. The following statements, the first processed by SAP SQL Anywhere, the second by SAP
IQ, both return the answer 12:
SELECT MONTHS('0001/01/01');
SELECT DAYS('0001/01/01');
The first, processed by SAP SQL Anywhere, yields the value 307, but the second, processed by SAP IQ, yields
166.
Note
Create a dummy table with only one column and row. You can then reference this table in the FROM clause
for any SELECT statement that uses date or time functions, thus ensuring processing by SAP IQ, and
consistent results.
In this section:
Related Information
Many of the date functions use dates built from date parts.
Date part Abbreviation Values
Quarter qq 1–4
Month mm 1–12
Week wk 1–54
Day dd 1–31
Dayofyear dy 1–366
Hour hh 0–23
Minute mi 0–59
Second ss 0–59
Millisecond ms 0–999
Calyearofweek cyr Integer. The year in which the week begins. The week containing the first few days of the year can be part of the last week of the previous year, depending upon which day it begins. If the new year starts on a Thursday through Saturday, its first week starts on the last Sunday of the previous year. If the new year starts on a Sunday through Wednesday, none of its days are part of the previous year.
Calweekofyear cwk An integer from 1 to 54 representing the week number within the year that contains the specified date.
Caldayofweek cdw The day number within the week (Sunday = 1, Saturday = 7).
Note
By default, Sunday is the first day of the week. To make Monday the first day, use:
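Assuming the first_day_of_week database option (values 1 through 7, with 7, Sunday, as the default), the change might look like:

```sql
-- Make Monday (1) the first day of the week for all users
SET OPTION PUBLIC.first_day_of_week = 1
```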
For compatibility with SAP Adaptive Server Enterprise, use the Transact-SQL date and time functions.
Related Information
HTTP functions facilitate the handling of HTTP requests within Web services.
Note
Ensure your Web services use best coding practices to safeguard against cross-site scripting (XSS) attacks.
Open-source resources are available at organizations such as OWASP.
Related Information
SAP IQ does not have the same constants or data type promotions as SAP SQL Anywhere, with which it shares
a common user interface. If you issue a SELECT statement without a FROM clause, the statement is passed
through to SAP SQL Anywhere. For the most consistent results, include the table name in the FROM clause
whether you need it or not.
Related Information
String functions perform conversion, extraction, or manipulation operations on strings, or return information
about strings.
When working in a multibyte character set, check carefully whether the function being used returns
information concerning characters or bytes.
Most of the string functions accept binary data (hexadecimal strings) in the <string-expr> parameter, but
some of the functions, such as LCASE, UCASE, LOWER, and LTRIM, expect the string expression to be a
character string.
Unless you supply a constant LENGTH argument to a function that produces a LONG VARCHAR result (such as
SPACE or REPEAT), the default length is the maximum allowed.
SAP IQ queries containing one or more of these functions might return one of the following errors:
ASA Error -1009080: Key doesn't fit on a single database page: 65560(4, 1)
ASA Error -1009119: Record size too large for database page size
To avoid such errors, cast the function result with an appropriate maximum length; for example:
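A sketch of the cast, using REPEAT as an example of a function that produces a LONG VARCHAR result (iq_dummy as described earlier):

```sql
-- Cap the result length so it fits on a database page
SELECT CAST( REPEAT( 'X', 200 ) AS VARCHAR(200) ) FROM iq_dummy
```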
The errors are more likely with a page size of 64K or a multibyte collation.
Note
For information on string functions that support the LONG BINARY and LONG VARCHAR data types, see
Function Support in SAP IQ Administration: Unstructured Data Analytics.
Related Information
Description
Databases currently running on a server are identified by a database name and a database ID number. The
db_id and db_name functions provide information on these values.
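For example (a sketch; with no argument, the name and ID of the current database are returned):

```sql
-- Name and ID of the current database
SELECT DB_NAME(), DB_ID() FROM iq_dummy
```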
A set of system functions provides information about properties of a currently running database, or of a
connection, on the database server. These system functions take the database name or ID, or the connection
name, as an optional argument to identify the database or connection for which the property is requested.
Performance
System functions are processed differently than other SAP IQ functions. When queries to SAP IQ tables include
system functions, performance is reduced.
In this section:
Related Information
Not all SAP Adaptive Server Enterprise system functions are implemented in SAP IQ.
Some of the system functions are implemented in SAP IQ as system stored procedures.
Function Status
col_length Implemented.
col_name Implemented.
index_col Implemented.
object_id Implemented.
object_name Implemented.
user_id Implemented.
user_name Implemented.
datalength Implemented.
Retrieve the value of a specific connection property or the values of all connection properties.
Retrieves the value of a connection property. The following statement returns the number of pages that
have been read from file by the current connection:
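Assuming the DiskRead property name, the statement might be:

```sql
SELECT CONNECTION_PROPERTY( 'DiskRead' )
```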
Retrieves the values of all connection properties. The following statement returns a separate row for each connection, for each property:
call sa_conn_properties
Related Information
Retrieve the value of a specific server property or the values of all server properties.
The Server Edition property returns the SAP SQL Anywhere edition, not the SAP IQ edition. To show SAP IQ license information, use the sp_iqlmconfig system procedure.
Retrieves the value of a server property. The following statement returns the number of cache pages being
used to hold the main heap:
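Assuming the MainHeapPages property name, the statement might be:

```sql
SELECT PROPERTY( 'MainHeapPages' )
```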
call sa_eng_properties
In this section:
Related Information
The database server can store the values of numeric database server properties so that you can track the
changes of numeric database server properties over time.
Tracking database server property values over time aids in evaluating the overall health of a database server.
For example, a brief increase in CPU usage may not be an issue, but if the CPU usage is at 100 percent for an
extended period of time, it could indicate that the hardware is insufficient.
Rather than having to poll, analyze, and store database server property values at regular intervals, configure
the database server to track database server properties that return numeric values. These values are stored in
memory for a period of time so that polling can be done at larger intervals, reducing the load on the database
server.
When database server property tracking is enabled, property values are tracked at fixed intervals and you can
query historic property values by using the sp_property_history system procedure.
In this section:
View the list of database server property values that can be tracked.
Context
You can find the PropNum of the database server property by running the sa_eng_properties system procedure
or by calling the PROPERTY_NUMBER function.
Procedure
Related Information
Configure your database server to track the values of numeric database server properties.
Context
Procedure
Option: Specify database server properties to be tracked for the database server
Action:
○ When starting the database server, use the -phl and -phs database server options to turn on history tracking for a list of specified database server properties and to specify the maximum amount of memory to use for tracking property history. For example, run the following command:
start_iq -n myserver -phl ProcessCPUSystem,ProcessCPUUser -phs 250K
○ If the database server is already running, then use the sa_server_option system procedure to configure property tracking for the database server. For example, execute the following statements:
CALL sa_server_option( 'PropertyHistoryList', 'ProcessCPUSystem,ProcessCPUUser' );
CALL sa_server_option( 'PropertyHistorySize', '250K' );

Option: Specify database server properties to be tracked for the database
Action: Use the sa_db_option system procedure to configure property tracking of database server properties for the database. For example, execute the following statement:
CALL sa_db_option( 'PropertyHistoryList', 'ProcessCPUSystem,ProcessCPUUser' );
Results
The values of the specified database server properties are tracked for either the specified amount of time or
until the specified maximum amount of memory has been reached.
Connection properties are available for each connection to a database. Connection property names are case
insensitive. Use the CONNECTION_PROPERTY system function or the sa_conn_properties system procedure
to retrieve connection properties.
Example
The following statement returns the number of pages that have been read from file by the current
connection.
Use the sa_conn_properties system procedure to retrieve the values of all connection properties:
CALL sa_conn_properties( );
Connection properties
allow_nulls_by_default Whether columns created without specifying either NULL or NOT NULL are allowed to contain NULL values. This property corresponds to the allow_nulls_by_default option for the connection.
allow_read_client_file Whether the database server allows the reading of files on a client computer. This property corresponds to the allow_read_client_file option for the connection.
allow_snapshot_isolation Whether snapshot isolation is enabled or disabled. This property corresponds to the allow_snapshot_isolation option for the connection.
allow_write_client_file Whether the database server allows the writing of files to a client computer. This property corresponds to the allow_write_client_file option for the connection.
ansi_blanks Indicates when character data is truncated at the client side. This property corresponds to the ansi_blanks option for the connection.
ansi_close_cursors_on_rollback Whether cursors opened WITH HOLD are closed when a ROLLBACK is performed. This property corresponds to the ansi_close_cursors_on_rollback option for the connection.
ansi_permissions Whether privileges are checked for DELETE and UPDATE statements. This property corresponds to the ansi_permissions option for the connection.
ansi_substring The behavior of the SUBSTRING (SUBSTR) function when negative values are provided for the start or length parameters. This property corresponds to the ansi_substring option for the connection.
ansi_update_constraints The range of updates that are permitted. This property corresponds to the ansi_update_constraints option for the connection.
ansinull How NULL values are interpreted. This property corresponds to the ansinull option.
AppInfo Information about the client that made the connection. For HTTP connections, this includes information about the browser. For connections using older versions of SAP Open Client or jConnect, the information may be incomplete.
ApproximateCPUTime The estimate of the amount of CPU time accumulated by a given connection, in seconds. The value returned may differ from the actual value by as much as 50%, although typical variations are in the 5-10% range. On multi-processor computers, each CPU (or hyperthread or core) accumulates time, so the sum of accumulated times for all connections may be greater than the elapsed time.
auditing Whether auditing is enabled for the database (On) or not (Off). This property corresponds to the auditing option.
Authenticated Whether the application sent a valid connection authentication string (Yes) or not (No).
AuthType The type of authentication used when connecting. The value returned is one of Standard, Integrated, Kerberos, LDAPUA, or an empty string. The value is an empty string when the connection is an internal connection or for connections for HTTP services that use AUTHORIZATION OFF.
auto_commit Whether the database server automatically commits after each statement. By default, the database server operates in manual commit mode. To turn on automatic commits, set the auto_commit database option (a server-side option). Do not confuse this option with the Interactive SQL option of the same name.
auto_commit_on_create_local_temp_index Whether the database server performs a COMMIT before an index is created on a local temporary table. This property corresponds to the value of the auto_commit_on_create_local_temp_index option.
background_priority This property is deprecated. The value of the background_priority option for the connection, which indicates how much impact the current connection has on the performance of other connections.
BlockedOn Whether the current connection is blocked or not (zero). When the connection is blocked because of a locking conflict, the value is the connection number on which the connection is blocked.
blocking The database server's behavior in response to locking conflicts. This property corresponds to the blocking option for the connection.
blocking_others_timeout The length of time that another connection can block on the current connection's row and table locks before the current connection is rolled back. This property corresponds to the value of the blocking_others_timeout option.
blocking_timeout The length of time, in milliseconds, a transaction waits to obtain a lock. This property corresponds to the blocking_timeout option.
BytesReceived The number of bytes received during client/server communications. This value is updated for HTTP and HTTPS connections.
BytesReceivedUncomp The number of bytes that would have been received during client/server communications if compression was disabled. This value is the same as the value for BytesReceived if compression is disabled.
BytesSent The number of bytes sent during client/server communications. This value is updated for HTTP and HTTPS connections.
BytesSentUncomp The number of bytes that would have been sent during client/server communications if compression was disabled. This value is the same as the value for BytesSent if compression is disabled.
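As an illustrative sketch, the compressed and uncompressed counters can be compared to estimate how much communication compression is saving for the current connection:

SELECT CONNECTION_PROPERTY( 'BytesReceived' ) AS received,
       CONNECTION_PROPERTY( 'BytesReceivedUncomp' ) AS received_uncompressed;

If compression is disabled, the two values are identical.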
CacheRead The number of database pages that have been looked up in the cache.
CacheReadIndInt The number of index internal-node pages that have been read from the cache.
CacheReadIndLeaf The number of index leaf pages that have been read from the cache.
CacheReadTable The number of table pages that have been read from the cache.
CarverHeapPages The number of heap pages used for short-term purposes such as query optimization.
chained The value of the chained option, which indicates the transaction mode used in the absence of a
BEGIN TRANSACTION statement.
CharSet The CHAR character set used by the connection. This property has extensions that you can
specify when querying the property value.
checkpoint_time The value of the checkpoint_time option, which indicates the maximum time, in minutes, that
the database server runs without doing a checkpoint.
cis_option The value is 7 if debugging information for remote data access appears in the database server messages window, and 0 if it does not. This property corresponds to the cis_option option.
cis_rowset_size The number of rows that are returned from remote servers for each fetch. This property corresponds to the value of the cis_rowset_size option.
ClientLibrary The connection library type. The value is jConnect for jConnect connections; CT_Library for SAP
Open Client connections; None for HTTP connections, and CmdSeq for ODBC, Embedded SQL,
OLE DB, ADO.NET, and SQL Anywhere JDBC driver connections.
ClientNodeAddress The node for the client in a client/server connection. When the client and server are both on the
same computer, an empty string is returned. This property is a synonym for the NodeAddress
property.
The value is NA if the request that is currently executing is part of an event handler.
ClientPort The client's TCP/IP port number or 0 if the connection isn't a TCP/IP connection.
ClientStmtCacheHits The number of prepares that were not required for this connection because of the client statement cache. This value is the number of additional prepares that would be required if client statement caching was disabled.
ClientStmtCacheMisses The number of statements in the client statement cache for this connection that were prepared again. This value is the number of times a cached statement was considered for reuse, but could not be reused because of a schema change, a database option setting, or a DROP VARIABLE statement.
close_on_endtrans Whether cursors are closed at the end of a transaction. This property corresponds to the
close_on_endtrans option.
collect_statistics_on_dml_updates Whether statistics are gathered during the execution of data-altering DML statements such as INSERT, DELETE, and UPDATE. This property corresponds to the collect_statistics_on_dml_updates option.
CommLink The communication link for the connection. The value is one of the supported network protocols, or local for a same-computer connection. The value is NA if the request that is currently executing is part of an event handler.
CommNetworkLink The communication link for the connection. The value returned is one of the supported network protocols. Values include SharedMemory and TCPIP. The value always includes the name of the link, regardless of whether it is same-computer or not. The value is NA if the request that is currently executing is part of an event handler.
CommProtocol The communication protocol. The value is TDS for SAP Open Client and jConnect connections,
HTTP for HTTP connections, HTTPS for HTTPS connections. The value is CmdSeq for ODBC,
Embedded SQL, OLE DB, ADO.NET, and SQL Anywhere JDBC driver connections.
Compression Whether communication compression is enabled on the connection. The value is NA if the request that is currently executing is part of an event handler.
conn_auditing Whether auditing is enabled or disabled for the connection when the auditing option is also set
to On. This property corresponds to the conn_auditing option.
ConnectedTime The total length of time, in seconds, that a connection has been connected.
connection_authentication The string used to authenticate the client. Authentication is required before the database can be modified. This property corresponds to the connection_authentication option.
connection_type The value of the connection_type database option: one of Event, Internal, Standard, or Monitor.
continue_after_raiserror Whether execution of a procedure or trigger is stopped whenever the RAISERROR statement is encountered. This property corresponds to the continue_after_raiserror option.
conversion_error Whether data type conversion failures are reported when fetching information from the database. This property corresponds to the value of the conversion_error option.
cooperative_commit_timeout This property is deprecated. The value of the cooperative_commit_timeout option, which is the time, in milliseconds, that the database server waits for other connections to fill a page of the log before writing to disk.
cooperative_commits This property is deprecated. The value of the cooperative_commits option, which is On or Off to
indicate when commits are written to disk.
CurrentLineNumber The current line number of the procedure or compound statement a connection is executing. The procedure can be identified using the CurrentProcedure property. If the line is part of a compound statement from the client, an empty string is returned.
CurrentProcedure The name of the procedure that a connection is currently executing. If the connection is executing nested procedure calls, the name is the name of the current procedure. If there is no procedure executing, an empty string is returned.
Cursor The number of declared cursors that are currently being maintained by the database server.
CursorOpen The number of open cursors that are currently being maintained by the database server.
database_authentication Indicates the string used to authenticate the database. Authentication is required for authenticated database servers before the database can be modified. This property corresponds to the database_authentication option.
date_format The value of the date_format option, which is a string indicating the format for dates retrieved
from the database.
date_order The value of the date_order option, which is a string indicating how dates are formatted.
db_publisher The user ID of the database publisher. This property corresponds to the db_publisher option.
debug_messages Whether MESSAGE statements that include a DEBUG ONLY clause are executed. This property corresponds to the debug_messages option.
dedicated_task Whether a request handling task is dedicated exclusively to handling requests for the connection. This property corresponds to the dedicated_task option.
default_dbspace The name of the default dbspace, or an empty string if the default dbspace has not been specified. This property corresponds to the default_dbspace option.
default_timestamp_increment The number of microseconds that is added to a column of type TIMESTAMP to keep values in the column unique. This property corresponds to the default_timestamp_increment option.
delayed_commit_timeout The time, in milliseconds, that the database server waits to return control to an application following a COMMIT. This property corresponds to the delayed_commit_timeout option.
delayed_commits Whether the database server returns control to an application following a COMMIT or not. This property corresponds to the delayed_commits option.
DiskRead The number of pages that have been read from disk.
DiskReadIndInt The number of index internal-node pages that have been read from disk.
DiskReadIndLeaf The number of index leaf pages that have been read from disk.
DiskReadTable The number of table pages that have been read from disk.
disk_sandbox Whether the read-write file operations of the database are restricted to the directory where the
main database file is located. This property corresponds to the disk_sandbox option.
DiskWaitRead The number of times the database server waited for an asynchronous read.
DiskWaitWrite The number of times the database server waited for an asynchronous write.
DiskWrite The number of modified pages that have been written to disk.
divide_by_zero_error Whether division by zero results in an error (On) or not (Off). This property corresponds to the divide_by_zero_error option.
escape_character This property is reserved for system use. Do not change the setting of this option.
EventName The name of the associated event if the connection is running an event handler. Otherwise, an
empty string is returned.
exclude_operators This property is reserved for system use. Do not change the setting of this option.
ExprCacheAbandons The number of times that the expression cache was abandoned because the hit rate was too low.
ExprCacheDropsToReadOnly The number of times that the expression cache dropped to read-only status because the hit rate was low.
ExprCacheResumesOfReadWrite The number of times that the expression cache resumed read-write status because the hit rate increased.
ExprCacheStarts The number of times that the expression cache was started.
extern_login_credentials Whether remote connections are attempted using the logged-in user's external login credentials or the effective user's external login credentials. This property corresponds to the extern_login_credentials option.
extended_join_syntax Whether queries with duplicate correlation name syntax for multi-table joins are allowed, or whether they are reported as errors. This property corresponds to the extended_join_syntax option.
fire_triggers Whether triggers are fired in the database. This property corresponds to the fire_triggers option.
first_day_of_week The number that is used for the first day of the week, where 7=Sunday and 1=Monday. This property corresponds to the first_day_of_week option.
for_xml_null_treatment The value is Omit if elements and attributes that contain NULL values are omitted from the result. The value is Empty if empty elements or attributes are generated for NULL values when the FOR XML clause is used in a query. This property corresponds to the for_xml_null_treatment option.
force_view_creation This property is reserved for system use. Do not change the setting of this option.
FullCompare The number of comparisons that have been performed beyond the hash value in an index.
global_database_id The starting value used for columns created with DEFAULT GLOBAL AUTOINCREMENT. This
property corresponds to the global_database_id option.
HashForcedPartitions The number of times that a hash operator was forced to partition because of competition for
memory.
HasSecuredFeature Whether at least one feature of the feature set is secured (Yes) or not (No). This property has
extensions that you can specify when querying the property value.
HeapsCarver The number of heaps used for short-term purposes such as query optimization.
HeapsQuery The number of heaps used for query processing (hash and sort operations).
http_connection_pool_basesize The nominal threshold size of database connections. This property corresponds to the http_connection_pool_basesize option.
http_connection_pool_timeout The maximum length of time that unused connections are stored in the connection pool. This property corresponds to the http_connection_pool_timeout option.
http_session_timeout The current HTTP session timeout, in minutes. This property corresponds to the http_session_timeout option.
HttpServiceName The service name entry point for the current HTTP request. This property is useful for error reporting and flow control. An empty string is returned when this property is selected from a stored procedure that did not originate from an HTTP request or if the connection is currently inactive or waiting to continue an HTTP session.
IdleTimeout The idle timeout value of the connection. The value is NA if the request that is currently executing is part of an event handler.
integrated_server_name The name of the Domain Controller server used for looking up Windows user group membership
for integrated logins. This property corresponds to the integrated_server_name option.
isolation_level The isolation level of the connection. This property corresponds to the isolation_level option.
java_class_path The list of additional directories or JAR files that are searched for classes. This property corresponds to the java_class_path option.
java_location The path of the Java VM for the database if one has been specified. This property corresponds to
the java_location option.
java_vm_options The command line options that the database server uses when it launches the Java VM. This
property corresponds to the java_vm_options option.
LastCommitRedoPos The redo log position after the last COMMIT operation was written to the transaction log by the
connection.
LastPlanText The long text plan of the last query executed on the connection. You control the remembering of the last plan by setting the RememberLastPlan option of the sa_server_option system procedure, or using the -zp server option.
LastReqTime The time at which the last request for the specified connection started, in the timezone of the
database. This property can return an empty string for internal connections, such as events. If
the database has the time_zone option set, then the value is returned using the database's time
zone.
LastStatement The most recently prepared SQL statement for the current connection. The LastStatement value is set when a statement is prepared, and is cleared when a statement is dropped. Only one statement string is remembered for each connection. When client statement caching is enabled and a cached statement is reused, the value is an empty string. If sa_conn_activity reports a non-empty value for a connection, it is most likely the statement that the connection is currently executing. If the statement had completed, it would likely have been dropped and the property value would have been cleared. If an application prepares multiple statements and retains their statement handles, then the LastStatement value does not reflect what a connection is currently doing.
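For example, the most recently prepared statement for each connection can be listed with the sa_conn_activity system procedure:

CALL sa_conn_activity( );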
LivenessTimeout The liveness timeout period for the current connection. The value is NA if the request that is currently executing is part of an event handler.
lock_rejected_rows This property is reserved for system use. Do not change the setting of this option.
LockObjectOID The value is zero if the connection isn't blocked on a table, mutex, or a semaphore, or if the connection is on a different database than the connection calling CONNECTION_PROPERTY. Otherwise, LockObjectOID is the object ID of the table, permanent mutex, or permanent semaphore that the connection is blocked on. A negative value indicates the ID of a temporary mutex or semaphore. LockObjectOID can be used to look up information about temporary mutexes and semaphores using the sp_list_mutexes_semaphores system procedure. If the object is a table, LockObjectOID can be used to look up table information using the SYSTAB system view.
LockObjectType The ID for the type of object the connection is blocked on. Use the ID to look up the object type in
the SYSOBJECT view. Can be one of 'TABLE' or 'MUTEX SEMAPHORE'.
LockName The 64-bit unsigned integer value representing the lock for which a connection is waiting.
LockTableOID Zero if the connection isn't blocked, or isn't blocked on a table, or if the connection is on a different database than the connection calling CONNECTION_PROPERTY. Otherwise, this is the object ID of the table for the lock on which this connection is waiting. The object ID can be used to look up table information using the SYSTAB system view.
log_deadlocks Whether deadlock information is recorded (On) or not (Off). This property corresponds to the
log_deadlocks option.
LogFreeCommit The number of redo free commits. A redo free commit occurs when a commit of the transaction
log is requested but the log has already been written (so the commit was done for free.)
login_mode The type of login that is supported. This property corresponds to the login_mode option.
login_procedure The name of the stored procedure used to set compatibility options at startup. This property
corresponds to the login_procedure option.
LoginTime The date and time the connection was established. If the database has the time_zone option set,
then the value is returned using the database's time zone.
LogWrite The number of pages that have been written to the transaction log.
materialized_view_optimization The value of the materialized_view_optimization option for the connection, which indicates whether materialized views are used during query optimization.
max_connections The value of the max_connections option, which indicates the number of concurrent connections allowed to the database.
max_client_statements_cached The value of the max_client_statements_cached option, which indicates the number of statements cached by the client.
max_cursor_count The maximum number of cursors that a connection can use at once. This property corresponds
to the max_cursor_count option.
max_plans_cached The maximum number of execution plans to be stored in a cache. This property corresponds to
the max_plans_cached option.
max_priority The maximum priority level a connection can have. This property corresponds to the max_priority option for the connection.
max_query_tasks The maximum number of requests that the database server can use to process a query. This
property corresponds to the max_query_tasks option.
max_recursive_iterations The maximum number of iterations a recursive common table expression can make. This property corresponds to the max_recursive_iterations option.
max_statement_count The maximum number of prepared statements that a connection can use simultaneously. This property corresponds to the max_statement_count option.
max_temp_space The maximum amount of temporary file space available for a connection. This property corresponds to the max_temp_space option for the connection.
MessageReceived The string that was generated by the MESSAGE statement that caused the WAITFOR statement
to be interrupted. Otherwise, an empty string is returned.
min_password_length The minimum length for new passwords in the database. This property corresponds to the min_password_length option.
min_role_admins The minimum number of administrators required for a role. This property corresponds to the
min_role_admins option.
Name The name of the current connection. You can specify a connection name using the Connection
Name (CON) connection parameter.
NcharCharSet The NCHAR character set used by the connection. This property has extensions that you can
specify when querying the property value.
nearest_century The value of the nearest_century option, which indicates how two-digit years are interpreted in
string-to-date conversions.
NodeAddress The node for the client in a client/server connection. When the client and server are both on the
same computer, an empty string is returned.
non_keywords The value of the non_keywords option, which is a list of keywords, if any, that are turned off so
they can be used as identifiers.
odbc_describe_binary_as_varbinary The value is Off if the SAP IQ ODBC driver describes both BINARY and VARBINARY columns as SQL_BINARY. The value is On if the ODBC driver describes BINARY and VARBINARY columns as SQL_VARBINARY. This property corresponds to the odbc_describe_binary_as_varbinary option.
odbc_distinguish_char_and_varchar Whether CHAR columns are described as SQL_CHAR (On) or they are described as SQL_VARCHAR (Off). This property corresponds to the odbc_distinguish_char_and_varchar option.
oem_string The string stored in the header page of the database file. This property corresponds to the
oem_string option.
on_charset_conversion_failure The behavior when an error is encountered during character set conversion. This property corresponds to the on_charset_conversion_failure option.
on_tsql_error The behavior when an error is encountered while executing a stored procedure or T-SQL batch.
This property corresponds to the on_tsql_error option.
optimization_goal How query processing is optimized. This property corresponds to the optimization_goal option.
optimization_level The value of the optimization_level option, which is a value between 0 and 15. This number is
used to control the level of effort made by the SAP IQ query optimizer to find an access plan for a
SQL statement.
optimization_workload The level of effort made by the SAP IQ query optimizer to find an access plan for a SQL statement. This property corresponds to the optimization_workload option for the connection.
OSUser The operating system user name associated with the client process. If the client process is impersonating another user (or the set ID bit is set on Unix), the impersonated user name is returned. An empty string is returned for version 10.0.1 and earlier clients, and for HTTP and TDS clients.
PacketSize The packet size used by the connection, in bytes. The value is NA if the request that is currently
executing is part of an event handler. This property corresponds to the CommBufferSize
(CBSIZE) connection parameter.
PacketsReceived The number of client/server communication packets received. This value is not updated for
HTTP or HTTPS connections.
PacketsReceivedUncomp The number of packets that would have been received during client/server communications if compression was disabled. (This value is the same as the value for PacketsReceived if compression is disabled.)
PacketsSent The number of client/server communication packets sent. This value is not updated for HTTP or
HTTPS connections.
PacketsSentUncomp The number of packets that would have been sent during client/server communications if compression was disabled. (This value is the same as the value for PacketsSent if compression is disabled.)
parameterization_level The value of the parameterization_level option for the connection, which indicates the statement
parameterization behavior.
ParameterizationPrepareCount The number of prepares for statements that have been automatically parameterized.
ParentConnection The connection ID of the connection that created a temporary connection to perform a database
operation (such as performing a backup or creating a database). For other types of connections,
the value is NULL.
pinned_cursor_percent_of_cache The value of the pinned_cursor_percent_of_cache option, which indicates the percentage of the cache that can be used for pinning cursors.
post_login_procedure The name of the procedure whose result set contains messages that should be displayed by applications when a user connects. This property corresponds to the post_login_procedure option.
precision The value of the precision option, which indicates the decimal and numeric precision setting.
prefetch The value of the prefetch option. The value is Off if no prefetching is done. The value is Conditional if prefetching occurs unless the cursor type is SENSITIVE or the query includes a proxy table. The value is Always if prefetching is done even for SENSITIVE cursor types and cursors that involve a proxy table.
PrepStmt The number of prepared statements currently being maintained by the database server for this
connection.
preserve_source_format Whether the original source definition of procedures, triggers, views, and event handlers is saved in system tables (On) or not (Off). This property corresponds to the preserve_source_format option.
prevent_article_pkey_update Whether updates are not allowed to the primary key columns of tables involved in publications (On) or not (Off). This property corresponds to the prevent_article_pkey_update option.
priority The value of the priority option for the connection, which indicates the priority level of a connection.
Progress Information about how long a query has been running. This property has extensions that you can
specify when querying the property value.
QueryBypassedCosted The number of requests processed by the optimizer bypass using costing.
QueryBypassedHeuristic The number of requests processed by the optimizer bypass using heuristics.
QueryBypassedOptimized The number of requests initially processed by the optimizer bypass and subsequently fully optimized by the optimizer.
QueryCachedPlans The number of query execution plans currently cached for the connection.
QueryHeapPages The number of cache pages used for query processing (hash and sort operations).
QueryJHToJNLOptUsed The number of times a hash join was converted to a nested loops join.
QueryLowMemoryStrategy The number of times the server changed its execution plan during execution as a result of low memory conditions. The strategy can change because less memory is currently available than the optimizer estimated, or because the execution plan required more memory than the optimizer estimated.
QueryMemGrantFailed The total number of times a request waited for query memory, but failed to get it.
QueryMemGrantRequested The total number of times any request attempted to acquire query memory.
QueryMemGrantWaited The total number of times any request waited for query memory.
QueryReused The number of requests that have been reused from the plan cache.
QueryRowsFetched The number of rows that have been read from base tables, either by a sequential scan or an index scan, for this connection.
QueryRowsMaterialized The number of rows that are written to work tables during query processing.
quoted_identifier Whether strings enclosed in double quotes are interpreted as identifiers (On), or if they are interpreted as literal strings (Off). This property corresponds to the quoted_identifier option.
read_past_deleted Whether sequential scans at isolation levels 1 and 2 skip uncommitted deleted rows (On), or sequential scans block on uncommitted deleted rows at isolation levels 1 and 2 (Off). This property corresponds to the read_past_deleted option.
recovery_time The maximum length of time, in minutes, that the database server will take to recover from system failure. This property corresponds to the recovery_time option.
RecursiveIterationsHash The number of times recursive hash join used a hash strategy.
RecursiveIterationsNested The number of times recursive hash join used a nested loops strategy.
RecursiveJNLMisses The number of index probe cache misses for recursive hash join.
RecursiveJNLProbes The number of times recursive hash join attempted an index probe.
remote_idle_timeout The time, in seconds, of inactivity that web service client procedures and functions will tolerate.
This property corresponds to the remote_idle_timeout option.
ReqCountActive The number of requests processed, or NULL if the RequestTiming server property is set to Off.
ReqCountBlockContention The number of times the connection waited for atomic access, or NULL if the -zt option was not specified.
ReqCountBlockIO The number of times the connection waited for I/O to complete, or NULL if the -zt option was not specified.
ReqCountBlockLock The number of times the connection waited for a lock, or NULL if the -zt option was not specified.
ReqCountUnscheduled The number of times the connection waited for scheduling, or NULL if the -zt option was not specified.
ReqStatus The status of the request. The value is Idle when the connection is not currently processing a request. The value is Unscheduled* when the connection has work to do and is waiting for an available database server worker. The value is BlockedIO* when the connection is blocked waiting for an I/O. The value is BlockedContention* when the connection is blocked waiting for access to shared database server data structures. The value is BlockedLock when the connection is blocked waiting for a locked object. The value is Executing when the connection is executing a request. The values marked with an asterisk (*) are only returned when logging of request timing information has been turned on for the database server using the -zt server option. If request timing information is not being logged (the default), the values are reported as Executing.
ReqTimeActive The amount of time, in seconds, spent processing requests, or NULL if the -zt option was not
specified.
ReqTimeBlockContention The amount of time, in seconds, spent waiting for atomic access, or NULL if the RequestTiming server property is set to Off.
ReqTimeBlockIO The amount of time, in seconds, spent waiting for I/O to complete, or NULL if the -zt option was not specified.
ReqTimeBlockLock The amount of time, in seconds, spent waiting for a lock, or NULL if the -zt option was not specified.
ReqTimeUnscheduled The amount of unscheduled time, or NULL if the -zt option was not specified.
ReqType The type of the last request. If a connection has been cached by connection pooling, its ReqType
value is CONNECT_POOL_CACHE.
request_timeout The value of the request_timeout option, which indicates the maximum time a single request can
run.
RequestsReceived The number of client/server communication requests or round trips. It is different from PacketsReceived in that multi-packet requests count as one request, and liveness packets are not included.
reserved_connections The number of connections that are reserved for standard connections. This property corresponds to the reserved_connections option.
reserved_keywords The value of the reserved_keywords option, which specifies a list of non-default reserved keywords that are enabled for the database.
return_date_time_as_string Whether DATE, TIME, and TIMESTAMP values are returned to applications as a string (On), or they are returned as a DATE, TIME, or TIMESTAMP data type (Off). This property corresponds to the return_date_time_as_string option.
rollback_on_deadlock Whether a transaction is automatically rolled back if it encounters a deadlock (On) or not (Off). This property corresponds to the rollback_on_deadlock option.
row_counts Whether the row count is always accurate (On), or the row count is usually an estimate (Off). This property corresponds to the row_counts option.
scale The decimal and numeric scale for the connection. This property corresponds to the scale option.
secure_feature_key This property is deprecated. The value of the secure_feature_key option, which stores the key that is used to enable and disable features for a database server. Selecting the value of this property always returns an empty string.
ServerNodeAddress The node for the server in a client/server connection. When the client and server are both on the same computer, an empty string is returned. The value is NA if the request that is currently executing is part of an event handler.
SessionCreateTime The time the HTTP session was created. If the database has the time_zone option set, then the
value is returned using the database's time zone.
SessionID The session ID for the connection if it exists, otherwise, an empty string.
SessionLastTime The time of the last request for the HTTP session. If the database has the time_zone option set,
then the value is returned using the database's time zone.
SessionTimeout The time, in minutes, the HTTP session persists during inactivity.
sort_collation The value of the sort_collation option. The value is Internal if the ORDER BY clause remains unchanged; otherwise, the value is the collation name or the collation ID.
sql_flagger_error_level The value of the sql_flagger_error_level option, which controls the response to any SQL that is not part of the specified standard. This property corresponds to the sql_flagger_error_level option.
sql_flagger_warning_level The value of the sql_flagger_warning_level option. This property corresponds to the sql_flagger_warning_level option.
st_geometry_asbinary_format How spatial values are converted from a geometry to a binary format. This property corresponds to the st_geometry_asbinary_format option.
st_geometry_astext_format How spatial values are converted from a geometry to text. This property corresponds to the st_geometry_astext_format option.
st_geometry_asxml_format How spatial values are converted from a geometry to XML. This property corresponds to the st_geometry_asxml_format option.
st_geometry_describe_type How spatial values are described. This property corresponds to the st_geometry_describe_type option.
st_geometry_interpolation The interpolation setting for ST_CircularString geometries. This property corresponds to the st_geometry_interpolation option.
st_geometry_on_invalid The behavior when a geometry fails surface validation. This property corresponds to the st_geometry_on_invalid option.
StatementPostAnnotates The number of statements processed by the semantic query transformation phase.
StatementPostAnnotatesSimple The number of statements processed by the semantic query transformation phase, but that skipped some of the semantic transformations.
StatementPostAnnotatesSkipped The number of statements that have completely skipped the semantic query transformation phase.
string_rtruncation Whether an error is raised when a string is truncated (On), or no error is raised (Off). This property corresponds to the string_rtruncation option.
subsume_row_locks Whether the database server acquires individual row locks for a table (On), or not (Off). This property corresponds to the subsume_row_locks option.
suppress_tds_debugging Whether TDS debugging information appears in the database server messages window (Off), or the debugging information does not appear in the database server messages window (On). This property corresponds to the suppress_tds_debugging option.
synchronize_mirror_on_commit Whether the database mirror server is synchronized on commit (On) or not (Off). This property corresponds to the synchronize_mirror_on_commit option.
tds_empty_string_is_null Whether empty strings are returned as NULL for TDS connections (On), or a string containing one blank character is returned for TDS connections (Off). This property corresponds to the tds_empty_string_is_null option.
temp_space_limit_check Whether the database server checks the amount of temporary space available for a connection
(On), or the database server does not check the amount of space available for a connection
(Off). This property corresponds to the temp_space_limit_check option.
TempTablePages The number of pages in the temporary file used for temporary tables.
time_format The string format used for times retrieved from the database. This property corresponds to the
time_format option.
time_zone The time zone that the database uses for time zone calculations. This property corresponds to
the time_zone option.
time_zone_adjustment The number of minutes that must be added to the Coordinated Universal Time (UTC) to display
time local to the connection. This property corresponds to the time_zone_adjustment option.
timestamp_format The format for timestamps that are retrieved from the database. This property corresponds to
the timestamp_format option.
timestamp_with_time_zone_format The format for TIMESTAMP WITH TIME ZONE values retrieved from the database. This property corresponds to the timestamp_with_time_zone_format option.
TimeZoneAdjustment The number of minutes that must be added to the Coordinated Universal Time (UTC) to display time local to the connection.
TransactionStartTime The value is a string containing the time the database was first modified after a COMMIT or ROLLBACK, or an empty string if no modifications have been made to the database since the last COMMIT or ROLLBACK. If the database has the time_zone option set, then the value is returned using the database's time zone.
truncate_timestamp_values Whether the number of decimal places used in the TIMESTAMP values is limited (On) or not (Off). This property corresponds to the truncate_timestamp_values option.
trusted_certificates_file The file that contains the list of trusted Certificate Authority certificates when the database server acts as a client to an LDAP server. This property corresponds to the trusted_certificates_file option.
tsql_outer_joins Whether Transact-SQL outer joins can be used in DML statements (On) or not (Off). This property corresponds to the value of the tsql_outer_joins option.
tsql_variables Whether you can use the @ sign instead of the colon as a prefix for host variable names in Embedded SQL (On) or not (Off). This property corresponds to the value of the tsql_variables option.
updatable_statement_isolation The isolation level (0, 1, 2, or 3) used by updatable statements when the isolation_level option is set to Readonly-statement-snapshot. This property corresponds to the updatable_statement_isolation option.
update_statistics Whether the connection can send query feedback to the statistics governor (On) or the statistics governor does not receive query feedback from the current connection (Off). This property corresponds to the update_statistics option.
upgrade_database_capability This property is reserved for system use. Do not change the setting of this option.
user_estimates The value that controls whether selectivity estimates in query predicates are respected or ignored by the query optimizer. This property corresponds to the user_estimates option.
UserAppInfo The string specified by the AppInfo connection parameter in a connection string.
UserDefinedCounterRate01 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRate02 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRate03 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRate04 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRate05 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRaw01 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRaw02 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRaw03 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRaw04 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UserDefinedCounterRaw05 The current value of the user-defined performance counter. The semantics of this property are defined by the client application.
UtilCmdsPermitted Whether SQL statements such as CREATE DATABASE, DROP DATABASE, and RESTORE DATABASE are permitted for the connection or not. The value is an empty string if the specified connection ID is not for the current connection.
uuid_has_hyphens The format of unique identifier values when they are converted to strings. When the option is set to On, the resulting strings contain four hyphens. This property corresponds to the uuid_has_hyphens option.
verify_password_function The name of the function used for password verification if one has been specified. This property corresponds to the verify_password_function option.
wait_for_commit Whether the database does not check foreign key integrity until the next COMMIT statement (On), or all foreign keys that are not created with the CHECK ON COMMIT clause are checked as they are inserted, updated, or deleted (Off). This property corresponds to the wait_for_commit option.
WaitStartTime The time at which the connection started waiting (or an empty string if the connection is not waiting). If the database has the time_zone option set, then the value is returned using the database's time zone.
WaitType The reason for the wait, if it is available. The value is lock when the connection is waiting on a lock. The value is waitfor when the connection is executing a WAITFOR statement. The value is an empty string when the connection is not waiting, or when the reason for the wait is not available.
webservice_namespace_host The hostname to be used as the XML namespace within generated WSDL documents if one has been specified. This property corresponds to the webservice_namespace_host option.
webservice_sessionid_name The session identifier name that is used by the web server to determine whether session management is being used. This property corresponds to the webservice_sessionid_name option.
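Each connection property in the table above can also be read programmatically. The following is a minimal sketch, assuming the standard CONNECTION_PROPERTY system function, which returns a property value for the current connection when no connection ID is supplied:

```sql
-- Inspect the current connection's request status and lock-wait counter.
-- ReqCountBlockLock returns NULL unless the server was started with -zt.
SELECT CONNECTION_PROPERTY ( 'ReqStatus' )         AS req_status,
       CONNECTION_PROPERTY ( 'ReqCountBlockLock' ) AS lock_waits;
```

To list every property for every connection instead, use the sa_conn_properties system procedure.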
You can retrieve the value of a specific database property or the values of all database properties. Database properties apply to an entire database.
Use the DB_PROPERTY system function to retrieve the value of a database property. The following statement returns the page size of the current database:
SELECT DB_PROPERTY ( 'PageSize' );
Use the sa_db_properties system procedure to retrieve the values of all database properties:
call sa_db_properties
In this section:
Related Information
Database server properties apply to the database server as a whole. Use the PROPERTY system function to retrieve the value for an individual property and use the sa_eng_properties system procedure to retrieve the values of all database server properties.
Example
The following statement returns the number of cache pages used for global server data structures:
SELECT PROPERTY ( 'MainHeapPages' );
Use the sa_eng_properties system procedure to retrieve the values of all database server properties:
CALL sa_eng_properties;
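Individual server properties can also be combined in a single query. As a sketch, the property names used here (CurrentMultiProgrammingLevel and MaxMultiProgrammingLevel) are taken from the table that follows:

```sql
-- Compare the current multiprogramming level against its configured maximum.
SELECT PROPERTY ( 'CurrentMultiProgrammingLevel' ) AS current_level,
       PROPERTY ( 'MaxMultiProgrammingLevel' )     AS max_level;
```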
ActiveReq The number of server workers that are currently handling client-side requests.
ApproximateCPUTime An estimate of the amount of CPU time accumulated by the database server, in seconds. The value may differ from the actual value by as much as 50%, although typical variations are in the 5-10% range. On multi-processor computers, each CPU (or hyperthread or core) accumulates time, so the sum of accumulated times for all connections may be greater than the elapsed time.
AutoMultiProgrammingLevel Whether the database server is automatically adjusting its multiprogramming level.
AutoMultiProgrammingLevelStatistics Whether messages about automatic adjustments to the database server's multiprogramming level are displayed in the database server message log.
BuildClient Reserved for system use. Do not change the setting of this property.
BuildProduction Whether the database server is compiled for production use (Yes) or whether the database server is a debug build (No).
BytesReceived The number of bytes received during client/server communications. This value is updated for
HTTP and HTTPS connections.
BytesReceivedUncomp The number of bytes that would have been received during client/server communications if compression was disabled. (This value is the same as the value for BytesReceived if compression is disabled.)
BytesSent The number of bytes sent during client/server communications. This value is updated for HTTP and HTTPS connections.
BytesSentUncomp The number of bytes that would have been sent during client/server communications if compression was disabled. (This value is the same as the value for BytesSent if compression is disabled.)
CacheAllocated The number of cache pages that have been allocated for server data structures.
CacheFile The number of cache pages used to hold data from database files.
CacheFileDirty The number of cache pages that are dirty (needing a write).
CachePanics The number of times the cache manager has failed to find a page to allocate.
CacheReplacements The number of pages in the cache that have been replaced.
CacheScavenges The number of times the cache manager has scavenged for a page to allocate.
CacheScavengeVisited The number of pages visited while scavenging for a page to allocate.
CacheSizingStatistics Whether the database server is displaying cache sizing statistics when the cache is resized.
CarverHeapPages The number of heap pages used for short-term purposes, such as query optimization.
ClientStmtCacheHits The number of prepares that were not required because of the client statement cache. This is the number of additional prepares that would be required if client statement caching was disabled.
ClientStmtCacheMisses The number of statements in the client statement cache that were prepared again. This is the number of times a cached statement was considered for reuse, but could not be reused because of a schema change, a database option setting, or a DROP VARIABLE statement.
CockpitDB The set of options currently being used by the database server, as well as the Temp parameter.
When Temp is set to yes, the Cockpit database is a temporary database.
CommandLine The command line arguments that were used to start the database server.
If the encryption key for a database was specified using the -ek option, the key is replaced with
a constant string of asterisks in the value for this property.
ConnCount The number of connections to the database server. This property value does not include connections used for internal operations, but does include connections used for events and external environment support.
ConnectedTime The total length of time, in seconds, that all connections have been connected to the database server.
The value is updated only when a request completes for a connection or when a connection disconnects. As a result, the value can lag behind for connections that are idle or busy executing for a long time in the database server. The value includes time accrued by any connection, including database events and background server connections (such as the database cleaner).
ConsoleLogFile The name of the file where database server messages are logged if the -o option was specified. If the -o option was not specified, the value is an empty string.
ConsoleLogMaxSize The maximum size in bytes of the file used to log database server messages.
CurrentMirrorBackgroundWorkers The number of workers currently being used for database mirroring background tasks. These workers are separate from those controlled by the multiprogramming level.
CurrentMultiProgrammingLevel The current number of tasks that the database server can process concurrently.
CurrRead The current number of file reads that were issued by the database server, but that have not
completed yet.
CurrWrite The current number of file writes that were issued by the database server, but that have not
completed yet.
Cursor The number of declared cursors that are currently being maintained by the database server.
CursorOpen The number of open cursors that are currently being maintained by the database server.
DefaultCollation The collation that would be used for new databases if none is explicitly specified.
DefaultNcharCollation The name of the default NCHAR collation on the server computer (UCA if ICU is installed, and
UTF8BIN otherwise).
DiskReadHintScatterLimit The imposed limit on the size (in bytes) of a scatter read hint.
DiskSandbox Whether the read-write file operations of the database are restricted to the directory where
the main database file is located (On) or not (Off).
DiskWrite The number of modified pages that have been written to disk.
EventTypeDesc The description of the event type associated with a given event type ID.
EventTypeName The system event type name associated with a given event type ID.
ExchangeTasks The number of tasks currently being used for parallel execution of queries.
ExchangeTasksCompleted The total number of internal tasks that have been used for intra-query parallelism since the
database server started.
FipsMode Whether the -fips option was specified when the database server was started.
FirstOption The number that represents the first connection property that corresponds to a database option.
FunctionMaxParms The maximum number of parameters that can be specified by a function. The function is identified by the value specified by the <function-number>, which is a positive integer. For example:
FunctionMinParms The minimum number of parameters that must be specified by a function. The function is identified by the value specified by the <function-number>, which is a positive integer. For example:
FunctionName The name of the function identified by the value specified by the <function-number> (which is a positive integer):
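The examples for these three properties did not survive extraction. Assuming the two-argument form of the PROPERTY function described elsewhere in this table (see Message,linenumber), where the second argument supplies the <function-number>, a query would look like the following; the function number 1 is only a placeholder:

```sql
-- 1 is a hypothetical <function-number>; substitute the number of the
-- function you want to inspect.
SELECT PROPERTY ( 'FunctionName', 1 )     AS func_name,
       PROPERTY ( 'FunctionMinParms', 1 ) AS min_parms,
       PROPERTY ( 'FunctionMaxParms', 1 ) AS max_parms;
```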
HasSecuredFeature Whether at least one feature of the all feature set is secured at the global server level. This
property has extensions that you can specify when querying the property value.
HasSecureFeatureKey Whether the database server has at least one secure feature key. This property has extensions
that you can specify when querying the property value.
HeapsCarver The number of heaps used for short-term purposes such as query optimization.
HeapsQuery The number of heaps used for query processing (hash and sort operations).
HttpAddresses A semicolon-delimited list of the IP addresses that the database server is listening to for HTTP
connections from clients. For example:
(::1):80;127.0.0.1:80
HttpListeners A semicolon-delimited list of <IP address>:<port> pairs that the database server is using
to listen for HTTP connections.
HttpNumActiveReq The number of HTTP connections that are actively processing an HTTP request. An HTTP connection that has sent its response is not included.
HttpNumConnections The number of HTTP connections that are currently open within the database server. They may be actively processing a request or waiting in a queue of long lived (keep-alive) connections.
HttpNumSessions The number of active and dormant HTTP sessions within the database server.
HttpPorts The HTTP port numbers for the web server as a comma-delimited list.
HttpQueueCount The total number of connections that have been queued since the database server started.
HttpQueueMaxCount The maximum number of connections that have been in the queue at one time.
HttpQueueTimedOut The total number of connections that have timed out after sitting in the queue.
HttpsAddresses A semicolon-delimited list of the IP addresses that the server is listening to for HTTPS connections from clients. For example:
(::1):443;127.0.0.1:443
HttpsListeners A semicolon-delimited list of <IP address>:<port> pairs that the database server is using to listen for HTTPS connections.
HttpsNumActiveReq The number of secure HTTPS connections that are actively processing an HTTPS request. An HTTPS connection that has sent its response is not included.
HttpsNumConnections The number of HTTPS connections that are currently open within the database server. They may be actively processing a request or waiting in a queue of long lived (keep-alive) connections.
HttpsPorts The HTTPS port numbers for the web server as a comma-delimited list.
IPAddressMonitorPeriod The interval, in seconds, at which the database server checks for new IP addresses.
IsAesniAvailable Whether the database server computer's CPU supports the Intel AES-NI instruction set and
the computer is running a supported operating system.
IsNetworkServer Whether the connection is to a network database server (Yes), or to a personal database
server (No).
IsPortableDevice Whether the database server is running on a laptop, notebook, or other portable device.
VMWare is not taken into account, so the value is No for a database server running on a VM
that is running on a laptop.
On Windows, if it is not possible to determine whether the device is portable, the value is
NULL.
JavaVM An empty string if the database server uses one Java VM per database. If the database server
uses one Java VM for all databases, this property indicates the path to the JAVA executable.
LastOption The number that represents the last connection property that corresponds to a database option.
LockedCursorPages The number of pages used to keep cursor heaps pinned in memory.
MachineName The name of the computer running a database server. Typically, this is the computer's host
name.
MainHeapBytes The number of bytes used for global server data structures.
MainHeapPages The number of pages used for global server data structures.
MapPhysicalMemoryEng The number of database page address space windows mapped to physical memory in the
cache using Address Windowing Extensions.
MaxConnections The maximum number of concurrent connections that the database server allows. For network database servers, the default depends upon your license. The default value can be lowered using the -gm server option.
MaxMessage This property is deprecated. The current maximum line number that can be retrieved from the
database server messages window. This represents the most recent message displayed in the
database server messages window.
MaxMirrorBackgroundWorkers The highest number of workers used for database mirroring background tasks since the server started. These workers are separate from those controlled by the multiprogramming level.
MaxMultiProgrammingLevel The maximum number of tasks that the database server can process concurrently. When AutoMultiProgrammingLevel is enabled, the server may increase the multiprogramming level up to this value.
Message,linenumber A line from the database server messages window, prefixed by the date and time the message appeared. The second parameter specifies the line number. This property is deprecated.
The value for PROPERTY( "message" ) is the first line of output that was written to the database server messages window. Calling PROPERTY( "message", n ) returns the nth line of server output (with zero being the first line). The buffer is finite, so as messages are generated, the first lines are dropped and may no longer be available in memory. In this case, the value is NULL.
MessageCategoryLimit The minimum number of messages of each severity and category that can be retrieved using
the sa_server_messages system procedure. The default value is 400.
MessageText,linenumber This property is deprecated. The text associated with the specified line number in the database server messages window, without a date and time prefix. The second parameter specifies the line number.
MessageTime,linenumber This property is deprecated. The date and time associated with the specified line number in the database server messages window. The second parameter specifies the line number.
MessageWindowSize This property is deprecated. The maximum number of lines that can be retrieved from the database server messages window.
MinMultiProgrammingLevel The minimum number of tasks that the server can process concurrently. When AutoMultiProgrammingLevel is enabled, the server may decrease the multiprogramming level down to this value.
MultiProgrammingLevel The current maximum number of concurrent tasks the server will process simultaneously. Requests are queued if there are more concurrent tasks than this value. This can be changed with the -gn server option.
Name The alternate name of the server used to connect to the database if one was specified; otherwise, the real server name.
If the client is connected to a copy node and specified NodeType=COPY in the connection string, then the value of this property may be different than the database server name specified in the client connection string by the ServerName connection parameter.
NativeProcessorArchitecture A string that identifies the native processor type on which the software is running. For platforms where a processor can be emulated (such as x86 on x64), the actual processor type - not the OS architecture type - is returned.
The value does not indicate whether the operating system is 32-bit or 64-bit.
X86 represents a 32-bit hardware architecture. X86_64 represents a 64-bit hardware architecture.
NumLogicalProcessors The number of logical processors (including cores and hyperthreads) enabled on the server
computer.
NumLogicalProcessorsUsed The number of logical processors the database server will use. On Windows, use the -gtc option to change the number of logical processors used.
NumPhysicalProcessors The number of physical processors enabled on the server computer. This value is NumLogicalProcessors divided by the number of cores or hyperthreads per physical processor. On some non-Windows platforms, cores or hyperthreads may be counted as physical processors.
NumPhysicalProcessorsUsed The number of physical processors the database server will use. On Windows, you can use the -gt option to change the number of physical processors used by the network database server.
ObjectType The type of database object. This value is used by the SYSOBJECT system view.
ODataAddresses A semicolon-delimited list of the TCP/IP addresses and ports that the OData server is using to
listen for OData connections.
ODataSecureAddresses A semicolon-delimited list of TCP/IP addresses and ports that the OData server is using to listen for secure OData connections.
PacketsReceived The number of client/server communication packets received. This value is not updated for HTTP or HTTPS connections.
PacketsReceivedUncomp The number of packets that would have been received during client/server communications if compression was disabled. (This value is the same as the value for PacketsReceived if compression is disabled.)
PacketsSent The number of client/server communication packets sent. This value is not updated for HTTP or HTTPS connections.
PacketsSentUncomp The number of packets that would have been sent during client/server communications if compression was disabled. (This value is the same as the value for PacketsSent if compression is disabled.)
PageSize The size of the database server cache pages. This can be set using the -gp option, otherwise, it
is the maximum database page size of the databases specified on the command line.
ParameterizationPrepareCount The number of prepares for statements that have been automatically parameterized.
PeakCacheSize The largest value the cache has reached in the current session, in kilobytes.
PlatformVer The operating system on which the software is running, including build numbers, service
packs, and so on.
PrepStmt The number of prepared statements currently being maintained by the database server for all
databases and connections.
ProcessCPU The CPU usage for the database server process. Values are in seconds. This property is supported on Windows and Unix computers.
The value is cumulative since the database server was started. The value will not match the instantaneous value displayed in applications such as the Windows Task Manager or the Windows Performance Monitor.
ProcessCPUSystem System CPU usage for the database server process. This is the amount of CPU time that the database server spent inside the operating system kernel. Values are in seconds. This property is supported on Windows and Unix computers.
The value is cumulative since the database server was started. The value will not match the instantaneous value displayed in applications such as the Windows Task Manager or the Performance Monitor.
ProcessCPUUser User CPU usage for the database server process. Values are in seconds. This excludes the amount of CPU time that the database server spent inside the operating system kernel. This property is supported on Windows and Unix computers.
The value is cumulative since the database server was started. The value will not match the instantaneous value displayed in applications such as the Windows Task Manager or the Performance Monitor.
ProcessorAffinity The logical processors being used by the database server as specified by the -gta option or by
the sa_server_option system procedure and the ProcessorAffinity option.
ProcessorArchitecture A string that identifies the processor type that the current software was built for. Values include:
X86 represents a 32-bit database server. X86_64 represents a 64-bit database server.
ProfileFilterConn The ID of the connection being monitored if procedure profiling for a specific connection is
turned on. If profiling is not turned on, the value is an empty string.
ProfileFilterUser The user ID being monitored if procedure profiling for a specific user is turned on. If procedure
profiling for a specific user is not turned on, the value is an empty string.
PropertyHistorySize Indicates either the minimum amount of time to store tracked property values or the maximum amount of memory to use to store tracked property values.
PropertyHistorySizeBytes The amount of memory, in bytes, that is currently being used for property history tracking.
QueryHeapPages The number of cache pages used for query processing (hash and sort operations).
QueryMemActiveEst The database server's estimate of the steady state average of the number of requests actively
using query memory.
QueryMemActiveMax The maximum number of requests that are allowed to actively use query memory.
QueryMemExtraAvail The amount of memory available to grant beyond the base memory-intensive grant.
QueryMemGrantExtra The number of query memory pages that can be distributed among active memory-intensive
requests beyond QueryMemGrantBaseMI.
QueryMemGrantFailed The total number of times a request waited for query memory, but failed to get it.
QueryMemGrantRequested The total number of times any request attempted to acquire query memory.
QueryMemGrantWaited The total number of times any request waited for query memory.
QueryMemPages The amount of memory that is available for query execution algorithms, expressed as a number of pages.
QueryMemPercentOfCache The amount of memory that is available for query execution algorithms, expressed as a percent of maximum cache size.
QuittingTime The shutdown time for the server. If none is specified, the value is none. If the database has
the time_zone option set, then the value is returned using the database's time zone.
RememberLastPlan Whether the database server is recording the last query optimization plan returned by the optimizer.
RememberLastStatement Whether the database server is recording the last statement prepared by each connection.
RemoteCapability The remote capability name associated with a given capability ID.
RemoteputWait The number of times the server had to block while sending a communication packet. Typically,
blocking only occurs if the database server is sending data faster than the client or network
can receive it. It does not indicate an error condition.
Req The number of times the server has been asked to handle a new request or continue processing an existing request.
ReqCountBlockContention The number of times that any connection has blocked due to contention for an internal server
resource.
ReqCountBlockIO The number of times that any connection has blocked while waiting for an IO request to complete.
ReqCountBlockLock The number of times that any connection has blocked while waiting for a row lock held by another connection.
ReqCountUnscheduled The number of times that any connection has blocked while waiting for a server thread to
process it.
ReqTimeActive The total amount of time that the server has spent directly servicing requests.
ReqTimeBlockContention The total amount of time that any connection has blocked due to contention for an internal
server resource.
ReqTimeBlockIO The total amount of time that any connection has blocked while waiting for an IO request to
complete.
ReqTimeBlockLock The total amount of time that any connection has blocked while waiting for a row lock held by
another connection.
ReqTimeUnscheduled The total amount of time that any connection has blocked while waiting for a server thread to
process it.
RequestFilterConn The ID of the connection that logging information is being filtered for. If no filtering is being
performed, the value is -1.
RequestFilterDB The ID of the database that logging information is being filtered for. If no filtering is being performed, the value is -1.
RequestLogFile The name of the request logging file, or an empty string if there is no request logging.
RequestLogging The current setting for request logging. Values can be one of SQL, PLAN, HOSTVARS, PROCEDURES, TRIGGERS, OTHER, BLOCKS, REPLACE, ALL, or NONE.
RequestsReceived The number of client/server communication requests or round trips. It is different from PacketsReceived in that multi-packet requests count as one request, and liveness packets are not included.
RequestTiming Whether logging of request timing information is turned on. The logging of request timing information is turned on using the -zt database server option.
SendFail The number of times that the underlying communications protocols have failed to send a
packet.
ServerEdition A space-separated list of words describing the database server type. Values include:
● Evaluation
● Developer
● Web
● Educational
● Standard
● Advanced
● Workgroup
● OEM
● Authenticated
If you have a separate license for any of the following features, then the appropriate string(s)
are added to the license string value:
● HighAvailability
● InMemory
● FIPS
ServerName The real server name (never an alternate server name). You can use this value to determine which of the operational servers is currently acting as primary in a database mirroring configuration.
SharedMemoryListener Returns Yes if the database server is accepting shared memory connections, and No otherwise.
SingleCLR The version number of the CLR if the database server uses a single CLR external environment for all databases, or NONE if the database server uses one CLR external environment per database when executing CLR stored procedures.
SingleJVM Whether the database server uses a single Java VM for all databases running on the database
server (Yes), or whether the database server uses one Java VM per database when executing
Java stored procedures (No).
StartDBPermission The setting of the -gd server option, which can be one of DBA, all, or none.
StartTime The date/time that the server started. If the database has the time_zone option set, then the
value is returned using the database's time zone.
TcpIpAddresses A semicolon-delimited list of the IP addresses that the server is listening to for Command Sequence and TDS connections from clients. For example:
(::1):2638;127.0.0.1:2638
TcpIpListeners A semicolon-delimited list of IP addresses and IP address:port pairs that the database server
is using to listen for TCP/IP connections.
TempDir The directory in which temporary files are stored by the server.
ThreadDeadlocksAvoided The number of times a thread deadlock error was detected but not reported to client applications. When the database server starts, the value of this property is 0.
To avoid thread deadlock errors, the database server dynamically increases the multiprogramming level. If the multiprogramming level cannot be increased, a thread deadlock error is returned to the client application and the ThreadDeadlocksReported property is incremented.
ThreadDeadlocksReported The number of times a thread deadlock error was reported to client applications. When the database server starts, the value of this property is 0.
TimeZoneAdjustment The number of minutes that must be added to Coordinated Universal Time (UTC) to display time local to the server.
UniqueClientAddresses The number of unique client network addresses connected to a network server, excluding
shared memory and local TCP/IP connections. This is the number of seats currently used for
per-seat licensing.
UnschReq The number of requests that are currently queued up waiting for an available server worker.
UserDefinedCounterRate01 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the change in the value of the counter over time.
UserDefinedCounterRate02 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the change in the value of the counter over time.
UserDefinedCounterRate03 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the change in the value of the counter over time.
UserDefinedCounterRate04 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the change in the value of the counter over time.
UserDefinedCounterRate05 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the change in the value of the counter over time.
UserDefinedCounterRaw01 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the absolute value of the counter.
UserDefinedCounterRaw02 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the absolute value of the counter.
UserDefinedCounterRaw03 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the absolute value of the counter.
UserDefinedCounterRaw04 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the absolute value of the counter.
UserDefinedCounterRaw05 The current value of the user-defined performance counter. The semantics of this property are defined by the client application. This counter can also be accessed from the Performance Monitor. The Performance Monitor displays the absolute value of the counter.
There are two mechanisms for creating user-defined functions in SAP IQ. You can use the SQL language to
write the function, or you can use the ESQL, ODBC, Java, Perl, or PHP external environments.
Do not confuse SQL UDFs with external C and C++ UDFs. External UDFs require a special license. For
information on external UDFs, see the SAP IQ Administration: User-Defined Functions manual.
In this section:
You can implement your own functions in SQL using the CREATE FUNCTION statement. The RETURN
statement inside the CREATE FUNCTION statement determines the data type of the function.
Once you have created a SQL user-defined function, you can use it anywhere a built-in function of the same
data type is used.
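For example, a minimal SQL user-defined function might look like this (the function name, parameters, and sizes are illustrative, not taken from this manual):
CREATE FUNCTION fullname ( firstname CHAR(30), lastname CHAR(30) )
RETURNS CHAR(61)
BEGIN
    DECLARE name CHAR(61);
    SET name = firstname || ' ' || lastname;
    RETURN ( name );
END
Once created, fullname( 'Jane', 'Smith' ) can be used anywhere a built-in string function is allowed.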
Note
Avoid using the CONTAINS predicate in a view that has a user-defined function, because the CONTAINS criteria are ignored. Instead, use the LIKE predicate or issue the query outside of a view.
Although SQL functions are useful, Java classes provide a more powerful and flexible way of implementing
user-defined functions, with the additional advantage that you can move them from the database server to a
client application if desired.
Any class method of an installed Java class can be used as a user-defined function anywhere a built-in function
of the same data type is used.
Instance methods are tied to particular instances of a class, and so have different behavior from standard user-
defined functions.
For more information on creating Java classes, and on class methods, see Java in the Database in SAP IQ
Programming Reference.
Miscellaneous functions perform operations on arithmetic, string, or date/time expressions, including the
return values of other functions.
Compatibility
SAP Adaptive Server Enterprise supports only the COALESCE, ISNULL, and NULLIF functions.
Related Information
The function type, for example, Numeric or String, is indicated in brackets next to the function name.
The actual values of database object IDs, such as the object ID of a table or the column ID of a column, might
differ from the values shown in the examples.
In this section:
Syntax
ABS ( <numeric-expression> )
Parameters
numeric-expression
The return type matches the argument type:
INT returns INT
FLOAT returns FLOAT
DOUBLE returns DOUBLE
NUMERIC returns NUMERIC
Example
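A representative statement (assuming the iq_dummy single-row dummy table) returns the value 66:
SELECT ABS( -66 ) FROM iq_dummy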
Syntax
ACOS ( <numeric-expression> )
Parameters
numeric-expression
DOUBLE
Example
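A representative statement (assuming the iq_dummy dummy table) returns the arc-cosine of 0.5, approximately 1.0472:
SELECT ACOS( 0.5 ) FROM iq_dummy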
Related Information
Encrypts the specified values using the supplied encryption key, and returns a VARBINARY or LONG
VARBINARY.
Syntax
<string-expression> – the data to be encrypted. You can also pass binary values to AES_ENCRYPT. This
parameter is case-sensitive, even in case-insensitive databases.
<key> – the encryption key used to encrypt the <string-expression>. To obtain the original value, also use
the same key to decrypt the value. This parameter is case-sensitive, even in case-insensitive databases.
As you should for most passwords, choose a key value that is difficult to guess. Choose a value that is at least
16 characters long, contains a mix of uppercase and lowercase letters, and includes numbers and special
characters. You need this key each time you want to decrypt the data.
Caution
Protect your key; store a copy of your key in a safe location. If you lose your key, encrypted data becomes
completely inaccessible and unrecoverable.
Usage
AES_ENCRYPT returns a VARBINARY value, which is at most 31 bytes longer than the input <string-expression>. The value returned by this function is the ciphertext, which is not human-readable. You can use
the AES_DECRYPT function to decrypt a <string-expression> that was encrypted with the AES_ENCRYPT
function. To successfully decrypt a <string-expression>, use the same encryption key and algorithm used
to encrypt the data. If you specify an incorrect encryption key, an error is generated.
If you are storing encrypted values in a table, the column should be of data type VARBINARY or VARCHAR, and
greater than or equal to 32 bytes, so that character set conversion is not performed on the data. (Character set
conversion prevents data decryption.) If the length of the VARBINARY or VARCHAR column is fewer than 32
bytes, the AES_DECRYPT function returns an error.
The result data type of an AES_ENCRYPT function may be a LONG BINARY. If you use AES_ENCRYPT in a
SELECT INTO statement, you must have an Unstructured Data Analytics Option license, or use CAST and set
AES_ENCRYPT to the correct data type and size.
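A minimal sketch of storing an encrypted value (the table name, column size, and key are illustrative; note the column is VARBINARY and at least 32 bytes, as recommended above):
CREATE TABLE user_info ( user_name VARBINARY(128) );
INSERT INTO user_info VALUES ( AES_ENCRYPT( 'alice', 'ExampleKey16+Chars!' ) );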
Related Information
Decrypts the string using the supplied key, and returns, by default, a VARBINARY or LONG BINARY value, or optionally the original plaintext type.
Syntax
Parameters
<string-expression> – the string to be decrypted. You can also pass binary values to this function. This parameter is case-sensitive, even in case-insensitive databases.
<key> – the encryption key required to decrypt the <string-expression>. To obtain the original value that was encrypted, the key must be the same encryption key that was used to encrypt the <string-expression>. This parameter is case-sensitive, even in case-insensitive databases.
Caution
Protect your key; store a copy of your key in a safe location. If you lose your key, the encrypted data
becomes completely inaccessible and unrecoverable.
<data-type> – this optional parameter specifies the data type of the decrypted <string-expression> and
must be the same data type as the original plaintext.
If you do not use a CAST statement while inserting data using the AES_ENCRYPT function, you can view the
same data using the AES_DECRYPT function by passing VARCHAR as the <data-type>. If you do not pass
<data-type> to AES_DECRYPT, VARBINARY data type is returned.
Usage
You can use the AES_DECRYPT function to decrypt a <string-expression> that was encrypted with the
AES_ENCRYPT function. This function returns a VARBINARY or LONG VARBINARY value with the same number
of bytes as the input string, if no data type is specified. Otherwise, the specified data type is returned.
To successfully decrypt a <string-expression>, you must use the same encryption key that was used to
encrypt the data. An incorrect encryption key returns an error.
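Continuing the illustrative example from AES_ENCRYPT, the original plaintext could be recovered by supplying the same key and passing VARCHAR as the data type:
SELECT AES_DECRYPT( user_name, 'ExampleKey16+Chars!', VARCHAR ) FROM user_info;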
Related Information
Syntax
Parameters
integer-expression
The position of the argument to return from the list, starting at 1.
expression
An expression of any data type passed into the function. All supplied expressions must be of the same data type.
Remarks
Using the value of <integer-expression> as <n> returns the <n>th argument (starting at 1) from the
remaining list of arguments. While the expressions can be of any data type, they must all be of the same data
type. The integer expression must be from one to the number of expressions in the list or NULL is returned.
Multiple expressions are separated by a comma.
Example
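A representative statement (assuming the iq_dummy dummy table) returns second, the second argument in the remaining list:
SELECT ARGN( 2, 'first', 'second', 'third' ) FROM iq_dummy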
Syntax
ASCII ( <string-expression> )
Parameters
string-expression
The string.
SMALLINT
Remarks
If the string is empty, ASCII returns zero. Literal strings must be enclosed in quotes.
Example
The following statement returns the value 90, when the collation sequence is set to the default ISO_BINENG:
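A plausible form of the elided statement is the following, since the ASCII value of the character Z is 90:
SELECT ASCII( 'Z' ) FROM iq_dummy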
Syntax
ASIN ( <numeric-expression> )
Parameters
numeric-expression
DOUBLE
Example
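A representative statement (assuming the iq_dummy dummy table) returns the arc-sine of 0.5, approximately 0.5236:
SELECT ASIN( 0.5 ) FROM iq_dummy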
Related Information
Syntax
ATAN ( <numeric-expression> )
Parameters
numeric-expression
Returns
DOUBLE
Example
Related Information
Syntax
numeric-expression1
Returns
DOUBLE
Example
Related Information
Computes the average of a numeric expression for a set of rows, or computes the average of a set of unique
values.
Syntax
Parameters
numeric-expression
The expression whose average is computed over a set of rows.
DISTINCT column-name
Computes the average of the unique values in <column-name>. This is of limited usefulness, but is provided for completeness.
Returns
Remarks
This average does not include rows where <numeric-expression> is the NULL value. Returns the NULL
value for a group containing no rows.
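For example, assuming the Employees sample table, the first statement below returns the average salary over all rows, and the DISTINCT form averages only the unique salary values:
SELECT AVG( Salary ) FROM Employees;
SELECT AVG( DISTINCT Salary ) FROM Employees;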
Related Information
Extracts individual LONG BINARY and LONG VARCHAR cells to individual operating system files on the server.
The IQ data extraction facility includes the BFILE function, which allows you to extract individual LONG
BINARY and LONG VARCHAR cells to individual operating system files on the server. You can use BFILE with or
without the data extraction facility.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
In this section:
Extracts individual LONG BINARY and LONG VARCHAR cells to individual operating system files on the server.
Syntax
Parameters
file-name-expression
Returns
Remarks
If the LONG BINARY or LONG VARCHAR cell value is NULL, no file is opened and no data is written.
The file path is relative to where the server was started and the open and write operations execute with the
permissions of the server process. Tape devices are not supported for the BFILE output file.
LONG BINARY and LONG VARCHAR cells retrieved other than with the BFILE function (that is, retrieved
through the client/server database connection later) are limited in size to a maximum length of 2 GB. Use
SUBSTRING64 or BYTE_SUBSTR64 to retrieve LONG BINARY cells greater than 2 GB using a SELECT (SELECT,
OPEN CURSOR). Use SUBSTRING64 to retrieve LONG VARCHAR cells greater than 2 GB using a SELECT
(SELECT, OPEN CURSOR). Some connection drivers, for example ODBC, JDBC, and Open Client, do not allow
more than 2 GB to be returned in one SELECT.
You can use BFILE with or without the data extraction facility.
Examples
BEGIN
SET TEMPORARY OPTION
Temp_Extract_Name1 = 'LobA_data.txt';
SELECT rowid,
'row' + string(rowid) + '.' + 'col1',
'row' + string(rowid) + '.' + 'col2'
FROM LobA;
The file LobA_data.txt is created and contains this non-LOB data and these filenames:
1,row1.col1,row1.col2,
2,row2.col1,row2.col2,
SELECT
BFILE('row' + string(rowid) + '.' + 'col1',col1),
BFILE('row' + string(rowid) + '.' + 'col2',col2)
FROM LobA;
After the extraction, there is a file for each cell of LOB data extracted. For example, if table LobA contains
two rows of data with rowid values of 1 and 2, you have these files:
○ row1.col1
○ row1.col2
○ row2.col1
○ row2.col2
4. Reload the extracted data:
Syntax
BIGINTTOHEX ( <integer-expression> )
Parameters
integer-expression
BIGINTTOHEX accepts an integer expression that evaluates to BIGINT and returns the hexadecimal
equivalent. Returned values are left appended with zeros up to a maximum of 16 digits. All types of unscaled
integer data types are accepted as integer expressions.
Conversion is done automatically, if required. Constants are truncated only if the fraction values are zero. A column cannot be truncated if the column is declared with a positive scale value. If conversion fails, SAP IQ returns an error unless the CONVERSION_ERROR option is OFF, in which case the result is NULL.
Examples
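A representative statement (assuming the iq_dummy dummy table) returns the zero-padded 16-digit hexadecimal equivalent of 255:
SELECT BIGINTTOHEX( 255 ) FROM iq_dummy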
Related Information
Returns an unsigned 64-bit value containing the bit length of the column parameter.
Syntax
BIT_LENGTH( <column-name> )
column-name
Returns
INT
Remarks
BIT_LENGTH( <large-object-column> )
The BIT_LENGTH function supports all SAP IQ data types and LONG BINARY and LONG VARCHAR variables of
any size of data, and returns an unsigned 64-bit value containing the bit length of the large object column or
variable parameter.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Related Information
Syntax
BYTE_LENGTH ( <string-expression> )
Parameters
string-expression
Returns
INT
Remarks
If the string is in a multibyte character set, the BYTE_LENGTH value differs from the number of characters
returned by CHAR_LENGTH.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data. The BYTE_LENGTH function supports both LONG BINARY columns and variables and LONG
VARCHAR columns and variables, only if the query returns less than 2 GB. If the byte length of the returned
LONG BINARY or LONG VARCHAR data is greater than or equal to 2 GB, BYTE_LENGTH returns an error that
says you must use the BYTE_LENGTH64 function.
Example
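A representative statement (assuming the iq_dummy dummy table) returns 8, because each character of this single-byte string occupies one byte:
SELECT BYTE_LENGTH( 'Chemical' ) FROM iq_dummy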
Related Information
BYTE_LENGTH64 returns an unsigned 64-bit value containing the byte length of the LONG BINARY column
parameter.
BYTE_LENGTH64 also supports the LONG VARCHAR data type and LONG BINARY and LONG VARCHAR variables
of any data size.
BYTE_LENGTH64( <large-object-column> )
The BYTE_LENGTH64 function supports both LONG BINARY and LONG VARCHAR columns and LONG BINARY
and LONG VARCHAR variables of any size of data.
Replaces a string with another string, and returns the new result.
Syntax
Parameters
source-string
The string to be searched.
search-string
The string to be searched for and replaced by <replace-string>. <search-string> is limited to 255 bytes. If <search-string> is an empty string, then <source-string> is returned unchanged.
replace-string
Returns
LONG BINARY
Example
The following statement returns the binary value 0x78782e6465662e78782e676869 which is the
hexadecimal representation of the string xx.def.xx.ghi:
Returns a substring of a string. The substring is determined using bytes, not characters.
Syntax
Parameters
source-string
The string from which the substring is taken.
start-position
An integer expression indicating the start of the substring. A positive integer starts from the beginning of the data, with the first byte being position 1. A negative integer specifies a substring starting from the end of the data, the final byte being at position -1.
length
An integer expression indicating the length of the substring. A positive <length> specifies the number of
bytes to be taken starting at the start position. A negative <length> returns at most <length> bytes up
to, and including, the starting position, from the left of the starting position.
Returns
Remarks
Both <start-position> and <length> can be either positive or negative. Use appropriate combinations of
negative and positive numbers, to get a substring from either the beginning or end of the string. If <length> is
specified, the maximum length of the substring is ABS(<length>).
The argument <source-string> can be any data type that can be converted to a binary data type, and is
treated as a binary string.
The following statement returns the binary value 0x54657374 which is the hexadecimal representation of
Test:
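One statement that produces this value (a plausible reconstruction of the elided example, assuming the iq_dummy dummy table) takes the first four bytes of the string:
SELECT BYTE_SUBSTR( 'Testing', 1, 4 ) FROM iq_dummy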
BYTE_SUBSTR64 and BYTE_SUBSTR return the long binary byte substring of the LONG BINARY column
parameter.
The BYTE_SUBSTR64 and BYTE_SUBSTR functions also support the LONG VARCHAR data type and LONG
BINARY and LONG VARCHAR variables of any data size.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
In this section:
The BYTE_SUBSTR64 and BYTE_SUBSTR functions return the byte substring of the large object column or
variable parameter.
Syntax
Syntax 1
Syntax 2
large-object-column
The LONG BINARY or LONG VARCHAR column or variable from which the substring is taken.
start
An integer expression indicating the start of the substring. A positive integer starts from the beginning of the string, with the first byte at position 1. A negative integer specifies a substring starting from the end of the string, with the final byte at position -1.
length
An integer expression indicating the length of the substring. A positive length specifies the number of bytes
to return, starting at the <start> position. A negative length specifies the number of bytes to return,
ending at the <start> position.
Remarks
Nested operations of the functions BYTE_LENGTH64, BYTE_SUBSTR64, and BYTE_SUBSTR do not support
large object columns or variables.
The BYTE_SUBSTR64 and BYTE_SUBSTR functions support both LONG BINARY and LONG VARCHAR columns
and LONG BINARY and LONG VARCHAR variables of any size of data. Currently, a SQL variable can hold up to 2
GB - 1 in length.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Syntax
Parameters
expression
The expression to be converted.
data-type
The data type to cast the expression into. Set the data type explicitly, or specify the %TYPE attribute to set the data type to the data type of a column in a table or view, or to the data type of a variable.
Remarks
If you do not indicate a length for character string types, SAP IQ chooses an appropriate length. If neither
precision nor scale is specified for a DECIMAL conversion, the database server selects appropriate values.
Set the data type explicitly, or specify the %TYPE attribute to set the data type to the data type of a column in a
table or view, or to the data type of a variable. For example:
is described as:
A NUMERIC(1,0)
B NUMERIC(15,2)
Examples
● The following statement computes the value of the expression 1 + 2 and casts the result into a single-character string, with the length assigned by the database server:
CAST( 1 + 2 AS CHAR )
Related Information
Returns the smallest integer greater than or equal to the specified expression.
Syntax
CEIL ( <numeric-expression> )
Parameters
numeric-expression
A column, variable, or expression with a data type that is either exact numeric, approximate numeric,
money, or any type that can be implicitly converted to one of these types. For other data types, CEIL
generates an error. The return value has the same data type as the value supplied.
Remarks
For a given expression, the CEIL function takes one argument. For example, CEIL (-123.45) returns -123.
CEIL (123.45) returns 124.
Related Information
Syntax
CEILING ( <numeric-expression> )
Parameters
numeric-expression
Returns
DOUBLE
Remarks
Examples
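A representative statement (assuming the iq_dummy dummy table) returns 124, as a DOUBLE:
SELECT CEILING( 123.45 ) FROM iq_dummy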
Related Information
Syntax
CHAR ( <integer-expression> )
Parameters
integer-expression
The number to be converted to an ASCII character. The number must be in the range 0 to 255, inclusive.
VARCHAR
Remarks
The character in the current database character set corresponding to the supplied numeric expression modulo
256 is returned.
CHAR returns NULL for integer expressions with values greater than 255 or less than zero.
Examples
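A representative statement (assuming the iq_dummy dummy table and the default character set) returns the character Y, whose code is 89:
SELECT CHAR( 89 ) FROM iq_dummy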
Syntax
CHAR_LENGTH ( <string-expression> )
Parameters
string-expression
Returns
INT
Remarks
If the string is in a multibyte character set, the CHAR_LENGTH value may be less than the BYTE_LENGTH value.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data. The CHAR_LENGTH function supports LONG VARCHAR columns and LONG VARCHAR variables of
any size of data. If the character length exceeds 2GB - 1 (2147483647), an error is returned.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Example
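A representative statement (assuming the iq_dummy dummy table) returns 8:
SELECT CHAR_LENGTH( 'Chemical' ) FROM iq_dummy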
Related Information
The CHAR_LENGTH64 function returns an unsigned 64-bit value containing the character length of the LONG
VARCHAR column parameter, including the trailing blanks.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
In this section:
The CHAR_LENGTH64 function returns an unsigned 64-bit value containing the character length of the LONG
VARCHAR column or variable parameter, including the trailing blanks.
Syntax
CHAR_LENGTH64( <long-varchar-object> )
Parameters
long-varchar-object
CHAR_LENGTH64 supports LONG VARCHAR columns and LONG VARCHAR variables of any size of data.
Currently, a SQL variable can hold up to 2 GB - 1 in length.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Returns the position of the first occurrence of a specified string in another string.
Syntax
Parameters
string-expression1
The string for which you are searching. This string is limited to 255 bytes.
string-expression2
The string to be searched. The position of the first character in the string being searched is 1.
Returns
INT
Remarks
All the positions or offsets, returned or specified, in the CHARINDEX function are always character offsets and
may be different from the byte offset for multibyte data.
If the string being searched:
● Contains more than one instance of the specified string, CHARINDEX returns the position of the first instance.
● Does not contain the specified string, CHARINDEX returns zero (0).
CHARINDEX returns a 32-bit signed integer position for CHAR and VARCHAR columns.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
Example
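A query along these lines (assuming the Employees sample table) would produce the result below, returning only employees whose surname begins with K:
SELECT Surname, GivenName
FROM Employees
WHERE CHARINDEX( 'K', Surname ) = 1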
Surname GivenName
Klobucher James
Kuo Felicia
Kelly Moira
In this section:
Related Information
Syntax
Parameters
string-expression
Remarks
● All the positions or offsets, returned or specified, in the CHARINDEX function are always character offsets
and may be different from the byte offset for multibyte data.
● If the large object cell being searched contains more than one instance of <string-expression>,
CHARINDEX returns only the position of the first instance.
● If the column does not contain the string, the CHARINDEX function returns zero (0).
● Searching for a string longer than 255 bytes returns NULL.
● Searching for a zero-length string returns 1.
● If any of the arguments is NULL, the result is NULL.
● CHARINDEX supports searching LONG VARCHAR and LONG BINARY columns and LONG VARCHAR and
LONG BINARY variables of any size of data. Currently, a SQL variable can hold up to 2 GB - 1 in length.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Syntax
expression
Any expression.
Returns
ANY
Example
Related Information
Syntax
table-name
Example
Related Information
Syntax
table-id
Examples
The object ID of the Customers table is 100209, as returned by the OBJECT_ID function. The column ID is
stored in the column_id column of the syscolumn system table. The database ID of the iqdemo database
is 0, as returned by the DB_ID function.
● The following statement returns the column name city:
Related Information
Syntax
Parameters
integer-expression1
In most cases, it is more convenient to supply a string expression as the first argument. If you do supply
integer-expression1, it is the connection property ID. You can determine this using the
PROPERTY_NUMBER function.
string-expression
The connection property name. You must specify either the property ID or the property name.
integer-expression2
The connection ID of the current database connection. The current connection is used if this argument is
omitted.
Returns
VARCHAR
Remarks
Note
The following statement returns 4, the number of prepared statements being maintained:
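A sketch of such a statement; the PrepStmt property name (number of prepared statements on the current connection) is an assumption here:

```sql
-- PrepStmt: assumed connection property name for prepared-statement count
SELECT CONNECTION_PROPERTY( 'PrepStmt' ) FROM iq_dummy;
```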
Related Information
Syntax
Parameters
data-type
The data type to convert the expression into. Set the data type explicitly, or specify the %TYPE attribute to
set the data type to the data type of a column in a table or view, or to the data type of a variable.
expression
The expression to be converted.
format-style
For converting strings to date or time data types and vice versa, format-style is a style code number that
describes the date format string to be used.
Remarks
The result data type of a CONVERT function is a LONG VARCHAR. If you use CONVERT in a SELECT INTO
statement, you must have an Unstructured Data Analytics Option license or use CAST and set CONVERT to the
correct data type and size.
Without Century (yy)    With Century (yyyy)    Output
1                       101                    mm/dd/yy[yy]
2                       102                    [yy]yy.mm.dd
3                       103                    dd/mm/yy[yy]
4                       104                    dd.mm.yy[yy]
5                       105                    dd-mm-yy[yy]
8                       108                    hh:nn:ss
10                      110                    mm-dd-yy[yy]
11                      111                    [yy]yy/mm/dd
12                      112                    [yy]yymmdd
–                       13 or 113              dd mmm yyyy hh:nn:ss:sss (24-hour clock, Europe default with milliseconds, 4-digit year)
–                       21 or 121              yyyy-mm-dd hh:nn:ss.sss (24-hour clock, ODBC canonical with milliseconds, 4-digit year)
37                      137                    hh:nn:ss.ssssss
–                       365                    yyyyjjj (as a string or integer, where jjj is the Julian day number from 1 to 366 within the year)
Abbreviations and values for date parts in the CONVERT format style table:
Abbreviation    Date part    Values
hh              hour         0 – 23
nn              minute       0 – 59
ss              second       0 – 59
dd              day          1 – 31
mm              month        1 – 12
Example
order_date
16.03.1993
20.03.1993
23.03.1993
25.03.1993
...
mar 16, 93
mar 20, 93
mar 23, 93
mar 25, 93
...
The following statements illustrate the use of the format style 365, which converts data of type DATE and
DATETIME to and from either string or integer type data:
The following statement illustrates conversion to an integer, and returns the value 5:
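Sketches of such statements, with table and column names assumed; the style-104 call yields dd.mm.yyyy strings such as those shown above, and the final call truncates a numeric value (input value assumed) toward an integer:

```sql
-- Style 104 renders order_date as dd.mm.yyyy (table name assumed)
SELECT CONVERT( CHAR(20), order_date, 104 ) FROM sales_order;

-- Conversion to an integer truncates the fractional part
SELECT CONVERT( INTEGER, 5.2 ) FROM iq_dummy;
```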
Related Information
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
DOUBLE
Remarks
The CORR function converts its arguments to DOUBLE, performs the computation in double-precision floating-
point, and returns a DOUBLE as the result. If applied to an empty set, then CORR returns NULL.
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
● SQL – ISO/ANSI SQL compliant. SQL foundation feature outside of core SQL.
● SAP database products – compatible with SAP SQL Anywhere
Example
The following example performs a correlation to discover whether age is associated with income level. This
function returns the value 0.440227:
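A query along these lines, with column names assumed from the sample Employees table and age derived from BirthDate, performs that correlation:

```sql
-- Correlation between salary and (approximate) age; names assumed
SELECT CORR( Salary, YEAR( NOW() ) - YEAR( BirthDate ) ) FROM Employees;
```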
Related Information
Syntax
COS ( <numeric-expression> )
Parameters
numeric-expression
An angle, in radians.
Returns
DOUBLE
Remarks
This function converts its argument to DOUBLE, performs the computation in double-precision floating point,
and returns a DOUBLE as the result. If the parameter is NULL, the result is NULL.
Example
Related Information
Syntax
COT ( <numeric-expression> )
Parameters
numeric-expression
An angle, in radians.
Returns
DOUBLE
Remarks
This function converts its argument to DOUBLE, performs the computation in double-precision floating point,
and returns a DOUBLE as the result. If the parameter is NULL, the result is NULL.
Example
Related Information
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then COVAR_POP returns NULL.
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
● SQL – ISO/ANSI SQL compliant. SQL foundation feature outside of core SQL.
● SAP database products – compatible with SAP Adaptive Server Enterprise
Example
The following example measures the strength of association between employee age and salary. This function
returns the value 73785.840059:
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then COVAR_SAMP returns NULL.
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
● SQL – ISO/ANSI SQL compliant. SQL foundation feature outside of core SQL.
● SAP database products – compatible with SAP SQL Anywhere
Example
The following example measures the strength of association between employee age and salary. This function
returns the value 74782.946005:
Syntax
Parameters
Note
When the query results are displayed, the * is not displayed in the column header, and appears as:
Count()
expression
Returns the number of rows in each group where expression is not the NULL value.
DISTINCT column-name
Returns the number of different values in column-name. Rows where the value is the NULL value are not
included in the count.
Returns
UNSIGNED BIGINT
Example
Returns each unique city, and the number of rows with that city value:
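A minimal sketch, assuming the sample Employees table with a City column:

```sql
SELECT City, COUNT( * )
FROM Employees
GROUP BY City;
```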
Related Information
The CUME_DIST function is a rank analytical function that calculates the relative position of one value among a
group of rows. It returns a decimal value between 0 and 1.
Syntax
Parameters
window-spec
Returns
Remarks
SAP IQ calculates the cumulative distribution of a value x in a set S of size N as:
CUME_DIST(x) = (number of values in S coming before and including x in the specified order) / N
Composite sort-keys are not currently allowed in the CUME_DIST function. You can use composite sort-keys
with any of the other rank functions.
The <window-spec> parameter represents usage as a window function in a SELECT statement. As such, you
can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW clause in the
SELECT statement.
Note
Example
The following example returns a result set that provides a cumulative distribution of the salaries of employees
who live in California:
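A sketch of such a query, with table and column names assumed from the sample schema:

```sql
-- Cumulative distribution of salaries for employees in California
SELECT DepartmentID, Surname, Salary,
       CUME_DIST() OVER ( PARTITION BY DepartmentID
                          ORDER BY Salary DESC ) AS CumulativeDistribution
FROM Employees
WHERE State = 'CA';
```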
Syntax
DATALENGTH ( <expression> )
Parameters
expression
Returns
UNSIGNED INT
Remarks
Data type    DATALENGTH
SMALLINT     2
INTEGER      4
DOUBLE       8
Example
Returns the value 35, the longest string in the company_name column:
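A sketch of such a statement; the table and column names are assumptions:

```sql
-- Length in bytes of the longest company_name value
SELECT MAX( DATALENGTH( company_name ) ) FROM customer;
```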
Related Information
Converts the expression into a date, and removes any hours, minutes, or seconds.
DATE ( <expression> )
Parameters
expression
Returns
DATE
Example
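A minimal sketch: the time portion of the argument is dropped, leaving the date 1988-11-26:

```sql
SELECT DATE( '1988-11-26 21:20:53' ) FROM iq_dummy;
```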
Returns the date produced by adding the specified number of the specified date parts to a date.
Syntax
Parameters
date-part
The date part (such as day, hour, or microsecond) to be added to the date.
numeric-expression
The number of date parts to be added to the date. <numeric-expression> can be any numeric type; the
value is truncated to an integer. The maximum microsecond value in <numeric-expression> is 2147483647,
that is, 35:47.483647 (35 minutes 47 seconds 483647 microseconds).
date-expression
The date to which the specified number of date parts is added.
Returns
TIMESTAMP
Remarks
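A minimal sketch of a call (date literal assumed): adding 102 months to 1987/05/02 yields the TIMESTAMP 1995-11-02 00:00:00.000.

```sql
-- 102 months = 8 years 6 months, so 1987/05/02 becomes 1995-11-02
SELECT DATEADD( MONTH, 102, '1987/05/02' ) FROM iq_dummy;
```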
Related Information
Calculates a new date, time, or datetime value by increasing the provided value up to the nearest larger value of
the specified granularity.
Syntax
date-part
The granularity, specified as a date part, to which the value is raised.
date-expression
The date, time, or date-time expression containing the value you are evaluating.
multiple-expression
(Optional) A nonzero positive integer value expression specifying how many multiples of the units specified
by the date-part parameter to use within the calculation. For example, you can use multiple-expression to
specify that you want to regularize your data to 200-microsecond intervals or 10-minute intervals.
Remarks
This function calculates a new date, time, or datetime value by increasing the provided value up to the nearest
larger value with the specified granularity. If you include the optional <multiple-expression> parameter,
then the function increases the date and time up to the nearest specified multiple of the specified granularity.
The data type of the calculated date and time matches the data type of the <date-expression>
parameter.
● DayofYear
● WeekDay
● CalYearofWeek
● CalWeekofYear
● CalDayofWeek
If you specify a <multiple-expression> for the microsecond, millisecond, second, minute, or hour date
parts, SAP IQ assumes that the multiple applies from the start of the next larger unit of granularity:
For example, if you specify a multiple of two minutes, SAP IQ applies two-minute intervals starting at the
current hour.
For the microsecond, millisecond, second, minute, and hour date parts, specify a <multiple-expression>
value that divides evenly into the range of the specified date part:
If you specify a <multiple-expression> for the day, week, month, quarter, or year date parts, SAP IQ
assumes the intervals started at the smallest date value (0001-01-01), smallest time value
(00:00:00.000000), or smallest date-time value (0001-01-01.00:00:00.000000). For example, if you specify
a multiple of 10 days, SAP IQ calculates 10-day intervals starting at 0001-01-01.
For the day, week, month, quarter, or year date parts, you don't need to specify a multiple that divides evenly
into the next larger unit of time granularity.
If SAP IQ rounds to a multiple of the week date part, the date value is always Sunday.
Examples
● This statement returns the value August 13, 2009 10:32.35.456800 AM:
● This statement returns the value August 13, 2009 10:32.35.600000 AM:
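Reconstructions consistent with the two results above; the argument order and date-part keywords are assumptions. Raising 10:32:35.456789 to the next 200-microsecond multiple gives .456800, and to the next 200-millisecond multiple gives .600000:

```sql
-- Next 200-microsecond multiple of .456789 is .456800
SELECT DATECEILING( MICROSECOND, '2009-08-13 10:32:35.456789', 200 ) FROM iq_dummy;

-- Next 200-millisecond multiple of .456 is .600
SELECT DATECEILING( MILLISECOND, '2009-08-13 10:32:35.456789', 200 ) FROM iq_dummy;
```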
Related Information
Syntax
Parameters
date-part
The date part (such as day, month, or hour) in which the difference is measured.
date-expression1
The starting date for the interval. This value is subtracted from <date-expression2> to return the
number of date parts between the two arguments.
date-expression2
The ending date for the interval. <date-expression1> is subtracted from this value to return the number
of date parts between the two arguments.
Returns
INT
Remarks
This function calculates the number of date parts between two specified dates. The result is a signed integer
value equal to (date2 - date1), in date parts.
DATEDIFF results are truncated, not rounded, when the result is not an even multiple of the date part.
When you use day as the date part, DATEDIFF returns the number of midnights between the two times
specified, including the second date, but not the first. For example, the following statement returns the value 5.
Midnight of the first day 2003/08/03 is not included in the result. Midnight of the second day is included, even
though the time specified is before midnight:
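A sketch of such a statement; the dates come from the explanation, and the times of day are assumed:

```sql
-- Five midnights (Aug 4 through Aug 8) fall between the two values
SELECT DATEDIFF( DAY, '2003/08/03 14:00', '2003/08/08 14:00' ) FROM iq_dummy;
```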
When you use month as the date part, DATEDIFF returns the number of first-of-the-months between two
dates, including the second date but not the first. For example, both of the following statements return the
value 9:
The first date 2003/02/01 is a first-of-month, but is not included in the result of either query. The second date
2003/11/01 in the second query is also a first-of-month and is included in the result.
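Sketches of the two statements, with the dates taken from the explanation (the day of month in the first ending date is an assumption); both count nine first-of-the-months, March through November:

```sql
SELECT DATEDIFF( MONTH, '2003/02/01', '2003/11/15' ) FROM iq_dummy;
SELECT DATEDIFF( MONTH, '2003/02/01', '2003/11/01' ) FROM iq_dummy;
```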
When you use week as the date part, DATEDIFF returns the number of Sundays between the two dates,
including the second date but not the first. For example, in the month 2003/08, the dates of the Sundays are
03, 10, 17, 24, and 31. The following query returns the value 4:
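A sketch of that query, with the date range taken from the explanation:

```sql
-- Four Sundays (10, 17, 24, 31) lie after the first date and up to the second
SELECT DATEDIFF( WEEK, '2003/08/03', '2003/08/31' ) FROM iq_dummy;
```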
Assume you have two time values three seconds apart: 11:14:59 and 11:15:02. Notice how the time range
includes a minute boundary point (11:15:00). If you request a difference unit type of MINUTE:
● IQ main store table – SAP IQ sees that the difference between the two time values is less than the MINUTE
unit type, and calculates a DATEDIFF of 0.
● IQ catalog store table – the system sees that the difference between the two time values is less than the
MINUTE unit type, but notes that the difference includes the minute boundary point (11:15:00), and
calculates a DATEDIFF of 1.
If you require IQ catalog store DATEDIFF behavior in an expression that can be executed against either IQ main
store or IQ catalog store tables, then execute the DATEDIFF over a CAST over a DATEFORMAT with an
appropriate format string (that doesn't include components smaller than the requested difference unit)
wrapped over each input:
DATEDIFF(MINUTE,
         CAST(DATEFORMAT(t.col1, 'YYYY-MM-DD HH:NN') AS TIMESTAMP),
         CAST(DATEFORMAT(t.col2, 'YYYY-MM-DD HH:NN') AS TIMESTAMP))
Examples
Calculates a new date, time, or datetime value by reducing the provided value down to the nearest lower value
of the specified multiple with the specified granularity.
Syntax
Parameters
date-part
The granularity, specified as a date part, to which the value is reduced.
date-expression
The date, time, or date-time expression containing the value you are evaluating.
multiple-expression
(Optional) A nonzero positive integer value expression specifying how many multiples of the units specified
by date-part to use within the calculation. For example, you can use multiple-expression to specify that you
want to regularize your data to 200-microsecond intervals or 10-minute intervals.
Remarks
This function calculates a new date, time, or datetime value by reducing the provided value down to the nearest
lower value with the specified granularity. If you include the optional <multiple-expression> parameter,
then the function reduces the date and time down to the nearest specified multiple of the specified granularity.
● DayofYear
● WeekDay
● CalYearofWeek
● CalWeekofYear
● CalDayofWeek
If you specify a <multiple-expression> for the microsecond, millisecond, second, minute, or hour date
parts, SAP IQ assumes that the multiple applies from the start of the next larger unit of granularity:
For example, if you specify a multiple of two minutes, SAP IQ applies two-minute intervals starting at the
current hour.
For the microsecond, millisecond, second, minute, and hour date parts, specify a <multiple-expression>
value that divides evenly into the range of the specified date part:
If you specify a <multiple-expression> for the day, week, month, quarter, or year date parts, SAP IQ
assumes the intervals started at the smallest date value (0001-01-01), smallest time value
(00:00:00.000000), or smallest date-time value (0001-01-01.00:00:00.000000). For example, if you specify
a multiple of 10 days, then SAP IQ calculates 10-day intervals starting at 0001-01-01.
For the day, week, month, quarter, or year date parts, you don't need to specify a multiple that divides evenly
into the next larger unit of time granularity.
If SAP IQ rounds to a multiple of the week date part, the date value is always Sunday.
Examples
● This statement returns the value August 13, 2009 10:32:35.456600 AM:
● This statement returns the value August 13, 2009 10:32:35.400000 AM:
● This statement returns the value August 13, 2009 10:32:35.456789 AM:
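Reconstructions consistent with the three results above; argument order and date-part keywords are assumptions:

```sql
-- Floor of .456789 to a 200-microsecond multiple is .456600
SELECT DATEFLOOR( MICROSECOND, '2009-08-13 10:32:35.456789', 200 ) FROM iq_dummy;

-- Floor of .456 to a 200-millisecond multiple is .400
SELECT DATEFLOOR( MILLISECOND, '2009-08-13 10:32:35.456789', 200 ) FROM iq_dummy;

-- No multiple: the value is already at microsecond granularity, so it is unchanged
SELECT DATEFLOOR( MICROSECOND, '2009-08-13 10:32:35.456789' ) FROM iq_dummy;
```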
Related Information
Syntax
Parameters
datetime-expression
The date and time to be converted, in the form of a date, time, timestamp, or character string.
string-expression
Returns
VARCHAR
Remarks
The <datetime-expression> to convert must be a date, time, or timestamp data type, but can also be a
CHAR or VARCHAR character string. If the date is a character string, SAP IQ implicitly converts the character
string to date, time, or timestamp data type, so an explicit cast, as in the example above, is unnecessary.
Any allowable date format can be used for <string-expression>. Date format strings cannot contain any
multibyte characters. Only single-byte characters are allowed in a date/time/datetime format string, even
when the collation order of the database is a multibyte collation order like 932JPN.
Instead, move the multibyte character outside of the date format string using the concatenation operator:
Examples
● The following statement returns string values like “Jan 01, 1989”:
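A sketch of such a statement, with table and column names assumed from the sample schema:

```sql
-- 'Mmm dd, yyyy' produces strings like "Jan 01, 1989"
SELECT DATEFORMAT( order_date, 'Mmm dd, yyyy' ) FROM sales_order;
```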
Returns the name of the specified part (such as the month “June”) of a date/time value, as a character string.
Syntax
Parameters
date-part
The part of the date to be named.
date-expression
The date for which the date part name is to be returned. The date must contain the requested date-part.
Returns
VARCHAR
Remarks
DATENAME returns a character string, even if the result is numeric, such as 23, for the day.
Related Information
Syntax
Parameters
date-part
The part of the date to be returned.
date-expression
The date for which the part is to be returned. The date must contain the date-part field.
Returns
INT
Remarks
The DATE, TIME, and DTTM indexes do not support some date parts (Calyearofweek, Calweekofyear,
Caldayofweek, Dayofyear, Millisecond, Microsecond).
Examples
Calculates a new date, time, or datetime value by rounding the provided value up or down to the nearest
multiple of the specified value with the specified granularity.
Syntax
Parameters
date-part
The granularity, specified as a date part, to which the value is rounded.
date-expression
The date, time, or date-time expression containing the value you are evaluating.
multiple-expression
(Optional) A nonzero positive integer value expression specifying how many multiples of the units
specified by date-part to use within the calculation. For example, you can use multiple-expression to
specify that you want to regularize your data to 200-microsecond intervals or 10-minute intervals.
Remarks
This function calculates a new date, time, or datetime value by rounding the provided value up or down to the
nearest value with the specified granularity. If you include the optional <multiple-expression> parameter,
then the function rounds the date and time to the nearest specified multiple of the specified granularity.
● DayofYear
● WeekDay
● CalYearofWeek
● CalWeekofYear
● CalDayofWeek
If you specify a <multiple-expression> for the microsecond, millisecond, second, minute, or hour date
parts, SAP IQ assumes that the multiple applies from the start of the next larger unit of granularity:
For example, if you specify a multiple of two minutes, SAP IQ applies two-minute intervals starting at the
current hour.
For the microsecond, millisecond, second, minute, and hour date parts, specify a <multiple-expression>
value that divides evenly into the range of the specified date part:
If you specify a <multiple-expression> for the day, week, month, quarter, or year date parts, SAP IQ
assumes the intervals started at the smallest date value (0001-01-01), smallest time value
(00:00:00.000000), or smallest date-time value (0001-01-01.00:00:00.000000). For example, if you specify
a multiple of 10 days, then SAP IQ calculates 10-day intervals starting at 0001-01-01.
For the day, week, month, quarter, or year date parts, you don't need to specify a multiple that divides evenly
into the next larger unit of time granularity.
If SAP IQ rounds to a multiple of the week date part, then the date value is always Sunday.
Examples
● This statement returns the value August 13, 2009 10:32:35.456600 AM:
● This statement returns the value August 13, 2009 10:32:35.456789 AM:
● This statement returns the value August 13, 2009 10:32:35.456400 AM:
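Reconstructions consistent with the first two results above; the 600-microsecond multiple is inferred from the arithmetic (456789 µs rounds to 761 × 600 = 456600 µs):

```sql
-- Round .456789 to the nearest 600-microsecond multiple: .456600
SELECT DATEROUND( MICROSECOND, '2009-08-13 10:32:35.456789', 600 ) FROM iq_dummy;

-- Round to the nearest microsecond: the value is unchanged
SELECT DATEROUND( MICROSECOND, '2009-08-13 10:32:35.456789' ) FROM iq_dummy;
```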
Related Information
Syntax
DATETIME ( <expression> )
expression
The expression to be converted. The expression is usually a string. Conversion errors may be reported.
Returns
TIMESTAMP
Example
Returns an integer from 1 to 31 corresponding to the day of the month of the date specified.
Syntax
DAY ( <date-expression> )
Parameters
date-expression
The date.
Returns
SMALLINT
Example
Returns the name of the day of the week from the specified date.
Syntax
DAYNAME ( <date-expression> )
Parameters
date-expression
The date.
Returns
VARCHAR
Example
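A minimal sketch: 1987/05/02 fell on a Saturday, so the call returns Saturday.

```sql
SELECT DAYNAME( '1987/05/02' ) FROM iq_dummy;
```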
Returns the number of days since an arbitrary starting date, returns the number of days between two specified
dates, or adds the specified <integer-expression> number of days to a given date.
Syntax
DAYS ( <datetime-expression> )
| ( <datetime-expression>, <datetime-expression> )
| ( <datetime-expression>, <integer-expression> )
Parameters
datetime-expression
Returns
Examples
● The following statement returns the integer value -366, which is the difference between the two dates:
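A sketch of such a statement (dates and times assumed): because 2004 is a leap year, the year-long interval spans 366 days, and the earlier second argument makes the result negative.

```sql
-- The second date is one leap-year earlier than the first, so the result is -366
SELECT DAYS( '2004-07-13 06:07:12', '2003-07-13 06:07:12' ) FROM iq_dummy;
```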
Related Information
Syntax
DB_ID ( [ <database-name> ] )
database-name
A string expression containing the database name. If database-name is a string constant, it must be
enclosed in quotes. If no database-name is supplied, the ID number of the current database is returned.
Returns
INT
Remarks
Note
Examples
Related Information
Syntax
DB_NAME ( [ <database-id> ] )
Parameters
database-id
Returns
VARCHAR
Remarks
Note
Returns the database name iqdemo, when executed against the iqdemo database:
Related Information
Syntax
Parameters
property-id
The ID or name of the database property to retrieve.
database-name
The database name, or the database ID number as returned by DB_ID. Typically, the database name is
used.
Returns
VARCHAR
Remarks
Note
Returns a string. The current database is used if the second argument is omitted.
Example
The following statement returns the page size of the current database, in bytes:
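A sketch of that statement; the PageSize property name is an assumption here:

```sql
SELECT DB_PROPERTY( 'PageSize' );
```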
Related Information
Syntax
DEGREES ( <numeric-expression> )
Parameters
numeric-expression
An angle in radians.
Returns
DOUBLE
Example
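A minimal sketch: 0.52 radians is approximately 29.79 degrees.

```sql
SELECT DEGREES( 0.52 ) FROM iq_dummy;
```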
Syntax
expression
A sort specification that can be any valid expression involving a column reference, aggregates, or
expressions invoking these items.
Returns
INTEGER
Remarks
DENSE_RANK is a rank analytical function. The dense rank of row R is defined as the number of rows preceding
and including R that are distinct within the groups specified in the OVER clause or distinct over the entire result
set. The difference between DENSE_RANK and RANK is that DENSE_RANK leaves no gap in the ranking sequence
when there is a tie. RANK leaves a gap when there is a tie.
DENSE_RANK requires an OVER (ORDER BY) clause. The ORDER BY clause specifies the parameter on which
ranking is performed and the order in which the rows are sorted in each group. This ORDER BY clause is used
only within the OVER clause and is not an ORDER BY for the SELECT. No aggregation functions in the rank
query are allowed to specify DISTINCT.
The OVER clause indicates that the function operates on a query result set. The result set is the rows that are
returned after the FROM, WHERE, GROUP BY, and HAVING clauses have all been evaluated. The OVER clause
defines the data set of the rows to include in the computation of the rank analytical function.
The ASC or DESC parameter specifies the ordering sequence ascending or descending. Ascending order is the
default.
DENSE_RANK is allowed only in the select list of a SELECT or INSERT statement or in the ORDER BY clause of
the SELECT statement. DENSE_RANK can be in a view or a union. The DENSE_RANK function cannot be used in
a subquery, a HAVING clause, or in the select list of an UPDATE or DELETE statement. Only one rank analytical
function is allowed per query.
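A sketch of a typical use, with table and column names assumed from the sample schema; tied salaries share a rank and the next distinct salary takes the immediately following rank:

```sql
-- Dense rank of salaries within each department
SELECT DepartmentID, Surname, Salary,
       DENSE_RANK() OVER ( PARTITION BY DepartmentID
                           ORDER BY Salary DESC ) AS SalaryRank
FROM Employees;
```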
Related Information
Compares two strings, evaluates the similarity between them, and returns a value from 0 to 4.
Syntax
Parameters
string-expression1
Returns
SMALLINT
Examples
Related Information
Returns a number from 1 to 7 representing the day of the week of the specified date, with Sunday=1,
Monday=2, and so on.
Syntax
DOW ( <date-expression> )
Parameters
date-expression
The date.
Returns
SMALLINT
Remarks
Use the DATE_FIRST_DAY_OF_WEEK option if you need Monday (or another day) to be the first day of the
week.
Example
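A minimal sketch: 2001-06-07 fell on a Thursday, so with the default Sunday=1 numbering the call returns 5.

```sql
SELECT DOW( '2001-06-07' ) FROM iq_dummy;
```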
Encrypts the specified value using the supplied encryption key and returns a LONG BINARY value.
Syntax
<algorithm-format> :
<algorithm> [ ( <format-clause> ) ]
<algorithm> :
AES
| AES256
| AES_FIPS
| AES256_FIPS
| RSA
| RSA_FIPS
<format-clause> :
FORMAT={ RAW[; <padding-clause> ] | INTERNAL }
<padding-clause> :
PADDING={ PKCS5
| ZEROES
| OAEP
| PKCS1
| ALL
| NONE }
Parameters
string-expression
The string to be encrypted. Binary values are supported. This parameter is case sensitive, even in
case-insensitive databases.
key
The encryption key (string) used to encrypt the <string-expression>. For AES, this same encryption
key must be supplied to decrypt the <string-expression> to obtain the original value. This parameter
is case sensitive, even in case-insensitive databases.
For strongly encrypted databases, store a copy of the key in a safe location. If you lose the encryption
key, there is no way to access the data, even with the assistance of Technical Support. The database
must be discarded and you must create a new database.
algorithm-format
This optional string parameter specifies the type of algorithm, format, and padding to use when encrypting
the <string-expression>.
algorithm
This optional string parameter specifies the type of algorithm used to encrypt the <string-
expression>. Specify one of the following formats:
AES
The data is encrypted using the AES algorithm. For the AES algorithm, <padding> can be PKCS5,
ZEROES, or NONE. The default padding is PKCS5.
AES256 The data is encrypted using the AES 256-bit algorithm. For AES256, <padding> can be
PKCS5, ZEROES, and NONE (if FORMAT=RAW).
AES_FIPS
The data is encrypted using the FIPS-certified version of the AES algorithm.
If the database server was started using the -fips server option, AES_FIPS is used as the default.
For AES_FIPS, <padding> can be PKCS5, ZEROES, and NONE (if FORMAT=RAW).
AES256_FIPS The data is encrypted using the FIPS-certified version of the AES 256-bit algorithm.
For AES256_FIPS, <padding> can be PKCS5, ZEROES, and NONE (if FORMAT=RAW).
RSA
For the RSA algorithm, when encrypting with a public key, <padding> can be PKCS1, OAEP, or
NONE. When encrypting with a private key, <padding> must be PKCS1. The default padding is
PKCS1.
If the RSA algorithm is specified, then the <initialization-vector> parameter is ignored and
FORMAT=RAW is ignored.
If a public key encrypts the message, then a private key must decrypt it. Using the same key for
encryption and decryption fails unless PADDING=NONE. However, if PADDING=NONE is set and
the incorrect key is supplied, then the function succeeds but returns meaningless data.
Note
The maximum message length for RSA encryption is equal to the key size minus 11 bytes for
PKCS1 padding and the key size minus 42 bytes for OAEP padding. If you specify
PADDING=NONE, then the message must be equal to the key size. Unlike AES, the length of
the output is not the same as the length of the input when using RSA encryption.
RSA_FIPS The same as RSA except that the data is encrypted using the FIPS-certified version of
the RSA algorithm.
FORMAT clause
Use the optional FORMAT clause to specify the storage format for the data. If the data was stored in
the proprietary storage format, then specify INTERNAL. If the encrypted data was stored as-is (that is,
it can be decrypted by any software that can decrypt the specified algorithm), then specify RAW. For
data stored as RAW, specify the <initialization-vector> parameter.
PADDING clause
Use the optional PADDING clause to specify the padding type for AES and RSA encryption. For AES
encryption, you must also specify FORMAT=RAW.
The padding type for decryption must match that used for encryption unless PADDING=ALL is used.
PKCS5
The data is padded by using the PKCS#5 algorithm. The encrypted data is 1-16 bytes longer than
the decrypted data. This option is only available for AES encryption. This is the default padding for
AES encryption.
ZEROES
The data is padded with zeros (0) before encryption. The encrypted data is 0-15 bytes longer than
the decrypted data. When the encrypted data is decrypted, the result is also padded with zeros.
OAEP The data is padded using Optimal Asymmetric Encryption Padding. This option is only
available for RSA encryption (RSA or RSA_FIPS).
PKCS1 The data is padded using the PKCS#1 algorithm. This option is only available for RSA
encryption (RSA or RSA_FIPS). This option is the default for RSA encryption (RSA or RSA_FIPS).
NONE
The data is not padded. The input data must be a multiple of the cipher block length (16-bytes) for
AES, or exactly equal to the key size for RSA.
initialization-vector
Specify <initialization-vector> when <format> is set to RAW. The string cannot be longer than 16
bytes. Any value less than 16 bytes is padded with 0 bytes. This string cannot be set to NULL.
<initialization-vector> is ignored when <format> is set to INTERNAL.
Returns
LONG BINARY
Remarks
The LONG BINARY value returned by this function is up to 31 bytes longer than the input <string-
expression>. The value returned by this function is not human-readable. Use the DECRYPT function to
decrypt a <string-expression> that was encrypted with the ENCRYPT function. For AES, to successfully
decrypt a <string-expression>, use the same encryption key and algorithm that were used to encrypt the
data. If you specify an incorrect encryption key, then an error is generated. A lost key results in inaccessible
data, from which there is no recovery.
When FORMAT=RAW is specified, the data is encrypted using raw encryption. Specify the encryption key,
initialization vector, and, optionally, the padding format. These same values must be specified when decrypting
the data. The decryption can be performed outside of the database server or by using the DECRYPT function.
Do not use raw encryption when the data is to be encrypted and decrypted only within the database server
because you must specify the initialization vector and the padding, and the encryption key cannot be verified
during decryption.
Note
For the ISENCRYPTED function to return meaningful results, data must be encrypted using the ENCRYPT
function with AES/AES256 and must not use FORMAT=RAW.
Standards
Example
The following trigger encrypts the user_pwd column of the user_info table. This column contains users'
passwords, and the trigger fires whenever a password value is changed.
The following example updates the secret column with an encrypted version of the password column.
The data is encrypted using encryption key 'TheEncryptionKey', raw-format AES encryption, PKCS#5
padding (the default), and the initialization vector 'ThisIsTheIV'.
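A sketch of the UPDATE described above; the parameter order (value, key, algorithm-format, initialization vector) is assumed from the syntax, and the user_info table with password and secret columns is hypothetical:

```sql
UPDATE user_info
SET secret = ENCRYPT( password, 'TheEncryptionKey',
                      'AES(FORMAT=RAW;PADDING=PKCS5)', 'ThisIsTheIV' );
```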
Provides the error message for the current error, or for a specified SQLSTATE or SQLCODE value.
Syntax
Parameters
sqlstate
String representing the SQLSTATE for which the error message is to be returned.
sqlcode
Integer representing the SQLCODE for which the error message is to be returned.
Returns
VARCHAR
Remarks
If no argument is supplied, the error message for the current state is supplied. Any substitutions (such as table
names and column names) are made.
If an argument is supplied, the error message for the supplied SQLSTATE or SQLCODE is returned, with no
substitutions. Table names and column names are supplied as placeholders ('???').
The ERRORMSG function returns SAP SQL Anywhere and SAP IQ error messages.
The following statement returns the error message for SQLCODE -813:
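A sketch of that statement:

```sql
SELECT ERRORMSG( -813 ) FROM iq_dummy;
```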
Syntax
EVENT_CONDITION ( <condition-name> )
Parameters
condition-name
The condition triggering the event. The possible values are preset in the database, and are case-insensitive.
Each condition is valid only for certain event types.
INT
Remarks
To define an event and its associated handler, use the CREATE EVENT statement.
Note
Example
Related Information
Syntax
EVENT_CONDITION_NAME ( <integer> )
Parameters
integer
Returns
VARCHAR
Remarks
You can use EVENT_CONDITION_NAME to obtain a list of all EVENT_CONDITION arguments by looping over
integers until the function returns NULL.
To define an event and its associated handler, use the CREATE EVENT statement.
Note
Related Information
Syntax
EVENT_PARAMETER ( <context-name> )
Parameters
context-name
One of the preset strings. The strings are case-insensitive, and carry the following information:
Returns
VARCHAR
Remarks
To define an event and its associated handler, use the CREATE EVENT statement.
Note
Related Information
Syntax
EXP ( <numeric-expression> )
Parameters
numeric-expression
The exponent.
Returns
DOUBLE
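Example
A minimal illustration: EXP returns e raised to the power of the given exponent.

```sql
-- e^1 = 2.71828..., e^0 = 1
SELECT EXP( 1 ), EXP( 0 );
```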
Calculates an exponential weighted moving average. Weightings determine the relative importance of each
quantity that makes up the average.
Syntax
Parameters
expression
period-expression
A numeric expression specifying the period for which the average is to be computed.
window-spec
Remarks
Similar to the WEIGHTED_AVG function, the weights in EXP_WEIGHTED_AVG decrease over time. However,
weights in WEIGHTED_AVG decrease arithmetically, whereas weights in EXP_WEIGHTED_AVG decrease
exponentially. Exponential weighting applies more weight to the most recent values, and decreases the weight
for older values while still applying some weight.
S*C+(1-S)*PEMA
In the calculation above, SAP IQ applies the smoothing factor by multiplying the current closing price (C) by the
smoothing constant (S) added to the product of the previous day’s exponential moving average value (PEMA)
and 1 minus the smoothing factor.
SAP IQ calculates the exponential moving average over the entire period specified by the OVER clause.
<period-expression> specifies the moving range of the exponential moving average.
Note
ROLLUP and CUBE are not supported in the GROUP BY clause. DISTINCT is not supported.
Example
The following example returns an exponential weighted average of salaries for employees in Florida with the
salary of recently hired employees contributing the most weight to the average. There are three rows used in
the weighting:
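The example query was elided; a sketch under the assumption that the SAP demo Employees table (with DepartmentID, Surname, Salary, StartDate, and State columns) is available:

```sql
-- Three-row exponential weighted average of salaries; the most
-- recently hired employees contribute the most weight
SELECT DepartmentID, Surname, Salary,
       EXP_WEIGHTED_AVG( Salary, 3 )
           OVER ( ORDER BY StartDate ASC
                  ROWS BETWEEN 2 PRECEDING AND CURRENT ROW ) AS ewa
FROM Employees
WHERE State = 'FL';
```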
Related Information
Syntax
Parameters
expression
Returns
Remarks
FIRST_VALUE returns the first value in a set of values, which is usually an ordered set. If the first value in the
set is null, then the function returns NULL unless you specify IGNORE NULLS. If you specify IGNORE NULLS,
then FIRST_VALUE returns the first non-null value in the set, or NULL if all values are null.
The data type of the returned value is the same as that of the input value.
You cannot use FIRST_VALUE or any other analytic function for <expression>. That is, you cannot nest
analytic functions, but you can use other built-in function expressions for <expression>.
The <window-spec> parameter represents usage as a window function in a SELECT statement. As such, you
can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW clause in the
SELECT statement.
If the <window-spec> does not contain an ORDER BY expression, or if the ORDER BY expression is not precise
enough to guarantee a unique ordering, then the result is arbitrary. If there is no <window-spec>, then the
result is arbitrary.
Note
Example
The following example returns the relationship, expressed as a percentage, between each employee’s salary
and that of the most recently hired employee in the same department:
In this example, employee 1658 is the first row for department 500, indicating that employee 1658 is the most
recent hire in that department, and therefore receives a percentage of 100%. Percentages for the remaining
employees in department 500 are calculated relative to that of employee 1658. For example, employee 1570
earns approximately 139% of what employee 1658 earns.
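The query was elided; a sketch assuming the SAP demo Employees table, consistent with the description above:

```sql
-- Each salary as a percentage of the most recently hired
-- employee's salary within the same department
SELECT DepartmentID, EmployeeID, Salary,
       100 * Salary / FIRST_VALUE( Salary )
           OVER ( PARTITION BY DepartmentID
                  ORDER BY StartDate DESC ) AS percentage
FROM Employees;
```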
Syntax
FLOOR ( <numeric-expression> )
Parameters
numeric-expression
Returns
DOUBLE
Examples
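The examples were elided; minimal illustrations of FLOOR's behavior:

```sql
-- FLOOR returns the largest whole number not greater than its argument
SELECT FLOOR( 123.45 ),   -- 123
       FLOOR( -123.45 ),  -- -124
       FLOOR( 123 );      -- 123
```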
Related Information
Syntax
GETDATE ()
Returns
TIMESTAMP
Remarks
Example
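A minimal illustration:

```sql
-- Returns the current date and time as a TIMESTAMP
SELECT GETDATE();
```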
Returns the graphical query plan to Interactive SQL in an XML format string.
Syntax
GRAPHICAL_PLAN ( <string-expression>
[, <statistics-level>
[, <cursor-type>
[, <update-status> ]]])
Parameters
string-expression
SQL statement for which the plan is to be generated. string-expression is generally a SELECT statement,
but it can also be an UPDATE or DELETE, INSERT SELECT, or SELECT INTO statement.
statistics-level
cursor-type
A cursor type, expressed as a string. Possible values are: asensitive, insensitive, sensitive, or keyset-driven. If cursor-type is not specified, asensitive is used by default.
update-status
A string parameter accepting one of the following values indicating how the optimizer should treat the
given cursor:
Returns
LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use GRAPHICAL_PLAN in a SELECT INTO statement, you
must have an Unstructured Data Analytics Option license or use CAST and set GRAPHICAL_PLAN to the
correct data type and size.
Note
If you do not provide an argument to the GRAPHICAL_PLAN function, the query plan is returned to you from the
cache. If there is no query plan in the cache, then this message appears:
If a user needs access to the plan, a user with the SET ANY SYSTEM OPTION system privilege must set option
QUERY_PLAN_TEXT_ACCESS ON for that user.
If QUERY_PLAN_TEXT_ACCESS is ON, and the query plan for the string expression is available in the cache
maintained on the server, the query plan from the cache is returned to you.
If the query plan is not available in the cache and you are authorized to view plans on the client, then a query
plan with optimizer estimates (query plan with NOEXEC option ON) is generated and appears on the Interactive
SQL client plan window.
When a user requests a query plan that has not yet been executed, the query plan is not available in the cache.
Instead, a query plan with optimizer estimates is returned without QUERY_PLAN_AFTER_RUN statistics.
You cannot access query plans for stored procedures using the GRAPHICAL_PLAN function.
Users can view the query plan for cursors opened for SAP IQ queries. A cursor is declared and opened using
DECLARE CURSOR and OPEN CURSOR. To obtain the query plan for the most recently opened cursor, use:
SELECT GRAPHICAL_PLAN ( );
With the QUERY_PLAN_AFTER_RUN option OFF, the plan appears after OPEN CURSOR or CLOSE CURSOR.
However, if QUERY_PLAN_AFTER_RUN is ON, CLOSE CURSOR must be executed before you request the plan.
● The following example passes a SELECT statement as a string parameter and returns the plan for
executing the query. It saves the plan in the file gplan.xml:
Note
If you use the OUTPUT statement’s HEXADECIMAL clause set to ASIS to get formatted plan output, the
values of characters are written without any escaping, even if the value contains control characters.
ASIS is useful for text that contains formatting characters such as tabs or carriage returns.
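A hedged sketch of such a call (the table name, query, and file name are illustrative only):

```sql
-- Generate the graphical plan as XML and save it to a file
SELECT GRAPHICAL_PLAN(
    'SELECT COUNT(*) FROM Employees' );
OUTPUT TO 'gplan.xml'
    HEXADECIMAL ASIS
    QUOTE '';
```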
● The following example returns the query plan from the cache, if available:
SELECT GRAPHICAL_PLAN ( );
Related Information
Identifies whether a column in a ROLLUP or CUBE operation result set is NULL because it is part of a subtotal
row, or NULL because of the underlying data.
Syntax
GROUPING ( <group-by-expression> )
Parameters
group-by-expression
An expression appearing as a grouping column in the result set of a query that uses a GROUP BY clause
with the ROLLUP or CUBE keyword. The function identifies subtotal rows added to the result set by a
ROLLUP or CUBE operation.
Returns
● 1 – indicates that <group-by-expression> is NULL because it is part of a subtotal row. The column is not a prefix column for that row.
● 0 – indicates that <group-by-expression> is a prefix column of a subtotal row.
Remarks
SAP IQ does not support the PERCENTILE_CONT or PERCENTILE_DISC functions with GROUP BY CUBE
operations.
Related Information
Syntax
Parameters
group-name-string-expression
Identifies the group to be checked.
user-name-string-expression
Identifies the user to be considered. If not supplied, then the current user name is assumed.
Returns
● An integer other than 0 – the user is a member of the specified group.
● 0 – the group does not exist, the user does not exist, or the user does not belong to the specified group.
Syntax
HEXTOBIGINT ( <hexadecimal-string> )
Parameters
hexadecimal-string
The hexadecimal value to be converted to a big integer (BIGINT). Input can be in the following forms, with
either a lowercase or uppercase “0x” in the prefix, or no prefix:
0x<hex-string>
0X<hex-string>
<hex-string>
Remarks
The HEXTOBIGINT function accepts hexadecimal integers and returns the BIGINT equivalent. Hexadecimal
integers can be provided as CHAR and VARCHAR value expressions, as well as BINARY and VARBINARY
expressions.
The HEXTOBIGINT function accepts a valid hexadecimal string, with or without a “0x” or “0X” prefix, enclosed
in single quotes.
For data type conversion failure on input, an error is returned unless the CONVERSION_ERROR option is set to
OFF. When CONVERSION_ERROR is OFF, invalid hexadecimal input returns NULL.
An error is returned if a BINARY or VARBINARY value exceeds 8 bytes, or if a CHAR or VARCHAR value exceeds 16 characters (18 characters when the value is prefixed with '0x').
Example
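A minimal illustration:

```sql
-- Both forms return 65535
SELECT HEXTOBIGINT( '0xFFFF' ), HEXTOBIGINT( 'FFFF' );
```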
Related Information
Syntax
HEXTOINT ( <hexadecimal-string> )
Parameters
hexadecimal-string
The string to be converted to an integer. Input can be in the following forms, with either a lowercase or
uppercase “x” in the prefix, or no prefix:
0x<hex-string>
0X<hex-string>
<hex-string>
Returns
The HEXTOINT function returns the platform-independent SQL INTEGER equivalent of the hexadecimal string.
The hexadecimal value represents a negative integer if the eighth digit from the right is one of the digits 8–9 or the uppercase or lowercase letters A–F, and all of the preceding leading digits are the uppercase or lowercase letter F.
The following is not a valid use of HEXTOINT since the argument represents a positive integer value that cannot
be represented as a signed 32-bit integer:
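A hedged illustration of such an invalid argument (the literal is illustrative only):

```sql
-- More than 8 significant hex digits: 0x0080000000 is 2^31,
-- a positive value outside the signed 32-bit range
SELECT HEXTOINT( '0x0080000000' );
```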
INT
Remarks
For invalid hexadecimal input, SAP IQ returns an error unless the CONVERSION_ERROR option is OFF. When
CONVERSION_ERROR is OFF, invalid hexadecimal input returns NULL.
The database option ASE_FUNCTION_BEHAVIOR specifies that output of SAP IQ functions, including
INTTOHEX and HEXTOINT, is consistent with the output of SAP Adaptive Server Enterprise functions.
● SAP IQ HEXTOINT assumes the input is a hexadecimal string of 8 characters; if the string is shorter than 8 characters, it is left-padded with zeros.
● SAP IQ HEXTOINT accepts a maximum of 16 characters prefixed with 0x (a total of 18 characters); use
caution, as a large input value can result in an integer value that overflows the 32-bit signed integer output
size.
● The data type of the output of the SAP IQ HEXTOINT function is assumed to be a 32-bit signed integer.
● SAP IQ HEXTOINT accepts a 32-bit hexadecimal integer as a signed representation.
Example
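A minimal illustration:

```sql
SELECT HEXTOINT( '0xFFFF' );      -- 65535
SELECT HEXTOINT( 'FFFFFFFF' );    -- -1 (all leading digits are F,
                                  --  so the value is negative)
```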
Related Information
Returns a number from 0 to 23 corresponding to the hour component of the specified date/time.
Syntax
HOUR ( <datetime-expression> )
Parameters
datetime-expression
Returns
SMALLINT
Example
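A minimal illustration:

```sql
-- Returns 14
SELECT HOUR( '1998-07-09 14:12:36' );
```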
Returns the number of hours since an arbitrary starting date and time, the number of whole hours between two
specified times, or adds the specified integer-expression number of hours to a time.
Syntax
HOURS ( <datetime-expression>
| <datetime-expression>, <datetime-expression>
| <datetime-expression>, <integer-expression> )
Parameters
datetime-expression
INT
Remarks
The second syntax returns the number of whole hours from the first date/time to the second date/time. The
number might be negative.
Examples
● The following statement returns the value 4, to signify the difference between the two times:
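The statement was elided; a sketch consistent with the description (the timestamps are illustrative only):

```sql
-- Whole hours from the first time to the second: 4
SELECT HOURS( '1999-07-13 06:07:12', '1999-07-13 10:07:12' );
```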
Related Information
Syntax
HTML_DECODE( <string> )
Parameters
string
Returns
Note
The result data type is a LONG VARCHAR. If you use HTML_DECODE in a SELECT INTO statement, you
must have an Unstructured Data Analytics Option license or use CAST and set HTML_DECODE to the
correct data type and size.
Remarks
This function returns the string argument after making the appropriate substitutions. The following table
contains a sampling of the acceptable character entities.
Character entity Substitution
&quot; "
&#39; '
&amp; &
&lt; <
&gt; >
When a Unicode codepoint is specified, if the value can be converted to a character in the database character
set, it is converted to a character. Otherwise, it is returned uninterpreted.
SAP IQ supports all character entity references specified in the HTML 4.01 Specification.
Standards
Example
The following statement returns the string <p>The piano was made by 'Steinway & Sons'.</p>:
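A hedged reconstruction of the elided statement (the input encodes the markup and punctuation as character entities):

```sql
SELECT HTML_DECODE(
  '&lt;p&gt;The piano was made by &#39;Steinway &amp; Sons&#39;.&lt;/p&gt;' );
```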
Syntax
HTML_ENCODE( <string> )
Parameters
string
Note
The result data type is a LONG VARCHAR. If you use HTML_ENCODE in a SELECT INTO statement, you must have an Unstructured Data Analytics Option license or use CAST and set HTML_ENCODE to the correct data type and size.
Remarks
This function returns the string argument after making the following set of substitutions:
Characters Substitution
" &quot;
' &#39;
& &amp;
< &lt;
> &gt;
Standards
Example
The following example returns the string '&lt;!DOCTYPE HTML PUBLIC &quot;-//W3C//DTD HTML 4.01//EN&quot;&gt;'.
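A hedged reconstruction of the elided statement:

```sql
-- Encodes the markup characters and double quotes as character entities
SELECT HTML_ENCODE(
  '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">' );
```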
Syntax
HTML_PLAN ( <string-expression> )
Parameters
string-expression
SQL statement for which the plan is to be generated. It is primarily a SELECT statement but can be an
UPDATE or DELETE statement.
Remarks
Note
If you do not provide an argument to the HTML_PLAN function, the query plan is returned to you from the
cache. If there is no query plan in the cache, this message appears:
No plan available
The behavior of the HTML_PLAN function is controlled by database options QUERY_PLAN_TEXT_ACCESS and
QUERY_PLAN_TEXT_CACHING. If QUERY_PLAN_TEXT_ACCESS is OFF (the default), this message appears:
If QUERY_PLAN_TEXT_ACCESS is ON, and the query plan for the string expression is available in the cache
maintained on the server, the query plan from the cache is returned to you.
The HTML_PLAN function can be used to return query plans to Interactive SQL using SELECT, UPDATE, DELETE,
INSERT SELECT, and SELECT INTO.
Users can view the query plan for cursors opened for SAP IQ queries. To obtain the query plan for the most
recently opened cursor, use:
SELECT HTML_PLAN ( );
With QUERY_PLAN_AFTER_RUN option OFF, the plan appears after OPEN CURSOR or CLOSE CURSOR. However,
if QUERY_PLAN_AFTER_RUN is ON, CLOSE CURSOR must be executed before you request the plan.
Examples
● The following example passes a SELECT statement as a string parameter and returns the HTML plan for
executing the query. It saves the plan in the file hplan.html:
The OUTPUT TO clause HEXADECIMAL ASIS is useful for text that contains formatting characters such as
tabs or carriage returns. When set to ASIS, values are written as is, without any escaping, even if the values
contain control characters.
● The following example returns the HTML query plan from the cache, if available:
SELECT HTML_PLAN ( );
Related Information
Syntax
HTTP_DECODE( <string> )
Parameters
string
Returns
Remarks
This function returns the string argument after replacing all character sequences of the form %<nn>, where
<nn> is a hexadecimal value, with the character with code <nn>. In addition, all plus signs (+) are replaced with
spaces.
Standards
Example
The following statement returns the string http://test.sap.com:
SELECT HTTP_DECODE( 'http%3A%2F%2Ftest.sap.com' );
Encodes strings for use with HTTP. This is also known as URL encoding.
Syntax
HTTP_ENCODE( <string> )
Parameters
string
Returns
Remarks
This function returns the string argument after making the following set of substitutions. In addition, all characters with hexadecimal codes less than 0x20 or greater than 0x7E are replaced with %<nn>, where <nn> is the character code.
Character Substitution
space %20
" %22
# %23
% %25
& %26
, %2C
; %3B
< %3C
> %3E
[ %5B
\ %5C
] %5D
` %60
{ %7B
| %7C
} %7D
character codes <nn> less than 0x20 or greater than 0x7E %<nn>
Example
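A minimal sketch of the substitutions in the table above:

```sql
-- Returns 'a%20b%26c%3Cd%3E' (space, &, <, and > are %-encoded)
SELECT HTTP_ENCODE( 'a b&c<d>' );
```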
Syntax
Parameters
header-field-name
The name of the request header field whose value is to be retrieved.
instance
The instance of the header to retrieve. If more than one header has the same name, then the instance is the number of the field instance. A value of 0 or NULL returns the most recent instance of the header. The default is 0.
Returns
LONG VARCHAR.
Note
The result data type is a LONG VARCHAR. If you use HTTP_HEADER in a SELECT INTO statement, you
must have an Unstructured Data Analytics Option license or use CAST and set HTTP_HEADER to the
correct data type and size.
This function returns the value of the named HTTP request header field, or NULL if it does not exist or if it is not
called from an HTTP service. It is used when processing an HTTP request via a web service.
Some headers that may be of interest when processing an HTTP web service request include the following:
Cookie
The cookie value(s), if any, stored by the client, that are associated with the requested URI.
Referer
The Internet host name or IP address and port number of the resource being requested, as obtained from
the original URI given by the user or referring resource (for example, webserver.sample.com:8082).
User-Agent
The name of the client application (for example, Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0)
Gecko/20100101 Firefox/14.0).
Accept-Encoding
A list of encodings for the response that are acceptable to the client application (for example, gzip,
deflate).
More information about these headers is available at HTTP Header Field Definitions .
The following special headers allow access to the elements within the request line of a client request.
@HttpMethod
Returns the type of request being processed. Possible values include DELETE, HEAD, GET, PUT, or POST.
@HttpURI
The full URI of the request, as it was specified in the HTTP request (for example, /myservice?
&id=-123&version=109&lang=en).
@HttpVersion
Returns the HTTP version of the request (for example, HTTP/1.1).
@HttpQueryString
Returns the query portion of the requested URI if it exists (for example,
id=-123&version=109&lang=en).
Standards
The following statement retrieves the fifth instance of the Cookie header value when used within a stored
procedure that is called by an HTTP web service:
The following statement displays the name and values of the HTTP request headers in the database server
messages window when used within a stored procedure that is called by an HTTP web service:
BEGIN
    DECLARE header_name LONG VARCHAR;
    DECLARE header_value LONG VARCHAR;
    SET header_name = NULL;
header_loop:
    LOOP
        SET header_name = NEXT_HTTP_HEADER( header_name );
        IF header_name IS NULL THEN
            LEAVE header_loop;
        END IF;
        SET header_value = HTTP_HEADER( header_name );
        MESSAGE 'HEADER: ', header_name, '=',
            header_value TO CONSOLE;
    END LOOP;
END;
Syntax
Parameters
header-field-name
The name of the response header field whose value is to be retrieved.
instance
The instance of the header to retrieve. If more than one header has the same name, then the instance is the number of the field instance. A value of 0 or NULL returns the most recent instance of the header. The default is 0.
LONG VARCHAR
Remarks
This function returns the value of the named HTTP response header field, or NULL if a header for the given
<header-field-name> does not exist or if it is not called from an HTTP service.
Some headers that may be of interest when processing an HTTP web service response include the following:
Connection
The Connection field allows the sender to specify options that are desired for that particular connection. In
a SAP IQ HTTP server response, the option is always "close".
Content-Length
The Content-Length field indicates the size of the response body, in decimal number of octets.
Content-Type
The Content-Type field indicates the media type of the body sent to the recipient. For example: text/xml
Date
The Date field represents the date and time at which the response was originated.
Expires
The Expires field gives the date and time after which the response is considered stale.
Location
The Location field is used to redirect the recipient to a location for completion of the request or
identification of a new resource.
Server
The Server field contains information about the software used by the origin server to handle the request. In
a SAP IQ HTTP server response, the web server name together with the version number is returned.
Transfer-Encoding
The Transfer-Encoding field indicates what (if any) type of transformation has been applied to the message
body to safely transfer it between the sender and the recipient.
User-Agent
The User-Agent field contains information about the user agent originating the request. In a SAP IQ HTTP
server response, the web server name together with the version number is returned.
WWW-Authenticate
More information about these headers is available at HTTP Header Field Definitions .
The following special header allows access to the status within the response of a server response.
@HttpStatus
Example
The following statement displays the name and values of the HTTP response headers in the database
server messages window when used within a stored procedure that is called by an HTTP web service:
BEGIN
    DECLARE header_name LONG VARCHAR;
    DECLARE header_value LONG VARCHAR;
    SET header_name = NULL;
header_loop:
    LOOP
        SET header_name = NEXT_HTTP_RESPONSE_HEADER( header_name );
        IF header_name IS NULL THEN
            LEAVE header_loop;
        END IF;
        SET header_value = HTTP_RESPONSE_HEADER( header_name );
        MESSAGE 'RESPONSE HEADER: ', header_name, '=', header_value TO CONSOLE;
    END LOOP;
END;
Syntax
Parameters
var-name
The name of the variable whose value is to be retrieved.
instance
If more than one variable has the same name, the instance number of the field instance, or NULL to get the first one. Useful for SELECT lists that permit multiple selections.
attribute
In a multi-part request, the attribute can specify a header field name which returns the value of the header
for the multi-part section.
When an attribute is not specified, the returned value is %-decoded and character-set translated to the
database character set. UTF %-encoded data is supported in this mode.
'@BINARY'
Returns an x-www-form-urlencoded binary data value. This mode indicates that the returned value is %-decoded but not character-set translated. UTF-8 %-encoding is not supported in this mode, since %-encoded data is simply decoded into its equivalent byte representation.
'@TRANSPORT'
Returns the raw HTTP transport form of the value, where %-encodings are preserved.
Returns
LONG VARCHAR.
Note
The result data type is a LONG VARCHAR. If you use HTTP_VARIABLE in a SELECT INTO statement, you
must have an Unstructured Data Analytics Option license or use CAST and set HTTP_VARIABLE to the
correct data type and size.
Remarks
This function returns the value of the named HTTP variable. It is used when processing an HTTP request within
a web service.
When the web service request is a POST, and the variable data is posted as multipart/form-data, the HTTP
server receives HTTP headers for each individual variable. When the <attribute> parameter is specified, the
HTTP_VARIABLE function returns the associated multipart/form-data header value from the POST request for
the particular variable. For a variable representing a file, an attribute of Content-Disposition, Content-Type, and
@BINARY would return the filename, media-type, and file contents respectively.
Normally, all input data goes through character set translation between the client (for example, a browser)
character set, and the character set of the database. However, if @BINARY is specified for <attribute>, the
variable value is returned without going through character set translation or %-decoding. This may be useful
when receiving binary data, such as image data, from a client.
This function returns NULL when the specified instance does not exist or when the function is called from
outside of an execution of a web service.
Standards
The following statement retrieves the values of the HTTP variables indicated in the sample URL when used
within a stored procedure that is called by an HTTP web service:
-- http://sample.com/demo/ShowDetail?product_id=300&customer_id=101
BEGIN
DECLARE v_customer_id LONG VARCHAR;
DECLARE v_product_id LONG VARCHAR;
SET v_customer_id = HTTP_VARIABLE( 'customer_id' );
SET v_product_id = HTTP_VARIABLE( 'product_id' );
CALL ShowSalesOrderDetail( v_customer_id, v_product_id );
END;
The following statements request the Content-Disposition and Content-Type headers of the image variable
when used within a stored procedure that is called by an HTTP web service:
The following statement requests the value of the image variable in its current character set without going
through character set translation when used within a stored procedure that is called by an HTTP web
service:
Syntax
Parameters
expression1
Returns
The data type returned depends on the data type of <expression2> and <expression3>.
Remarks
If the first expression is the NULL value, then the value of the second expression is returned. If the first
expression is not NULL, the value of the third expression is returned. If the first expression is not NULL and
there is no third expression, then the NULL value is returned.
Examples
● The following statement returns NULL, because the first expression is not NULL and there is no third
expression:
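Sketches of both cases described in the Remarks (the literal -66 is illustrative only):

```sql
SELECT IFNULL( NULL, -66 );   -- first expression is NULL: returns -66
SELECT IFNULL( -66, -66 );    -- first expression is not NULL and there is
                              -- no third expression: returns NULL
```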
Syntax
Parameters
table-name
The name of the table.
key_#
A key in the index specified by <index-id>. This parameter specifies the column number in the index. For a single-column index, <key_#> is equal to 0. For a multicolumn index, <key_#> is equal to 0 for the first column, 1 for the second column, and so on.
user-id
(Optional) The user ID of the owner of <table-name>. If <user-id> is not specified, this value defaults to
the caller’s user ID.
Related Information
Syntax
Parameters
numeric-expression
The position after which <string-expression2> is to be inserted. Use zero to insert a string at the
beginning.
string-expression1
Returns
LONG NVARCHAR or LONG VARCHAR, depending on the data type of the input expressions. This function
returns LONG NVARCHAR or LONG VARCHAR, even if the input expressions are BINARY.
Note
The result data type is a LONG VARCHAR. If you use INSERTSTR in a SELECT INTO statement, you must
have an Unstructured Data Analytics Option license or use CAST and set INSERTSTR to the correct data
type and size.
Example
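The example was elided; a minimal sketch based on the parameter descriptions above (position, target string, string to insert):

```sql
-- Position 0 inserts 'Hello ' at the beginning of 'World',
-- returning 'Hello World'
SELECT INSERTSTR( 0, 'World', 'Hello ' );
```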
Related Information
Syntax
INTTOHEX ( <integer-expression> )
Parameters
integer-expression
Returns
VARCHAR
Remarks
If data type conversion of the input to INTTOHEX fails, SAP IQ returns an error unless the CONVERSION_ERROR option is OFF. In that case, the result is NULL.
The database option ASE_FUNCTION_BEHAVIOR specifies that output of SAP IQ functions, including
INTTOHEX and HEXTOINT, be consistent with the output of SAP Adaptive Server Enterprise functions. The
default value of ASE_FUNCTION_BEHAVIOR is OFF.
Examples
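A minimal sketch (the exact width and padding of the output can depend on the ASE_FUNCTION_BEHAVIOR option):

```sql
-- Returns the hexadecimal string equivalent of 420 (hex 1A4)
SELECT INTTOHEX( 420 );
```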
Related Information
Syntax
ISDATE ( <string> )
Parameters
string
The string to be analyzed to determine whether the string represents a valid date.
Returns
INT
If a conversion is possible, the function returns 1; otherwise, it returns 0. If the argument is null, 0 is returned.
Example
The following example tests whether the birth_date column holds valid dates, returning invalid dates as
NULL, and valid dates in date format:
select
case when isdate(birth_date)=0 then NULL
else cast(birth_date as date)
end
from MyData;
------------------------------------
(NULL)
(NULL)
1990-12-09
Returns the value of the first non-NULL expression in the parameter list.
Syntax
Parameters
expression
Returns
The return type for this function depends on the expressions specified. That is, when the database server
evaluates the function, it first searches for a data type in which all the expressions can be compared. When
found, the database server compares the expressions and then returns the result in the type used for the
comparison. If the database server cannot find a common comparison type, an error is returned.
Remarks
Example
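The example was elided; a sketch, assuming this section documents the ISNULL list form (the literals are illustrative only):

```sql
-- Returns -66, the first non-NULL value in the parameter list
SELECT ISNULL( NULL, -66, 55, NULL, 45, NULL );
```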
Related Information
Syntax
ISNUMERIC ( <string> )
Parameters
string
The string to be analyzed to determine whether the string represents a valid numeric value.
Returns
INT
Remarks
If a conversion is possible, the function returns 1; otherwise, it returns 0. If the argument is null, 0 is returned.
Example
The following example tests whether the height_in_cms column holds valid numeric data, returning invalid
numeric data as NULL, and valid numeric data in int format:
data height_in_cms
------------------------
asde
asde
180
156
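The query itself was elided; a sketch mirroring the ISDATE example above (MyData and height_in_cms are illustrative names taken from the sample output):

```sql
SELECT CASE WHEN ISNUMERIC( height_in_cms ) = 0 THEN NULL
            ELSE CAST( height_in_cms AS INT )
       END
FROM MyData;
```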
An interrow function that returns the value of an attribute in a previous row in the table or table partition.
Syntax
Parameters
value_expr
Table column or expression defining the offset data to return from the table.
offset
The number of rows above the current row, expressed as a non-negative exact numeric literal, or as a SQL variable with exact numeric data. The permitted range is 0 to 2^31.
default
The value to return if the <offset> value goes beyond the scope of the table or partition.
window partition
(Optional) One or more value expressions separated by commas indicating how you want to divide the set
of result rows.
window ordering
Defines the expressions for sorting rows within window partitions, if specified, or within the result set if you
did not specify a window partition.
Remarks
The LAG function requires an OVER (ORDER BY) window specification. The window partitioning clause in the OVER (ORDER BY) clause is optional. The OVER (ORDER BY) clause must not contain a window frame ROWS/RANGE specification.
You cannot define an analytic expression in <value_expr>. That is, you cannot nest analytic functions, but
you can use other built-in function expressions for <value_expr>.
The default value of <default> is NULL. The data type of <default> must be implicitly convertible to the data
type of the <value_expr> value or else SAP IQ generates a conversion error.
Example
The following example returns salary data from the Employees table, partitions the data by department ID, and
orders the data according to employee start date. The LAG function returns the salary from the previous row (a
physical offset of one row) and displays it under the LAG (Salary) column:
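The query was elided; a sketch assuming the SAP demo Employees table:

```sql
-- Previous-row salary within each department, ordered by start date
SELECT DepartmentID, Surname, StartDate, Salary,
       LAG( Salary, 1 )
           OVER ( PARTITION BY DepartmentID
                  ORDER BY StartDate ) AS "LAG(Salary)"
FROM Employees;
```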
Related Information
Syntax
Parameters
expression
Returns
Remarks
LAST_VALUE returns the last value in a set of values, which is usually an ordered set. If the last value in the set
is null, then the function returns NULL unless you specify IGNORE NULLS. If you specify IGNORE NULLS, then
LAST_VALUE returns the last non-null value in the set, or NULL if all values are null.
The data type of the returned value is the same as that of the input value.
You cannot use LAST_VALUE or any other analytic function for expression. That is, you cannot nest analytic
functions, but you can use other built-in function expressions for expression.
The <window-spec> parameter represents usage as a window function in a SELECT statement. As such, you
can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW clause in the
SELECT statement.
If the <window-spec> does not contain an ORDER BY expression, or if the ORDER BY expression is not precise
enough to guarantee a unique ordering, then the result is arbitrary. If there is no <window-spec>, then the
result is arbitrary.
Note
Example
The following example returns the salary of each employee, plus the name of the employee with the highest
salary in their department:
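The query was elided; a sketch assuming the SAP demo Employees table:

```sql
-- The last value in each partition, ordered by salary, is the
-- name of the highest-paid employee in that department
SELECT Surname, Salary, DepartmentID,
       LAST_VALUE( Surname )
           OVER ( PARTITION BY DepartmentID
                  ORDER BY Salary
                  ROWS BETWEEN UNBOUNDED PRECEDING
                           AND UNBOUNDED FOLLOWING ) AS highest_paid
FROM Employees;
```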
Syntax
LCASE ( <string-expression> )
Parameters
string-expression
Returns
● CHAR
● NCHAR
● LONG VARCHAR
● VARCHAR
● NVARCHAR
Remarks
The result data type is a LONG VARCHAR. If you use LCASE in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license or use CAST and set LCASE to the correct data type and size.
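Example
A minimal illustration:

```sql
-- Returns 'hello world'
SELECT LCASE( 'Hello World' );
```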
Related Information
An interrow function that returns the value of an attribute in a subsequent row in the table or table partition.
Syntax
Parameters
value_expr
Table column or expression defining the offset data to return from the table.
offset
The number of rows below the current row, expressed as a non-negative exact numeric literal, or as a SQL variable with exact numeric data. The permitted range is 0 to 2^31.
default
The value to return if the <offset> value goes beyond the scope of the table or partition.
window partition
(Optional) One or more value expressions separated by commas indicating how you want to divide the set
of result rows.
window ordering
Remarks
The LEAD function requires an OVER (ORDER BY) window specification. The window partitioning clause in the OVER (ORDER BY) clause is optional. The OVER (ORDER BY) clause must not contain a window frame ROWS/RANGE specification.
You cannot define an analytic expression in <value_expr>. That is, you cannot nest analytic functions, but
you can use other built-in function expressions for <value_expr>.
You must enter a non-negative numeric data type for <offset>. Entering 0 returns the current row. Entering a
negative number generates an error.
The default value of <default> is NULL. The data type of <default> must be implicitly convertible to the data
type of the <value_expr> value or else SAP IQ generates a conversion error.
Example
The following example returns salary data from the Employees table, partitions the data by department ID, and
orders the data according to employee start date. The LEAD function returns the salary from the next row (a
physical offset of one row) and displays it under the LEAD (Salary) column:
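A sketch of the query described above, assuming the sample Employees table:

```sql
SELECT DepartmentID, Surname, StartDate, Salary,
       LEAD( Salary ) OVER ( PARTITION BY DepartmentID
                             ORDER BY StartDate ) AS "LEAD (Salary)"
FROM Employees;
```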
Syntax
LEFT ( <string-expression>, <numeric-expression> )
Parameters
string-expression
The string.
numeric-expression
The number of characters to return.
Returns
● LONG VARCHAR
● LONG NVARCHAR
Remarks
The result data type is a LONG VARCHAR. If you use LEFT in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license or use CAST and set LEFT to the correct data type and size.
If the string contains multibyte characters, and the proper collation is being used, the number of bytes
returned may be greater than the specified number of characters.
Example
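A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT LEFT( 'chocolate', 5 ) FROM iq_dummy;  -- returns choco
```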
Related Information
Takes one argument as an input of type BINARY or STRING and returns the number of characters, as defined
by the database's collation sequence, of a specified string expression, excluding trailing blanks.
Syntax
LEN ( <string_expr> )
Parameters
string_expr
The result may differ from the string’s byte length for multi-byte character sets.
BINARY and VARBINARY are also allowed, in which case LEN() returns the number of bytes of the input.
Example
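Minimal examples showing that trailing blanks are excluded:

```sql
SELECT LEN( 'chocolate' ) FROM iq_dummy;   -- returns 9
SELECT LEN( 'choco    ' ) FROM iq_dummy;   -- returns 5 (trailing blanks excluded)
```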
Related Information
Syntax
LENGTH ( <string-expression> )
Parameters
string-expression
The string.
Returns
INT
Remarks
If the string contains multibyte characters, and the proper collation is being used, LENGTH returns the number
of characters, not the number of bytes. If the string is of BINARY data type, the LENGTH function behaves as
BYTE_LENGTH.
Example
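A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT LENGTH( 'chocolate' ) FROM iq_dummy;  -- returns 9
```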
Related Information
Syntax
LN ( <numeric-expression> )
Parameters
numeric-expression
A column, variable, or expression with a data type that is either exact numeric, approximate numeric,
money, or any type that can be implicitly converted to one of these types. For other data types, the LN
function generates an error. The return value is of DOUBLE data type.
Remarks
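A minimal example (the result is a DOUBLE and is approximate):

```sql
SELECT LN( 2.718281828 ) FROM iq_dummy;  -- approximately 1.0
```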
Related Information
Syntax
Parameters
string-expression1
The string to be searched.
string-expression2
The string for which you are searching. This string is limited to 255 bytes.
numeric-expression
The character position in the string to begin the search. The first character is position 1. If the starting
offset is negative, the locate function returns the last matching string offset rather than the first. A
negative offset indicates how much of the end of the string is to be excluded from the search. The number
of bytes excluded is calculated as (-1 * offset) -1.
The <numeric-expression> is a 32-bit signed integer for CHAR, VARCHAR, and BINARY columns.
If <numeric-expression> is specified, the search starts at that offset into the string being searched.
If <numeric-expression> is not specified, LOCATE returns only the position of the first instance of the
specified string.
Returns
INT
Remarks
The first string can be a long string (longer than 255 bytes), but the second is limited to 255 bytes. A second
string longer than 255 bytes causes an error.
If the string does not contain the specified string, the LOCATE function returns zero (0).
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
Examples
SELECT LOCATE( 'office party this week - rsvp as soon as possible', 'party',
2 ) FROM iq_dummy
● In the second example, the <numeric-expression> starting offset for the search is a negative number:
18 c:\test\functions\locate.sql
18 d:\test\functions\trim.sql
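Output rows like those above (position 18 for each path) are consistent with a negative-offset search for the last path separator. A sketch, assuming a hypothetical table file_list with a VARCHAR column file_name holding the paths shown:

```sql
SELECT LOCATE( file_name, '\', -1 ) AS last_sep, file_name
FROM file_list;
```

With a negative offset, LOCATE returns the last matching offset; the last '\' in 'c:\test\functions\locate.sql' is at character position 18.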
In this section:
Related Information
The LOCATE function returns a 64-bit signed integer containing the position of the specified string in the large
object column or variable parameter. For CHAR and VARCHAR columns, LOCATE returns a 32-bit signed integer
position.
Syntax
Parameters
large-object-column
The name of the LONG VARCHAR or LONG BINARY column or variable to search.
string-expression
The string for which you are searching. This string is limited to 255 bytes.
numeric-expression
The character position or offset at which to begin the search in the string. The <numeric-expression> is
a 64-bit signed integer for LONG VARCHAR and LONG BINARY columns and is a 32-bit signed integer for
CHAR, VARCHAR, and BINARY columns. The first character is position 1. If the starting offset is negative,
LOCATE returns the last matching string offset, rather than the first. A negative offset indicates how much
of the end of the string to exclude from the search. The number of characters excluded is calculated as ( -1
* offset ) - 1.
Remarks
● All the positions or offsets, returned or specified, in the LOCATE function are always character offsets and
may be different from the byte offset for multibyte data.
● If the large object cell being searched contains more than one instance of the string:
○ If <numeric-expression> is specified, LOCATE starts the search at that offset in the string.
○ If <numeric-expression> is not specified, LOCATE returns only the position of the first instance.
● If the column does not contain the string, LOCATE returns zero (0).
● Searching for a string longer than 255 bytes returns NULL.
● Searching for a zero-length string returns 1.
● If any of the arguments is NULL, the result is NULL.
● LOCATE supports searching LONG VARCHAR and LONG BINARY columns and LONG VARCHAR and LONG
BINARY variables of any size of data. Currently, a SQL variable can hold up to 2 GB - 1 in length.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Syntax
LOG ( <numeric-expression> )
Parameters
numeric-expression
The number.
Returns
This function converts its argument to DOUBLE, performs the computation in double-precision floating point,
and returns a DOUBLE as the result. If the parameter is NULL, the result is NULL.
Remarks
LN is an alias of LOG.
Example
Syntax
LIST
( [ ALL | DISTINCT ] <string-expression>
[, '<delimiter-string>' ]
[ ORDER BY <order-by-expression> [ ASC | DESC ], ... ] )
Parameters
string-expression
A string expression, usually a column name. When ALL is specified (the default), for each row in the group,
the value of string-expression is added to the result string, with values separated by delimiter-string. When
DISTINCT is specified, only unique string-expression values are added.
delimiter-string
A delimiter string for the list items. The default setting is a comma. There is no delimiter if a value of NULL
or an empty string is supplied. The delimiter-string must be a constant.
order-by-expression
Returns
LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use LIST in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license or use CAST and set LIST to the correct data type and size.
The LIST function returns the concatenation (with delimiters) of all the non-NULL values of X for each row in the group. If the group contains no non-NULL values of X, then LIST( X ) returns the empty string.
NULL values and empty strings are ignored by the LIST function.
A LIST function cannot be used as a window function, but it can be used as input to a window function.
Order By
There is no comma preceding <order-by-expression>, which makes the syntax easy to use in the case where no delimiter-string is supplied.
<order-by-expression> cannot be an integer literal. However, it can be a variable that contains an integer
literal.
When an ORDER BY clause contains constants, they are interpreted by the optimizer and then replaced by an
equivalent ORDER BY clause. For example, the optimizer interprets ORDER BY 'a' as ORDER BY expression.
A query block containing more than one aggregate function with valid ORDER BY clauses can be executed if the
ORDER BY clauses can be logically combined into a single ORDER BY clause. For example, the following
clauses:
SAP IQ supports SQL/2008 language feature F441, "Extended set function support", which permits operands
of aggregate functions to be arbitrary expressions that are not column references.
SAP IQ does not support optional SQL/2008 feature F442, "Mixed column references in set functions". SAP IQ does not permit the arguments of an aggregate function to include both a column reference from the query block containing the LIST function and an outer reference.
● This statement returns the value 487 Kennedy Court, 547 School Street:
● This statement lists employee IDs. Each row in the result set contains a comma-delimited list of employee
IDs for a single department:
LIST( EmployeeID )
102,105,160,243,247,249,266,278,...
129,195,299,467,641,667,690,856,...
148,390,586,757,879,1293,1336,...
184,207,318,409,591,888,992,1062,...
191,703,750,868,921,1013,1570,...
● This statement sorts the employee IDs by the last name of the employee:
Sorted IDs
1013,191,750,921,868,1658,...
1751,591,1062,1191,992,888,318,...
1336,879,586,390,757,148,1483,...
1039,129,1142,195,667,1162,902,...
160,105,1250,247,266,249,445,...
● This statement returns semicolon-separated lists. Note the position of the ORDER BY clause and the list
separator:
Sorted IDs
1013;191;750;921;868;1658;703;...
1751;591;1062;1191;992;888;318;...
1336;879;586;390;757;148;1483;...
1039;129;1142;195;667;1162;902; ...
160;105;1250;247;266;249;445;...
Be sure to distinguish the previous statement from the following statement, which returns comma-
separated lists of employee IDs sorted by a compound sort-key of ( Surname, ';' ):
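A sketch of the semicolon-delimited form described above, assuming the sample Employees table:

```sql
SELECT DepartmentID,
       LIST( EmployeeID, ';' ORDER BY Surname ) AS "Sorted IDs"
FROM Employees
GROUP BY DepartmentID;
```

Moving the ';' after ORDER BY Surname instead, as in LIST( EmployeeID ORDER BY Surname, ';' ), sorts by the compound key ( Surname, ';' ) and uses the default comma delimiter.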
Syntax
LOG10 ( <numeric-expression> )
Parameters
numeric-expression
The number.
Returns
This function converts its argument to DOUBLE, and performs the computation in double-precision floating
point. If the parameter is NULL, the result is NULL.
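A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT LOG10( 1000 ) FROM iq_dummy;  -- returns 3.0
```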
Related Information
Syntax
LOWER ( <string-expression> )
Parameters
string-expression
The string to be converted to lowercase.
Returns
● CHAR
● NCHAR
● LONG VARCHAR
● VARCHAR
● NVARCHAR
Remarks
The result data type is a LONG VARCHAR. If you use LOWER in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license or use CAST and set LOWER to the correct data type and size.
Example
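A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT LOWER( 'HELLO' ) FROM iq_dummy;  -- returns hello
```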
Related Information
Left-pads a string with spaces, or a specified pattern, to make a string of a specified number of characters in
length.
Syntax
Syntax Elements
str
Description
Left-pads the start of <str> with spaces to make a string of <n> characters. If <pattern> is specified, then <str> is padded using sequences of the given characters until the required length is met.
If the length of <str> is greater than <n>, then no padding is performed and the resulting value is truncated
from the right side to the length specified in <n>.
Examples
● The following example left-pads the start of the string hello with the pattern 12345 to make a string of 15
characters in length, and returns the value 1234512345hello:
● In the following example, <str> is longer than <n>, so no padding is performed and the result is <str>
truncated to the length of <n> (that is, he):
● By not specifying <pattern>, this example left-pads the start of string hello with a single blank
character (that is, " hello"):
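Sketches of the three examples described above, assuming the iq_dummy convention used elsewhere in this reference:

```sql
SELECT LPAD( 'hello', 15, '12345' ) FROM iq_dummy;  -- returns 1234512345hello
SELECT LPAD( 'hello', 2, '12345' ) FROM iq_dummy;   -- returns he (truncated from the right)
SELECT LPAD( 'hello', 6 ) FROM iq_dummy;            -- returns ' hello' (padded with one blank)
```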
Returns a string, trimmed of all the leading characters present in the trim character set.
Syntax
Parameters
string-expression
Returns
Trimmed string.
Remarks
If trim character set is not specified, all leading spaces in the string expression are trimmed.
Example
The following statement removes all leading a and b characters from the given string and returns the value
Aabend.
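A sketch of such a statement; the input string is hypothetical, chosen so the result matches the value described:

```sql
SELECT LTRIM( 'abaabAabend', 'ab' ) FROM iq_dummy;  -- returns Aabend
```

Trimming stops at the uppercase 'A' because the comparison against the trim character set is case-sensitive.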
Related Information
Syntax
MAX ( <expression>
| DISTINCT <column-name> )
Parameters
expression
The expression for which the maximum value is to be calculated. This is commonly a column name.
DISTINCT column-name
Returns
Remarks
Rows where <expression> is NULL are ignored. Returns NULL for a group containing no rows.
Example
The following statement returns the value 138948.000, representing the maximum salary in the Employees
table:
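A sketch of the statement described above, assuming the sample Employees table:

```sql
SELECT MAX( Salary ) FROM Employees;  -- returns 138948.000
```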
Related Information
Syntax
Syntax 1
Syntax 2
Parameters
expression
Remarks
The median is the number separating the higher half of a sample, a population, or a probability distribution,
from the lower half.
The data type of the returned value is the same as that of the input value. NULLs are ignored in the calculation
of the median value. You can use the optional keyword DISTINCT to eliminate duplicate values before the
aggregate function is applied. ALL, which performs the operation on all rows, is the default.
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Note
The <window-spec> cannot contain a ROW, RANGE or ORDER BY specification; <window-spec> can only
specify a PARTITION clause. DISTINCT is not supported if a WINDOW clause is used.
Example
The following query returns the median salary for each department in Florida:
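A sketch of such a query, assuming the sample Employees table with a State column:

```sql
SELECT DepartmentID, MEDIAN( Salary ) AS median_salary
FROM Employees
WHERE State = 'FL'
GROUP BY DepartmentID;
```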
Related Information
Syntax
MIN ( <expression>
| DISTINCT <column-name> )
Parameters
expression
The expression for which the minimum value is to be calculated. This is commonly a column name.
DISTINCT column-name
Returns
Remarks
Rows where <expression> is NULL are ignored. Returns NULL for a group containing no rows.
Example
The following statement returns the value 24903.000, representing the minimum salary in the Employees
table:
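A sketch of the statement described above, assuming the sample Employees table:

```sql
SELECT MIN( Salary ) FROM Employees;  -- returns 24903.000
```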
Related Information
Returns a number from 0 to 59 corresponding to the minute component of the specified date/time value.
Syntax
MINUTE ( <datetime-expression> )
Parameters
datetime-expression
Returns
SMALLINT
Example
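A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT MINUTE( '2001-09-12 12:34:56' ) FROM iq_dummy;  -- returns 34
```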
Returns the number of minutes since an arbitrary date and time, the number of whole minutes between two
specified times, or adds the specified integer-expression number of minutes to a time.
Syntax
MINUTES ( <datetime-expression>
| <datetime-expression>, <datetime-expression>
| <datetime-expression>, <integer-expression> )
Parameters
datetime-expression
Returns
● INT
● TIMESTAMP
Remarks
The second syntax returns the number of whole minutes from the first date/time to the second date/time. The
number might be negative.
Examples
● Returns the value 240, to signify the difference between the two times:
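The 240-minute difference described above could come from a pair of times four hours apart; the timestamps below are hypothetical:

```sql
SELECT MINUTES( '1999-07-13 06:07:12',
                '1999-07-13 10:07:12' ) FROM iq_dummy;  -- returns 240
```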
Related Information
Syntax
Parameters
dividend
Returns
● SMALLINT
● INT
● NUMERIC
Division involving a negative <dividend> gives a negative or zero result. The sign of the <divisor> has no
effect.
Example
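Minimal examples, including the negative-dividend behavior noted in the remarks:

```sql
SELECT MOD( 5, 3 ) FROM iq_dummy;    -- returns 2
SELECT MOD( -5, 3 ) FROM iq_dummy;   -- returns -2 (sign follows the dividend)
```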
Related Information
Syntax
MONTH ( <date-expression> )
Parameters
date-expression
A date/time value.
Returns
SMALLINT
Example
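A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT MONTH( '2001-09-12' ) FROM iq_dummy;  -- returns 9
```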
Returns the name of the month from the specified date expression.
Syntax
MONTHNAME ( <date-expression> )
Parameters
date-expression
Returns
VARCHAR
Example
The following statement returns the value September, when the DATE_ORDER option is set to the default value
of <ymd>:
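A sketch of such a statement; any date in September produces the value described:

```sql
SELECT MONTHNAME( '2001-09-12' ) FROM iq_dummy;  -- returns September
```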
Returns the number of months since an arbitrary starting date/time or the number of months between two
specified date/times, or adds the specified integer-expression number of months to a date/time.
Syntax
MONTHS ( <date-expression>
| <date-expression>, <datetime-expression>
| <date-expression>, <integer-expression> )
Parameters
date-expression
Returns
● INT
● TIMESTAMP
The first syntax returns the number of months since an arbitrary starting date. This number is often useful for
determining whether two date/time expressions are in the same month in the same year.
Comparing the MONTH function would incorrectly include a payment made 12 months after the invoice was
sent.
The second syntax returns the number of months from the first date to the second date. The number might be
negative. It is calculated from the number of the first days of the month between the two dates. Hours, minutes
and seconds are ignored.
The third syntax adds <integer-expression> months to the given date. If the new date is past the end of
the month — such as MONTHS ('1992-01-31', 1) — the result is set to the last day of the month. If
<integer-expression> is negative, the appropriate number of months are subtracted from the date. Hours,
minutes and seconds are ignored.
Examples
● The following statement returns the value 2, to signify the difference between the two dates:
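The 2-month difference described above could come from a pair of dates like these (the dates are hypothetical):

```sql
SELECT MONTHS( '1999-07-13', '1999-09-13' ) FROM iq_dummy;  -- returns 2
```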
Related Information
Syntax
NEWID ( )
Parameters
Returns
UNIQUEIDENTIFIER
Remarks
The returned UUID value is a binary. A UUID is the same as a GUID (Globally Unique Identifier).
UUIDs can be used to uniquely identify objects in a database. The values are generated such that a value
produced on one computer does not match that produced on another, hence they can also be used as keys in
replication and synchronization environments.
You can use a value generated by the NEWID function as a column default value in a table.
Example
The following statement creates the table t1 and then updates the table, setting the value of the column
uid_col to a unique identifier generated by the NEWID function, if the current value of the column is NULL:
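A sketch of the statements described above; the additional column c1 and its type are assumptions:

```sql
CREATE TABLE t1 ( uid_col UNIQUEIDENTIFIER, c1 INT );

UPDATE t1
SET uid_col = NEWID()
WHERE uid_col IS NULL;
```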
If you execute the following statement, the unique identifier is returned as a BINARY(16):
SELECT NEWID()
For example, the value might be 0xd3749fe09cf446e399913bc6434f1f08. You can convert this string into a
readable format using the UUIDTOSTR() function.
Related Information
Returns the next connection number, or the first connection if the parameter is NULL.
Syntax
NEXT_CONNECTION ( <connection-id> [ , <database-id> ] )
Parameters
connection-id
An integer representing an existing connection ID, or NULL to start the enumeration.
database-id
An integer representing one of the databases on the current server. If you supply no <database-id>, the
current database is used. If you supply NULL, then NEXT_CONNECTION returns the next connection
regardless of database.
Returns
INT
Remarks
Note
You can use NEXT_CONNECTION to enumerate the connections to a database. To get the first connection, pass
NULL; to get each subsequent connection, pass the previous return value. The function returns NULL when
there are no more connections.
NEXT_CONNECTION can be used to enumerate the connections to a database. Connection IDs are generally
created in monotonically increasing order. This function returns the next connection ID in reverse order.
To get the connection ID value for the most recent connection, enter NULL as the <connection-id>. To get
the subsequent connection, enter the previous return value. The function returns NULL when there are no
more connections in the order.
NEXT_CONNECTION is useful if you want to disconnect all the connections created before a specific time.
However, because NEXT_CONNECTION returns the connection IDs in reverse order, connections made after the
function is started are not returned. If you want to ensure that all connections are disconnected, prevent new
connections from being created before you run NEXT_CONNECTION.
● The following statement returns an identifier for the first connection on the current database. The identifier
is an integer value like 10:
SELECT NEXT_CONNECTION( NULL );
● The following call returns the next connection ID in reverse order from the specified <connection-id> on
the current database:
● The following call returns the next connection ID in reverse order from the specified <connection-id
>(regardless of database):
● The following call returns the next connection ID in reverse order from the specified <connection-id> on
the specified database:
● The following call returns the first (earliest) connection (regardless of database):
● The following call returns the first (earliest) connection on the specified database:
Returns the next database ID number, or the first database if the parameter is NULL.
Syntax
Parameters
database-id
An integer representing the ID number of one of the databases on the current server, or NULL to get the
first database.
Returns
INT
Remarks
Note
You can use NEXT_DATABASE to enumerate the databases running on a database server. To get the first
database, pass NULL; to get each subsequent database, pass the previous return value. The function returns
NULL when there are no more databases.
Examples
● The following statement returns the value 0, the first database value:
● The following statement returns NULL, indicating that there are no more databases on the server:
Related Information
Syntax
NEXT_HTTP_HEADER( <header-name> )
Parameters
header-name
The name of the previous request header. If header-name is NULL, this function returns the name of the
first HTTP request header.
Returns
LONG VARCHAR.
Note
The result data type is a LONG VARCHAR. If you use NEXT_HTTP_HEADER in a SELECT INTO statement,
you must have an Unstructured Data Analytics Option license or use CAST and set NEXT_HTTP_HEADER to
the correct data type and size.
Remarks
This function is used to iterate over the HTTP request headers returning the next HTTP header name. Calling it
with NULL causes it to return the name of the first header. Subsequent headers are retrieved by passing the
name of the previous header to the function. This function returns NULL when called with the name of the last
header, or when not called from a web service.
Calling this function repeatedly returns all the header fields exactly once, but not necessarily in the order they
appear in the HTTP request.
Standards
The following statement displays the name and values of the HTTP request headers in the database server
messages window when used within a stored procedure that is called by an HTTP web service:
BEGIN
declare header_name long varchar;
declare header_value long varchar;
set header_name = NULL;
header_loop:
LOOP
SET header_name = NEXT_HTTP_HEADER( header_name );
IF header_name IS NULL THEN
LEAVE header_loop;
END IF;
SET header_value = HTTP_HEADER( header_name );
MESSAGE 'HEADER: ', header_name, '=',
header_value TO CONSOLE;
END LOOP;
END;
Syntax
NEXT_HTTP_VARIABLE ( <var-name> )
Parameters
var-name
The name of the previous variable. If <var-name> is NULL, this function returns the name of the first HTTP
variable.
Returns
LONG VARCHAR.
Note
The result data type is a LONG VARCHAR. If you use NEXT_HTTP_VARIABLE in a SELECT INTO statement,
you must have an Unstructured Data Analytics Option license or use CAST and set NEXT_HTTP_VARIABLE
to the correct data type and size.
This function iterates over the HTTP variables included within a request. Calling it with NULL causes it to return
the name of the first variable. Subsequent variables are retrieved by passing the function the name of the
previous variable. This function returns NULL when called with the name of the final variable or when not called
from a web service.
Calling this function repeatedly returns all the variables exactly once, but not necessarily in the order they
appear in the HTTP request. The variables url or url1, url2, ..., url10 are included if URL PATH is set to ON or
ELEMENTS, respectively.
Standards
Example
The following statement returns the name of the first HTTP variable when used within a stored procedure
that is called by an HTTP web service:
BEGIN
DECLARE variable_name LONG VARCHAR;
DECLARE variable_value LONG VARCHAR;
SET variable_name = NULL;
SET variable_name = NEXT_HTTP_VARIABLE( variable_name );
SET variable_value = HTTP_VARIABLE( variable_name );
END;
Returns the current date and time. This is the historical syntax for CURRENT TIMESTAMP.
Syntax
NOW ( * )
Returns
TIMESTAMP
Example
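A minimal call, following the iq_dummy convention used elsewhere in this reference; the result is the current date and time as a TIMESTAMP:

```sql
SELECT NOW( * ) FROM iq_dummy;
```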
Distributes query results into a specified number of buckets and assigns the bucket number to each row in the
bucket.
Syntax
NTILE ( <expression1> )
OVER ( ORDER BY <expression2> [ ASC | DESC ] )
Parameters
expression1
A constant integer from 1 to 32767 that specifies the number of buckets.
expression2
A sort specification that can be any valid expression involving a column reference, aggregates, or
expressions invoking these items.
ASC | DESC
The ASC or DESC parameter specifies the ordering sequence ascending or descending. Ascending order is
the default.
Remarks
NTILE is a rank analytical function that distributes query results into a specified number of buckets and
assigns the bucket number to each row in the bucket. You can divide a result set into one-hundredths
(percentiles), tenths (deciles), fourths (quartiles), or other numbers of groupings.
The OVER clause indicates that the function operates on a query result set. The result set is the rows that are
returned after the FROM, WHERE, GROUP BY, and HAVING clauses have all been evaluated. The OVER clause
defines the data set of the rows to include in the computation of the rank analytical function.
NTILE is allowed only in the select list of a SELECT or INSERT statement or in the ORDER BY clause of the
SELECT statement. NTILE can be in a view or a union. The NTILE function cannot be used in a subquery, a
HAVING clause, or in the select list of an UPDATE or DELETE statement. Only one NTILE function is allowed per
query.
Example
The following example uses the NTILE function to determine the sale status of car dealers. The dealers are
divided into four groups based on the number of cars each dealer sold. The dealers with ntile = 1 are in the
top 25% for car sales:
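A sketch of such a query; the table CarSales and its columns dealer_name and sales are hypothetical:

```sql
SELECT dealer_name, sales,
       NTILE( 4 ) OVER ( ORDER BY sales DESC ) AS ntile
FROM CarSales;
```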
To find the top 10% of car dealers by sales, you specify NTILE(10) in the example SELECT statement.
Similarly, to find the top 50% of car dealers by sales, specify NTILE(2).
Syntax
Parameters
expression1
An expression to be compared.
expression2
An expression to be compared.
Returns
Remarks
If the first expression equals the second expression, NULLIF returns NULL.
If the first expression does not equal the second expression, or if the second expression is NULL, NULLIF
returns the first expression.
The NULLIF function provides a short way to write some CASE expressions. NULLIF( expression1, expression2 ) is equivalent to:
CASE WHEN expression1 = expression2 THEN NULL ELSE expression1 END
Examples
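Minimal examples showing both outcomes:

```sql
SELECT NULLIF( 'a', 'b' ) FROM iq_dummy;  -- returns a
SELECT NULLIF( 'a', 'a' ) FROM iq_dummy;  -- returns NULL
```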
Related Information
Generates numbers starting at 1 for each successive row in the results of the query.
Syntax
NUMBER ( * )
Returns
INT
Use the NUMBER function only in a select list or a SET clause of an UPDATE statement. For example, the
following statement updates each row of the seq_id column with a number 1 greater than the previous row.
The number is applied in the order specified by the ORDER BY clause:
update empl
set seq_id = number(*)
order by empl_id
In an UPDATE statement, if the NUMBER (*) function is used in the SET clause and the FROM clause specifies a
one-to-many join, NUMBER (*) generates unique numbers that increase, but may not increment sequentially
due to row elimination.
NUMBER can also be used to generate primary keys when using the INSERT from SELECT statement, although
using IDENTITY/AUTOINCREMENT is a preferred mechanism for generating sequential primary keys.
Note
A syntax error is generated if you use NUMBER in a DELETE statement, WHERE clause, HAVING clause,
ORDER BY clause, subquery, query involving aggregation, any constraint, GROUP BY, DISTINCT, a query
containing UNION ALL, or a derived table.
Example
SELECT NUMBER( * )
FROM Departments
WHERE DepartmentID > 10
number(*)
Syntax
OBJECT_ID ( <object-name> )
Parameters
object-name
Example
The following statement returns the object ID 100209 of the <Customers> table:
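A sketch of the statement described above; the exact ID returned depends on the database:

```sql
SELECT OBJECT_ID( 'Customers' ) FROM iq_dummy;  -- e.g. 100209
```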
Related Information
Syntax
Parameters
object-id
Example
Related Information
Returns an unsigned 64-bit value containing the byte length of the column parameter.
Syntax
OCTET_LENGTH( <column-name> )
Parameters
column-name
Remarks
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data. The OCTET_LENGTH function supports all SAP IQ data types and LONG VARCHAR and LONG
BINARY variables of any size of data. Currently, a SQL variable can hold up to 2 GB - 1 in length.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
SAP database products – not supported by SAP Adaptive Server Enterprise or SAP SQL Anywhere
Related Information
Syntax
Parameters
pattern
The pattern for which you are searching. This string is limited to 126 bytes for patterns with wildcards. If
the leading percent wildcard is omitted, PATINDEX returns one (1) if the pattern occurs at the beginning of
the string, and zero if not. If <pattern> starts with a percent wildcard, then the two leading percent
wildcards are treated as one.
INT
Remarks
PATINDEX returns the starting position of the first occurrence of the pattern. If the string being searched
contains more than one instance of the string pattern, PATINDEX returns only the position of the first instance.
The pattern uses the same wildcards as the LIKE comparison. This table lists the pattern wildcards:
Wildcard Matches
All the positions or offsets, returned or specified, in the PATINDEX function are always character offsets and
may be different from the byte offset for multibyte data.
PATINDEX returns a 32-bit unsigned integer position for CHAR and VARCHAR columns.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
Examples
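A minimal example, following the iq_dummy convention used elsewhere in this reference:

```sql
SELECT PATINDEX( '%hoco%', 'chocolate' ) FROM iq_dummy;  -- returns 2
```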
In this section:
Related Information
The PATINDEX function returns a 64-bit unsigned integer containing the position of the first occurrence of the
specified pattern in a LONG VARCHAR column or variable. For CHAR and VARCHAR columns, PATINDEX returns a
32-bit unsigned integer position.
Syntax
Parameters
%pattern%
The pattern for which you are searching. This string is limited to 126 bytes for patterns with wildcards. If
you omit the leading percent wildcard, PATINDEX returns one (1) if the pattern occurs at the beginning of
the column value, and zero (0) if the pattern does not occur at the beginning of the column value. Similarly,
if you omit the trailing percent wildcard, the pattern should occur at the end of the column value. The
pattern uses the same wildcards as the LIKE comparison.
Patterns without wildcards — percent (%) and underscore (_) — can be up to 255 bytes in length.
long-varchar-column
● All the positions or offsets, returned or specified, in the PATINDEX function are always character offsets
and may be different from the byte offset for multibyte data.
● If the LONG VARCHAR cell being searched contains more than one instance of the string pattern, PATINDEX
returns only the position of the first instance.
● If the column does not contain the pattern, PATINDEX returns zero (0).
● Searching for a pattern longer than 126 bytes returns NULL.
● Searching for a zero-length pattern returns 1.
● If any of the arguments is NULL, the result is zero (0).
● PATINDEX supports LONG VARCHAR variables of any size of data. Currently, a SQL variable can hold up to
2 GB - 1 in length. PATINDEX does not support LONG BINARY variables or searching LONG BINARY
columns.
columns.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Computes the (fractional) position of one row returned from a query with respect to the other rows returned by
the query, as defined by the ORDER BY clause.
Syntax
Parameters
The ORDER BY clause specifies the parameter on which ranking is performed and the order in which the
rows are sorted in each group.
expression
A sort specification that can be any valid expression involving a column reference, aggregates, or
expressions invoking these items.
ASC | DESC
The ASC or DESC parameter specifies the ordering sequence ascending or descending. Ascending order is
the default.
Returns
Remarks
PERCENT_RANK is a rank analytical function. The percent rank of a row R is defined as the rank of a row in the
groups specified in the OVER clause minus one divided by the number of total rows in the groups specified in
the OVER clause minus one. PERCENT_RANK returns a value between 0 and 1. The first row has a percent rank
of zero.
The PERCENT_RANK of a row is calculated as follows, where <Rx> is the rank position of a row in the group and
<NtotalRow> is the total number of rows in the group specified by the OVER clause:
(Rx - 1) / (NtotalRow - 1)
PERCENT_RANK requires an OVER (ORDER BY) clause. The ORDER BY clause specifies the parameter on
which ranking is performed and the order in which the rows are sorted in each group. This ORDER BY clause is
used only within the OVER clause and is not an ORDER BY for the SELECT. No aggregation functions in the rank
query are allowed to specify DISTINCT.
The OVER clause indicates that the function operates on a query result set. The result set is the rows that are
returned after the FROM, WHERE, GROUP BY, and HAVING clauses have all been evaluated. The OVER clause
defines the data set of the rows to include in the computation of the rank analytical function.
PERCENT_RANK is allowed only in the select list of a SELECT or INSERT statement or in the ORDER BY clause of
the SELECT statement. PERCENT_RANK can be in a view or a union. The PERCENT_RANK function cannot be
used in a subquery, a HAVING clause, or in the select list of an UPDATE or DELETE statement. Only one rank
analytical function is allowed per query.
Example
Given a percentile, returns the value that corresponds to that percentile. Assumes a continuous distribution
data model.
Note
If you are simply trying to compute a percentile, use the NTILE function instead, with a value of 100.
Syntax
PERCENTILE_CONT ( <expression1> )
WITHIN GROUP ( ORDER BY <expression2> [ ASC | DESC ] )
Parameters
expression1
A constant of numeric data type and range from 0 to 1 (inclusive). If the argument is NULL, a “wrong
argument for percentile” error is returned. If the argument value is less than 0 or greater than 1, a “data
value out of range” error is returned.
expression2
A sort specification that must be a single expression involving a column reference. Multiple expressions are
not allowed and no rank analytical functions, set functions, or subqueries are allowed in this sort
expression.
Remarks
The inverse distribution analytical functions return a k-th percentile value, which can be used to help establish a
threshold acceptance value for a set of data. The function PERCENTILE_CONT takes a percentile value as the
function argument, and operates on a group of data specified in the WITHIN GROUP clause, or operates on the
entire data set. The function returns one value per group. If the GROUP BY column from the query is not
present, the result is a single row. The data type of the results is the same as the data type of its ORDER BY
item specified in the WITHIN GROUP clause. The data type of the ORDER BY expression for PERCENTILE_CONT
must be numeric.
The ORDER BY clause, which must be present, specifies the expression on which the percentile function is
performed and the order in which the rows are sorted in each group. For the PERCENTILE_CONT function, this
ORDER BY clause is used only within the WITHIN GROUP clause and is not an ORDER BY for the SELECT.
The WITHIN GROUP clause distributes the query result into an ordered data set from which the function
calculates a result. The WITHIN GROUP clause must contain a single sort item. If the WITHIN GROUP clause
contains more or less than one sort item, an error is reported.
The ASC or DESC parameter specifies the ordering sequence ascending or descending. Ascending order is the
default.
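Under the continuous-distribution model, the result is interpolated between the two values that straddle the requested percentile. An illustrative Python sketch (not SAP IQ code):

```python
import math

def percentile_cont(p, values):
    # Continuous-distribution model: interpolate linearly between the two rows
    # straddling the fractional target position p * (n - 1) in sort order.
    if p is None or not 0 <= p <= 1:
        raise ValueError("data value out of range")  # mirrors the documented error
    data = sorted(values)
    rn = p * (len(data) - 1)          # zero-based fractional position
    lo, hi = math.floor(rn), math.ceil(rn)
    frac = rn - lo
    return data[lo] + (data[hi] - data[lo]) * frac

print(percentile_cont(0.5, [1, 2, 3, 4]))  # 2.5
```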
Example
The following example uses the PERCENTILE_CONT function to determine the 10th percentile value for car
sales in a region.
The result of the SELECT statement lists the 10th percentile value for car sales in a region:
region percentile_cont
Related Information
Given a percentile, returns the value that corresponds to that percentile. Assumes a discrete distribution data
model.
Note
If you are simply trying to compute a percentile, use the NTILE function instead, with a value of 100.
Syntax
PERCENTILE_DISC ( <expression1> )
WITHIN GROUP ( ORDER BY <expression2> [ ASC | DESC ] )
Parameters
expression1
A constant of numeric data type and range from 0 to 1 (inclusive). If the argument is NULL, then a “wrong
argument for percentile” error is returned. If the argument value is less than 0 or greater than 1, then a
“data value out of range” error is returned.
expression2
A sort specification that must be a single expression involving a column reference. Multiple expressions are
not allowed and no rank analytical functions, set functions, or subqueries are allowed in this sort
expression.
Remarks
The inverse distribution analytical functions return a k-th percentile value, which can be used to help establish a
threshold acceptance value for a set of data. The function PERCENTILE_DISC takes a percentile value as the
function argument and operates on a group of data specified in the WITHIN GROUP clause, or operates on the
entire data set.
The ORDER BY clause, which must be present, specifies the expression on which the percentile function is
performed and the order in which the rows are sorted in each group. This ORDER BY clause is used only within
the WITHIN GROUP clause and is not an ORDER BY for the SELECT.
The WITHIN GROUP clause distributes the query result into an ordered data set from which the function
calculates a result. The WITHIN GROUP clause must contain a single sort item. If the WITHIN GROUP clause
contains more or less than one sort item, an error is reported.
The ASC or DESC parameter specifies the ordering sequence ascending or descending. Ascending order is the
default.
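Under the discrete-distribution model, the function returns an actual value from the data set, with no interpolation. An illustrative Python sketch (not SAP IQ code):

```python
import math

def percentile_disc(p, values):
    # Discrete-distribution model: return the first value in sort order whose
    # cumulative distribution (row_number / n) is >= p; no interpolation.
    data = sorted(values)
    n = len(data)
    idx = max(1, math.ceil(p * n))    # 1-based row number
    return data[idx - 1]

print(percentile_disc(0.5, [1, 2, 3, 4]))  # 2
```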
Example
The following example uses the PERCENTILE_DISC function to determine the 10th percentile value for car
sales in a region.
The result of the SELECT statement lists the 10th percentile value for car sales in a region:
region      percentile_disc
Northeast   900
Northwest   800
South       500
Related Information
Syntax
PI ( * )
Returns
DOUBLE
Syntax
Parameters
numeric-expression1
The base.
numeric-expression2
The exponent.
Returns
DOUBLE
Remarks
Syntax
Parameters
property-id
An integer that is the property-number of the server-level property. This number can be determined from
the PROPERTY_NUMBER function. The <property-id> is commonly used when looping through a set of
properties.
property-name
Returns
VARCHAR
Remarks
Note
Each property has both a number and a name, but the number is subject to change between versions, and
should not be used as a reliable identifier for a given property.
Example
The following statement returns the name of the current database server:
Related Information
Syntax
Parameters
property-id
An integer that is the property number of the property. This number can be determined from the
PROPERTY_NUMBER function. The <property-id> is commonly used when looping through a set of
properties.
property-name
Returns
VARCHAR
Remarks
Note
Each property has both a number and a name, but the number is subject to change between releases, and
should not be used as a reliable identifier for a given property.
Example
Returns whether or not you can maintain historical data for the specified database server property by storing
its tracked values.
Syntax
PROPERTY_IS_TRACKABLE( <property-ID> )
Parameters
property-ID
Returns
Remarks
Example
The following example returns all database server properties that are trackable:
Returns the name of the property with the supplied property number.
Syntax
PROPERTY_NAME ( <property-id> )
Parameters
property-id
Returns
VARCHAR
Note
The actual property to which this refers changes from version to version.
Example
The following statement returns the property associated with property number 126:
Related Information
Returns the property number of the property with the supplied property name.
Syntax
PROPERTY_NUMBER ( <property-name> )
property-name
A property name.
Returns
INT
Remarks
Note
Example
Related Information
Returns a number indicating the quarter of the year from the supplied date expression.
Syntax
QUARTER( <date-expression> )
Parameters
date-expression
A date.
Returns
INT
Remarks
This table lists the dates in the quarters of the year. The function assumes a starting quarter of 1 (January). For
a user-defined starting quarter, use QUARTERSTR.
Quarter  Dates in quarter
1        January 1 to March 31
2        April 1 to June 30
3        July 1 to September 30
4        October 1 to December 31
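The mapping in this table reduces to integer arithmetic on the month number; an illustrative Python sketch (not SAP IQ code):

```python
def quarter(month):
    # Map month 1-12 to quarter 1-4 with the default starting quarter of January.
    return (month - 1) // 3 + 1

print([quarter(m) for m in (1, 3, 4, 6, 7, 9, 10, 12)])
```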
Example
With the DATE_ORDER option set to the default of <ymd>, the following statement returns the value 2:
Related Information
Returns a number indicating the quarter of the year from the supplied date expression and quarter start
month.
Syntax
Parameters
date-expression
A date.
quarter_start_month
Any integer from 1 to 12. If not specified, the default value is 1 (January).
Returns
A string in the format YYYY-QN, where YYYY is the year and N is the quarter number.
● Nonstandard function.
Example
With the DATE_ORDER option set to the default of <ymd>, the following statement returns the value 1998-Q4:
Syntax
RADIANS ( <numeric-expression> )
Parameters
numeric-expression
Returns
DOUBLE
Returns a DOUBLE-precision random number x, where 0 <= x < 1, with an optional seed.
Syntax
RAND ( [ <integer-expression> ] )
Parameters
integer-expression
The optional seed used to create a random number. This argument allows you to create repeatable random
number sequences.
Returns
DOUBLE
Remarks
If RAND is called with a FROM clause and an argument in a query containing only tables in IQ stores, the function
returns an arbitrary but repeatable value.
When called with no argument, RAND is a non-deterministic function. Successive calls to RAND might return
different values. The query optimizer does not cache the results of the RAND function.
Note
The values returned by RAND vary depending on whether you use a FROM clause or not and whether the
referenced table was created in SYSTEM or in an IQ store.
Examples
SELECT AVG(table1.number_of_cars),
       AVG(table1.number_of_tvs)
FROM table1
WHERE RAND(ROWID(table1)) < .05 AND table1.income < 50000;
Syntax
Parameters
expression
A sort specification that can be any valid expression involving a column reference, aggregates, or
expressions invoking these items.
Returns
INTEGER
Remarks
RANK is a rank analytical function. The rank of row R is defined as the number of rows that precede R and are
not peers of R. If two or more rows are not distinct within the groups specified in the OVER clause or distinct
over the entire result set, then there are one or more gaps in the sequential rank numbering. The difference
between RANK and DENSE_RANK is that DENSE_RANK leaves no gap in the ranking sequence when there is a tie.
RANK leaves a gap when there is a tie.
RANK requires an OVER (ORDER BY) clause. The ORDER BY clause specifies the parameter on which ranking
is performed and the order in which the rows are sorted in each group. This ORDER BY clause is used only
within the OVER clause and is not an ORDER BY for the SELECT. No aggregation functions in the rank query are
allowed to specify DISTINCT.
The PARTITION BY window partitioning clause in the OVER (ORDER BY) clause is optional.
The ASC or DESC parameter specifies the ordering sequence ascending or descending. Ascending order is the
default.
The OVER clause indicates that the function operates on a query result set. The result set is the rows that are
returned after the FROM, WHERE, GROUP BY, and HAVING clauses have all been evaluated. The OVER clause
defines the data set of the rows to include in the computation of the rank analytical function.
RANK is allowed only in the select list of a SELECT or INSERT statement or in the ORDER BY clause of the
SELECT statement. RANK can be in a view or a union. The RANK function cannot be used in a subquery, a
HAVING clause, or in the select list of an UPDATE or DELETE statement. Only one rank analytical function is
allowed per query.
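The difference in tie handling can be illustrated in Python (a sketch assuming a descending sort, as in ORDER BY ... DESC; not SAP IQ code):

```python
def rank(values):
    # SQL RANK(): 1 + count of rows that sort strictly before the row.
    # Ties share a rank and leave a gap after the tie.
    return [1 + sum(1 for o in values if o > v) for v in values]

def dense_rank(values):
    # SQL DENSE_RANK(): rank by position among distinct values; no gaps.
    distinct = sorted(set(values), reverse=True)
    return [1 + distinct.index(v) for v in values]

scores = [100, 90, 90, 80]
print(rank(scores))        # [1, 2, 2, 4]  <- gap after the tie
print(dense_rank(scores))  # [1, 2, 2, 3]  <- no gap
```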
Example
Related Information
Reads data from the specified file on the server and returns the full or partial contents of the file as a LONG
BINARY value.
Syntax
Parameters
filename
LONG VARCHAR value indicating the path and name of the file on the server.
start
The start position of the file to read, in bytes. The first byte in the file is at position 1. A negative starting
position specifies the number of bytes from the end of the file rather than from the beginning.
● If <length> is not specified, the function reads from the starting position to the end of the file.
● If <length> is positive, the function reads at most <length> bytes beginning at the starting position.
● If <length> is negative, the function returns at most the absolute value of <length> bytes ending at the starting position.
Returns
LONG BINARY
Remarks
This function returns the full or partial (if <start> and/or <length> are specified) contents of the named file
as a LONG BINARY value. If the file does not exist or cannot be read, NULL is returned.
The READ_SERVER_FILE function supports reading files larger than 2GB. However, the returned content is
limited to 2GB. If the returned content exceeds this limit, a SQL error is returned.
If the data file is in a different character set, you can use the CSCONVERT function to convert it. You can also
use the CSCONVERT function to address the character set conversion requirements you may have when using
the READ_SERVER_FILE server function.
If disk sandboxing is enabled, the file referenced in <filename> must be in an accessible location.
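The <start> and <length> rules above can be modeled on a byte string in Python (an illustrative sketch of the documented semantics, not SAP IQ code):

```python
def read_slice(data, start, length=None):
    # Positions are 1-based; a negative start counts back from the end of the file.
    begin = (start - 1) if start > 0 else len(data) + start
    if length is None:
        return data[begin:]                 # read from start position to end of file
    if length >= 0:
        return data[begin:begin + length]   # read at most `length` bytes forward
    # Negative length: at most |length| bytes ending at the starting position.
    return data[max(0, begin + 1 + length):begin + 1]

content = b"abcdefghij"
print(read_slice(content, 3, 4))    # b'cdef'
print(read_slice(content, -2))      # b'ij'
print(read_slice(content, 5, -3))   # b'cde'
```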
Privileges
Standards
Example
The following statement reads 20 bytes in a file, starting from byte 100 of the file.
Syntax
Syntax 1
Parameters
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then REGR_AVGX returns NULL.
AVG (x)
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
The following example calculates the average of the dependent variable, employee age:
Related Information
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then REGR_AVGY returns NULL.
AVG(y)
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
The following example calculates the average of the independent variable, employee salary:
Returns an integer that represents the number of non-NULL number pairs used to fit the regression line.
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
INTEGER
Remarks
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
The following example returns a value that indicates the number of non-NULL pairs that were used to fit the
regression line:
Related Information
Computes the y-intercept of the linear regression line that best fits the dependent and independent variables.
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, REGR_INTERCEPT returns NULL.
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
Computes the coefficient of determination (also referred to as R-squared or the goodness-of-fit statistic) for
the regression line.
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then REGR_R2 returns NULL.
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
Related Information
Computes the slope of the linear regression line, fitted to non-NULL pairs.
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then REGR_SLOPE returns NULL.
COVAR_POP(x, y) / VAR_POP(x)
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
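Taking y as the dependent and x as the independent expression, the slope computation can be verified numerically in Python (an illustrative sketch, not SAP IQ code):

```python
def covar_pop(xs, ys):
    # Population covariance of paired values.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def var_pop(xs):
    # Population variance.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def regr_slope(dependent, independent):
    # Least-squares slope: COVAR_POP(x, y) / VAR_POP(x),
    # where y is the dependent and x the independent expression.
    return covar_pop(independent, dependent) / var_pop(independent)

print(regr_slope([2, 4, 6], [1, 2, 3]))
```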
Example
Related Information
Returns the sum of squares of the independent expression. Use REGR_SXX to evaluate the statistical validity
of a regression model.
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then REGR_SXX returns NULL.
REGR_COUNT(y, x) * VAR_POP(x)
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
Related Information
Returns the sum of products of the dependent and independent variables. Use REGR_SXY to evaluate the
statistical validity of a regression model.
Syntax
Syntax 1
Syntax 2
Parameters
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then it returns NULL.
The function is applied to the set of (dependent-expression and <independent-expression>) pairs after
eliminating all pairs for which either dependent-expression or <independent-expression> is NULL. The
function is computed simultaneously during a single pass through the data. After eliminating NULL values, the
following computation is made, where y represents the dependent-expression and x represents the
<independent-expression>:
REGR_COUNT(x, y) * COVAR_POP(x, y)
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
Example
Related Information
Returns values that can evaluate the statistical validity of a regression model.
Syntax
Syntax 1
REGR_SYY(<dependent-expression>, <independent-expression>)
Syntax 2
REGR_SYY(<dependent-expression>, <independent-expression>)
OVER (<window-spec>)
dependent-expression
Returns
DOUBLE
Remarks
This function converts its arguments to DOUBLE, performs the computation in double-precision floating-point,
and returns a DOUBLE as the result. If applied to an empty set, then REGR_SYY returns NULL.
REGR_COUNT(x, y) * VAR_POP(y)
Note
ROLLUP and CUBE are not supported in the GROUP BY clause with Syntax 1. DISTINCT is not supported.
Syntax 2 – The <window-spec> parameter represents usage as a window function in a SELECT statement. As
such, you can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW
clause in the SELECT statement.
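The relationships between REGR_SXX, REGR_SXY, REGR_SYY and the variance/covariance aggregates can be checked numerically with a small Python sketch (illustrative only, not SAP IQ code):

```python
def var_pop(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

def covar_pop(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# With y the dependent and x the independent expression, and n non-NULL pairs:
#   REGR_SXX = n * VAR_POP(x)      (sum of squares of x about its mean)
#   REGR_SXY = n * COVAR_POP(x, y) (sum of cross-products about the means)
#   REGR_SYY = n * VAR_POP(y)      (sum of squares of y about its mean)
x, y = [1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 5.0, 6.0]
n = len(x)
sxx, sxy, syy = n * var_pop(x), n * covar_pop(x, y), n * var_pop(y)
print(sxx, sxy, syy)
```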
Related Information
Syntax
Parameters
dividend
Returns
● INTEGER
● NUMERIC
Remarks
Example
Related Information
Syntax
Parameters
string-expression
The string to be repeated.
integer-expression
The number of times the string is to be repeated. If <integer-expression> is negative, an empty string
is returned.
Returns
● LONG VARCHAR
● LONG NVARCHAR
Remarks
Note
The result data type is a LONG VARCHAR. If you use REPEAT in a SELECT INTO statement, you must have
an Unstructured Data Analytics Option license or use CAST and set REPEAT to the correct data type and
size.
Example
Related Information
Syntax
original-string
The string to be searched. This string can be any length.
search-string
The string to be searched for and replaced with <replace-string>. This string is limited to 255 bytes. If
<search-string> is an empty string, the original string is returned unchanged.
replace-string
The replacement string, which replaces <search-string>. This can be any length. If <replace-
string> is an empty string, all occurrences of <search-string> are deleted.
Returns
● LONG VARCHAR
● LONG NVARCHAR
Note
The result data type is a LONG VARCHAR. If you use REPLACE in a SELECT INTO statement, you must have
an Unstructured Data Analytics Option license or use CAST and set REPLACE to the correct data type and
size.
Remarks
● Use CAST:
Examples
● The following statement generates a result set containing ALTER PROCEDURE statements, which when
executed, repair stored procedures that reference a table that has been renamed (to be useful, the table
name must be unique):
SELECT REPLACE(
replace(proc_defn,'OldTableName','NewTableName'),
'create procedure',
'alter procedure')
FROM SYS.SYSPROCEDURE
WHERE proc_defn LIKE '%OldTableName%'
● Use a separator other than the comma for the LIST function:
Related Information
Syntax
Parameters
string-expression
Returns
● LONG VARCHAR
● LONG NVARCHAR
Note
The result data type is a LONG VARCHAR. If you use REPLICATE in a SELECT INTO statement, you must
have an Unstructured Data Analytics Option license or use CAST and set REPLICATE to the correct data
type and size.
Example
Related Information
Takes one argument as an input of type BINARY or STRING and returns the specified string with characters
listed in reverse order.
Syntax
expression
A character or binary-type column name, variable, or constant expression of CHAR, VARCHAR, NCHAR,
NVARCHAR, BINARY, or VARBINARY type.
Returns
LONG VARCHAR
LONG NVARCHAR
Note
The result data type is a LONG VARCHAR. If you use REVERSE in a SELECT INTO statement, you must have
an Unstructured Data Analytics Option license or use CAST and set REVERSE to the correct data type and
size.
Remarks
Examples
select reverse("abcd")
select reverse(0x12345000)
Syntax
Parameters
string-expression
Returns
● LONG VARCHAR
● LONG NVARCHAR
Remarks
If the string contains multibyte characters, and the proper collation is being used, the number of bytes
returned might be greater than the specified number of characters.
The result data type is a LONG VARCHAR. If you use RIGHT in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license or use CAST and set RIGHT to the correct data type and size.
Example
Related Information
Rounds the <numeric-expression> to the specified <integer-expression> number of places after the
decimal point.
Syntax
Parameters
numeric-expression
The number to be rounded.
integer-expression
A positive integer specifies the number of significant digits to the right of the decimal point at which to
round. A negative expression specifies the number of significant digits to the left of the decimal point at
which to round.
Returns
NUMERIC
Remarks
When the ROUND_TO_EVEN database option is set to ON, the ROUND function rounds data from an SAP IQ table
half to the nearest even number at the position given by <integer-expression>, matching the behavior of
SAP SQL Anywhere table data. When the option is set to OFF, the ROUND function rounds SAP IQ data half away
from zero.
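The two rounding modes can be illustrated with Python's decimal module (illustrative only; ROUND_HALF_EVEN corresponds to ROUND_TO_EVEN ON, and ROUND_HALF_UP, which rounds ties away from zero, corresponds to ROUND_TO_EVEN OFF):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

def round_iq(value, places, round_to_even=True):
    # ROUND_TO_EVEN ON: ties round half to the nearest even digit.
    # ROUND_TO_EVEN OFF: ties round half away from zero.
    mode = ROUND_HALF_EVEN if round_to_even else ROUND_HALF_UP
    q = Decimal(1).scaleb(-places)          # e.g. places=2 -> Decimal('0.01')
    return Decimal(value).quantize(q, rounding=mode)

print(round_iq("2.5", 0, round_to_even=True))   # 2
print(round_iq("2.5", 0, round_to_even=False))  # 3
print(round_iq("-2.5", 0, round_to_even=False)) # -3
```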
Example
Additional results of the ROUND function are shown in the following table:
● In the following examples, the ROUND_TO_EVEN setting affects the value returned. The table compares
ROUND(<value>) results with ROUND_TO_EVEN ON and with ROUND_TO_EVEN OFF.
Related Information
A ranking function that returns a unique row number for each row in a window partition, restarting the row
numbering at the start of every window partition.
Syntax
Parameters
window partition
(Optional) One or more value expressions separated by commas indicating how you want to divide the set
of result rows.
window ordering
Defines the expressions for sorting rows within window partitions, if specified, or within the result set if you
did not specify a window partition.
If no window partitions exist, the function numbers the rows in the result set from 1 to the cardinality of the
table.
The ROW_NUMBER function requires an OVER (ORDER BY) window specification. The window partitioning clause
in the OVER (ORDER BY) clause is optional. The OVER (ORDER BY) clause must not contain a window frame
ROWS/RANGE specification.
Example
The following example returns salary data from the Employees table, partitions the result set by department ID,
and orders the data according to employee start date. The ROW_NUMBER function assigns each row a row
number, and restarts the row numbering for each window partition:
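The partition-and-restart behavior can be sketched in Python (illustrative only; the tuples and keys are hypothetical, not the Employees table):

```python
from itertools import groupby

def row_numbers(rows, partition_key, order_key):
    # Sort by partition key then ordering key, and restart numbering at 1
    # whenever a new partition begins, as OVER (PARTITION BY ... ORDER BY ...).
    ordered = sorted(rows, key=lambda r: (partition_key(r), order_key(r)))
    out = []
    for _, group in groupby(ordered, key=partition_key):
        for n, row in enumerate(group, start=1):
            out.append((n, row))
    return out

rows = [("R&D", "2004-01-01"), ("Sales", "2003-05-01"),
        ("R&D", "2002-03-01"), ("Sales", "2005-07-01")]
for num, row in row_numbers(rows, lambda r: r[0], lambda r: r[1]):
    print(num, row)
```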
Returns the internal row ID value for each row of the table.
Syntax
table-name
The name of the table. Specify the name of the table within the parentheses, either without quotes or
enclosed in double quotes.
Returns
UNSIGNED BIGINT
Remarks
You can use the ROWID function with other clauses to manipulate specific rows of the table.
Examples
rowid(Products)
10
● The following statement returns the product ID and row ID value of all rows with a product ID value less
than 400:
ID    rowid(Products)
300   1
301   2
302   3
● The following statement deletes all rows with row ID values greater than 50:
Right-pads a string with spaces or a specified pattern to make a string that is a specified number of characters
in length.
Syntax
Syntax Elements
str
Right-pads the end of <str> with spaces or characters to make a string of <n> characters in length. If
<pattern> is specified, then <str> is padded using sequences of the given characters until the required
length is met.
If the length of <str> is greater than <n>, then no padding is performed and the resulting value is truncated
from the right side to the length specified in <n>.
Examples
● The following example right-pads the end of string hello with the pattern 12345 to make a string of 15
characters in length and returns the value "hello1234512345":
● In this example, <str> is longer than <n>, so no padding is performed; the result is <str> truncated to the
length of <n> (that is, "he"):
● By not specifying <pattern>, this example right-pads the end of string hello with a single blank
character (that is, "hello "):
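The padding and truncation rules above can be sketched in Python (illustrative only, not SAP IQ code):

```python
def rpad(s, n, pattern=" "):
    # If s is already at least n characters long, truncate from the right.
    if len(s) >= n:
        return s[:n]
    # Otherwise repeat the pattern until the target length is reached.
    pad = (pattern * n)[:n - len(s)]
    return s + pad

print(rpad("hello", 15, "12345"))  # hello1234512345
print(rpad("hello", 2))            # he
print(rpad("hello", 6))            # 'hello '
```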
Returns a string, trimmed of all the trailing characters present in the trim character set.
Syntax
Parameters
string-expression
Trimmed string.
Remarks
If the trim character set is not specified, all trailing spaces in the string expression are trimmed.
● STANDARD function
Example
The following statement removes all trailing a and b characters from the given string and returns the value
babababAabend.
Related Information
Returns a number from 0 to 59 corresponding to the second component of the given date/time value.
Syntax
SECOND ( <datetime-expression> )
datetime-expression
Returns
SMALLINT
Example
Related Information
Returns the number of seconds since an arbitrary starting date and time, the number of seconds between two
times, or adds an integer amount of seconds to a time.
Syntax
SECONDS ( <datetime-expression>
| <datetime-expression>, <datetime-expression>
| <datetime-expression>, <integer-expression> )
Parameters
datetime-expression
Returns
● INTEGER
● TIMESTAMP
Remarks
The second syntax returns the number of whole seconds from the first date/time to the second date/time. The
number might be negative.
● The following statement returns the value 14400, to signify the difference between the two times:
Syntax
SIGN ( <numeric-expression> )
Parameters
numeric-expression
Returns
SMALLINT
Remarks
Example
Returns an integer between 0 and 100 representing the similarity between two strings.
Syntax
Parameters
string-expression1
Returns
SMALLINT
Remarks
The function returns an integer between 0 and 100 representing the similarity between the two strings. The
result can be interpreted as the percentage of characters matched between the two strings. A value of 100
indicates that the two strings are identical.
Example
Syntax
SIN ( <numeric-expression> )
Parameters
numeric-expression
Returns
DOUBLE
Example
Related Information
Generates values that can be used to sort character strings based on alternate collation rules.
Syntax
Parameters
string-expression
The string expression must contain characters that are encoded in the character set of the database and
must be STRING data type.
If string-expression is NULL, the SORTKEY function returns a null value. An empty string has a different
sort-order value than a null string from a database column.
collation-id
A variable, integer constant, or string that specifies the ID number of the sort order to use. This parameter
applies only to SAP ASE collations, which can be referred to by their corresponding collation ID.
collation-name
A string or character variable that specifies the name of the sort order to use. You can also specify the alias
char_collation, or, equivalently, db_collation, to generate sort-keys as used by the CHAR collation in use by
the database.
Similarly, you can specify the alias NCHAR_COLLATION to generate sort-keys as used by the NCHAR
collation in use by the database. However, SAP IQ does not support NCHAR_COLLATION for SAP IQ-
specific objects. NCHAR_COLLATION is supported for SAP SQL Anywhere objects on an SAP IQ server.
collation-tailoring-string
A string of collation tailoring options to apply to the sort order, for example:
'UCA(locale=es;case=LowerFirst;accent=respect)'
The syntax for specifying these options is identical to the COLLATION clause of the CREATE DATABASE
statement.
Note
All of the collation tailoring options are supported for SAP SQL Anywhere databases, when specifying
the Unicode Collation Algorithm (UCA) collation. For all other collations, only case-sensitivity tailoring
is supported.
Returns
BINARY
Remarks
The SORTKEY function generates values that can be used to order results based on predefined sort order
behavior. This allows you to work with character sort order behaviors that may not be available from the
database collation. The returned value is a binary value that contains coded sort order information for the input
string.
Typically, you store the value returned by SORTKEY in a column alongside the source character string. To retrieve the
character data in the required order, the SELECT statement needs to include only an ORDER BY clause on the
column that contains the results of running the SORTKEY function:
The SORTKEY function guarantees that the values it returns for a given set of sort order criteria work for the
binary comparisons that are performed on VARBINARY data types.
Generating sort-keys for queries can be expensive. As an alternative for frequently requested sort-keys,
consider creating a computed column to hold the sort-key values, and then referencing that column in the
ORDER BY clause of the query.
If you do not specify a collation name or collation ID, the default is Default Unicode multilingual.
● To see collations that are supported by SAP IQ, listed by label, execute iqinit -l.
● The SAP ASE collations are listed in the table below.
With respect to collation tailoring, full sensitivity is generally the intent when creating sort-keys, so when you
specify a non-UCA collation, the default tailoring applied is equivalent to case=Respect. For example, the
following two statements are equivalent:
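Assuming the non-UCA collation dict, the two equivalent statements might look like the following:

```sql
SELECT SORTKEY( Surname, 'dict' ) FROM Employees;
SELECT SORTKEY( Surname, 'dict(case=Respect)' ) FROM Employees;
```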
If the database was created without specifying tailoring options, the following two clauses may generate
different sort orders, even if the database collation name is specified for the SORTKEY function:
ORDER BY string-expression
ORDER BY SORTKEY( string-expression, database-collation-name )
Different sort orders may be generated, because the default tailoring settings used for database creation and
for the SORTKEY function are different. To get the same behavior from SORTKEY as for the database collation,
either provide a tailoring syntax for <collation-tailoring-string> that matches the settings for the
database collation, or specify db_collation for collation-name. For example:
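A sketch of the db_collation form (the column name is illustrative):

```sql
SELECT Surname FROM Employees ORDER BY SORTKEY( Surname, 'db_collation' );
```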
Note
Sort-key values created using a version of SAP IQ earlier than 15.0 do not contain the same values as those
created using version 15.0 and later. This may be a problem for your applications if your pre-15.0 database has sort-
key values stored within it, especially if sort-key value comparison is required by your application.
Regenerate any sort-key values in your database that were generated using a version of SAP IQ earlier than
15.0.
Example
The following statement queries the Employees table and returns the FirstName and Surname of all
employees, sorted by the sort-key values for the Surname column using the dict collation (Latin-1, English,
French, German dictionary):
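A sketch of such a query:

```sql
SELECT FirstName, Surname
FROM Employees
ORDER BY SORTKEY( Surname, 'dict' );
```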
Syntax
SOUNDEX ( <string-expression> )
Parameters
string-expression
The string.
Returns
SMALLINT
Remarks
The SOUNDEX function value for a string is based on the first letter and the next three consonants other than H,
Y, and W. Doubled letters are counted as one letter. The following example is based on the letters A, P, L, and S:
Although it is not perfect, SOUNDEX normally returns the same number for words that sound similar and that
start with the same letter.
The following statement returns two numbers, representing the sound of each name. The SOUNDEX value for
each argument is 3827:
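The classic pair of similar-sounding names is commonly used for this demonstration; the original arguments are not preserved, so the names below are an assumption:

```sql
-- Two similar-sounding names yield the same SOUNDEX value
SELECT SOUNDEX( 'Smith' ), SOUNDEX( 'Smythe' ) FROM iq_dummy;
```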
Related Information
Returns an integer value indicating whether the invoking user has been granted a specified system privilege or
user-defined role. When used for privilege checking within user-defined stored procedures, SP_HAS_ROLE
returns an error message when a user fails a privilege check.
Syntax
Parameters
rolename
The name of the system privilege or user-defined role being checked.
grant_type
Valid values are: ADMIN and NO ADMIN. If NULL or not specified, NO ADMIN is used by default.
throw_error
● 1 – display an error message if the specified system privilege or user-defined role is not granted to the
invoking user.
● 0 – (default) do not display an error message if the specified system privilege or user-defined role is not
granted to the invoking user.
Value Description
0 The system privilege or user-defined role is not granted to the invoking user. When the throw_error argument is set to 1, the error "Permission denied: you do not have permission to execute this command/procedure" replaces the value 0.
-1 The system privilege or user-defined role specified does not exist. No error message appears, even if the throw_error argument is set to 1.
Remarks
If the value of the grant_type argument is ADMIN, the function checks whether the invoking user has
administrative privileges for the system privilege. If the value of the grant_type argument is NO ADMIN, the
function checks whether the invoking user has privileged use of the system privilege or role.
If the grant_type argument is not specified, NO ADMIN is used by default and output indicates only whether
the invoking user has been granted, either directly or indirectly, the specified system privilege or user-defined
role.
If the rolename and grant_type arguments are both NULL and the throw_error argument is 1, you see an
error message. You may find this useful for those stored procedures where an error message appears after
certain values are read from the catalog tables rather than after checking for the presence of certain system
privileges for the invoking user.
Note
A permission denied error message is returned if the arguments rolename and grant_type are set to
NULL and throw_error is set to 1, or if all three arguments are set to NULL.
Examples
● u1 has been granted the CREATE ANY PROCEDURE system privilege with the WITH NO ADMIN OPTION
clause.
● u1 has not been granted the CREATE ANY TABLE system privilege.
● u1 has been granted the user-defined role Role_A with the WITH ADMIN ONLY OPTION clause.
● Role_B exists, but has not been granted to u1
● The role Role_C does not exist.
● The following example returns the value 1, which indicates u1 has been granted the CREATE ANY
PROCEDURE system privilege:
sp_has_role 'CREATE ANY PROCEDURE'
● The following example returns the value 0, which indicates u1 has not been granted the CREATE ANY
TABLE system privilege:
sp_has_role 'CREATE ANY TABLE'
● The following example returns the value 0. Even though u1 has been granted the CREATE ANY
PROCEDURE system privilege, u1 has not been granted administrative rights to the system privilege:
sp_has_role 'CREATE ANY PROCEDURE','admin'
● The following example returns the value 1, which indicates u1 has been granted role Role_A:
sp_has_role 'Role_A'
● The following example returns the value 1, which indicates u1 has been granted role Role_A with
administrative rights:
sp_has_role 'Role_A','admin',1
● The following example returns the value 0, which indicates u1 has not been granted the role Role_B:
sp_has_role 'Role_B'
● The following example returns the value -1, which indicates the role Role_C does not exist:
sp_has_role 'Role_C'
No error message appears even when throw_error is set to 1:
sp_has_role 'Role_C',NULL,1
Syntax
SPACE ( <integer-expression> )
integer-expression
The number of space characters to generate.
Returns
LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use SPACE in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license or use CAST and set SPACE to the correct data type and size.
Example
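A minimal sketch of SPACE in a query:

```sql
-- Returns 'Hello' followed by five spaces and 'World'
SELECT 'Hello' || SPACE( 5 ) || 'World' FROM iq_dummy;
```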
Syntax
Parameters
sql-standard-string
A string specifying the SQL standard level against which the statement is flagged.
Returns
LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use SQLFLAGGER in a SELECT INTO statement, you must
have an Unstructured Data Analytics Option license or use CAST and set SQLFLAGGER to the correct data
type and size.
Remarks
You can also use the iqsqlpp SQL Preprocessor Utility to flag any Embedded SQL that is not part of a specified
set of SQL92. See iqsqlpp SQL Preprocessor Utility in the Utility Guide.
Examples
● The following statement shows an example of the message that is returned when a disallowed extension is
found:
This statement returns the message '0AW03 Disallowed language extension detected in
syntax near 'top' on line 1'.
● The following statement returns '00000' because it contains no disallowed extensions:
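Both calls above can be sketched as follows; the flagging-level string 'SQL:2003/Package' is an assumption, since the original statements are not shown:

```sql
-- Flags the vendor extension TOP as a disallowed language extension
SELECT SQLFLAGGER( 'SQL:2003/Package', 'SELECT top 2 * FROM Employees' );

-- Contains no disallowed extensions; returns '00000'
SELECT SQLFLAGGER( 'SQL:2003/Package', 'SELECT * FROM Employees' );
```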
Syntax
SQRT ( <numeric-expression> )
Parameters
numeric-expression
The number for which the square root is to be calculated.
Returns
DOUBLE
Example
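A minimal sketch:

```sql
-- Returns 3, the square root of 9
SELECT SQRT( 9 ) FROM iq_dummy;
```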
Syntax
SQUARE ( <numeric-expression> )
Parameters
numeric-expression
Is a column, variable, or expression with a data type that is either exact numeric, approximate numeric,
money, or any type that can be implicitly converted to one of these types. For other data types, the
SQUARE function generates an error. The return value is of DOUBLE data type.
Remarks
The SQUARE function takes one argument. For example, SQUARE( 12.01 ) returns 144.240100.
Returns information about the stack trace for the current statement.
Syntax
STACK_TRACE(
[ stack_frames
[, detail_level
[, connection_id ] ] ]
)
stack_frames
'procedure'
Return procedures but not the outer-most statement. This is the default behavior.
'caller'
Return only the outer-most statement (the statement that arrived from the client).
'procedure+caller', 'caller+procedure'
Return both the procedures and the outer-most statement.
detail_level
'stack'
Include procedure names and line numbers. This is the default behavior.
'stack+sql', 'sql+stack'
Include the procedure names and line numbers, as well as the SQL text of the statement being
executed at each level.
connection_id
Use the connection_id option to filter the results returned to the specified connection ID.
Returns
LONG VARCHAR
Remarks
The result contains lines of text delimited by line feed (\n) characters. Each line of the returned value contains
the qualified procedure name or batch type, followed by the line number of the statement. The last line of the
returned value is not terminated by a line feed character. The first line of the stack trace represents the line
where the function was invoked. If a compound statement is not part of a procedure, function, trigger, or event,
then the type of batch (watcom_batch or tsql_batch) is returned instead of the procedure name.
This function returns line numbers as found in the proc_defn column of the SYSPROCEDURE system table for
the procedure. These line numbers might differ from those of the source definition used to create the
procedure.
This function returns the same information as the sa_stack_trace system procedure.
Example
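The statement that produced the results below can be recovered from the trace output itself; the procedures proc1, proc2, and proc3 are assumed to exist and call one another in sequence:

```sql
-- Inside proc3: split the stack trace into one result row per frame
SELECT sa_split_list.line_num, sa_split_list.row_value
FROM sa_split_list( STACK_TRACE( 'caller+procedure', 'stack+sql' ), '\x0A' );
```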
Results:
line_num row_value
1 "DBA"."proc3" : 5 : select sa_split_list.line_num, sa_split_list.row_value from sa_split_list(STACK_TRACE('caller+procedure','stack+sql'),'\x0A')
2 "DBA"."proc2" : 3 : call proc3()
3 "DBA"."proc1" : 3 : call proc2()
4 call proc1()
Related Information
Syntax
Parameters
expression
Returns
DOUBLE
Remarks
STDDEV returns a result of data type DOUBLE precision floating-point. If applied to the empty set, the result is
NULL. The function also returns NULL for a one-element input set.
STDDEV does not support the keyword DISTINCT. A syntax error is returned if you use DISTINCT with STDDEV.
Salary values used in the example: 51432.000, 57090.000, 42300.000, 43700.00, 36500.000, 138948.000,
31200.000, 58930.00, 75400.00. A second result set lists Name and UnitPrice.
Syntax
Parameters
expression
The expression (commonly a column name) that has a population-based standard deviation that is
calculated over a set of rows.
Returns
DOUBLE
Remarks
Computes the population standard deviation of the provided <value expression> evaluated for each row of
the group or partition (if DISTINCT was specified, then each row that remains after duplicates have been
eliminated), defined as the square root of the population variance.
Example
The following statement lists the average and variance in the number of items per order in different time
periods:
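A sketch of such a query against the sample database (the table and column names are assumptions):

```sql
SELECT Year( ShipDate ) AS Year, Quarter( ShipDate ) AS Quarter,
       AVG( Quantity ) AS Average,
       STDDEV_POP( Quantity ) AS Variance
FROM SalesOrderItems
GROUP BY Year, Quarter
ORDER BY Year, Quarter;
```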
Related Information
Syntax
Parameters
expression
Returns
DOUBLE
Remarks
Note
Computes the sample standard deviation of the provided <value expression> evaluated for each row of the
group or partition (if DISTINCT was specified, then each row that remains after duplicates have been
eliminated), defined as the square root of the sample variance.
Standard deviations are computed according to the following formula, which assumes a normal distribution:
Example
The following statement lists the average and variance in the number of items per order in different time
periods:
Related Information
Syntax
Parameters
numeric-expression
The number to be converted to a string.
length
The number of characters to be returned (including the decimal point, all digits to the right and left of the
decimal point, the sign, if any, and blanks). The default is 10 and the maximum length is 255.
decimal
The number of digits to the right of the decimal point to be returned. The default is 0.
Returns
VARCHAR
If the integer portion of the number cannot fit in the length specified, then the result is NULL.
Examples
● The following statement returns a string of six spaces followed by 1234, for a total of 10 characters:
● The following statement returns NULL because the integer portion of the number cannot fit in the specified
length:
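Both statements above can be sketched as follows; the argument values are assumptions consistent with the described results:

```sql
-- Six spaces followed by 1234 (default length 10, default 0 decimal places)
SELECT STR( 1234.56 ) FROM iq_dummy;

-- NULL: the integer portion of 1234.56 cannot fit in a length of 3
SELECT STR( 1234.56, 3 ) FROM iq_dummy;
```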
Takes three arguments as input of type BINARY or STRING and replaces any instances of the second string
expression (<string_expr2>) that occur within the first string expression (<string_expr1>) with a third
expression (<string_expr3>).
Syntax
Parameters
string_expr1
The source string, or the string expression to be searched, expressed as CHAR, VARCHAR, UNICHAR,
UNIVARCHAR, VARBINARY, or BINARY data type.
string_expr2
The pattern string, or the string expression to be found within the source string, expressed as CHAR,
VARCHAR, UNICHAR, UNIVARCHAR, VARBINARY, or BINARY data type.
string_expr3
The replacement string expression, expressed as CHAR, VARCHAR, UNICHAR, UNIVARCHAR, VARBINARY, or
BINARY data type.
Remarks
result_length = ((s/p) * (r-p) + s)
where:
s = length of the source string
p = length of the pattern string
r = length of the replacement string
If (r-p) <= 0, then result_length = s.
● If SAP IQ cannot calculate the result length because the argument values are unknown when the
expression is compiled, the result length used is 255.
● RESULT_LEN never exceeds 32767.
Examples
● Replaces the string <def> within the string <cdefghi> with <yyy>:
● Accepts NULL in the third parameter and treats it as an attempt to replace <string_expr2> with NULL,
effectively turning STR_REPLACE into a “string cut” operation. Returns “abcghijklm”:
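Both examples can be sketched as follows; the first source string is an assumption consistent with the described arguments:

```sql
-- Replaces 'def' with 'yyy'; returns 'cyyyghi'
SELECT STR_REPLACE( 'cdefghi', 'def', 'yyy' ) FROM iq_dummy;

-- NULL replacement cuts the pattern out; returns 'abcghijklm'
SELECT STR_REPLACE( 'abcdefghijklm', 'def', NULL ) FROM iq_dummy;
```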
Related Information
Syntax
STRING ( <string-expression> [ , … ] )
Parameters
string-expression
A string. If only one argument is supplied, it is converted into a single string. If more than one
argument is supplied, the arguments are concatenated into a single string. A NULL is treated as an empty string ('').
● LONG BINARY
● LONG NVARCHAR
● LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use STRING in a SELECT INTO statement, you must have
an Unstructured Data Analytics Option license or use CAST and set STRING to the correct data type and
size.
Remarks
Numeric or date parameters are converted to strings before concatenation. You can also use the STRING
function to convert any single expression to a string by supplying that expression as the only parameter.
Example
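A minimal sketch, showing NULL treated as an empty string:

```sql
-- Returns 'testing'
SELECT STRING( 'test', NULL, 'ing' ) FROM iq_dummy;
```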
Syntax
STRTOUUID ( <string-expression> )
string-expression
A string containing a unique identifier in its hyphenated hexadecimal format.
Returns
UNIQUEIDENTIFIER
Remarks
You can use STRTOUUID to insert UUID values into an SAP IQ database.
Example
CREATE TABLE T (
pk uniqueidentifier primary key,
c1 int);
INSERT INTO T (pk, c1)
VALUES (STRTOUUID
('12345678-1234-5678-9012-123456789012'), 1);
Related Information
Deletes a number of characters from one string and replaces them with another string.
Syntax
Parameters
string-expression1
The string to be modified by the STUFF function.
start
The character position at which to begin deleting characters. The first character in the string is position 1.
length
The number of characters to delete.
string-expression2
The string to be inserted. To delete a portion of a string using the STUFF function, use a replacement string
of NULL.
Returns
LONG VARCHAR or LONG NVARCHAR, depending on the data type of the input expressions.
Remarks
To delete a portion of a string using STUFF, use a replacement string of NULL. To insert a string using STUFF,
use a length of zero.
The STUFF function will return a NULL result in the following situations:
Example
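A minimal sketch of delete-and-insert in one call:

```sql
-- Deletes 3 characters starting at position 2 and inserts 'xyz'; returns 'axyzef'
SELECT STUFF( 'abcdef', 2, 3, 'xyz' ) FROM iq_dummy;
```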
Related Information
Syntax
Parameters
string-expression
The string from which the substring is taken.
start
The start position of the substring to return, in characters. A negative starting position specifies a number
of characters from the end of the string instead of the beginning. The first character in the string is at
position 1.
length
● A positive <length> specifies that the substring ends <length> characters to the right of the starting
position.
Returns
● LONG BINARY
● LONG NVARCHAR
● LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use SUBSTRING in a SELECT INTO statement, you must have
an Unstructured Data Analytics Option license or use CAST and set SUBSTRING to the correct data type and
size.
Remarks
If <length> is specified, the substring is restricted to that length. If no length is specified, the remainder of the
string is returned, starting at the <start> position.
Both <start> and <length> can be negative. Using appropriate combinations of negative and positive
numbers, you can get a substring from either the beginning or end of the string.
When the ansi_substring database option is set to ON (default), negative values are invalid.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
Example
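A minimal sketch:

```sql
-- Returns 'front', the first five characters
SELECT SUBSTRING( 'front yard', 1, 5 ) FROM iq_dummy;
```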
In this section:
Related Information
The SUBSTRING function returns a variable-length character string of the LONG VARCHAR column or variable
parameter. If any of the arguments are NULL, SUBSTRING returns NULL.
Syntax
Parameters
long-varchar-column
A LONG VARCHAR column or variable.
start
An integer expression indicating the start of the substring. A positive integer starts from the beginning of
the string, with the first character at position 1. A negative integer specifies a substring starting from the
end of the string, with the final character at position -1.
length
An integer expression indicating the length of the substring to return.
Remarks
SUBSTRING supports LONG VARCHAR variables of any size of data. Currently, a SQL variable can hold up to 2
GB - 1 in length. SUBSTRING does not support LONG BINARY variables or searching LONG BINARY columns.
When the ansi_substring database option is set to ON (default), negative values are invalid.
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
The SUBSTRING64 function returns a variable-length character string of the large object column or variable
parameter.
SUBSTRING64 supports searching LONG VARCHAR and LONG BINARY columns and LONG VARCHAR and LONG
BINARY variables of any size of data. Currently, a SQL variable can hold up to 2 GB – 1 in length.
If you are licensed to use the Unstructured Data Analytics functionality, you can use this function with large
object data.
In this section:
The SUBSTRING64 function returns a variable-length character string of the large object column or variable
parameter.
Syntax
large-object-column
A large object (LONG VARCHAR or LONG BINARY) column or variable.
start
An 8-byte integer indicating the start of the substring. SUBSTRING64 interprets a negative or zero
<start> offset as if the string were padded on the left with "non-characters." The first character starts at
position 1.
length
An 8-byte integer indicating the length of the substring. If <length> is negative, an error is returned.
Example
Values returned by SUBSTRING64, given a column named col1 that contains the string ("ABCDEFG"):
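For instance, assuming col1 belongs to a table T and contains 'ABCDEFG', calls of this shape return the following:

```sql
SELECT SUBSTRING64( col1, 2, 3 ) FROM T;   -- 'BCD'
SELECT SUBSTRING64( col1, 5 ) FROM T;      -- 'EFG' (remainder of the string)
```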
Remarks
See Function Support of Large Object Data in SAP IQ Administration: Unstructured Data Analytics.
Returns the total of the specified expression for each group of rows.
Syntax
expression
The object to be summed, commonly a column name.
DISTINCT column-name
Computes the sum of the unique values in <column-name> for each group of rows. This is of limited
usefulness, but is included for completeness.
Returns
● INTEGER
● DOUBLE
● NUMERIC
● BIGINT (SIGNED or UNSIGNED)
Remarks
Example
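A minimal sketch against the sample Employees table:

```sql
-- Total salary across all employees
SELECT SUM( Salary ) FROM Employees;
```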
Related Information
Syntax
SUSER_ID ( [ <user-name> ] )
Parameters
user-name
The name of the user.
Returns
INT
Standards
Examples
Syntax
SUSER_NAME ( [ <user-id> ] )
Parameters
user-id
The ID number of the user.
Returns
LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use SUSER_NAME in a SELECT INTO statement, you must
have an Unstructured Data Analytics Option license or use CAST and set SUSER_NAME to the correct data
type and size.
Related Information
Syntax
TAN ( <numeric-expression> )
Parameters
numeric-expression
An angle, in radians.
Returns
DOUBLE
Example
Related Information
Returns the current date. This is the historical syntax for CURRENT DATE.
Syntax
TODAY ( * )
Returns
DATE
Standards
Example
The following statement returns the current day according to the system clock:
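A sketch of the call:

```sql
SELECT TODAY( * ) FROM iq_dummy;
```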
Returns a string, trimmed of all the leading and trailing characters present in the trim character set.
Syntax
Parameters
string-expression
The string to be trimmed.
Returns
Trimmed string.
Standards
● STANDARD function.
Example
The following statement removes all leading and trailing a and b characters from the given string and returns
the value Aabend.
Syntax
Parameters
numeric-expression
The number to be truncated.
integer-expression
A positive integer specifies the number of significant digits to the right of the decimal point at
which to truncate. A negative expression specifies the number of significant digits to the left of the decimal point at
which to truncate.
Returns
NUMERIC
Remarks
This function is the same as TRUNCATE, but does not cause keyword conflicts.
You can use combinations of ROUND, FLOOR, and CEILING to provide similar functionality.
Examples
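A minimal sketch:

```sql
-- Truncates to two decimal places; returns 655.34
SELECT TRUNCNUM( 655.348, 2 ) FROM iq_dummy;
```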
Related Information
Syntax
UCASE ( <string-expression> )
Parameters
string-expression
The string to be converted to uppercase.
Returns
● LONG NVARCHAR
● LONG VARCHAR
● NVARCHAR
● VARCHAR
Note
The result data type is a LONG VARCHAR. If you use UCASE in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license, or use CAST and set UCASE to the correct data type and size.
Example
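A minimal sketch:

```sql
-- Returns 'CHOCOLATE'
SELECT UCASE( 'ChocolaTe' ) FROM iq_dummy;
```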
Related Information
Syntax
UPPER ( <string-expression> )
string-expression
The string to be converted to uppercase.
Returns
● LONG NVARCHAR
● LONG VARCHAR
● NVARCHAR
● VARCHAR
Note
The result data type is a LONG VARCHAR. If you use UPPER in a SELECT INTO statement, you must have an
Unstructured Data Analytics Option license, or use CAST and set UPPER to the correct data type and size.
Example
Related Information
Syntax
USER_ID ( [ <user-name> ] )
Parameters
user-name
The name of the user.
Returns
INT
Standards
Examples
Related Information
Syntax
USER_NAME ( [ <user-id> ] )
Parameters
user-id
The ID number of the user.
Returns
LONG VARCHAR
Note
The result data type is a LONG VARCHAR. If you use USER_NAME in a SELECT INTO statement, you must
have an Unstructured Data Analytics Option license, or use CAST and set USER_NAME to the correct data
type and size.
Standards
Examples
Related Information
Converts a unique identifier value (UUID, also known as GUID) to a string value.
Syntax
UUIDTOSTR ( <uuid-expression> )
Parameters
uuid-expression
The unique identifier value to be converted to a string.
Returns
VARCHAR
Remarks
Example
To convert a unique identifier value into a readable format, execute a query similar to:
CREATE TABLE T3 (
pk uniqueidentifier primary key,c1 int);
INSERT INTO T3 (pk, c1)
values (0x12345678123456789012123456789012, 1)
SELECT UUIDTOSTR(pk) FROM T3
Related Information
Syntax
Parameters
expression
The expression (commonly a column name) that has a population-based variance that is calculated over a
set of rows.
Returns
DOUBLE
Remarks
Computes the population variance of the provided <value expression> evaluated for each row of the group
or partition (if DISTINCT was specified, then each row that remains after duplicates have been eliminated),
defined as the sum of squares of the difference of <value expression>, from the mean of <value
expression>, divided by the number of rows (remaining) in the group or partition.
Standards
Example
The following statement lists the average and variance in the number of items per order in different time
periods:
Syntax
Parameters
expression
The expression (commonly a column name) that has a sample-based variance that is calculated over a set
of rows.
Returns
DOUBLE
Remarks
Note
Computes the sample variance of <value expression> evaluated for each row of the group or partition (if
DISTINCT was specified, then each row that remains after duplicates have been eliminated), defined as the
sum of squares of the difference of <value expression>, from the mean of <value expression>, divided
by one less than the number of rows (remaining) in the group or partition.
Variances are computed according to the following formula, which assumes a normal distribution:
Example
The following statement lists the average and variance in the number of items per order in different time
periods:
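A sketch of such a query against the sample database (the table and column names are assumptions):

```sql
SELECT Year( ShipDate ) AS Year, Quarter( ShipDate ) AS Quarter,
       AVG( Quantity ) AS Average,
       VAR_SAMP( Quantity ) AS Variance
FROM SalesOrderItems
GROUP BY Year, Quarter
ORDER BY Year, Quarter;
```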
Related Information
Returns 1 if a user-defined variable exists with the specified name. Returns 0 if no such variable exists.
Syntax
variable-name-string
A string containing the name of the variable to check.
Returns
INT
Standards
Example
The following IF statement checks to see if a variable called start_time exists. If it doesn't, then the
database server creates a connection-scope variable with that name, and sets its value to the current time.
The following IF statement checks to see if a database-scope variable named run_time owned by user ID
jsmith exists. If it doesn't, then the database server creates the variable, and sets its value to the current
time.
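The first check can be sketched as follows (a connection-scope variable created with CREATE VARIABLE):

```sql
IF VAREXISTS( 'start_time' ) = 0 THEN
    -- Variable does not exist yet: create it and initialize it
    CREATE VARIABLE start_time TIMESTAMP;
    SET start_time = CURRENT TIMESTAMP;
END IF;
```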
Syntax
expression
The expression (commonly a column name) whose sample-based variance is calculated over a set of rows.
Returns
DOUBLE
Remarks
VARIANCE returns a result of data type double-precision floating-point. If applied to the empty set,
the result is NULL. The function also returns NULL for a one-element input set.
VARIANCE does not support the keyword DISTINCT. A syntax error is returned if DISTINCT is used with
VARIANCE.
Standards
Examples
Salary values used in the example: 51432.000, 57090.000, 42300.000, 43700.00, 36500.000, 138948.000,
31200.000, 58930.00, 75400.00. UnitPrice values used in the second example: 9.00, 14.00, 14.00.
Related Information
Returns the number of weeks since an arbitrary starting date/time, returns the number of weeks between two
specified date/times, or adds the specified integer-expression number of weeks to a date/time.
Syntax
Syntax 1: Return the number of weeks between year 0000 and a TIMESTAMP value
WEEKS( <timestamp-expression> )
Parameters
timestamp-expression
Returns
Remarks
Weeks are defined as going from Sunday to Saturday, as they do in a North American calendar. The number
returned by the first syntax is often useful for determining if two dates are in the same week:
Examples
● The following statement returns the value 9, to signify the difference between the two dates:
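The two dates in the original example are not preserved; the pair below is an assumption consistent with the described result (nine Sundays fall between them):

```sql
-- Returns 9
SELECT WEEKS( '2001-07-13', '2001-09-13' ) FROM iq_dummy;
```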
Related Information
Syntax
Parameters
expression
The expression (commonly a column name) to be averaged as a weighted moving average over a set of rows.
Remarks
A weighted average is an average in which each quantity to be averaged is assigned a weight. Weightings
determine the relative importance of each quantity that make up the average.
Use the WEIGHTED_AVG function to create a weighted moving average. In a weighted moving average, weights
decrease arithmetically over time. Weights decrease from the highest weight for the most recent data points,
down to zero.
To exaggerate the weighting, you can average two or more weighted moving averages together, or use an
EXP_WEIGHTED_AVG function instead.
The <window-spec> parameter represents usage as a window function in a SELECT statement. As such, you
can specify elements of <window-spec> either in the function syntax (inline), or with a WINDOW clause in the
SELECT statement.
Example
The following example returns a weighted average of salaries by department for employees in Florida, with the
salary of recently hired employees contributing the most weight to the average:
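A sketch of such a query; the window specification and column names are assumptions, since the original statement is not shown:

```sql
SELECT DepartmentID, Surname, Salary,
       WEIGHTED_AVG( Salary ) OVER (
           PARTITION BY DepartmentID
           ORDER BY StartDate DESC
           RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS "W_Avg"
FROM Employees
WHERE State = 'FL';
```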
Related Information
For a given expression, the WIDTH_BUCKET function returns the bucket number that the result of this
expression will be assigned after it is evaluated.
Syntax
Parameters
expression
The expression for which the histogram is being created. This expression must evaluate to a numeric or
datetime value or to a value that can be implicitly converted to a numeric or datetime value. If <expr>
evaluates to null, then the expression returns null.
min_value
An expression that resolves to the end points of the acceptable range for <expr>. Must also evaluate to
numeric or datetime values and cannot evaluate to null.
max_value
An expression that resolves to the end points of the acceptable range for <expr>. Must also evaluate to
numeric or datetime values and cannot evaluate to null.
num_buckets
Is an expression that resolves to a constant indicating the number of buckets. This expression must
evaluate to a positive integer.
Remarks
You can generate equi-width histograms with the WIDTH_BUCKET function. Equi-width histograms divide data
sets into buckets whose interval size (highest value to lowest value) is equal. The number of rows held by each
bucket will vary. A related function, NTILE, creates equi-height buckets.
Equi-width histograms can be generated only for numeric, date or datetime data types; therefore, the first
three parameters should be all numeric expressions or all date expressions. Other types of expressions are not
allowed. If the first parameter is NULL, the result is NULL. If the second or the third parameter is NULL, an error
message is returned, as a NULL value cannot denote any end point (or any point) for a range in a date or
numeric value dimension. The last parameter (number of buckets) should be a numeric expression that
evaluates to a positive integer value; 0, NULL, or a negative value will result in an error.
Buckets are numbered from 0 to (n+1). Bucket 0 holds the count of values less than the minimum. Bucket(n+1)
holds the count of values greater than or equal to the maximum specified value.
Example
The following example creates a 10-bucket histogram on the credit_limit column for customers in
Massachusetts in the sample table and returns the bucket number (“Credit Group”) for each customer.
Customers with credit limits greater than the maximum value are assigned to the overflow bucket, 11:
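A sketch of such a query; the table, column names, and range bounds are assumptions, since the original statement is not shown:

```sql
SELECT Surname, credit_limit,
       WIDTH_BUCKET( credit_limit, 100, 5000, 10 ) AS "Credit Group"
FROM customers
WHERE State = 'MA'
ORDER BY "Credit Group";
```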
When the bounds are reversed, the buckets are open-closed intervals. For example: WIDTH_BUCKET
(<credit_limit>, <5000>, <0>, <5>). In this example, bucket number 1 is (4000, 5000), bucket number 2 is
(3000, 4000), and bucket number 5 is (0, 1000). The overflow bucket is numbered 0 (5000, +infinity), and the
underflow bucket is numbered 6 (-infinity, 0).
Syntax
YEAR ( <timestamp-expression> )
Parameters
timestamp-expression
The datetime value from which the year is to be extracted.
Returns
SMALLINT
Remarks
Standards
Example
Related Information
Syntax
Syntax 1: Return the Number of Years Between Year 0000 and a TIMESTAMP Value
YEARS( <timestamp-expression> )
Parameters
timestamp-expression
Returns
Remarks
For syntax 2, the value of YEARS is computed by counting the number of first days of the year between the two
dates. The number might be negative. Hours, minutes, and seconds are ignored.
Syntax 3 adds an <integer-expression> number of years to the given date. If the new date is past the end
of the month (such as SELECT YEARS ( CAST ( '1992-02-29' AS TIMESTAMP ), 1 )), the result is set to the last
day of the month.
Examples
● The following statement returns the value 2, to signify the difference between the two dates.
Related Information
Returns a date value corresponding to the given year, month, and day of the month.
Syntax
Parameters
integer-expression1
The year.
integer-expression2
The number of the month. If the month is outside the range 1–12, the year is adjusted accordingly.
integer-expression3
The day number. The day is allowed to be any integer; the date is adjusted accordingly.
Returns
DATE
Standards
Examples
● If the values are outside their normal range, the date adjusts accordingly. For example, the following
statement returns the value 1993-03-01:
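The adjustment example can be sketched as follows (February 1993 has only 28 days, so day 29 rolls over):

```sql
-- Returns 1993-03-01
SELECT YMD( 1993, 2, 29 ) FROM iq_dummy;
```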
Use the system-supplied stored procedures in SAP IQ databases to retrieve system information.
In this section:
There are two security models under which privileged system procedures can run. Each model grants the
ability to run the system procedure differently.
Note
The following information applies to SAP IQ privileged system procedures only, not user-defined stored
procedures.
The first model, called the SYSTEM PROCEDURE DEFINER model, runs a privileged system procedure with the
privileges of its owner, typically dbo. The second model, called the SYSTEM PROCEDURE INVOKER model, runs
a privileged system procedure with the privileges of the person executing it.
To run a privileged system procedure using the SYSTEM PROCEDURE DEFINER model, grant explicit EXECUTE
object-level privilege on the procedure. Any system privileges required to run any underlying authorized tasks
of the system procedure are automatically inherited from the owner (definer of the system procedure).
For privileged system procedures using the SYSTEM PROCEDURE INVOKER model, the EXECUTE object-level
privilege is granted to the PUBLIC role, and since, by default, every user is a member of the PUBLIC role, every
user automatically inherits the EXECUTE object-level privilege. However, since the PUBLIC role is not the owner
of the system procedures, and is not granted any system privileges, the system privileges required to run any
underlying authorized tasks must be granted directly or indirectly to the user.
By default, a database created in versions 16.x and later runs all privileged system procedures using the
SYSTEM PROCEDURE INVOKER model. A database created in versions earlier than 16.x and upgraded to
versions 16.x and later runs privileged system procedures using a combination of both the SYSTEM
PROCEDURE DEFINER and SYSTEM PROCEDURE INVOKER models. In the combined model, all pre-16.x
privileged system procedures use the SYSTEM PROCEDURE DEFINER model, and any privileged system
procedures introduced with 16.x (or any future release) use the SYSTEM PROCEDURE INVOKER model. You
can override the default security model when creating or upgrading a database, or any time thereafter.
However, SAP recommends that you not do so, as it may result in loss of functionality on custom stored
procedures and applications.
When running privileged system procedures using the SYSTEM PROCEDURE DEFINER model, the dbo system
role is typically the owner of the procedures. By default, the dbo system role is granted the
SYS_AUTH_DBA_ROLE compatibility role. This ensures that the role is indirectly granted all privileges
necessary to execute system procedures. Migrating the SYS_AUTH_DBA_ROLE compatibility role can result in
the dbo system role losing the ability to execute privileged system procedures. See Implications of Migrating
Compatibility Roles [page 574] for details.
In this section:
You cannot revoke the underlying system privileges of a compatibility role; you must first migrate it to a user-
defined role. Only then can you revoke individual underlying system privileges from the new role and grant
them to other user-defined roles per the organization's security requirements. This enforces separation of
duties.
You can migrate compatibility roles automatically or manually. The method of migration can impact the ability
of a system role or the DBO system role to continue performing authorized tasks.
Regardless of the migration method used, once a compatibility role or the SYS_AUTH_DBA_ROLE role is
dropped, if you revoke a system privilege from the new user-defined role and grant it to another user-defined
role, you must do one of the following to ensure that the system roles, especially the dbo system role, retain all
the system privileges required to execute the applicable privileged tasks or multiplex management stored
procedures:
● Grant each system privilege revoked from the migrated user-defined role directly to the applicable system
roles or dbo role; or
● Grant membership in the user-defined role to which the system privileges are granted to the applicable
system roles or dbo system role.
The system roles that are members of compatibility roles, and might potentially be impacted by migration,
include:
● dbo: member of SYS_AUTH_DBA_ROLE and SYS_AUTH_RESOURCE_ROLE
● SYS_RUN_REPLICATION_ROLE: member of SYS_AUTH_DBA_ROLE
Automatic Migration
The ALTER ROLE statement creates a new user-defined role, automatically grants all underlying system
privileges of the compatibility role to the new user-defined role, makes each member of the compatibility role a
member of the new user-defined role, then drops the compatibility role.
Automatic migration assumes that the destination user-defined role does not already exist and that all system
privileges are migrated to the same new user-defined role.
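As a sketch, a single ALTER ROLE statement performs the automatic migration. The MIGRATE TO clause and the new role name below are illustrative; some compatibility roles (such as SYS_AUTH_DBA_ROLE) may require more than one new role name:

```sql
-- Sketch: migrate a compatibility role to a new user-defined role in one step.
-- Grants all underlying privileges and memberships, then drops the old role.
ALTER ROLE SYS_AUTH_BACKUP_ROLE
    MIGRATE TO custom_backup_role
```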
Manual Migration
Use the CREATE ROLE statement to create a new user-defined role. Use the GRANT statement to grant each
underlying system privilege to one or more users or roles. Use the DROP ROLE statement to drop the
compatibility role once all underlying system privileges are granted to at least one other user or role.
Members of the migrated compatibility role are not automatically granted membership in the new user-defined
role. As a result, members of some system roles may no longer be able to perform the expected privileged
tasks once the compatibility role is dropped. You must grant membership in the new user-defined role to the
affected system roles or directly grant the required system privileges to affected members.
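The manual steps above might look like the following sketch; the role name, the BACKUP DATABASE privilege, and the user name are illustrative:

```sql
-- Sketch: manual migration of a compatibility role
CREATE ROLE custom_backup_role;

-- Re-grant each underlying system privilege to the new role
GRANT BACKUP DATABASE TO custom_backup_role;

-- Membership is not migrated automatically; re-grant it by hand
GRANT ROLE custom_backup_role TO some_user;

-- Drop the compatibility role once its privileges live elsewhere
DROP ROLE SYS_AUTH_BACKUP_ROLE WITH REVOKE;
```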
The process by which you grant the ability to run a privileged system procedure is dependent on the security
model under which it runs.
For a privileged system procedure using the SYSTEM PROCEDURE DEFINER model, grant EXECUTE object-
level privilege on the system procedure to the user:
GRANT EXECUTE ON <procedure_name>
TO <grantee> [,...]
For a privileged system procedure using the SYSTEM PROCEDURE INVOKER model, grant the underlying
system privileges required by the system procedure to the user. Use sp_proc_priv() to identify the system
privileges required to run a system procedure.
GRANT <system_privilege_name>
TO <grantee> [,...]
Example
This statement grants the EXECUTE privilege on procedure sp_test1 to user Joe. sp_test1 uses the
SYSTEM PROCEDURE DEFINER model:
GRANT EXECUTE ON sp_test1 TO Joe
This statement identifies the system privileges necessary to run procedure sp_test2:
sp_proc_priv sp_test2;
Results:
proc_name privilege
The process by which you revoke the ability to run a privileged system procedure is dependent on the security
model under which it runs.
For a privileged system procedure using the SYSTEM PROCEDURE DEFINER model, revoke the EXECUTE
object-level privilege on the system procedure from the user:
REVOKE EXECUTE ON <procedure_name>
FROM <grantee> [,...]
For a privileged system procedure using the SYSTEM PROCEDURE INVOKER model, revoke the underlying
system privileges required by the system procedure from the user:
REVOKE <system_privilege_name>
FROM <grantee> [,...]
Related Information
To determine which security model a database uses, check the relevant capability bit of the
db_property('Capabilities') value:
select IF ((HEXTOINT(substring(db_property('Capabilities'),
    1, length(db_property('Capabilities'))-20)) & 8) = 8)
THEN 1
ELSE 0
END IF
A result of 1 means the capability bit (8) is set; 0 means it is not.
You cannot configure a new or upgraded database version 16.0 or later to run all system procedures using the
SYSTEM PROCEDURE DEFINER model.
For these privileged system procedures, if the database is configured to use SYSTEM PROCEDURE DEFINER,
you only need EXECUTE object-level privilege on the procedure to run it. If the database is configured to use
SYSTEM PROCEDURE INVOKER, you also need the individual system privileges required by each procedure.
Refer to SAP IQ SQL Reference for the system privileges required to run each system procedure.
● sp_iqdbsize
● sp_iqmpxincheartbeatinfo
These pre-16.x privileged system procedures run with the privileges of the user who is running the procedure,
not the owner of the procedure, regardless of the security model setting. Therefore, in addition to the EXECUTE
object-level privilege on the system procedure, which is, by default, granted through membership in PUBLIC
role, you must also be granted the additional system privileges required by the system procedure. See the SAP
IQ SQL Reference for the system privileges required to run each system procedure.
● sa_describe_shapefile
● sa_get_user_status
● sa_locks
● sa_performance_diagnostics
● sa_report_deadlocks
● sa_text_index_stats
Some variations are permitted because the product supports both SAP IQ SQL and Transact-SQL syntax. If you
need Transact-SQL compatibility, be sure to use Transact-SQL syntax.
<procedure_name> <param> (Transact-SQL) – If you omit quotes around parameters, you must also omit
parentheses.
Note
Quotes are always required around parameters when the owner is specified. For example, assuming the
owner is <dba>, sp_iqtablesize 'dba.emp1' requires quotes around the parameters;
sp_iqtablesize emp1 does not.
<procedure_name> (SAP IQ or Transact-SQL) – Use this syntax if you run a procedure with no parameters
directly in Interactive SQL.
call <procedure_name> (<param>='<value>') (SAP IQ) – Use this syntax to call a procedure that passes a
parameter value.
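The variants above can be illustrated with sp_iqtablesize; the table name emp1 and owner dba come from the note in the table, and the parameter-value form is a sketch:

```sql
-- Transact-SQL form: owner specified, so quotes are required (and no parentheses)
sp_iqtablesize 'dba.emp1'

-- Transact-SQL form: no owner, so quotes may be omitted
sp_iqtablesize emp1

-- SAP IQ form: call syntax passing a parameter value
call sp_iqtablesize('emp1')
```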
When you use Transact-SQL stored procedures, you must use the Transact-SQL syntax.
This means that you get a snapshot view. For example, a report column that lists space in use by a connection
shows only the space in use at the instant the procedure executes, not the maximum space used by that
connection.
To monitor SAP IQ usage over an extended period, use the SAP IQ monitor, which collects and reports statistics
from the time you start the monitor until you stop it, at an interval you specify.
Tip
SAP SQL Anywhere stored procedures do not contain "iq" in the procedure name.
The sa_get_table_definition procedure is only supported for SAP SQL Anywhere tables. If run against an
SAP IQ table, the procedure returns the error "not implemented for IQ tables".
System stored procedures carry out System Administrator tasks in the IQ main store.
Note
By default, the maximum length of column values displayed by Interactive SQL Classic is 30 characters.
This might be inadequate for displaying output of stored procedures such as sp_iqstatus. To avoid
truncated output, increase the length by selecting Command Options from the Interactive SQL
menu, then select and enter a higher value for Limit Display Columns, Limit Output Columns, or both.
In this section:
Note
Though sp_iqaddlogin is still supported for backwards compatibility, use CREATE USER to create new
users.
Syntax
Syntax 1
Syntax 3
Go to:
● Remarks
● Privileges
● Side Effects
● Examples
Parameters
(back to top)
username_in
The user’s login name. Login names must conform to the rules for identifiers.
pwd
The user's password. Passwords must conform to rules for passwords, that is, they must be valid
identifiers.
password_expiry_on_next_login
(Optional) Specifies whether user’s password expires as soon as this user’s login is created. Default setting
is OFF (password does not expire).
policy_name
(Optional) Creates the user under the named login policy. If unspecified, user is created under the root
login policy.
Remarks
(back to top)
Adds a new SAP IQ user account, assigns a login policy to the user and adds the user to the ISYSUSER system
table. If the user already has a user ID for the database but is not in ISYSUSER, (for example, if the user ID was
added using the GRANT CONNECT statement or SAP IQ Cockpit), sp_iqaddlogin adds the user to the table.
If you do not specify a login policy name when calling the procedure, SAP IQ assigns the user to the root login
policy.
If the maximum number of logins for a login policy is unlimited, then a user belonging to that login policy
can have an unlimited number of connections.
The first user login forces a password change and assigns a login policy to the newly created user.
A <username_in> and <pwd> created using sp_iqaddlogin and set to expire in one day is valid all day
tomorrow and not valid on the following day. That is, a login created today and set to expire in <n> days is not
usable once the date changes to the <(n+1)>th day.
Privileges
(back to top)
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY USER System privilege GRANT System Privilege Statement [page 1511]
Side Effects
(back to top)
None
Example
(back to top)
These calls add the user rose with a password irk324 under the login policy named expired_password.
This example assumes the expired_password login policy already exists:
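The omitted calls might look like this sketch, following the parameter order documented above; the 'on' literal for password expiry is an assumption:

```sql
-- Sketch: add user rose with password irk324 under the expired_password policy
call sp_iqaddlogin ('rose', 'irk324', 'on', 'expired_password')

-- Equivalent Transact-SQL style
sp_iqaddlogin 'rose', 'irk324', 'on', 'expired_password'
```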
Related Information
Syntax
sp_iqbackupdetails <backup_id>
Parameters
backup_id
Returns
● "Full"
● "Incremental since incremental"
● "Incremental since full"
● "All inclusive"
● "All RW files in RW dbspaces"
● "Set of RO dbspace/file"
depends_on_id Identifier for previous backup that the backup depends on.
dbfile_name The logical file name, if it was not renamed after the backup operation. If renamed,
"null."
dbfile_path The dbfile path from SYSBACKUPDETAIL, if it matches the physical file path
("file_name") in SYSDBFILE for a given dbspace_id and the dbfile_id. Otherwise "null."
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
dbfile_backup_size dbfile_path
2884 C:\\Documents and Settings\\All Users\\IQ\\demo\\iqdemo.db
Related Information
Syntax
Parameters
timestamp or backup_id
(Optional) The interval for which to report backup operations. If you specify <timestamp> or
<backup_id>, only those records with backup_time greater than or equal to the time you enter are
returned. If you specify no timestamp, the procedure returns all the backup records in
ISYSIQBACKUPHISTORY.
● "Full"
● "Incremental since incremental"
● "Incremental since Full"
● "PITR"
● "All inclusive"
● "All RW files in RW dbspaces"
● "Set of RO dbspace/file"
● "Non-virtual"
● "Decoupled"
● "Encapsulated"
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Related Information
Determines the minimum memory requirements for row-level versioning of the given table.
Syntax
Go to:
● Returns
● Remarks
● Privileges
● Side Effects
● Examples
Parameters
(back to top)
table_name
Name of the table for which the user wants to determine in-memory storage. If not specified, default value
of '%' is used.
max_subfragments
Maximum number of subfragments to assume for in-memory storage. If not specified, default value of 1 is
used.
num_rows
(Optional) The number of rows expected in the RLV store. If not specified, default value of 1 is used.
rv_initial_blocksize
The size (in bytes) of the first array allocation for fixed length data type columns in the RLV in-memory
store. It is used as a starting size by all fixed block allocation strategies. If not specified, default value of
4096 bytes (4 KB) is used.
rv_fixed_blocksize
The size (in bytes) of every subsequent array allocation for fixed length data type columns in the RLV in-
memory store. It is used by the Constant allocation strategy. If not specified, default value of 16777216
bytes (16 MB) is used.
rv_delta_increase
The delta size increase size (in Bytes) for subsequent array allocation for fixed length data type columns in
the RLV in-memory store. The nth block size is the value of (n-1)th block size +
RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE. This is used by the Delta Increase allocation strategy. If
not specified, default value of 1024 bytes (1 KB) is used.
rv_percent_increase
The percentage size increase for subsequent array allocation for fixed length data type columns in the RLV
in-memory store. The nth block size is the value of (n-1)th block size + ((n-1)th block size *
RV_PERCENT_INCREASE_IN_FIX_DATA_BLOCKSIZE / 100). This is used by the Percent Increase
allocation strategy. If not specified, default value of 100 (%) is used.
Returns
(back to top)
num_rows BIGINT Number of rows expected in the RLV store; the default is 1.
● Fixed-width datatypes
● NBIT columns
● Internal columns
Remarks
(back to top)
Displays the estimated minimum memory requirements for given allocation strategies to convert a table to
row-level-versioning.
The parameter values can be adjusted to calculate different memory requirements for the RLV store. Each
parameter corresponds to a database option.
<rv_initial_blocksize> RV_INITIAL_FIXED_DATA_BLOCKSIZE
<rv_fixed_blocksize> RV_FIXED_DATA_BLOCKSIZE
<rv_delta_increase> RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE
<rv_percent_increase> RV_PERCENT_INCREASE_IN_FIX_DATA_BLOCKSIZE
Parameter values are used by the stored procedure. They are not utilized by the actual RLV store. To change
the memory usage of the RLV store, modify the corresponding database option.
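For example, the corresponding database options can be changed with SET OPTION; the values below are illustrative:

```sql
-- Sketch: raise the subsequent fixed-data allocation size to 32 MB
SET OPTION PUBLIC.RV_FIXED_DATA_BLOCKSIZE = 33554432;

-- Sketch: make the Percent Increase strategy grow blocks by 50%
SET OPTION PUBLIC.RV_PERCENT_INCREASE_IN_FIX_DATA_BLOCKSIZE = 50;
```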
Privileges
(back to top)
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
(back to top)
None
(back to top)
● The following example returns the in-memory usage for the RLV-enabled table tab1 using 1
subfragment with 10 million rows:
table_name table_id is_rlv max_subfragments num_rows fixed_Columns
● The following example returns the in-memory usage for the RLV-enabled table tab2 using 10
subfragments with 1 million rows.
Note
Syntax
Parameters
table_name
Name of the table.
owner
Name of the table owner. If this parameter is not specified, then the procedure looks for a table owned by
the current user.
script
The script:
● table_name
● table_owner
● column_name
● cardinality
● index_type
● index recommendation
Remarks
If you do not specify any parameters, then SAP IQ displays create_index SQL statements for all columns in
all tables owned by the current user.
If you specify <script>, you can redirect the output to generate the script file:
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● CREATE ANY INDEX system privilege – GRANT System Privilege Statement [page 1511]
● SELECT ANY TABLE system privilege – GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
table_name table_owner column_name cardinality index_type index_recommendation
Related Information
Checks validity of the current database. Optionally corrects allocation problems for dbspaces or databases.
sp_iqcheckdb does not check a partitioned table if partitioned data exists on offline dbspaces.
Syntax
sp_iqcheckdb '<mode> <target> […] [ resources <resource-percent> ]'
<mode> ::=
   { allocation
   | check
   | verify }
   | dropleaks
<target> ::=
   [ indextype <index-type> […] ] database
   | database resetclocks
   | { [ indextype <index-type> ] […] table <table-name> [ partition <partition-name> ] […]
   | index <index-name>
   | […] dbspace <dbspace-name> }
   | cache <main-cache-name>
Go to:
● Remarks
● Privileges
● Side Effects
● Examples
Parameters
(back to top)
index-type
One of the following index types: FP, CMP, HG, HNG, WD, DATE, TIME, DTTM, TEXT.
If the specified <index-type> does not exist in the target, an error message is returned. If multiple index
types are specified and the target contains only some of these index types, the existing index types are
processed by sp_iqcheckdb.
index-name
If <owner> is not specified, current user and database owner (dbo) are substituted in that order. If
<table> is not specified, <index-name> must be unique.
table-name
If <owner> is not specified, current user and database owner (dbo) are substituted in that order. <table-
name> cannot be a temporary or pre-join table.
Note
If either the table name or the index name contains spaces, enclose the <table-name> or <index-
name> parameter in double quotation marks:
partition-name
The partition filter causes sp_iqcheckdb to examine a subset of the corresponding table’s rows that
belong to that partition. A partition filter on a table and table target without the partition filter are
semantically equivalent when the table has only one partition.
dbspace-name
The dbspace target examines a subset of the database's pages that belong to that dbspace. The dbspace
must be online. The dbspace and database targets are semantically equivalent when the database has only
one dbspace.
resource-percent
The input parameter <resource-percent> must be an integer greater than zero. The resources
percentage allows you to limit the CPU utilization of the database consistency checker by controlling the
number of threads with respect to the number of CPUs. If <resource-percent> = 100 (the default
value), then one thread is created per CPU. If <resource-percent> > 100, then there are more threads
than CPUs, which might increase performance for some machine configurations. The minimum number of
threads is one.
main-cache-name
The cache target compares pages in the main cache dbspace against the original pages in the IQ main
store.
Note
The sp_iqcheckdb parameter string must be enclosed in single quotes and cannot be greater than 255
bytes in length.
Remarks
(back to top)
If an error is found, sp_iqcheckdb reports the name of the object and the type of error. sp_iqcheckdb does
not update the free list if errors are detected.
sp_iqcheckdb also allows you to check the consistency of a specified table, index, index type, or the entire
database.
Note
sp_iqcheckdb is the user interface to the SAP IQ database consistency checker (DBCC) and is
sometimes referred to as DBCC.
There are three modes for checking database consistency, and one for resetting allocation maps. If mode and
target are not both specified in the parameter string, SAP IQ returns the error message:
sp_iqcheckdb checks the allocation of every block in the database and saves the information in the current
session until the next sp_iqdbstatistics procedure is issued. sp_iqdbstatistics displays the latest
result from the most recent execution of sp_iqcheckdb.
sp_iqcheckdb can perform several different functions, depending on the parameters specified.
Note
See Database Verification for detailed information on checking database consistency with sp_iqcheckdb.
Allocation Checks allocation with blockmap information for the entire database, a specific index, a specific
index type, a specific partition, a specific table, or a specific dbspace. Does not check index
consistency.
Detects duplicate blocks (blocks for which two or more objects claim ownership) or extra blocks
(unallocated blocks owned by an object).
Note
If sp_iqcheckdb detects a block ownership conflict, it adds a ***Blocks with
Multiple Owners*** section to the report, listing the implicated object names and
physical block numbers. Block ownership conflicts are only analyzed if the target of
sp_iqcheckdb is either a database or a dbspace. Example of a block ownership conflict:
This section of the report can help you recover from corruption caused by block ownership
conflicts. In the event of a block ownership conflict, contact SAP Support for advice on how to
resolve the reported conflicts.
Detects leaked blocks (allocated blocks unclaimed by any object in the specified target) for
database or dbspace targets.
Note
sp_iqcheckdb cannot check all allocation problems if you specify the name of a single
index, index type, or table in the input parameter string.
● To detect duplicate or unowned blocks (use database or specific tables or indexes as the
target)
● If you encounter page header errors
The DBCC option resetclocks is used only with allocation mode. resetclocks is used with
forced recovery to convert a multiplex secondary server to a coordinator. For information on
multiplex capability, see SAP IQ Administration: Multiplex. resetclocks corrects the values of
internal database versioning clocks, in the event that these clocks are behind. Do not use the
resetclocks option for any other purpose, unless you contact SAP IQ Technical Support.
The resetclocks option must be run in single-user mode and is allowed only with the DBCC
statement allocation database. The syntax of resetclocks is:
sp_iqcheckdb 'allocation database resetclocks'
Check Verifies that all database pages can be read for the entire database, main cache, specific index,
specific index type, specific table, specific partition, or specific dbspace. If the table is partitioned,
then check mode will check the table’s partition allocation bitmaps.
Run in check mode if metadata, null count, or distinct count errors are returned when running a
query.
Verify Verifies the contents of non-FP indexes with their corresponding FP indexes for the entire
database, main cache, a specific index, a specific index type, specific table, specific partition, or
specific dbspace. If the specified target contains all data pages for the FP and corresponding non-FP
indexes, then verify mode detects the following inconsistencies:
● Missing key – a key that exists in the FP but not in the non-FP index.
● Extra key – a key that exists in the non-FP index but not in the FP index.
● Missing row – a row that exists in the FP but not in the non-FP index.
● Extra row – a row that exists in the non-FP index but not in the FP index.
If the specified target contains only a subset of the FP pages, then verify mode can detect only the
following inconsistencies:
● Missing key
● Missing row
If the target is a partitioned table, then verify mode also verifies that each row in the table or table
partition has been assigned to the correct partition.
Run in verify mode if metadata, null count, or distinct count errors are returned when running a
query.
Note
sp_iqcheckdb does not check referential integrity or repair referential integrity violations.
Dropleaks When the SAP IQ server runs in single-node mode, you can use dropleaks mode with either a
database or dbspace target to reset the allocation map for the entire database or specified dbspace
targets. If the target is a dbspace, then the dropleaks operation must also prevent read-write
operations on the named dbspace. All dbspaces in the database or dbspace list must be online.
On a multiplex coordinator node, dropleaks mode also detects leaked blocks, duplicate blocks, or
extra blocks across the multiplex.
DBCC Performance
The execution time of DBCC varies, depending on the size of the database for an entire database check, the
number of tables or indexes specified, and the size of the machine. Checking only a subset of the database
(that is, only specified tables, indexes, or index types) requires less time than checking an entire database.
This table summarizes the actions and output of the four sp_iqcheckdb modes.
Output
Depending on the execution mode, sp_iqcheckdb output includes summary results, errors, informational
statistics, and repair statistics. The output may contain as many as three results sets, if you specify multiple
modes in a single session. Error statistics are indicated by asterisks (*****), and appear only if errors are
detected.
The output of sp_iqcheckdb is also copied to the SAP IQ message file .iqmsg. If the DBCC_LOG_PROGRESS
option is ON, sp_iqcheckdb sends progress messages to the IQ message file, allowing the user to follow the
progress of the DBCC operation as it executes.
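As a sketch, progress logging can be enabled for the current connection before running a check; the 'check database' parameter string follows the mode/target forms documented above:

```sql
-- Sketch: log DBCC progress messages to the IQ message file
SET TEMPORARY OPTION DBCC_LOG_PROGRESS = 'ON';

sp_iqcheckdb 'check database'
```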
Privileges
(back to top)
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
ALTER DATABASE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
(back to top)
None
(back to top)
● Performs a detailed check on indexes i1, i2, and dbo.t1.i3. If you do not specify a new mode,
sp_iqcheckdb applies the same mode to the remaining targets, as shown in the following command:
● You can combine all modes and run multiple checks on a database in a single session. Perform a quick
check of partition p1 in table t2, a detailed check of index i1, and allocation checking for the entire
database using half of the CPUs:
● Verifies the FP and HG indexes in the table t1 and the HNG indexes in the table t2:
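The commands for the bullets above are elided; they might look like these sketches, built from the single-quoted '<mode> <target>' parameter string documented in the Parameters note:

```sql
-- Detailed (verify) check of indexes i1, i2, and dbo.t1.i3; the mode carries
-- over to the remaining targets
sp_iqcheckdb 'verify index i1 index i2 index dbo.t1.i3'

-- Combined modes in one session: quick check of partition p1 in table t2,
-- detailed check of index i1, allocation check of the database at half the CPUs
sp_iqcheckdb 'check table t2 partition p1 verify index i1 allocation database resources 50'

-- Verify the FP and HG indexes in t1 and the HNG indexes in t2
sp_iqcheckdb 'verify indextype FP indextype HG table t1 indextype HNG table t2'
```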
Note
LVC is a VARCHAR or VARBINARY column with a width greater than 255. LONG BINARY (BLOB) and
LONG VARCHAR (CLOB) also use LVC.
===================================================================
DBCC Allocation Mode Report
===================================================================
DBCC Status No Errors Detected
===================================================================
Allocation Summary
===================================================================
Blocks Total 25600
Blocks in Current Version 5917
Blocks in All Versions 5917
Blocks in Use 5917
% Blocks in Use 23
===================================================================
Allocation Statistics
===================================================================
Marked Logical Blocks 8320
Marked Physical Blocks 5917
Marked Pages 520
Blocks in Freelist 2071196
Imaginary Blocks 2014079
Highest PBN in Use 1049285
Total Free Blocks 19683
Usable Free Blocks 19382
% Total Space Fragmented 1
% Free Space Fragmented 1
Max Blocks Per Page 16
1 Block Page Count 165
3 Block Page Count 200
4 Block Page Count 1
10 Block Page Count 1
16 Block Page Count 153
2 Block Hole Count 1
3 Block Hole Count 19
6 Block Hole Count 12
7 Block Hole Count 1
10 Block Hole Count 1
15 Block Hole Count 1
16 Block Hole Count 1220
Partition Summary
Database Objects Checked 2
Blockmap Identity Count 2
Bitmap Count 2
===================================================================
Connection Statistics
===================================================================
Sort Records 3260
Sort Sets 2
===================================================================
DBCC Info
===================================================================
DBCC Work units Dispatched 197
DBCC Work units Completed 197
DBCC Buffer Quota 255
DBCC Per-Thread Buffer Quota 255
Max Blockmap ID found 200
Max Transaction ID found 404
Note
The report may indicate leaked space. Leaked space is a block that is allocated according to the
database free list (an internal allocation map), but DBCC finds that the block is not part of any
database object.
Syntax
Parameters
table_name
Remarks
When you create a table, SAP IQ assigns a default index to each new column. This procedure checks these
indexes and diagnoses corrupted tables and columns.
All parameters are optional. You may run the procedure in several ways:
call sp_iqcheckfpconsistency()
call sp_iqcheckfpconsistency('<table_name>')
The procedure returns a result table that provides a summary report. If SAP IQ detects errors, it returns
detailed error messages in the SAP IQ message file.
Currently there is no consistency verification for columns with BIT data types. The report returns "No Errors
Detected," but does not actually verify them.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● This statement checks consistency for all columns in the customer table:
call sp_iqcheckfpconsistency('customer')
For the connected user, sp_iqcheckoptions displays a list of the current value and the default value of
database and server startup options that have been changed from the default.
Syntax
sp_iqcheckoptions
Returns
User_name The name of the user or role for whom the option has been set. At database creation, all options
are set for the PUBLIC role. Any option that has been set for a role or user other than PUBLIC is
displayed.
Returns one row for each option that has been changed from the default value. The output is sorted by option
name, then by user name.
sp_iqcheckoptions considers all SAP IQ and SAP SQL Anywhere database options. SAP IQ modifies some
SAP SQL Anywhere option defaults, and these modified values become the new default values. Unless the new
SAP IQ default value is changed again, sp_iqcheckoptions does not list the option.
When sp_iqcheckoptions is run, the DBA user sees all options set on a permanent basis for all roles and
users, along with temporary options set for the DBA. Users who are not DBAs see their own temporary
options. All users see nondefault server startup options.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
In these examples, the temporary option APPEND_LOAD is set to ON and the role myrole has the option
MAX_WARNINGS set to 9. The user joel has a temporary value of 55 set for MAX_WARNINGS.
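The scenario might be produced by this sketch; note that a temporary option for joel must be set from joel's own connection, since temporary options are per-connection:

```sql
-- Sketch: create the nondefault settings described above
SET TEMPORARY OPTION APPEND_LOAD = 'ON';
SET OPTION myrole.MAX_WARNINGS = 9;

-- from user joel's connection:
SET TEMPORARY OPTION MAX_WARNINGS = 55;

-- list every option changed from its default
sp_iqcheckoptions
```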
Allows a client application to determine the SAP IQ user account responsible for a particular data stream (as
observed in a network analyzer) originating from a specific client IP address and port.
Syntax
Parameters
IPaddress
Remarks
The sp_iqclient_lookup procedure takes the client IP address and port number and returns a single row
containing Number (the connection ID), IPaddress, Port, and UserID:
sp_iqclient_lookup '158.76.235.71',3360
sp_iqclient_lookup
If a client application is not using TCP/IP, or for internal connections, the address appears as 127.0.0.1.
Note
This information is available for logged on users only. No historical login data is kept on the server for this
purpose.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● SELECT ANY TABLE System privileges GRANT System Privilege Statement [page 1511]
● MONITOR
● DROP CONNECTION
● SERVER OPERATOR
Side Effects
The sp_iqclient_lookup stored procedure may impact server performance, which varies from one
installation to another. Finding the login name entails scanning through all current active connections on the
server; therefore, the impact may be greater on servers with large numbers of connections. Furthermore, this
information cannot be cached as it is dynamic — sometimes highly dynamic. It is, therefore, a matter for the
local system administrator to manage the use of this stored procedure, as well as monitor the effects on the
server, just as for any other client application that uses server facilities.
sp_iqclient_lookup '162.66.131.36'
Note
Related Information
Syntax
Syntax 1
sp_iqcolumn [ <table_name> ],[ <table_owner> ],[ <table_loc> ]
Syntax 2
sp_iqcolumn [ table_name='<table_name>' ],
[ table_owner='<tableowner>' ],[ table_loc='<table_loc>' ]
Parameters
table_name
Returns
width The precision of numeric data types that have precision and scale, the storage width of numeric
data types without scale, or the width of character data types.
● 'Y' – if the column belongs to a partitioned table and has one or more partitions whose
dbspace is different from the table partition’s dbspace
● 'N' – if the column's table is not partitioned, or if each partition of the column resides in the
same dbspace as the table partition.
Remarks
Displays information about columns in a database. Specifying the <table_name> parameter returns the
columns only from tables with that name. Specifying the table_owner parameter returns only tables owned by
that user.
Syntax 1
If you specify <table_owner> without specifying <table_name>, you must substitute NULL for
<table_name>. For example, sp_iqcolumn NULL,DBA.
Syntax 2
The parameters can be specified in any order. Enclose '<table_name>' and '<table_owner>' in single quotes.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following variations in syntax both return all of the columns in the table Departments:
sp_iqcolumn Departments
● The following variation in syntax returns all of the columns in all of the tables owned by table owner DBA:
sp_iqcolumn table_owner='DBA'
Syntax
Parameters
table.name
Remarks
sp_iqcolumnmetadata reads the index metadata to return details about column indexes in both base and
global temporary tables. Index metadata reported for a global temporary table is for the individual instance of
that table.
Include the optional <table.name> parameter to generate details for that table. Omit the <table.name>
parameter to generate details for all tables in the database.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
● ALTER ANY INDEX system privilege System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY OBJECT system privilege
● REFERENCE permissions on the table
Side Effects
None
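For illustration, a sketch of typical invocations is shown below; the table name Departments follows the iqdemo sample database used in other examples in this book:

```sql
-- Report column index metadata for all tables in the database
sp_iqcolumnmetadata

-- Report column index metadata for a single table
sp_iqcolumnmetadata Departments
```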
Syntax
sp_iqcolumnuse
Returns
UID Column unique identifier, a number assigned by the system that uniquely identifies the instance of
the column (where instance is defined when an object is created).
Remarks
Tip
The INDEX_ADVISOR option generates messages suggesting additional column indexes that may improve
performance of one or more queries.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
The following example shows sample output from the sp_iqcolumnuse procedure (the long numbers are
temporary IDs):
Shows information about connections and versions, including which users are using temporary dbspace, which
users are keeping versions alive, what the connections are doing inside SAP IQ, connection status, database
version status, and so on.
Syntax
sp_iqconnection [ <connhandle> ]
Parameters
connhandle
Returns
Name The connection name specified by the ConnectionName (CON) connection parameter.
LastReqTime The time at which the last request for the specified connection started.
IQCmdType The current command executing on the SAP IQ side, if any. The command type reflects
commands defined at the implementation level of the engine. These commands consist
of transaction commands, DDL and DML commands for data in the IQ store, internal
IQ cursor commands, and special control commands such as OPEN and CLOSE,
BACKUP DATABASE, RESTORE DATABASE, and others.
LastIQCmdTime The time the last IQ command started or completed on the IQ side of the SAP IQ
engine on this connection.
LowestIQCursorState The IQ cursor state, if any. If multiple cursors exist on the connection, the state that
appears is the lowest cursor state of all the cursors; that is, the furthest from
completion. Cursor state reflects internal SAP IQ implementation detail and is subject
to change in the future. Cursor states are:
As suggested by the names, the cursor state changes at the end of the operation. A
state of PREPARED, for example, indicates that the cursor is executing.
TxnID The transaction ID of the current transaction on the connection. This is the same as
the transaction ID in the .iqmsg file by the BeginTxn, CmtTxn, and PostCmtTxn
messages, as well as the Txn ID Seq logged when the database is opened.
TempTableSpaceKB The number of kilobytes of IQ temporary store space in use by this connection for
data stored in IQ temp tables.
IQthreads The number of IQ threads currently assigned to the connection. Some threads may be
assigned but idle. This column can help you determine which connections are using
the most resources.
TempWorkSpaceKB The number of kilobytes of IQ temporary store space in use by this connection for
working space such as sorts, hashes, and temporary bitmaps. Space used by bitmaps
or other objects that are part of indexes on SAP IQ temporary tables is reflected in
TempTableSpaceKB.
IQConnID The 10-digit connection ID included as part of all messages in the .iqmsg file. This is
a monotonically increasing integer unique within a server session.
satoiq_count An internal counter used to display the number of crossings from the SAP SQL
Anywhere side to the IQ side of the SAP IQ engine. This might be occasionally useful
in determining connection activity. Result sets are returned in buffers of rows and do
not increment satoiq_count or iqtosa_count once per row.
iqtosa_count An internal counter used to display the number of crossings from the IQ side to the
SAP SQL Anywhere side of the SAP IQ engine. This might be occasionally useful in
determining connection activity.
CommLink The communication link for the connection. This is one of the network protocols
supported by SAP IQ, or is local for a same-machine connection.
MPXServerName If an INC connection, the VARCHAR(128) value contains the name of the multiplex
server where the INC connection originates. NULL if not an INC connection.
LSName The logical server name of the connection. NULL if logical server context is unknown
or not applicable.
INCConnName The name of the underlying INC connection for a user connection. The data type for
this column is VARCHAR(255). If sp_iqconnection shows an INC connection
name for a suspended user connection, that user connection has an associated INC
connection that is also suspended.
INCConnSuspended The value "Y" in this column indicates that the underlying INC connection for a user
connection is in a suspended state. The value "N" indicates that the connection is not
suspended.
Remarks
<connhandle> is equal to the Number connection property and is the ID number of the connection. The
connection_property system function returns the connection ID:
SELECT connection_property ( 'Number' )
When called with an input parameter of a valid <connhandle>, sp_iqconnection returns the one row for
that connection only.
sp_iqconnection returns a row for each active connection. The columns ConnHandle, Name, Userid,
LastReqTime, ReqType, CommLink, NodeAddr, and LastIdle are the connection properties Number, Name,
Userid, LastReqTime, ReqType, CommLink, NodeAddr, and LastIdle, respectively, and return the same values.
The column MPXServerName stores information related to internode communication (INC), as shown:
In Java applications, specify SAP IQ-specific connection properties from TDS clients in the RemotePWD field.
This example, where myconnection becomes the IQ connection name, shows how to specify IQ-specific
connection parameters:
p.put("RemotePWD",",,CON=myconnection");
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● DROP CONNECTION System privileges GRANT System Privilege Statement [page 1511]
● MONITOR
● SERVER OPERATOR
Side Effects
None
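This section includes no invocation example, so a sketch of typical calls, consistent with the syntax above, may help; the connection handle 5 is illustrative:

```sql
-- One row for each active connection
sp_iqconnection

-- Only the row for the connection whose handle (Number property) is 5
sp_iqconnection 5
```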
Lists referential integrity constraints defined using CREATE TABLE or ALTER TABLE for the specified table or
column.
Syntax
Parameters
table-name
Remarks
If table-name and column-name are omitted, sp_iqconstraint reports all referential integrity constraints for all
tables, including temporary tables, in the currently connected database. The information includes unique and
primary key constraints, referential constraints, and the associated role names defined by CREATE TABLE and
ALTER TABLE statements.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
The following example displays all primary key/foreign key pairs where either the candidate key or foreign
key contains column ck1 for owner bob, in all tables:
call sp_iqconstraint('','ck1','bob')
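Following the same three-argument pattern (table, column, owner, with '' substituted for an omitted argument), two further variations are sketched below; the table name SalesOrders is illustrative, not taken from this section:

```sql
-- All referential integrity constraints for all tables
call sp_iqconstraint()

-- All constraints involving the table SalesOrders owned by bob
call sp_iqconstraint('SalesOrders','','bob')
```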
Related Information
Tracks and displays, by connection, information about statements that are currently executing.
Syntax
sp_iqcontext [ <connhandle> ]
Parameter
connhandle
Returns
numIQCursors If column 1 is CONNECTION the number of cursors open on this connection. If column
1 is:
IQthreads The number of IQ threads currently assigned to the connection. Some threads may
be assigned but idle. For DQP threads, indicates the number of threads assigned to
the DQP worker.
TxnID The transaction ID of the current transaction. In the case of a worker thread, indicates
the leader’s transaction ID.
ConnOrCurCreateTime The time this connection, cursor, or DQP worker was created.
IQConnID The connection ID displayed as part of all messages in the .iqmsg file. This is a
monotonically increasing integer unique within a server session.
IQGovernPriority A value that indicates the order in which the queries of a user are queued for
execution. 1 indicates high priority, 2 (the default) medium priority, and 3 low priority.
A value of -1 indicates that IQGovernPriority does not apply to the operation. Set the
IQGovernPriority value with the database option IQGOVERN_PRIORITY.
Remarks
The input parameter <connhandle> is equal to the Number connection property and is the ID number of the
connection. For example, SELECT CONNECTION_PROPERTY('NUMBER').
When called with an input parameter of a valid <connhandle>, sp_iqcontext returns the information only
for that connection.
sp_iqcontext lets the DBA determine what statements are running on the system at any given moment, and
identify the user and connection that issued the statement. With this information, you can use this utility to:
● Match the statement text with the equivalent line in sp_iqconnection to get resource usage and
transactional information about each connection
● Match the statement text to the equivalent line in the SQL log created when the -zr server option is set to
ALL or SQL
● Use connection information to match the statement text in sp_iqcontext to the equivalent line in
the .iqmsg file, which includes the query plan, when SAP IQ can collect it
● Match statement text to an SAP IQ stack trace (stktrc-yyyymmdd-hhnnss_#.iq), if one is produced
● Collate this information with an operating system stack trace that might be produced, such as pstack on
Sun Solaris
The maximum size of statement text collected is the page size of the catalog store.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● MANAGE ANY USER System privileges GRANT System Privilege Statement [page 1511]
● MONITOR
Side Effects
None
Example
The following example shows an excerpt from output when sp_iqcontext is issued with no parameter,
producing results for all current connections. Column names are truncated due to space considerations:
ConnOrCu.. ConnHandle Name UserId numIQ.. IQthr.. TxnID Conn.. IQcon.. IQGov..
Cmd.. Attributes
CONNECTION 2 sun7bar dbo 0 0 0 2010-08-04 15:15:40.0 15 No command NO COMMAND
CONNECTION 7 sun7bar dbo 0 0 0 2010-08-04 15:16:00.0 32 No command NO COMMAND
CONNECTION 10 sun7bar dbo 0 0 0 2010-08-04 15:16:21.0 46 No command NO COMMAND
...
CONNECTION 229 sun7bar DBA 0 0 1250445 2010-08-05 18:28:16.0 50887 2 select
server_name,
inc_state, coordinator_failover from sp_iqmpxinfo() order by server_name
...
DQP 0 dbsrv2873_node_c1DBA 0 1 10000 2010-08-05 18:28:16.0 no command no command
Query ID:
12345; Condition: c1 > 100;
DQP 0 dbsrv2873_node_c1DBA 0 1 10001 2010-08-05 18:28:16.0 no command no command
Query ID:
12346; Node #12 Join (Hash);
The first line of output shows connection 2 (IQ connection ID 15). This connection is on server sun7bar, user
dbo. This connection was not executing a command when sp_iqcontext was issued.
Connection 229 shows the user command being executed (the command contains less than the maximum
4096 characters the column can display). The 2 before the user command fragment indicates that this is a
medium priority query.
The connection handle (2 for the first connection in this example) identifies results in the -zr log. The IQ
connection ID (15 for the first connection in this example) identifies results in the .iqmsg file. On UNIX
systems, you can use grep to locate all instances of the connection handle or connection ID, making it easy to
correlate information from all sources.
The second-last line (TxnID 10000) shows a DQP worker thread. The worker connection is running two
invariant conditions.
The last line (TxnID 10001) shows a worker connection that is running a hash join.
Syntax
Syntax 1
Syntax 2
Parameters
existing-policy-name
A CHAR(128) parameter that specifies the name of the existing login policy to copy.
new-policy-name
A CHAR(128) parameter that specifies the name of the new login policy to create.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY LOGIN POLICY System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
The following example creates a new login policy named <lockeduser> by copying the login policy option
values from the existing login policy named "root":
sp_iqcopyloginpolicy 'root','lockeduser'
Related Information
Syntax
Parameters
cursor-name
The name of the cursor. If only this parameter is specified, sp_iqcursorinfo returns information about
all cursors that have the specified name in all connections.
conn-handle
An integer representing the connection ID. If only this parameter is specified, sp_iqcursorinfo returns
information about all cursors in the specified connection.
IQConnID The 10-digit connection ID displayed as part of all messages in the .iqmsg file. This number is a
monotonically increasing integer that is unique within a server session.
UserID User ID (or user name) for the user who created and ran the cursor.
NumFetch The number of times the cursor fetches a row. The same row can be fetched more than once.
NumUpdate The number of times the cursor updates a row, if the cursor is updatable. The same row can be
updated more than once.
NumDelete The number of times the cursor deletes a row, if the cursor is updatable.
NumInsert The number of times the cursor inserts a row, if the cursor is updatable.
RWTabOwner The owner of the table that is opened in RW mode by the cursor.
RWTabName The name of the table that is opened in RW mode by the cursor.
CmdLine The first 4096 characters of the command the user executed.
Remarks
The sp_iqcursorinfo procedure can be invoked without any parameters. If no parameters are specified,
sp_iqcursorinfo returns information about all cursors currently open on the server. If both parameters are
specified, sp_iqcursorinfo reports information about all of the cursors that have the specified name and are
in the specified connection.
If you do not specify the first parameter, but specify the second parameter, you must substitute NULL for the
omitted parameter. For example, sp_iqcursorinfo NULL, 1.
The sp_iqcursorinfo stored procedure displays detailed information about cursors currently open on the
server. The sp_iqcursorinfo procedure enables database administrators to monitor cursor status using just
one stored procedure and view statistics such as how many rows have been updated, deleted, and inserted.
If you specify one or more parameters, the result is filtered by the specified parameters. For example, if
<cursor-name> is specified, only information about the specified cursor is displayed. If <conn-handle> is
specified, sp_iqcursorinfo returns information only about cursors in the specified connection. If no
parameters are specified, sp_iqcursorinfo displays information about all cursors currently open on the
server.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following example displays information about all cursors currently open on the server:
sp_iqcursorinfo
Name ConnHandle IsUpd IsHold IQConnID UserID
---------------------------------------------------------------------
crsr1 1 Y N 118 DBA
crsr2 3 N N 118 DBA
CreateTime CurrentRow NumFetch NumUpdate
----------------------------------------------------------------
2009-06-26 15:24:36.000 19 100000000 200000000
2009-06-26 15:38:38.000 20000 200000000
NumDelete NumInsert RWTabOwner RWTabName CmdLine
----------------------------------------------------------------------
20000000 3000000000 DBA test1 call proc1()
call proc2()
● The following example displays information about all the cursors named cursor1 in all connections:
sp_iqcursorinfo 'cursor1'
● The following example displays information about all the cursors in connection 3:
sp_iqcursorinfo NULL, 3
● The following example displays information about all the cursors named cursor2 in connection 4:
sp_iqcursorinfo 'cursor2', 4
Displays information about system data types and user-defined data types.
Syntax
Parameters
type-name
● SYSTEM – displays information about system defined data types (data types owned by user SYS or
dbo) only
● ALL – displays information about user and system data types
● Any other value – displays information about user data types
Returns
nulls Y indicates the user-defined data type allows nulls; N indicates the data type does not allow nulls;
U indicates that the null value for the data type is unspecified.
width Displays the length of string columns, the precision of numeric columns, and the number of bytes
of storage for all other data types.
scale Displays the number of digits after the decimal point for numeric data type columns and zero for
all other data types.
Remarks
The sp_iqdatatype procedure can be invoked without any parameters. If no parameters are specified, only
information about user-defined data types (data types not owned by dbo or SYS) is displayed by default.
If you do not specify either of the first two parameters, but specify the next parameter in the sequence, you
must substitute NULL for the omitted parameters. For example, sp_iqdatatype NULL, NULL, SYSTEM and
sp_iqdatatype NULL, user1.
The sp_iqdatatype stored procedure displays information about system and user-defined data types in a
database. User-defined data types are also referred to as domains. Predefined domain names are not included
in the sp_iqdatatype output.
If you specify one or more parameters, the sp_iqdatatype result is filtered by the specified parameters. For
example, if <type-name> is specified, only information about the specified data type is displayed. If <type-
owner> is specified, sp_iqdatatype only returns information about data types owned by the specified owner.
If no parameters are specified, sp_iqdatatype displays information about all the user-defined data types in
the database.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following example displays information about the user-defined data type country_t:
sp_iqdatatype country_t
type_name creator nulls width scale "default" "check"
● The following example displays information about all user-defined data types in the database:
sp_iqdatatype
● In the following example, no rows are returned, as the data type non_existing_type does not exist:
sp_iqdatatype non_existing_type
● The following example displays information about all user-defined data types owned by DBA:
sp_iqdatatype NULL,DBA
● The following example displays information about the data type country_t owned by DBA:
sp_iqdatatype country_t,DBA
● In the following example, rowid is a system-defined data type. If there is no user-defined data type also
named rowid, no rows are returned. (By default, only user-defined data types are returned.):
sp_iqdatatype rowid
● In the following example, no rows are returned, as the data type rowid is not a user-defined data type (by
default, only user-defined data types are returned):
● The following example displays information about all system defined data types (owned by dbo or SYS):
sp_iqdatatype NULL,NULL,SYSTEM
● The following example displays information about the system data type rowid:
sp_iqdatatype rowid,NULL,SYSTEM
● The following example displays information about all user-defined and system data types:
sp_iqdatatype NULL,NULL,ALL
Related Information
Syntax
sp_iqdbsize ( [ main ] )
Returns
PhysicalBlocks The total database size in blocks. An IQ database consists of one or more dbspaces. Each dbspace
has a fixed size, which is originally specified in units of megabytes. This megabyte quantity is
converted to blocks using the IQ page size and the corresponding block size for that IQ page size.
The PhysicalBlocks column reflects the cumulative total of each SAP IQ dbspace size, represented
in blocks.
KBytes The total size of the database, in kilobytes. This value is the total size of the database in blocks
(PhysicalBlocks in the previous sp_iqdbsize column) multiplied by the block size. The
block size depends on the IQ page size.
Pages The total number of IQ pages necessary to represent in memory all of the data stored in tables and
the metadata for these objects. This value is always greater than or equal to the value of
CompressedPages (the next sp_iqdbsize column).
CompressedPages The total number of IQ pages necessary to store on disk the data in tables and metadata for these
objects. This value is always less than or equal to the value of Pages (the previous
sp_iqdbsize column), because SAP IQ compresses pages when the IQ page is written from
memory to disk. The sp_iqdbsize CompressedPages column represents the number of
compressed pages.
NBlocks The total size in blocks used to store the data in tables. This value is always less than or equal to
the sp_iqdbsize PhysicalBlocks value.
CatalogBlocks The total size in blocks used to store the metadata for tables.
RLVLogBlocks The number of blocks used for log information for the RLV store.
Remarks
Returns the total size of the database. Also returns the number of pages required to hold the database in
memory and the number of IQ pages when the database is compressed (on disk).
If run on a multiplex database, the default parameter is main, which returns the size of the shared IQ store.
If run when there are no rows in any RLV-enabled tables, the PhysicalBlocks, RLVLogBlocks, and
RLVLogKBytes columns contain non-zero entries, and the remaining columns contain zeros. This indicates
that there are no row-level versioned tables.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
ALTER DATABASE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
The following example displays size information for the database iqdemo:
sp_iqdbsize
57 36 512
Syntax
sp_iqdbspace [ <dbspace-name> ]
Parameters
dbspace-name
Returns
DBSpaceName The name of the dbspace as specified in the CREATE DBSPACE
statement. Dbspace names are always case-insensitive, regardless of the
CREATE DATABASE...CASE IGNORE or CASE RESPECT specification.
Usage The percent of dbspace currently in use by all files in the dbspace.
TotalSize The total size of all files in the dbspace in the units:
● B (bytes)
● K (kilobytes)
● M (megabytes)
● G (gigabytes)
● T (terabytes)
● P (petabytes)
Reserve The total reserved space that can be added to all files in the dbspace.
BlkTypes The space used by both user data and internal system structures.
is_dbspace_preallocated "F" indicates that the NOPREALLOCATE keyword was used in the
CREATE DBSPACE statement when creating the dbspace on a cooked
(not raw) filesystem; otherwise "T" (the default).
Remarks
Use the information from sp_iqdbspace to determine whether data must be moved, and for data that has
been moved, whether the old versions have been deallocated.
● A – active version
● B – backup structures
● C – checkpoint log
● D – database identity
● F – free list
● G – global free list manager
● H – header blocks of the free list
● I – index advice storage
● M – multiplex CM. The multiplex commit identity block (actually 128 blocks) exists in all SAP IQ databases,
even though it is not used by simplex databases.
● N – column use
● O – old version
● R – RLV free list manager. The manager first reserves the blocks from the main store freelist and marks
them as free. As RLV logging uses these blocks, they are marked as in use.
● RC – number of blocks actually in use by RLV store logs
● RU – number of blocks used by the commit log
● T – table use
● U – index use
● X – drop at checkpoint
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY DBSPACE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
sp_iqdbspace
iq_main MAIN T T 26
IQ_SYSTEM_LOG PITR T T 0
IQ_SYSTEM_MAIN MAIN T T 22
IQ_SYSTEM_MAIN TEMPORARY T T 23
rvspace RLV T T 17
100 M 200 M 1 1 T
0B 0B 1 1 F
100 M 200 M 1 1 T
25 M 200 M 1 1 T
1000 M 0B 1 1 F
StripSize BlkTypes OkToDrop lsname is_dbspace_preallocated
1K 1H,3254A N (NULL) T
0B 1H N (NULL) T
1K 1H,2528F,32D,128M N (NULL) T
1K 1H,64F,16A N (NULL) T
1K 1H,20480R,2096RU, N lsname T
1040RC
Note
For the rvspace RLV dbspace, in the BlkTypes column, of the 20480 blocks reserved for RLV store logs
(20480R), 2096 blocks (RU) are used by the commit log and 1040 blocks (RC) are actually in use by RLV
store logs.
Related Information
Displays the size of each object and subobject used in the specified table. Not supported for RLV dbspaces.
Syntax
sp_iqdbspaceinfo [ <dbspace-name> ]
[, <owner_name> ] [, <object_name> ] [, <object-type> ]
Parameters
dbspace-name
(Optional) If specified, sp_iqdbspaceinfo displays one line for each table that has any component in the
specified dbspace. Otherwise, the procedure shows information for all dbspaces in the database.
owner_name
(Optional) Owner of the object. If specified, sp_iqdbspaceinfo displays output only for tables with the
specified owner. If not specified, sp_iqdbspaceinfo displays information on tables for all users in the
database.
object_name
(Optional) Name of the table. If not specified, sp_iqdbspaceinfo displays information on all tables in the
database.
object_type
Returns
indexes The size of index storage space on the given dbspace. Does not include
system-generated indexes (for example, HG indexes in unique constraints
or FP indexes).
metadata The size of storage space for metadata objects on the given dbspace.
primary_key The size of storage space for primary key related objects on the given
dbspace.
unique_constraint The size of storage space for unique constraint-related objects on the
given dbspace.
foreign_key The size of storage space for foreign-key-related objects on the given
dbspace.
is_dbspace_preallocated "F" indicates that the NOPREALLOCATE keyword was used in the
CREATE DBSPACE statement when creating the dbspace on a cooked
(not raw) filesystem; otherwise "T" (the default).
Remarks
All parameters are optional, and any parameter may be supplied independent of another parameter’s value.
The sp_iqdbspaceinfo stored procedure supports wildcard characters for interpreting <dbspace_name>,
<object_name>, and <owner_name>. It shows information for all dbspaces that match the given pattern in
the same way the LIKE clause matches patterns inside queries.
sp_iqdbspaceinfo shows the DBA the amount of space used by objects that reside on each dbspace. The
DBA can use this information to determine which objects must be relocated before a dbspace can be dropped.
The subobject columns display sizes reported in integer quantities followed by the suffix B, K, M, G, T, or P,
representing bytes, kilobytes, megabytes, gigabytes, terabytes, and petabytes, respectively.
If you run sp_iqdbspaceinfo against a server you have started with the -r switch (read-only), you see the
following error:
Msg 13768, Level 14, State 0: SAP SQL Anywhere
Error -757: Modifications not permitted for read-only database.
This behavior is expected. The error does not occur on other stored procedures such as sp_iqdbspace,
sp_iqfile, sp_iqdbspaceobjectinfo, or sp_iqobjectinfo.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● BACKUP DATABASE System privileges GRANT System Privilege Statement [page 1511]
● SERVER OPERATOR
● MANAGE ANY DBSPACE
Side Effects
None
Examples
These examples show objects in the iqdemo database to better illustrate output. iqdemo includes a sample
user dbspace named iq_main that may not be present in your own databases.
● The following example displays the size of all objects and subobjects in all tables in all dbspaces in the
database:
sp_iqdbspaceinfo
● The following example displays the size of all objects and subobjects owned by a specified user in a
specified dbspace in the database:
sp_iqdbspaceinfo iq_main,GROUPO
● The following example displays the size of a specified object and its subobjects owned by a specified user
in a specified dbspace in the database:
sp_iqdbspaceinfo iq_main,GROUPO,Departments
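Because sp_iqdbspaceinfo interprets its arguments as LIKE patterns (see the remarks above), wildcard calls are also possible. A sketch follows, with patterns chosen to match the iqdemo names used in the preceding examples:

```sql
-- All dbspaces whose names start with iq_
sp_iqdbspaceinfo 'iq_%'

-- All tables owned by GROUPO whose names begin with D, in any dbspace
sp_iqdbspaceinfo '%','GROUPO','D%'
```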
Related Information
Lists objects and subobjects of type table (including columns, indexes, metadata, primary keys, unique
constraints, foreign keys, and partitions) for a given dbspace. Not supported for RLV dbspaces.
Syntax
sp_iqdbspaceobjectinfo [ <dbspace-name> ]
[ , <owner_name> ] [ , <object_name> ] [ , <object-type> ]
Parameters
dbspace-name
(Optional) If specified, sp_iqdbspaceobjectinfo displays output only for the specified dbspace.
Otherwise, it shows information for all dbspaces in the database.
owner_name
(Optional) Owner of the object. If specified, sp_iqdbspaceobjectinfo displays output only for tables
with the specified owner. If not specified, sp_iqdbspaceobjectinfo displays information for tables for
all users in the database.
object_name
(Optional) Name of the table. If not specified, sp_iqdbspaceobjectinfo displays information for all
tables in the database.
object-type
object_type Table.
columns Number of table columns located on the given dbspace. If a column or one of the
column-partitions is located on a dbspace, it is counted as present on that dbspace.
The result is shown in the form n/N (n out of total N columns of the table are on the
given dbspace).
indexes Number of user-defined indexes on the table located on the given dbspace. Shown in
the form n/N (n out of total N indexes on the table are on the given dbspace). This
does not include system-generated indexes, such as FP indexes and HG indexes in
the case of unique constraints.
metadata Boolean field (Y/N) that denotes whether the metadata information of the subobject
is also located on this dbspace.
primary_key Boolean field (1/0) that denotes whether the primary key of the table, if any, is
located on this dbspace.
unique_constraint Number of unique constraints on the table that are located on the given dbspace.
Appears in the form n/N (n out of total N unique constraints on the table are in the
given dbspace).
foreign_key Number of foreign keys on the table that are located on the given dbspace. Appears
in the form n/N (n out of total N foreign keys on the table are in the given dbspace).
partitions Number of partitions of the table that are located on the given dbspace. Appears in
the form n/N (n out of total N partitions of the table are in the given dbspace).
Remarks
All parameters are optional, and any parameter may be supplied independently of the other parameters.
For tables, sp_iqdbspaceobjectinfo displays summary information for all associated subobjects, sorted by dbspace_name, owner, and object_name.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
These examples show objects in the iqdemo database to better illustrate output. iqdemo includes a sample
user dbspace named iq_main that may not be present in your own databases.
● The following example displays information about a specific dbspace in the database:
sp_iqdbspaceobjectinfo iq_main
● The following example displays information about the objects owned by a specific user in a specific
dbspace in the database:
sp_iqdbspaceobjectinfo iq_main,GROUPO
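● The following example, as a sketch (assuming table as the object-type value), displays information about a specific table owned by a specific user in a specific dbspace:
sp_iqdbspaceobjectinfo iq_main,GROUPO,Departments,table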
Related Information
Syntax
sp_iqdbstatistics
Displays the database statistics collected by the most recent execution of sp_iqcheckdb.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
ALTER DATABASE system privilege. See GRANT System Privilege Statement [page 1511].
Side Effects
None
Example
The following example shows the output from sp_iqdbstatistics. For this example, the most recent
execution of sp_iqcheckdb was the command sp_iqcheckdb 'allocation database':
Related Information
Syntax
Syntax 1
Syntax 2
sp_iqdroplogin '<userid>'
Syntax 3
sp_iqdroplogin <userid>
Syntax 4
sp_iqdroplogin ('<userid>')
Parameters
userid
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
sp_iqdroplogin 'rose'
sp_iqdroplogin rose
Related Information
Empties a dbfile and moves the objects in the dbfile to another available read-write dbfile in the same dbspace.
Not available for files in an RLV dbspace.
Syntax
sp_iqemptyfile ( <logical-file-name> )
Parameters
logical-file-name
An identifier.
Remarks
sp_iqemptyfile empties a dbfile. The dbspace must be read-write, and the dbfile to be emptied must be read-only, before you can execute the sp_iqemptyfile procedure. The procedure moves the objects in the file to another available read-write dbfile in the same dbspace. If there is no other read-write dbfile available, then SAP IQ displays an error message.
Note
In a shared multiplex environment, you can run sp_iqemptyfile only on the coordinator. There must be
one read-write dbspace available for the procedure to succeed.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
System privileges (see GRANT System Privilege Statement [page 1511]):
● BACKUP DATABASE
● SERVER OPERATOR
● ALTER DATABASE
● INSERT ANY TABLE
● UPDATE ANY TABLE
● DELETE ANY TABLE
● ALTER ANY TABLE
● LOAD ANY TABLE
● TRUNCATE ANY TABLE
● ALTER ANY OBJECT
Side Effects
None
Example
sp_iqemptyfile ('das1')
Estimates the number and size of dbspaces needed for a given total index size.
Syntax
Parameters
db_size_in_bytes
The total size of the database, in bytes.
iq_page_size
A SMALLINT parameter that specifies the page size defined for the IQ segment of the database (must be a power of 2 between 65536 and 524288; the default is 131072).
min_#_of_bytes
An INT parameter that specifies the minimum number of bytes per dbspace segment. The default is
20,000,000 (20 MB).
max_#_of_bytes
An INT parameter that specifies the maximum number of bytes per dbspace segment. The default is
2,146,304,000 (2.146 GB).
Remarks
sp_iqestdbspaces reports several recommendations, depending on how much of the data is unique:
● min – if there is little variation in data, you can choose to create only the dbspace segments of the sizes
recommended as min. These recommendations reflect the best possible compression on data with the
least possible variation.
● avg – if your data has an average amount of variation, create the dbspace segments recommended as
min, plus additional segments of the sizes recommended as avg.
● max – if your data has a high degree of variation (many unique values), create the dbspace segments
recommended as min, avg, and max.
Displays information about the number and size of dbspace segments based on the size of the database, the IQ
page size, and the range of bytes per dbspace segment. This procedure assumes that the database was
created with the default block size for the specified IQ page size; otherwise, the returned estimated values are
incorrect.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
System privileges (see GRANT System Privilege Statement [page 1511]):
● MANAGE ANY DBSPACE
● ALTER DATABASE
Side Effects
None
Example
The following example estimates the size and number of dbspace segments needed for a 12 GB database:
1 min 2146304000
2 min 2146304000
3 min 507392000
4 avg 2146304000
5 max 2053697536
6 spare 1200001024
In this section:
Related Information
You need to run two stored procedures to provide the <db_size_in_bytes> parameter needed by
sp_iqestdbspaces.
Context
Results of sp_iqestdbspaces are only estimates, based on the average size of an index. The actual size
depends on the data stored in the tables, particularly on how much variation there is in the data.
SAP strongly recommends that you create the spare dbspace segments, because you can delete them later if
they are unused.
Procedure
1. Run sp_iqestjoin for all the table pairs you expect to join frequently.
2. Select one of the suggested index sizes for each pair of tables.
3. Total the index sizes you selected for all tables.
4. Run sp_iqestspace for all tables.
5. Total all of the RAW DATA index sizes returned by sp_iqestspace.
6. Add the total from step 3 to the total from step 5 to determine total index size.
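The grand total from step 6 supplies the <db_size_in_bytes> argument to sp_iqestdbspaces. As a sketch only (the byte values and the positional call shown here are illustrative, not authoritative), an estimate for a roughly 12 GB total index size with the default page size and default segment bounds might look like:
sp_iqestdbspaces 12884901888, 131072, 20000000, 2146304000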
Estimates the amount of space needed to create an index based on the number of rows in the table.
Syntax
Parameters
table_name
The name of the table.
iq_page_size
A SMALLINT parameter that specifies the page size defined for the IQ segment of the database (must be a power of 2 between 65536 and 524288; the default is 131072).
Remarks
Displays the amount of space that a database requires based on the number of rows in the underlying
database tables and on the database IQ page size. This procedure assumes that the database was created with
the default block size for the specified IQ page size (or else the estimate is incorrect).
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
System privileges (see GRANT System Privilege Statement [page 1511]):
● CREATE ANY INDEX
● ALTER ANY INDEX
● CREATE ANY OBJECT
● ALTER ANY OBJECT
Side Effects
None
Related Information
Syntax
Parameter
event-name
● SYSTEM – displays information about system events (events owned by user SYS or dbo) only
● ALL – displays information about user and system events
● Any other value – displays information about user events
event_type For system events, the event type as listed in the SYSEVENTTYPE system table.
condition The WHERE condition used to control firing of the event handler.
● C – consolidated
● R – remote
● A – all
Remarks
The sp_iqevent procedure can be invoked without any parameters. If no parameters are specified, only
information about user events (events not owned by dbo or SYS) is displayed by default.
If you do not specify either of the first two parameters, but specify the next parameter in the sequence, you
must substitute NULL for the omitted parameters. For example: sp_iqevent NULL, NULL, SYSTEM and
sp_iqevent NULL, user1.
The sp_iqevent stored procedure displays information about events in a database. If you specify one or more
parameters, the result is filtered by the specified parameters. For example, if <event-name> is specified, only
information about the specified event is displayed. If <event-owner> is specified, sp_iqevent only returns
information about events owned by the specified owner. If no parameters are specified, sp_iqevent displays
information about all the user events in the database.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
● The following example displays information about all user events in the database:
sp_iqevent
● The following example displays information about the user-defined event e1:
sp_iqevent e1
event_name event_owner event_type enabled action
e1 DBA (NULL) Y (NULL)
condition location remarks
(NULL) A (NULL)
● The following example returns no rows, as the event non_existing_event does not exist:
sp_iqevent non_existing_event
● The following example displays information about all events owned by DBA:
● The following example displays information about the event e1 owned by DBA:
● In the following example, ev_iqbegintxn is a system-defined event. If there is no user-defined event also
named ev_iqbegintxn, no rows are returned. (By default, only user-defined events are returned):
sp_iqevent ev_iqbegintxn
● In the following example, no rows are returned, as the event ev_iqbegintxn is not a user event (by default, only user events are returned):
● The following example displays information about all system events (owned by dbo or SYS):
● The following example displays information about the system event ev_iqbegintxn:
● The following example displays information about the system event ev_iqbegintxn owned by dbo:
Syntax
sp_iqfile [ <dbspace-name> ]
Returns
DBSpaceName The name of the dbspace as specified in the CREATE DBSPACE statement. Dbspace names are always case-insensitive, regardless of the CREATE DATABASE...CASE IGNORE or CASE RESPECT specification.
● MAIN
● TEMPORARY
● RLV
● CACHE
Online ● T – online. This is the online value of both the file's associated
dbspace and the file in SYS.ISYSIQDBFILE.
● F – offline.
Usage The percent of dbspace currently in use by this file in the dbspace. When
run against a secondary node in a multiplex configuration, this column
displays NA.
DBFileSize The current size of the file or raw partition. For a raw partition, this size
value can be less than the physical size.
Reserve Reserved space that can be added to this file in the dbspace.
BlkTypes The space used by both user data and internal system structures.
Remarks
sp_iqfile displays the usage, properties, and types of data in each dbfile in a dbspace. You can use this
information to determine whether data must be moved, and for data that has been moved, whether the old
versions have been deallocated.
● A – Active Version
● B – Backup Structures
● C – Checkpoint Log
● D – Database Identity
● F – Free List
● G – Global Free List Manager
● H – Header Blocks of the Free List
● I – Index Advice Storage
● M – Multiplex CM. The multiplex commit identity block (actually 128 blocks) exists in all SAP IQ databases, even though it is not used by simplex databases.
● N – Column Use
● O – Old Version
● R – RLV Free List manager
● T – Table Use
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY DBSPACE system privilege. See GRANT System Privilege Statement [page 1511].
Side Effects
None
Example
The following example displays information about the files in the dbspaces:
sp_iqfile;
DBSpaceName,DBFileName,Path,SegmentType,RWMode,Online,
Usage,DBFileSize,Reserve,StripeSize,BlkTypes,FirstBlk,
LastBlk,OkToDrop,servername,mirrorLogicalFileName,IsDASSharedFile
'IQ_SYSTEM_MAIN','IQ_SYSTEM_MAIN',
'../mpx_configdb.iq','MAIN','RW','T','24','700M','0B','1K',
'1H,17888F,32D,2498A,151O,198X,128M,32C',1,89600,'N',,'(NULL)','F'
'dbsp1','dbsp1','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb1','MAIN','RW','T','1','50M','0B','1K','1H',
1045440,1051839,'N',,'(NULL)','F'
'dbsp2','dbsp2','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb2','MAIN','RW','T','1','50M','0B','1K','1H',
2090880,2097279,'N',,'(NULL)','F'
'dbsp3','dbsp3','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb3','MAIN','RW','T','1','50M','0B','1K','1H',
3136320,3142719,'N',,'(NULL)','F'
'dbsp4','dbsp4','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb4','MAIN','RW','T','1','50M','0B','1K','1H',
4181760,4188159,'N',,'(NULL)','F'
'dbsp5','dbsp5','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb5','MAIN','RW','T','1','50M','0B','1K','1H',
5227200,5233599,'N',,'(NULL)','F'
'dbsp6','dbsp6','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb6','MAIN','RW','T','1','50M','0B','1K','1H',
6272640,6279039,'N',,'(NULL)','F'
'dbsp7','dbsp71','/lint12dev7/users/user4/machine.lint12dev_local/mpxstore/
mpx_configdb.iqdb71','MAIN','RW','T','1','200M','0B','1K','1H',
Related Information
Displays information about system and user-defined objects and data types.
Syntax
Parameters
obj-name
Columns, constraints, and indexes are associated with tables and cannot be queried directly. When a table
is queried, the information about columns, indexes, and constraints associated with that table is displayed.
If the specified object category is not one of the allowed values, displays an Invalid object category
message.
● SYSTEM – displays information about system objects (objects owned by user SYS or dbo) only
● ALL – displays information about all objects. By default, only information about non-system objects is
displayed. If the specified object type is not SYSTEM or ALL, displays an Invalid object type
message.
The sp_iqhelp procedure can be invoked without any parameters. If no parameters are specified, sp_iqhelp
displays information about all independent objects in the database, that is, base tables, views, stored
procedures, functions, events, and data types.
If you do not specify any of the first three parameters, but specify the next parameter in the sequence, you
must substitute NULL for the omitted parameters. For example, sp_iqhelp NULL, NULL, NULL, SYSTEM
and sp_iqhelp NULL, user1, "table".
Enclose the <obj-category> parameter in single or double quotes, except when it is NULL.
If sp_iqhelp does not find an object in the database that satisfies the specified description, displays a No
object found for the given description message.
The sp_iqhelp stored procedure displays information about system and user-defined objects and data types
in an IQ database. Objects supported by sp_iqhelp are tables, views, columns, indexes, constraints, stored
procedures, functions, events, and data types.
If you specify one or more parameters, the result is filtered by the specified parameters. For example, if <obj-
name> is specified, only information about the specified object is displayed. If <obj-owner> is specified,
sp_iqhelp returns information only about objects owned by the specified owner. If no parameters are
specified, sp_iqhelp displays summary information about all user-defined tables, views, procedures, events,
and data types in the database.
The sp_iqhelp procedure returns either summary or detailed information, depending on whether the
specified parameters match multiple objects or a single object. The output columns of sp_iqhelp are similar
to the columns displayed by the stored procedures sp_iqtable, sp_iqindex, sp_iqview, and
sp_iqconstraint.
When multiple objects match the specified sp_iqhelp parameters, sp_iqhelp displays summary
information about those objects. Object types and the columns displayed are:
Table Displays information about the specified base table, its columns, indexes, and constraints.
● Table columns: table_name, table_owner, server_type, location, table_constraints, remarks
● Column columns: column_name, domain_name, width, scale, nulls, default, check, pkey, user_type, cardinality, est_cardinality, remarks
● Index columns: index_name, column_name, index_type, unique_index, location, remarks
● Constraint columns: constraint_name (role), column_name, index_name, constraint_type, foreigntable_name, foreigntable_owner, foreigncolumn_name, foreignindex_name, location
View Displays information about the specified view and its columns.
● View columns: view_name, view_creator, view_def, server_type, location, remarks
● Column columns: column_name, domain_name, width, scale, nulls, default, check, pkey, user_type, cardinality, est_cardinality, remarks
Stored procedure Displays information about the specified procedure and its parameters.
● Procedure columns: proc_name, proc_creator, proc_defn, replicate, srvid, remarks
● Parameter columns: parameter_name, type, width, scale, default, mode
Function Displays information about the specified function and its parameters.
● Function columns: proc_name, proc_creator, proc_defn, replicate, srvid, remarks
● Parameter columns: parameter_name, type, width, scale, default, mode
Event Displays information about the specified event.
● Event columns: event_name, event_creator, enabled, location, event_type, action, external_action, condition, remarks
Data type Displays information about the specified data type.
● Data type columns: type_name, creator, nulls, width, scale, default, check
Note
Procedure definitions (proc_defn) of system procedures are encrypted and hidden from view.
For descriptions of the individual output columns, refer to the related stored procedure. For example, for a
description of the table column, see the sp_iqtable procedure.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following example displays detailed information about the table sale:
sp_iqhelp sale
● The following example displays detailed information about the procedure sp_customer_list:
sp_iqhelp sp_customer_list
proc_name proc_owner proc_defn
========== =========== =========
sp_customer_list DBA create procedure DBA.sp_customer_list()
result(id integer, company_name char(35))
begin
select id, company_name from Customers
● The following example displays summary information about all user-defined tables, views, procedures,
events, and data types in the database:
sp_iqhelp
● The following example displays information about table t1 owned by user u1 and the columns, indexes,
and constraints associated with t1:
● The following example displays information about view v1 owned by user u1 and the columns associated with v1:
● The following example displays information about the procedure sp2 and its parameters:
sp_iqhelp sp2
● The following example displays information about the event e1:
sp_iqhelp e1
● The following example displays information about the data type dt1:
sp_iqhelp dt1
● The following example displays summary information about all system objects (owned by dbo or SYS):
● The following examples all return the error message "Object 'non_existing_obj' not found", as the object non_existing_obj does not exist:
sp_iqhelp non_existing_obj
In this section:
Related Information
The SAP IQ sp_iqhelp stored procedure is similar to the SAP Adaptive Server Enterprise sp_help procedure,
which displays information about any database object listed in the SYSOBJECTS system table and about
system and user-defined data types.
SAP IQ has some architectural differences from SAP ASE in terms of types of objects supported and the
namespace of objects. In SAP ASE, all objects (tables, views, stored procedures, logs, rules, defaults, triggers,
check constraints, referential constraints, and temporary objects) are stored in the SYSOBJECTS system table
and are in the same namespace. The objects supported by SAP IQ (tables, views, stored procedures, events,
primary keys, and unique, check, and referential constraints) are stored in different system tables and are in
different namespaces. For example, in SAP IQ a table can have the same name as an event or a stored
procedure.
Because of the architectural differences between SAP IQ and SAP ASE, the types of objects supported by and
the syntax of SAP IQ sp_iqhelp are different from the supported objects and syntax of SAP ASE sp_help;
however, the type of information about database objects that is displayed by both stored procedures is similar.
Syntax
Syntax 1
sp_iqindex [ <table-name> ],[ <column-name> ],[ <table-owner> ]
Syntax 2
sp_iqindex [ table_name='<tablename>' ],
[ column_name='<columnname>' ],[ table_owner='<tableowner>' ]
Syntax 3
sp_iqindex_alt [ <table-name> ],[ <column-name> ],[ <table-owner> ]
Syntax 4
sp_iqindex_alt [ table_name='<tablename>' ],
[ column_name='<columnname>' ],[ table_owner='<tableowner>' ]
Go to:
● Remarks
● Privileges
● Side Effects
● Examples
Returns
(back to top)
column_name The name of the column; multiple names can appear in a multicolumn index
(back to top)
Displays information about indexes in the database. Specifying one of the parameters returns the indexes from
only that table, column, or tables owned by the specified user. Specifying more than one parameter filters the
results by all of the parameters specified. Specifying no parameters returns all indexes for all tables in the
database.
sp_iqindex always produces one line per index. sp_iqindex_alt produces one line per index per column if
there is a multicolumn index.
Syntax 1
If you do not specify either of the first two parameters, but specify the next parameter in the sequence, you
must substitute NULL for the omitted parameters. For example, sp_iqindex NULL,NULL,DBA and
sp_iqindex Departments,NULL,DBA.
Syntax 2
You can specify the parameters in any order. Enclose them in single quotes.
Syntax 3 and 4
Produces slightly different output when a multicolumn index is present. Allows the same options as Syntax 1
and 2.
Privileges
(back to top)
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
(back to top)
None
Examples
(back to top)
● The following example returns all indexes containing the column DepartmentID:
sp_iqindex column_name='DepartmentID'
● The following variations in syntax both return all indexes in the table Departments owned by table owner GROUPO:
sp_iqindex Departments,NULL,GROUPO
sp_iqindex table_name='Departments',table_owner='GROUPO'
● The following variations in syntax for sp_iqindex_alt both return indexes on the table Employees that
contain the column City. The index emp_loc is a multicolumn index on the columns City and State.
sp_iqindex_alt displays one row per column for a multicolumn index:
sp_iqindex_alt Employees,City
● The output from sp_iqindex for the same table and column is slightly different:
sp_iqindex Employees,City
sp_iqindex table_name='Employees',column_name='City'
Related Information
Syntax
sp_iqindexadvice ( [ <resetflag> ] )
Parameters
resetflag
Lets the caller clear the index advice storage. If <resetflag> is nonzero, all advice is removed after the
last row has been retrieved.
Remarks
Allows users to query aggregated index advisor messages using SQL. Information can be used to help decide
which indexes or schema changes will affect the most queries.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
System privileges (see GRANT System Privilege Statement [page 1511]):
● ALTER ANY INDEX
● ALTER ANY OBJECT
Side Effects
None
Example
Add a CMP index on DBA.tb (c2, c3) Predicate: (tb.c2 = tb.c3)    2073    2009-04-07 16:37:31.000
Join Key Columns DBA.ta.c1 and DBA.tb.c1 have mismatched data types    911    2009-02-25 20:59:01.000
Related Information
Reports information about the percentage of page space taken up within the B-trees, garrays, and bitmap
structures in SAP IQ indexes.
Syntax
dbo.sp_iqindexfragmentation ( '<target>' )
'<target>' ::=
table <table-name> | index <index-name> [...]
Parameters
table-name
Target table <table-name> reports on all nondefault indexes in the named table.
index-name
Target index <index-name> reports on the named index. Each <index-name> is a qualified index name.
You can specify multiple indexes within the table, but you must repeat the index keyword with each index
specified.
Remarks
For garrays, the fill percentage calculation does not take into account the reserved space within the garray
groups, which is controlled by the GARRAY_FILL_FACTOR_PERCENT option.
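For example, the following calls (the table and index names are hypothetical) report on all nondefault indexes of one table, and on two specific indexes, respectively; note that the index keyword is repeated for each index:
sp_iqindexfragmentation 'table DBA.t1'
sp_iqindexfragmentation 'index DBA.t1.idx1 index DBA.t1.idx2'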
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY DBSPACE system privilege. See GRANT System Privilege Statement [page 1511].
Side Effects
None
Example
Reports the internal index fragmentation for the unique HG index prop_nu_a on the table DBA.prop_nu:
DBA.prop_nu.prop_nu_a HG 8 25
SQLCODE: 0
0-10% 13 2 8
11-20% 1 8 0
21-30% 0 4 0
31-40% 3 20 0
41-50% 4 116 0
51-60% 6 4 0
61-70% 3 3 0
71-80% 4 1 0
81-90% 1 1 0
Note
All percentages are truncated to the nearest percentage point. HG indexes also display the value of option
GARRAY_FILL_FACTOR_PERCENT. Index types that use a B-tree also display the number of node (nonleaf)
pages. These are HG, WD, DATE, and DTTM.
If an error occurs during execution of this stored procedure, SQLCODE is nonzero.
Related Information
Displays the number of blocks used per index per main dbspace for a given object. If the object resides on
several dbspaces, sp_iqindexinfo returns the space used in all dbspaces, as shown in the example.
Syntax
Parameters
table-name
index-name
resource-percent
The resources percentage allows you to limit the CPU utilization of the sp_iqindexinfo procedure by
specifying the percent of total CPUs to use. <resource-percent> must be an integer greater than 0.
Returns
MaxBlk Last block used by this object on this dbspace; useful for determining which objects must be relocated before the dbspace is resized to a smaller size.
Remarks
You can request index information for the entire database, or you can specify any number of table or index
parameters. If a table name is specified, sp_iqindexinfo returns information on all indexes in the table. If an
index name is specified, only the information on that index is returned.
If the specified <table-name> or <index-name> is ambiguous or the object cannot be found, an error is
returned.
By default in a multiplex database, sp_iqindexinfo displays information about the shared IQ store on a
secondary node. If individual tables or indexes are specified, the store to display is automatically selected.
sp_iqindexinfo shows the DBA which dbspaces a given object resides on. The DBA can use this information to determine which dbspaces must be given relocate mode to relocate the object.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY DBSPACE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
The following example displays information about indexes in the Departments table:
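For example, a call of the following form (the owner qualification is an assumption based on the iqdemo schema) requests index information for the Departments table:
sp_iqindexinfo 'table GROUPO.Departments'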
Related Information
Syntax
dbo.sp_iqindexmetadata '<index-name>'
[ , '<table-name>' [ , '<owner-name>' ] ]
Parameter
index-name
For all indexes except FP, use the text name defined for the index. For FP indexes, use the name of the index as defined in the iname column of the SYS.SYSINDEXES system view. Run SELECT * FROM SYS.SYSINDEXES WHERE TNAME='<table_name>' to display the value.
Remarks
You can optionally restrict the output to only those indexes on a specified table, and to only those indexes
belonging to a specified owner.
The user-supplied IQ UNIQUE value for the column is available through sp_iqindexmetadata. It reports the exact cardinality if a unique HG index is present. It reports 0 as the cardinality if only a non-unique HG index is present.
The first row of output for all index types is the owner name, table name, and index name for the index.
Additional output is index type specific.
FP Type, Style, Version, DBType, Maximum Width, EstUnique, TokenCount, NBit, CountSize, DictSize, CountLen, MaxKey Token, MinKey Token, MinCount, MaxCount, DistinctKey, BArray Version, RidMap Version, IQ Unique
HG Type, Version, Maintains Exact Distinct, Level 0 Threshold, Force Physical Delete, Maximum
Level Count, Tier ratio, Auto sizing, Average Load Size (records), Active Subindex count,
Cardinality Range Min - Max, Estimated Cardinality, Accuracy of Cardinality
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
System privileges (see GRANT System Privilege Statement [page 1511]):
● ALTER ANY INDEX
● ALTER ANY OBJECT
Object-level privilege (see GRANT Object-Level Privilege Statement [page 1502]):
● REFERENCES privilege on the table
Side Effects
None
This example determines the name of the FP index for column C1 on table table1 and then displays the
metadata of the index. First, determine the iname value for the FP index.
Sample Code
sp_iqindexmetadata 'ASIQ_IDX_T1707_C1_FP','table1','dbo'
Type FP
Style NBit FP
Version 4
DBType 11
Maximum Width 0
EstUnique 0
TokenCount 0
NBit 1
CountSize 0
DictSize 0
CountLen 4
MaxKey Token 0
MinKey Token 0
MinCount 0
MaxCount 0
DistinctKey 0
BArray Version 2
RidMap Version 1
IQ Unique 0
This example displays the metadata for the non-unique HG index nonhg on column C1 on table table1.
Sample Code
sp_iqindexmetadata 'nonhg','table1','dbo'
Type HG
Version 3
Tier ratio 30
Auto sizing On
Estimated Cardinality 5
Related Information
Identifies wide columns in migrated databases that you must rebuild before they are available for read/write
activities.
Syntax
sp_iqindexrebuildwidedata [<table.name>]
table.name
Include the optional <table.name> parameter to generate a list of wide columns for that table. Omit the
<table.name> parameter to generate a list of wide columns for all tables in the database.
Remarks
CHAR, VARCHAR, BINARY, and VARBINARY columns wider than 255 characters, as well as all LONG VARCHAR
and LONG BINARY columns in databases migrated to SAP IQ 16.1 must be rebuilt before the database engine
can perform read/write activities on them. sp_iqindexrebuildwidedata identifies these columns and
generates a list of statements that you can use to rebuild the columns with the sp_iqrebuildindex
procedure.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
INSERT ANY TABLE system privilege (see GRANT System Privilege Statement [page 1511]), or
INSERT privilege on the table (see GRANT Object-Level Privilege Statement [page 1502])
Side Effects
None
Example
sp_iqindexrebuildwidedata T2
Syntax
Returns
Indexname Index for which results are returned, including the table name.
Info Component of the IQ index for which the KBytes, Pages, and Compressed Pages are being reported. The components vary by index type. For example, the default (FP) index includes BARRAY (barray) and Bitmap (bm) components.
Remarks
Returns the total size of the index in bytes and kilobytes, and an Info column that describes the component of
the IQ index for which the KBytes, Pages, and Compressed Pages are reported. The components described
vary by index type. For example, the default (FP) index includes BARRAY (barray) and Bitmap (bm)
components.
Also returns the number of pages required to hold the object in memory and the number of IQ pages when the
index is compressed (on disk).
You must specify the <index_name> parameter with this procedure. To restrict results to this index name in a
single table, include <owner.table.> when specifying the index.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
● ALTER ANY TABLE System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY INDEX
Side Effects
None
Example
sp_iqindexsize ASIQ_IDX_T780_I4_HG
Related Information
Reports detailed usage information for secondary (non-FP) indexes accessed by the workload.
Syntax
sp_iqindexuse
Returns
UID Index unique identifier. UID is a number assigned by the system that uniquely identifies the in
stance of the index (where instance is defined when an object is created).
Remarks
Each secondary index accessed by the workload displays a row. Indexes that have not been accessed do not
appear. Index usage is broken down by optimizer, constraint, and query usage.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
Related Information
Syntax
sp_iqlmconfig
[ { 'allow' | 'disallow' } , {
'ALL'
| '<specific_license_name>'
| 'IQ_VLDBMGMT' , '<quantity>' } ]
| [ 'edition' [, <edition_type> ]]
| [ 'license type' [, <license_type_name> ]]
| [ 'smtp host' [, <smtp_host_name> ]]
| [ 'smtp port' [, <smtp_port_number> ]]
| [ 'email sender' [, <sender_email_address> ]]
| [ 'email recipients' [, <email_recipients> ]]
| [ 'email severity' [, <email_severity> ]] ]
Parameters
'allow' | 'disallow'
The ALL keyword enables or disables all optional licenses, except IQ_VLDBMGMT. To enable or disable a
specific license, specify the license by name.
<quantity> is an integer value from 0 to 4294967295 that sets the number of available IQ_VLDBMGMT
licenses.
Note
The disallow parameter can disable an optional license only if the option is not in use. If the server has checked out an unlicensed option, the option cannot be disallowed and the server may fall into grace mode.
specific_license_name
A specific license. To allow or disallow a specific license, specify the license by name:
● 'IQ_CORE'
● 'IQ_LOB'
● 'IQ_VLDBMGMT'
● 'IQ_SECURITY'
● 'IQ_MPXNODE'
● 'IQ_UDF'
● 'IQ_IDA'
● 'IQ_UDA'
edition, edition_type
The current edition type.
license type, license_type_name
The current license type.
smtp host, smtp host name
The SMTP host used to send e-mail for license event notifications.
smtp port, smtp port number
The SMTP port used to send e-mail for license event notifications.
email sender, sender email address
The e-mail address used as the sender's address on license event email notifications.
email recipients, email recipients
A comma-separated list of e-mail recipients who receive license event email notifications.
email severity, email_severity
The minimum severity of license events that trigger e-mail notifications:
● 'ERROR' (default)
● 'WARNING'
● 'INFORMATIONAL'
● 'NONE'
Remarks
At startup, sp_iqlmconfig checks the edition type and license type. If a specified license is not found, the server falls into grace mode. A specified license type becomes valid only when you specify a non-null edition value.
Using an unlicensed option on a licensed server can throw the server into grace mode, which can cause the server to shut down when the grace period expires. The database administrator must explicitly "allow" access to an optionally licensed feature, or the feature will not be available:
● You see the following message when you try to use an "unauthorized" optional feature:
Authorization required to attempt checkout
<specific_license_name> license.
● You see the following message when you try to create a dbspace that increases the IQ main store size
beyond the "authorized" size:
Insufficient quantity authorization
available for IQ_VLDBMGMT license.
The DBA can "disallow" any unused optional feature, but once a feature is in use and the license is checked out,
revoking access to that feature is no longer possible. Authorizing an IQ_MPXNODE optional license is not
required. For multiplex, authorization is required only on one node (any one), and is propagated to all other
nodes and enforced everywhere.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
SERVER OPERATOR System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Shows information about locks in the database, for both the IQ main store and the IQ catalog store.
Syntax
Parameter
connection
(Optional) An INTEGER parameter that specifies the connection ID. With this option, the procedure returns
information about locks for the specified connection only. Default is zero, which returns information about
all connections.
owner.table_name
(Optional) A CHAR(128) parameter that specifies the table name. With this option, the procedure returns
information about locks for the specified table only. Default is NULL, which returns information about all
tables in the database. If you do not specify owner, it is assumed that the caller of the procedure owns the
table.
max_locks
(Optional) An INTEGER parameter that specifies the maximum number of locks for which to return
information. Default is 0, which returns all lock information.
sort_order
(Optional) A CHAR(1) parameter that specifies the order in which to return information:
sp_iqlocks displays the following information, sorted as specified in the <sort_order> parameter:
table_type CHAR(6) The type of table. This type is either BASE for a table, GLBTMP for global
temporary table, or MVIEW for a materialized view. Materialized views are
only supported for SAP SQL Anywhere tables in the IQ catalog store.
lock_class CHAR(8) The lock class. One of Schema, Row, Table, or Position.
lock_duration CHAR(11) The duration of the lock. One of Transaction, Position, or Connection.
lock_type CHAR(9) The lock type (this is dependent on the lock class).
row_identifier UNSIGNED BIGINT The identifier for the row the lock starts on, or NULL.
row_range BIGINT The number of contiguous rows that are locked. Row locks in the RLV
store can either be a single row, or a range of rows.
Remarks
Displays information about current locks in the database. Depending on the options you specify, you can
restrict results to show locks for a single connection, a single table, or a specified number of locks.
If sp_iqlocks cannot find the connection ID or user name of the user who has a lock on a table, it displays a 0
(zero) for the connection ID and User unavailable for the user name.
The value in the lock_type column depends on the lock classification in the lock_class column. The following
values can be returned:
Schema
● Shared – shared schema lock
● Exclusive – exclusive schema lock (IQ catalog store tables only)
For schema locks, the row_identifier and index ID values are NULL.
Row
● Read – read lock
● Intent – intent lock
● ReadPK – read lock
● Write – write lock
● WriteNoPK – write lock
● Surrogate – surrogate lock
Row read locks can be short-term locks (scans at isolation level 1) or long-term locks at higher isolation levels. The lock_duration column indicates whether the read lock is of short duration because of cursor stability (Position) or long duration, held until COMMIT/ROLLBACK (Transaction). Row locks are always held on a specific row that has an 8-byte row identifier that is reported as a 64-bit integer value in the row_identifier column.
Position
● Phantom – phantom lock (IQ catalog store tables only)
● Insert – insert lock
Usually a position lock is also held on a specific row, and that row's 64-bit row identifier appears in the row_identifier column in the result set. However, Position locks can be held on entire scans (index or sequential), in which case the row_identifier column is NULL.
Note
Exclusive, phantom, or anti-phantom locks can be placed on IQ catalog store tables, but not on SAP IQ
tables in the IQ main store. Unless you have explicitly taken out locks on a table in the catalog store, you
never see these types of locks in an SAP IQ database.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
The example shows the sp_iqlocks procedure call and its output in the SAP IQ database. The procedure is
called with all default options, so that the output shows all locks, sorted by connection:
call sp_iqlocks()
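The procedure also accepts the optional parameters described above; these are hedged sketches (the connection ID, owner, and table name are illustrative):

```sql
-- Restrict output to locks held by connection 5
call sp_iqlocks(5);

-- Restrict output to locks on one table, returning at most 10 rows
call sp_iqlocks(0, 'DBA.Departments', 10);
```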
Related Information
Triggers a merge of a single row-level versioned (RLV) table store into the IQ main store.
Syntax
merge_type
The type of merge to perform. Valid entries are BLOCKING (default) and NON-BLOCKING.
table_name
The name of the RLV-enabled table to merge.
Remarks
After performing the merge, the stored procedure automatically commits the merge transaction.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Triggers a merge of a group of row-level version (RLV) table stores into the IQ main store.
Syntax
Parameters
merge_type
The type of merge to perform. Valid entries are BLOCKING (default) and NON-BLOCKING.
table_name_exp
The expression to identify tables to merge. Defaults to all tables if not specified.
table_owner_exp
The expression to identify the owner of tables to merge. Defaults to all owners if not specified.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Remarks
● After performing the merge, the stored procedure automatically commits the merge transaction.
● <table_name> and <table_owner> accept the REGEXP wildcard characters: [ ] * . ? - | ( ) { } \ ^ $ : +
Note
Some wildcard characters are also allowed in database identifiers. These characters are interpreted as
wildcards unless escaped with the "\" character.
Side Effects
None
Examples
You have the following RLV-enabled tables: T1, C1, C4, C18, C19, C2G, and C2Great:
● This command merges any table name that matches any single character (the .) repeated zero or more times (the *). Therefore, all seven tables are merged:
● This command merges any table name that starts with T followed by a single character that has a value of
1-9 or starts with C followed by a single character that has a value of 1-9. Based on the available tables, only
tables T1, C1, and C4 are merged:
● This command merges any table name that starts with T followed by a single character that has a value of
1-9, or any table name that starts with C followed by a single character that has a value of 1-9 followed by
In this section:
Metacharacters are symbols or characters that have a special meaning within a regular expression.
How a metacharacter is interpreted can depend on:
● Whether the regular expression is being used with the SIMILAR TO or REGEXP search conditions, or the REGEXP_SUBSTR function.
● Whether the metacharacter is inside of a character class in the regular expression.
Before continuing, you should understand the definition of a character class. A character class is a set of
characters enclosed in square brackets, against which characters in a string are matched. For example, in the
syntax SIMILAR TO 'ab[1-9]', [1-9] is a character class and matches one digit in the range of 1 to 9,
inclusive. The treatment of metacharacters in a regular expression can vary depending on whether the
metacharacter is placed inside a character class. Specifically, most metacharacters are handled as regular
characters when positioned inside of a character class.
For SIMILAR TO (only), the metacharacters *, ?, +, _, |, (, ), and { must be escaped within a character class.
To include a literal minus sign (-), caret (^), or right-angle bracket (]) character in a character class, it must be
escaped.
This table lists the supported regular expression metacharacters. Almost all metacharacters are treated the
same when used by SIMILAR TO, REGEXP, and REGEXP_SUBSTR:
[ ] Left and right square brackets are used to specify a character class. A character class is a set of
characters to match against.
With the exception of the hyphen (-) and the caret (^), metacharacters and quantifiers (such as * and {m}, respectively) specified within a character class have no special meaning and are evaluated as actual characters.
* The asterisk can be used to match a character 0 or more times. For example, REGEXP '.*abc'
matches a string that ends with abc, and starts with any prefix. So, aabc, xyzabc, and abc
match, but bc and abcc do not.
? The question mark can be used to match a character 0 or 1 times. For example, 'colou?r'
matches color and colour.
+ The plus sign can be used to match a character 1 or more times. For example, 'bre+' matches
bre and bree, but not br.
- A hyphen can be used within a character class to denote a range. For example, REGEXP '[a-e]' matches a, b, c, d, and e.
% The percent sign can be used with SIMILAR TO to match any number of characters.
The percent sign is not considered a metacharacter for REGEXP and REGEXP_SUBSTR. When
specified, it matches a percent sign (%).
_ The underscore can be used with SIMILAR TO to match any single character. The underscore is not considered a metacharacter for REGEXP and REGEXP_SUBSTR. When specified, it matches an underscore (_).
| The pipe symbol is used to specify alternative patterns to use for matching the string. In a string of patterns separated by a vertical bar, the vertical bar is interpreted as an OR and matching stops at the first match made starting from the leftmost pattern. So, you should list the patterns in descending order of preference. You can specify an unlimited number of alternative patterns.
( ) Left and right parentheses are metacharacters when used for grouping parts of the regular expression. For example, (ab)* matches zero or more repetitions of ab. As with mathematical expressions, you use grouping to control the order in which the parts of a regular expression are evaluated.
{ } Left and right curly braces are metacharacters when used for specifying quantifiers. Quantifiers
specify the number of times a pattern must repeat to constitute a match. For example:
● {<m>}
Matches a character exactly <m> times. For example, '519-[0-9]{3}-[0-9]{4}'
matches a phone number in the 519 area code (providing the data is formatted in the manner
defined in the syntax).
● {<m>,}
Matches a character at least <m> times. For example, '[0-9]{5,}' matches any string of
five or more digits.
● {<m>,<n>}
Matches a character at least <m> times, but not more than <n> times. For example, SIMILAR
TO '_{5,10}' matches any string with between 5 and 10 (inclusive) characters.
\ The backslash is used as an escape character for metacharacters. It can also be used to escape
non-metacharacters.
^ For REGEXP and REGEXP_SUBSTR, when a caret is outside a character class, the caret matches
the start of a string. For example, '^[hc]at' matches hat and cat, but only at the beginning
of the string.
$ When used with REGEXP and REGEXP_SUBSTR, matches the end of a string. For example,
REGEXP 'cat$' matches cat, but not catfish.
. When used with REGEXP and REGEXP_SUBSTR, matches any single character. For example,
REGEXP 'a.cd' matches any string of four characters that starts with a and ends with cd.
: The colon is used within a character set to specify a subcharacter class. For example,
'[[:alnum:]]'.
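The metacharacters above appear in ordinary search conditions; the following sketches assume a hypothetical Employees table with PostalCode, Surname, and Phone columns:

```sql
-- Match five-digit postal codes with a {m} quantifier
SELECT * FROM Employees WHERE PostalCode REGEXP '[0-9]{5}';

-- With SIMILAR TO, % matches any number of characters
SELECT * FROM Employees WHERE Surname SIMILAR TO 'Mc%';

-- Extract a 519 area-code phone number, as in the {m} example above
SELECT REGEXP_SUBSTR(Phone, '519-[0-9]{3}-[0-9]{4}') FROM Employees;
```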
Sets an option on a named login policy to a certain value. If no login policy is specified, the option is set on the
root policy. In a multiplex, sp_iqmodifyadmin takes an optional parameter that is the multiplex server name.
Syntax
Syntax 1
Syntax 2
Syntax 3
Parameters
policy_option_name
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY LOGIN POLICY System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Examples
● The following option sets the login option locked to ON for the policy named lockeduser:
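The call itself is omitted above; it would look like this sketch (assuming the option name, value, and policy name are passed in that order):

```sql
call sp_iqmodifyadmin('locked', 'on', 'lockeduser');
```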
Related Information
Syntax
Syntax 1
Syntax 2
Parameters
user_id
The user ID of the user to assign to a login policy.
policy_name
(Optional) The name of the login policy to which the user will be assigned. If no login policy name is specified, the user is assigned to the root login policy.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY USER System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Examples
● The following example assigns user joe to a login policy named expired_password:
● The following example assigns user joe to the root login policy:
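The calls omitted above would look like these sketches (assuming the user ID is passed first, followed by the optional login policy name):

```sql
-- Assign user joe to the login policy expired_password
call sp_iqmodifylogin('joe', 'expired_password');

-- Assign user joe to the root login policy
call sp_iqmodifylogin('joe');
```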
Related Information
Moves the table from all read-only dbfiles to read-write dbfiles of the same dbspace. It can move multiple tables
in the same dbspace in parallel via multiple concurrent connections to the server.
Syntax
table_name
The name of the table to move.
Remarks
The procedure returns an error in any of the following cases:
● If the dbspace where the table to be moved resides is not online, or not set to readwrite.
● If there are no read-only dbfiles in the dbspace where the table to be moved resides.
● If there are no read-write dbfiles in the dbspace where the table to be moved resides.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● BACKUP DATABASE System privileges GRANT System Privilege Statement [page 1511]
● SERVER OPERATOR
● ALTER DATABASE
● INSERT ANY TABLE System privileges GRANT System Privilege Statement [page 1511]
● UPDATE ANY TABLE
● DELETE ANY TABLE
● ALTER ANY TABLE
● LOAD ANY TABLE
● TRUNCATE ANY TABLE
● ALTER ANY OBJECT
Side Effects
None
Example
sp_iqmovetablefromfile 'lineitem_partitioned';
Object_Name Bytes_Moved
DBA.lineitem_partitioned 2850816
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C1_FP 376832
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C10_FP 212992
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C11_FP 385024
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C12_FP 385024
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C13_FP 385024
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C14_FP 229376
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C15_FP 229376
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C16_FP 344064
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C2_FP 385024
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C3_FP 335872
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C4_FP 303104
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C5_FP 327680
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C6_FP 401408
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C7_FP 327680
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C8_FP 327680
DBA.lineitem_partitioned.ASIQ_IDX_T1682_C9_FP 221184
DBA.lineitem_partitioned.ASIQ_IDX_T1682_I17_HG 1024000
DBA.lineitem_partitioned.l_p_commitdate_hng 1269760
DBA.lineitem_partitioned.l_p_orderkey_hg 1179648
DBA.lineitem_partitioned.l_p_part_ord_hg 1998848
DBA.lineitem_partitioned.l_p_partkey_hg 917504
DBA.lineitem_partitioned.l_p_quantity_hng 483328
DBA.lineitem_partitioned.l_p_receiptdate_hng 1269760
DBA.lineitem_partitioned.l_p_shipdate_hng 1269760
DBA.lineitem_partitioned.l_p_supp_ord_hg 1998848
DBA.lineitem_partitioned.l_p_supp_part_hg 1024000
DBA.lineitem_partitioned.l_p_supp_part_ord_hg 1998848
DBA.lineitem_partitioned.l_p_suppkey_hg 876544
Upon success, sp_iqmovetablefromfile generates a message similar to the following in the IQ message
file:
sp_iqmovetablefromfile for table <owner name>.<table name> started.
sp_iqmovetablefromfile completed for table <owner name>.<table name>.
Relocated ... bytes in ... milliseconds at ... bytes/millisecond.
sp_iqmpxcheckdqpconfig is a diagnostic tool that checks the DQP configuration for the current connection.
If DQP fails, run sp_iqmpxcheckdqpconfig to determine if DQP configuration issues are causing the query
distribution failure.
Syntax
sp_iqmpxcheckdqpconfig
Returns
Description Diagnostic message describing the issue found with DQP configuration
Remarks
Diagnostic information:
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
sp_iqmpxcheckdqpconfig
diagmsgid description
3 Logical server policy option dqp_enabled is set to 0
5 Logical server context has only one member node
6 Coordinator does not participate in DQP since its named membership in the logical server is currently ineffective
7 Coordinator does not participate in DQP since its logical membership in the logical server is currently ineffective because the ALLOW_COORDINATOR_AS_MEMBER option in the root logical server policy is set to OFF
8 There is no dbfile in IQ_SHARED_TEMP dbspace
Related Information
Syntax
Remarks
sp_iqmpxdumptlvlog returns the contents of the queue through which the coordinator propagates DML and
DDL commands to secondary nodes.
'main', 'asc'
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE MULTIPLEX System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
RowID Contents
--------------------------------------------------------------
1 Txn CatId:196 CmtId:196 TxnId:195 Last Rec:1
UpdateTime: 2011-08-08 15:41:43.621
2 Txn CatId:243 CmtId:243 TxnId:242 Last Rec:5
UpdateTime: 2011-08-08 15:42:25.070
3 DDL: Type=34, CatID=0, IdxID=0,
Object=IQ_SYSTEM_TEMP, Owner=mpx4022_w1
4 CONN: CatID=0, ConnUser=
5 SQL: ALTER DBSPACE "IQ_SYSTEM_TEMP" ADD FILE
"w1_temp1" '/dev/raw/raw25' FILE ID 16391 PREFIX 65536
FINISH 0 FIRST BLOCK
1 BLOCK COUNT 3276792 RESERVE 0 MULTIPLEX SERVER
"mpx4022_w1" COMMITID 242 CREATETIME
'2011-08-08 15:42:24.860'
6 Txn CatId:283 CmtId:283 TxnId:282 Last Rec:7
UpdateTime: 2011-08-08 15:42:50.827
7 RFRB TxnID: 242 CmtID:243 ServerID 0 BlkmapID:
0d00000000000000d2000a000000000002000000000000000000
0000000000000000000008003501010000000c38000000000000
010000000000000000000000RFID:01000501000000001300000
0000000000100000000000100RBID:010005010000000013000
If run on the coordinator node, displays file status for coordinator and for every shared dbspace file on every
included secondary node. If executed on a secondary node, displays file status for only the current node.
Syntax
sp_iqmpxfilestatus
Returns
server_id UNSIGNED INT Identifier for the multiplex server, from SYSIQMPXINFO
server_name CHAR(128) Name of the multiplex node where the dbspace file resides
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE MULTIPLEX System privilege GRANT System Privilege Statement [page 1511]
None
Example
server_id,server_name,DBSpace_name,FileName,FileStatus
1,'mpx2422_m','IQ_SYSTEM_MAIN','IQ_SYSTEM_MAIN','VALID'
1,'mpx2422_m','mpx_main1','mpx_main1','VALID'
1,'mpx2422_m','IQ_SHARED_TEMP','sharedfile_dba','VALID'
1,'mpx2422_m','IQ_SHARED_TEMP','sharedfile_dba1','VALID'
2,'mpx2422_w1','IQ_SYSTEM_MAIN','IQ_SYSTEM_MAIN','VALID'
2,'mpx2422_w1','mpx_main1','mpx_main1','VALID'
2,'mpx2422_w1','IQ_SHARED_TEMP','sharedfile_dba','VALID'
2,'mpx2422_w1','IQ_SHARED_TEMP','sharedfile_dba1','VALID'
3,'mpx2422_r1','IQ_SYSTEM_MAIN','IQ_SYSTEM_MAIN','VALID'
3,'mpx2422_r1','mpx_main1','mpx_main1','VALID'
3,'mpx2422_r1','IQ_SHARED_TEMP','sharedfile_dba','VALID'
3,'mpx2422_r1','IQ_SHARED_TEMP','sharedfile_dba1','VALID'
If run on the coordinator node, displays INC connection pool status for every node. If executed on a secondary
node, displays INC connection pool status for only the current node.
Syntax
sp_iqmpxincconnpoolinfo
Returns
If the procedure is run on the coordinator and a secondary node is not responding or has timed out, the result
set omits the row for that node, because this data cannot be accessed unless that node is running.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE MULTIPLEX System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
server_id,server_name,current_pool_size,
idle_connection_count,connections_in_use
2,'r2_dbsrv90210',0,0,0
3,'w3_dbsrv90210',0,0,0
Related Information
If run on the coordinator node, displays INC heartbeat status for every node. If executed on a secondary node,
displays INC heartbeat status for just the current node.
Syntax
sp_iqmpxincheartbeatinfo
Returns
last_positive_hb TIMESTAMP Date/time of last successful heartbeat ping, in the following format:
DD:MM:YYYY:HH:MM:SS
time_not_responding TIME Time since last successful heartbeat ping, in the following format:
HH:MM:SS
time_until_timeout TIME If a node is not responding, the time left until node is declared offline.
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE MULTIPLEX System privileges GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
server_id,server_name,last_positive_hb,
time_not_responding,time_until_timeout
2,'r2_dbsrv90210',2012-11-17
15:48:42.0,00:00:00,00:00:00
3,'w3_dbsrv90210',2012-11-17
15:48:42.0,00:00:00,00:00:00
● If the elapsed time exceeds 24 hours, SAP IQ returns sp_iqmpxincheartbeatinfo output like the
following:
server_id,server_name,last_positive_hb,
time_not_responding,time_until_timeout
2,'r2_mpx_cr_srv',Jan 14 2013 11:57AM,11:59PM,11:59PM
3,'w4_mpx_cr_srv',Jan 14 2013
11:57AM,11:59PM,11:59PM
(2 rows affected)
(return status = 0)
A value of 11:59PM in the time_not_responding and time_until_timeout columns means that the
time has crossed the 24-hour limit.
Related Information
Displays a snapshot of the aggregate statistics of internode communication (INC) status since server startup
as of the moment of execution.
Syntax
sp_iqmpxincstatistics
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY STATISTICS System privileges GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
The following example shows one suspended and one resumed transaction:
sp_iqmpxincstatistics
stat_name stat_value
Returns a row for every node in the multiplex. Can be run from any multiplex node.
Syntax
sp_iqmpxinfo
Returns
server_id UNSIGNED INT Identifier for the server for which information appears
connection_info LONG VARCHAR A formatted string containing the host/port portion of the connection string used for TCP/IP connections between multiplex servers.
role One of:
● 'coordinator'
● 'writer'
● 'reader'
status One of:
● 'included'
● 'excluded'
mpx_mode One of:
● 'single'
● 'coordinator'
● 'writer'
● 'reader'
● 'unknown'
inc_state One of:
● 'active'
● 'not responding'
● 'timed out'
private_connection_info LONG VARCHAR A formatted string containing the host/port portion of the connection string used for private TCP/IP connections between multiplex servers
rlvstore CHAR(8) Indicator of existence of RLV store on multiplex. Values are enabled
and disabled.
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● MANAGE MULTIPLEX System privileges GRANT System Privilege Statement [page 1511]
● MONITOR
Side Effects
None
server_id,server_name,connection_info,db_path,role,
status,mpx_mode,inc_state,coordinator_failover,
current_version,active_versions,private_connection_
info,mipc_priv_state,mipc_public_state
1,'my_mpx1','host=(fe80::214:4fff:fe45:be26%2):1362
0,(fd77:55d:59d9:329:214:4fff:fe45:be2
6%2):13620,10.18.41.196:13620','/system3/users
/devices/s16900269/iqmpx1/mpx1.db',
'coordinator','included','coordinator','N/A',
'my_mpx2',0,,,'active','active'
2,'IQ_mpx2','host=system3:13625',
'/system3/users/devices/s16900269
/iqmpx_2/wk0001.db','writer','included',
'writer','active','IQ_mpx20', 'not responding','active'
3,'IQ_mpx3','host=system3:13630','/system3/users/
devices/s16900269/iqmpx_3/mpx1.db','reader','included',
'unknown','timed out',
'IQ_mpx20','not responding',
'not responding'
Shows details about currently suspended connections and transactions on the coordinator node.
Syntax
sp_iqmpxsuspendedconninfo
Returns
GlobalTxnID UNSIGNED INT Global transaction identifier of active transaction on this connection
MPXServerName CHAR(128) Name of the multiplex server where the INC connection originates
TimeInSuspendedState INT Total time, in seconds, spent by the connection in suspended state
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. No system privilege is needed to see your own suspended connections.
To see all suspended connections in the database, you need one of the following:
● DROP CONNECTION System privileges GRANT System Privilege Statement [page 1511]
● MONITOR
● SERVER OPERATOR
Side Effects
None
Example
sp_iqmpxsuspendedconninfo
Syntax
Returns
● 0 – No errors detected
● 1 – Dynamic state is not as expected.
● 2 – Nonfatal configuration error; for example, multiplex operation impaired
● 3 – Fatal configuration problem; for example, one or more servers might not start
Remarks
Executes multiple checks on the SYS.SYSIQDBFILE table and on other multiplex events and stored procedures. It may be run on any server.
Returns rows listing all errors and their severity. If called interactively, with the optional calling parameter 'N',
returns only the severity status.
Each error indicates its severity. If there are no errors, the procedure returns No errors detected.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Shows the current version information for this server, including server type (write server, query server, single-
node mode) and synchronization status.
Syntax
sp_iqmpxversioninfo
Returns
● "C" – Coordinator
● "W" – Write Server
● "Q" – Query Server
● "T" – synchronized
● "F" – not synchronized
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
None
Related Information
Syntax
Parameter
owner_name
(Optional) Owner of the object. If specified, sp_iqobjectinfo displays output only for tables with the
specified owner. If not specified, sp_iqobjectinfo displays information on tables for all users in the
database.
object_name
(Optional) Name of the table. If not specified, sp_iqobjectinfo displays information on all tables in the
database.
object-type
(Optional) Valid table object types. If <object-type> is a table, enclose it in quotation marks.
Returns
Returns all the partitions and the dbspace assignments of a particular or all database objects (of type table)
and its subobjects. The subobjects are columns, indexes, primary key, unique constraints, and foreign keys.
object_name Name of the object (of type table) located on the dbspace.
object_type Type of the object (column, index, primary key, unique constraint, foreign key, partition, or table).
dbspace_name Name of the dbspace on which the object resides. The string "[multiple]" appears in a special meta row for partitioned objects. The [multiple] row indicates that multiple rows follow in the output to describe the table or column.
Remarks
All parameters are optional, and any parameter may be supplied independent of the value of another
parameter.
You can query the results of sp_iqobjectinfo, but the procedure performs better if you pass input parameters rather than using predicates in the WHERE clause of the query. For example, Query A is written as:
Query B returns results faster than Query A. When the input parameters are passed to sp_iqobjectinfo, the procedure compares and joins fewer records in the system tables, doing less work than Query A. In Query B, the predicates are applied in the procedure itself, which returns a smaller result set, so fewer predicates are applied in the query.
The sp_iqobjectinfo stored procedure supports wildcard characters for interpreting <owner_name>,
<object_name>, and <object_type>. It shows information for all dbspaces that match the given pattern in
the same way the LIKE clause matches patterns inside queries.
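For instance, a hedged sketch of wildcard use (the pattern semantics follow the LIKE clause as described above; the owner and table patterns are illustrative):

```sql
-- Matches tables of every owner whose name starts with 'DB',
-- for table names that contain 'sale'
sp_iqobjectinfo 'DB%', '%sale%'
```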
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
Note
These examples show objects in the iqdemo database to better illustrate output. iqdemo includes a
sample user dbspace named iq_main that may not be present in your own databases.
● The following example displays information about partitions and dbspace assignments of a specific
database object and subobjects owned by a specific user:
sp_iqobjectinfo GROUPO,Departments
● The following example displays information about partitions and dbspace assignments of a specific
database object and subobjects owned by a specific user for <object-type> table:
sp_iqobjectinfo DBA,sale,'table'
Related Information
Note
Though sp_iqpassword is still supported for backward compatibility, use ALTER USER to change a user's
password.
Syntax
Syntax 1
Syntax 2
Parameters
caller_password
Your password. When you are changing your own password, this is your old password. When a user with the
CHANGE PASSWORD system privilege is changing another user’s password, caller_password is the
password of the user making the change.
new_password
The new password for the user. The password must conform to the rules for identifiers.
user_name
(Optional) Login name of the user whose password is being changed by another user with the CHANGE
PASSWORD system privilege. Do not specify user_name when changing your own password.
A user password is an identifier. Any user can change his or her own password using sp_iqpassword. The
CHANGE PASSWORD system privilege is required to change the password of any existing user.
Identifiers have a maximum length of 128 bytes. They must be enclosed in double quotes or square brackets if
any of these conditions are true:
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. No additional system privilege is needed to set your own password.
CHANGE PASSWORD System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Examples
● The following example changes the password of the logged-in user from irk103 to exP984:
● The following example, run by a user with the CHANGE PASSWORD system privilege or by joe himself, changes the password of user joe from eprr45 to pdi032:
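A hedged sketch of the two calls described above, assuming the parameter order caller_password, new_password, user_name from the Parameters section (the password values are the sample values from the text; 'caller_pwd' is a placeholder):

```sql
-- Change your own password from irk103 to exP984
sp_iqpassword 'irk103', 'exP984'

-- With the CHANGE PASSWORD system privilege, change joe's password to pdi032;
-- the first argument is the caller's own password (placeholder shown here)
sp_iqpassword 'caller_pwd', 'pdi032', 'joe'
```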
Displays information about primary keys and primary key constraints by table, column, table owner, or for all
SAP IQ tables in the database.
Syntax
Parameter
table-name
(Optional) The name of a base or global temporary table. If specified, the procedure returns information
about primary keys defined on the specified table only.
column-name
(Optional) The name of a column. If specified, the procedure returns information about primary keys on
the specified column only.
table-owner
(Optional) The owner of a table. If specified, the procedure returns information about primary keys
on tables owned by the specified owner only.
Returns
The sp_iqpkeys stored procedure displays the following information about primary keys on base and global
temporary tables in a database:
column_name The name of the column(s) on which the primary key is defined.
Remarks
One or more of the parameters can be specified. If you do not specify either of the first two parameters, but
specify the next parameter in the sequence, you must substitute NULL for the omitted parameters. If none of
the parameters are specified, a description of all primary keys on all tables in the database is displayed. If any
of the specified parameters is invalid, no rows are displayed in the output.
Syntax Output
sp_iqpkeys sales – Displays information about primary keys defined on table sales.
sp_iqpkeys sales, NULL, DBA – Displays information about primary keys defined on table sales owned by DBA.
sp_iqpkeys sales, store_id, DBA – Displays information about the primary key defined on column store_id of table sales owned by DBA.
sp_iqpkeys NULL, NULL, DBA – Displays information about primary keys defined on all tables owned by DBA.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following example displays the primary keys defined on columns of table sales1:
sp_iqpkeys sales1
● The following example displays the primary keys defined on columns of table sales2:
sp_iqpkeys sales2
table_name table_owner column_name column_id constraint_name constraint_id
sales2 DBA store_id, order_num 1,2 MA115 115
● The following example displays the primary keys defined on the column store_id of table sales2:
Related Information
Syntax
Parameters
proc-name
● SYSTEM – displays information about system procedures (procedures owned by user SYS or dbo) only
● ALL – displays information about user and system procedures
● Any other value – displays information about user procedures
Returns
proc_defn The command used to create the procedure. For hidden procedures, the keyword 'HIDDEN' is displayed.
replicate Displays Y if the procedure is a primary data source in a Replication Server installation; N if not.
srvid Indicates the remote server, if the procedure is on a remote database server.
Remarks
The sp_iqprocedure procedure can be invoked without any parameters. If no parameters are specified, only
information about user-defined procedures (procedures not owned by dbo or SYS) is displayed by default.
If you do not specify either of the first two parameters, but specify the next parameter in the sequence, you
must substitute NULL for the omitted parameters. For example, sp_iqprocedure NULL, NULL, SYSTEM
and sp_iqprocedure NULL, user1.
Syntax Output
sp_iqprocedure – Displays information about all procedures in the database not owned by dbo or SYS.
sp_iqprocedure NULL, DBA – Displays information about all procedures owned by DBA.
sp_iqprocedure sp_test, DBA – Displays information about the procedure sp_test owned by DBA.
sp_iqprocedure sp_iqtable, dbo – No rows returned, as the procedure sp_iqtable is not a user procedure (by default, only user procedures are returned).
sp_iqprocedure NULL, NULL, SYSTEM – Displays information about all system procedures (owned by dbo or SYS).
sp_iqprocedure sp_iqtable, dbo, ALL – Displays information about the system procedure sp_iqtable owned by dbo.
The sp_iqprocedure stored procedure displays information about procedures in a database. If you specify
one or more parameters, the result is filtered by the specified parameters. For example, if <proc-name> is
specified, only information about the specified procedure is displayed. If <proc-owner> is specified,
sp_iqprocedure returns only information about procedures owned by the specified owner. If no parameters
are specified, sp_iqprocedure displays information about all the user-defined procedures in the database.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
sp_iqprocedure sp_test
Related Information
Displays information about stored procedure parameters, including result set variables and SQLSTATE/
SQLCODE error values.
Syntax
Parameters
proc-name
● SYSTEM – displays information about system procedures (procedures owned by user SYS or dbo) only
● ALL – displays information about user and system procedures
● Any other value – displays information about user procedures
Returns
parm_mode The mode of the parameter: whether a parameter supplies a value to the procedure, returns a
value, does both, or does neither. Parameter mode is one of the following:
domain_name The name of the data type of the parameter as listed in the SYSDOMAIN system table
width The length of string parameters, the precision of numeric parameters, and the number of bytes of
storage for all other data types
scale The number of digits after the decimal point for numeric data type parameters and zero for all
other data types
Remarks
You can invoke sp_iqprocparm without parameters. If you do not specify any parameters, input/output and
result parameters of user-defined procedures (procedures not owned by dbo or SYS) appear.
If you do not specify either of the first two parameters, but specify the next parameter in the sequence, you
must substitute NULL for the omitted parameters. For example, sp_iqprocparm NULL, NULL, SYSTEM and
sp_iqprocparm NULL, user1.
Syntax Output
sp_iqprocparm – Displays parameters for all procedures in the database not owned by dbo or SYS.
sp_iqprocparm non_existing_proc – No rows returned, as the procedure non_existing_proc does not exist.
sp_iqprocparm NULL, DBA – Displays parameters for all procedures owned by DBA.
sp_iqprocparm sp_test, DBA – Displays parameters for the procedure sp_test owned by DBA.
sp_iqprocparm sp_iqtable, dbo – No rows returned, as the procedure sp_iqtable is not a user procedure (by default, only user procedures are returned).
sp_iqprocparm NULL, NULL, SYSTEM – Displays parameters for all system procedures (owned by dbo or SYS).
sp_iqprocparm sp_iqtable, dbo, ALL – Displays parameters of the system procedure sp_iqtable owned by dbo.
The sp_iqprocparm stored procedure displays information about stored procedure parameters, including
result set variables and SQLSTATE/SQLCODE error values. If you specify one or more parameters, the result is
filtered by the specified parameters. For example, if <proc-name> is specified, only information about
parameters to the specified procedure displays. If <proc-owner> is specified, sp_iqprocparm only returns
information about parameters to procedures owned by the specified owner. If no parameters are specified,
sp_iqprocparm displays information about parameters to all the user-defined procedures in the database.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
sp_iqprocparm sp_test
Related Information
Syntax
sp_purgeiqbackuphistory (
[ bu_id='<value>' ], [ bu_time_low='<value>' ],
[ bu_time_high='<value>' ], [ bu_type=<value> ]
)
bu_id='value'
(Optional) An UNSIGNED BIGINT parameter that deletes entries that match the bu_id.
bu_time_low='value'
(Optional) A TIMESTAMP parameter that deletes entries with backup times (hh:mm:ss.ms) greater than or
equal to bu_time_low.
bu_time_high='value'
(Optional) A TIMESTAMP parameter that deletes entries with backup times less than or equal to bu_time_high.
bu_type=value
(Optional) A TINYINT parameter that deletes entries that match the bu_type:
● 0 = FULL
● 1 = INCREMENTAL
● 2 = INCREMENTAL SINCE FULL
● 5 = POINT IN TIME RECOVERY
Remarks
The selection parameters you provide determine which rows are deleted from the SysIQBackupHistory and
SysIQBackupHistoryDetail system tables. If no selection parameters are specified, all rows are deleted.
Since SysIQBackupHistory and SysIQBackupHistoryDetail are system tables, purging their entries is non-
transactional and cannot be rolled back.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
BACKUP DATABASE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
● The following example deletes the entry whose backup ID is 9277:
sp_purgeiqbackuphistory(bu_id='9277')
● The following example deletes entries with backup times on or after January 1, 2013:
sp_purgeiqbackuphistory(bu_time_low='2013/01/01')
● The following example deletes entries with backup times on or before January 1, 2013:
sp_purgeiqbackuphistory(bu_time_high='2013/01/01')
● The following example contrasts the SYSIQBACKUPHISTORY table values before and after running
sp_purgeiqbackuphistory. Corresponding changes to the SYSIQBACKUPHISTORYDETAIL table are
not shown.
Syntax
table_name
Partial or fully qualified table name on which the index rebuild process takes place. If the user both owns
the table and executes the procedure, a partially qualified name may be used; otherwise, the table name
must be fully qualified.
index_clause
column <column_name>[<count>]
index <index_name>
You must specify the keywords column and index. These keywords are not case-sensitive.
Caution
Remarks
Note
To rebuild an index other than the default FP index, specify the index name. sp_iqrebuildindex behavior
is the same regardless of the FP_NBIT_IQ15_COMPATIBILITY setting.
Each <column_name> or <index_name> must refer to a column or index on the specified table. If you specify
a <column_name> or <index_name> multiple times, the procedure returns an error and no index is rebuilt.
The <count> is a non-negative number that represents the IQ UNIQUE value. In a CREATE TABLE statement,
IQ UNIQUE (count) approximates how many distinct values can be in a given column. The number of
distinct values affects query speed and storage requirements.
If MERGEALL or RETIER is omitted from an operation on an HG index, sp_iqrebuildindex truncates and
reconstructs the entire HG index from the column data.
MERGEALL merges all tiers of a tiered HG index and moves the contents into an appropriate tier:
The merge ensures that there is only one active sub-index in a tiered HG index. MERGEALL operations may
improve query access time for a tiered index in cases where there are too many deleted records (as shown by
RETIER is a keyword specific to HG indexes that changes the format of an HG index from non-tiered HG to tiered
HG, or tiered HG to non-tiered HG:
● RETIER converts a tiered HG index into a single non-tiered HG index. Tiering metadata is disabled and only
one sub-index is maintained.
● RETIER converts a non-tiered HG index into a tiered HG index, and pushes the single sub-index, which contains
all the data, into an appropriate tier.
MERGEALL and RETIER are supported only with an index clause, and only if the specified index is an HG
index.
If you specify a column name, sp_iqrebuildindex rebuilds the default FP index for that column; no index
name is needed. If you specify the default FP index name assigned by SAP IQ in addition to the column name,
sp_iqrebuildindex returns an error.
A column's IQ UNIQUE <n> value determines whether sp_iqrebuildindex rebuilds the column as Flat
FP or NBit. An IQ UNIQUE <n> value of 0 rebuilds the index as Flat FP. An <n> value greater than 0
but less than 2,147,483,647 rebuilds the index as NBit.
sp_iqrebuildindex rebuilds an NBit column as NBit even if you do not specify a count. If you do specify a
count, the <n> value must be greater than the number of unique values already in the index.
If you rebuild a column with a Flat FP index, and the column does not include an IQ UNIQUE <n> value,
sp_iqrebuildindex rebuilds the index as N-Bit FP up to the limits defined in the
FP_NBIT_AUTOSIZE_LIMIT and FP_NBIT_LOOKUP_MB options. Specifying an <n> value for a flat column
throws an error if FP_NBIT_ENFORCE_LIMITS=ON and the cardinality exceeds the count.
The sp_iqrebuildindex default interface allows a user to re-create an entire HG index from an existing FP
index. sp_iqrebuildindex re-reads all FP index column values and creates the HG index. This will, however,
retain all the metadata regarding tier sizes, continuous load size, and so on.
Note
This procedure does not support TEXT indexes. To rebuild a TEXT index you must drop and re-create the
index.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
● INSERT ANY TABLE System privilege GRANT System Privilege Statement [page 1511]
● INSERT privilege on the Object-level privilege GRANT Object-Level Privilege Statement [page 1502]
table
Side Effects
None
Examples
● The following two lines of syntax show the default FP index on column dept_id:
● The following converts the default Flat FP index to an NBit index with an estimated distinct count of
1024:
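A hedged sketch of such a call, built from the index_clause syntax described above (the table and column names are illustrative; 1024 is the estimated distinct-value count from the example):

```sql
-- Rebuild the default FP index on dept_id as NBit,
-- with an estimated 1024 distinct values (IQ UNIQUE)
sp_iqrebuildindex 'DBA.Departments', 'column dept_id 1024'
```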
Note
Users can expect to see a temporary performance drop when sp_iqrebuildindex runs on a large HG
index.
Related Information
Syntax
Parameters
table_name
Identifies the table. This parameter is required, but can be an empty string. Substituting an empty
string for <table_name> rebuilds all wide-column tables in the database for the <table_owner>
specified in the command. Substituting an empty string for both <table_name> and <table_owner>
rebuilds all wide-column tables in the database.
table_owner
(Optional) The owner of the table. An explicit <table_owner> name is optional; the default is an empty
string. Using an explicit <table_name> and an empty string as the <table_owner> rebuilds the table
for all users. Substituting an empty string for both <table_name> and <table_owner> rebuilds all
wide-column tables in the database.
level
(Optional) Determines how sp_iqrebuildindexwide rebuilds the table or tables. This parameter is
optional and includes four options:
● '1' – rebuilds all pre-16.1 columns wider than 255 bytes for a given user.
● '2' – rebuilds all tokenized FPs (that is, pre-16.1 1/2/3-byte FPs, projectable 1- and 2-byte FPs, and 16.1 NBit FPs)
as well as VARCHAR or VARBINARY columns, and all pre-16.1 columns wider than 255 bytes.
● '3' – rebuilds all fixed Flat FPs, and all pre-16.1 columns wider than 255 bytes.
● '4' – applies levels 1, 2, and 3: rebuilds all pre-16.1 columns wider than 255 bytes, all tokenized FPs, all
VARCHAR and VARBINARY columns, and all fixed Flat FPs.
Remarks
CHAR, VARCHAR, BINARY, and VARBINARY columns wider than 255 characters, as well as all LONG VARCHAR
and LONG BINARY columns in databases migrated to SAP IQ 16.1, must be rebuilt before the database engine
can perform read/write activities on them.
SAP IQ implicitly rebuilds these types of columns the first time a table is opened for read-write access.
sp_iqrebuildindexwide explicitly rebuilds these columns to the state defined by the level parameter.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
● INSERT ANY TABLE System privilege GRANT System Privilege Statement [page 1511]
● INSERT privilege on the Object-level privilege GRANT Object-Level Privilege Statement [page 1502]
table
Side Effects
None
Examples
● In this example, the vartab table is owned by the DBA. Running this query returns the following:
Running sp_iqrebuildindexwide at level '1' with vartab as the <table_name> and DBA as the
<table_owner> rebuilds columns clob1, colb2, lvc1, lvb1, blob1, and blob2:
Running sp_iqrebuildindexwide at level '2' with vartab3 as the table_name and user1 as the
table_owner rebuilds columns vc1, vb1, c1, b1, tk1, tk2, tk3, tk4, and tk5:
sp_iqrebuildindexwide('vartab3', 'user1', 2)
Running sp_iqrebuildindexwide at level '3' with vartab3 as the table_name and user1 as the
table_owner rebuilds columns rid and part:
Renames user-created tables, columns, indexes, constraints (unique, primary key, foreign key, and check),
stored procedures, and functions.
Syntax
Parameters
object-name
If the object to be renamed is a column, index, or constraint, you must specify the name of the table with
which the object is associated. For a column, index, or constraint, <object-name> can be of the form
<table-name.object-name> or <owner-name.table-name.object-name>.
new-name
The new name of the object. The name must conform to the rules for identifiers and must be unique for the
type of object being renamed.
object-type
(Optional) A parameter that specifies the type of the user-created object being renamed, that is, the type
of the object <object-name>. The <object-type> parameter can be specified in either upper or
lowercase.
Caution
The sp_iqrename procedure does not automatically update the definitions of dependent objects. You
must change these definitions manually.
Remarks
The sp_iqrename stored procedure renames user-created tables, columns, indexes, constraints (unique,
primary key, foreign key, and check), and functions.
If you attempt to rename an object with a name that is not unique for that type of object, sp_iqrename returns
the message Item already exists.
sp_iqrename does not support renaming a view, a procedure, an event or a data type, and returns the
message Feature not supported if you specify event or datatype as the <object-type> parameter.
You can also rename using the RENAME clause of the ALTER TABLE statement and ALTER INDEX statement.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure and exclusive access to any object
referenced by the procedure. See GRANT EXECUTE Privilege Statement [page 1499]. If you own the object
referenced by the procedure, no additional privilege is required.
For objects owned by others, additional privileges are needed, depending on the object type.
● ALTER ANY OBJECT – Rename any object. System privilege GRANT System Privilege Statement [page 1511]
● ALTER ANY TABLE – Rename any table, column, or constraint. System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
● The following example renames the table titles owned by user shweta to books:
● The following example renames the column id of the table books to isbn:
● The following example renames the index idindex on the table books to isbnindex:
● The following example renames the primary key constraint prim_id on the table books to prim_isbn:
Related Information
Sets the seed of the Identity/Autoincrement column associated with the specified table to the specified value.
Syntax
Parameters
table_name
table_owner
The seed value you specify to replace the default seed value.
The Identity/Autoincrement column stores a number that is automatically generated. The values generated are
unique identifiers for incoming data. The values are sequential, are generated automatically, and are never
reused, even when rows are deleted from the table. The seed value specified replaces the default seed value
and persists across database shutdowns and failures.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
● ALTER ANY TABLE System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY OBJECT
● ALTER privilege on the Object-level privilege GRANT Object-Level Privilege Statement [page 1502]
table
Side Effects
None
Example
The following example creates an Identity column with a starting seed of 50:
Related Information
Identifies actions required to bring the database to a state consistent with a given date.
Parameters
Returns
● "Non-virtual"
● "Decoupled"
● "Encapsulated"
restore_dbspace Can be empty. Indicates that all dbspaces are to be restored from the backup archive.
restore_dbfile Can be empty. Indicates that all dbfiles in the given dbspace are to be restored from the backup
archive.
sp_iqrestoreaction returns an error if the database cannot be brought to a consistent state for the given
timestamp. Otherwise, it suggests restore actions that will return the database to a consistent state.
The common point to which the database can be restored coincides with the last backup time that backed up
read-write files just before the specified timestamp. The backup may be all-inclusive or read-write files only.
Output may not be in exact ascending order based on backup time. If a backup archive consists of multiple
read-only dbfiles, it may contain multiple rows (with the same backup time and backup id).
If you back up a read-only dbfile or dbspace multiple times, the restore uses the last backup. The
corresponding backup time could be after the specified timestamp, as long as the dbspace/dbfile alter ID
matches the dbspace/dbfile alter ID recorded in the last read-write backup that is restored.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Example
Running the procedure along with the specified timestamp returns the following output:
Related Information
Syntax
Parameters
table_name
(Optional) The name of the table. If you do not specify this parameter, sp_iqrlvmemory returns
information on all RLV tables consuming memory.
table_owner
(Optional) The table owner. If you do not specify this parameter, it defaults to the current user.
Returns
sp_iqrlvmemory displays one row per table consuming RLV store memory, with the following output
columns:
data Amount of RLV store memory, in MB, used for the column fragments for this table.
dictionary Amount of RLV store memory, in MB, used for the dictionaries for this table.
bitmap Amount of RLV store memory, in MB, used to store table-level bitmaps.
ridspace_index The ridspace of the table using the RLV store memory.
Remarks
Version-specific data, such as version bitmaps and on-demand indexes, are not included in RLV memory
accounting. They do not count against the RLV memory limit, and are not reported in sp_iqrlvmemory.
Uncommitted transactions consume memory for the table. A transaction ID of 0 indicates there are no
uncommitted transactions for the table. A nonzero transaction ID indicates uncommitted transactions. If
memory consumption remains large even after a merge, use this stored procedure in conjunction with the
sp_iqtransaction stored procedure to cross-reference transaction IDs and identify problematic transactions.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
This example returns the current RLV memory usage for the table rlv_table1 owned by user DBA. Note that
only the last entry contains no uncommitted transactions (transaction ID of 0):
(Example output rows not reproduced; each row reports a transaction ID together with the table's data, dictionary, and bitmap memory usage in MB.)
Reports information about the internal row fragmentation for a table at the FP index level.
Syntax
dbo.sp_iqrowdensity ( '<target>' )
'<target>' ::=
( table <table-name> | column <column-name> [ column <column-name> … ] )
Parameter
table-name
Reports on all columns in the named table.
column-name
Reports on the named column in the target table. You may specify multiple target columns, but must
repeat the keyword column each time.
Remarks
You must specify the keywords table and column. These keywords are not case-sensitive.
sp_iqrowdensity measures row fragmentation at the default index level. Density is the ratio of the minimum
number of pages required by an index for existing table rows to the number of pages actually used by the index.
This procedure returns density as a number such that 0 < <density> <= 1. For example, if an index that
requires a minimum of 8 pages occupies 10 pages, its density is 0.8.
The density reported does not indicate the number of disk pages that may be reclaimed by re-creating or
reorganizing the default index.
This procedure displays information about the row density of a column, but does not recommend further
action. You must determine whether or not to re-create, reorganize, or rebuild an index.
The sp_iqrowdensity IndexType column always returns the maximum number of bits required to encode the
column.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. If you own the object referenced by the procedure, no additional privilege is required.
For objects owned by others, you need one of the following privileges:
● ALTER ANY INDEX System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY OBJECT
● CREATE ANY INDEX
● CREATE ANY OBJECT
● MANAGE ANY DBSPACE
● MONITOR
Side Effects
None
Example
sp_iqrowdensity('column groupo.SalesOrders.ID')
Related Information
Sets compression of data in columns of LONG BINARY (BLOB) and LONG VARCHAR (CLOB) data types.
Syntax
Parameters
owner
The owner of the table for which you are setting compression
table
The name of the table containing the LONG BINARY or LONG VARCHAR columns.
The final parameter is a compression setting:
● ON – enables compression
● OFF – disables compression
Remarks
sp_iqsetcompression provides control of compression of LONG BINARY (BLOB) and LONG VARCHAR
(CLOB) data type columns. The compression setting applies only to base tables.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● ALTER ANY TABLE System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY OBJECT
A side effect of sp_iqsetcompression is that a COMMIT occurs after you change the compression setting.
Example
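A hedged sketch of a call, assuming the parameter order owner, table, setting implied by the Parameters section (the table name reuses pixTable from the sp_iqshowcompression example later in this chapter):

```sql
-- Enable compression on the LONG BINARY / LONG VARCHAR columns of pixTable
sp_iqsetcompression 'DBA', 'pixTable', 'ON'
```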
Related Information
Shows the current shared temp space usage distribution. If run from the coordinator,
sp_iqsharedtempdistrib displays shared temp space distribution for all nodes. If run from a secondary
node, displays shared temp space usage for only that node.
Syntax
sp_iqsharedtempdistrib
Returns
VersionID UNSIGNED BIGINT Version ID of the unit. For active units, the version when the unit was
reserved for the node. For expired units, the version when the unit
was expired. For quarantined units, the version when the unit was
quarantined.
Remarks
Shared temporary space is reserved for each node in the multiplex on demand. Space is reserved for a node in
an allocation unit. Nodes can have multiple allocation units reserved based on their dynamic space demands.
Allocation units are leased to allow nodes to use more space as needed and return the space to a global pool
when not needed. Allocation units expire when space usage decreases and their lease time ends, or when a
server shuts down.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY DBSPACE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Related Information
Displays compression settings for columns of LONG BINARY (BLOB) and LONG VARCHAR (CLOB) data types.
Syntax
Parameters
owner
The owner of the table whose compression settings you are displaying.
table
The name of the table.
Returns
Returns the column name and compression setting. Compression setting values are 'ON' (compression
enabled) and 'OFF' (compression disabled).
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● ALTER ANY TABLE System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY OBJECT
Side Effects
None
To check the compression status of the columns in the pixTable table, call sp_iqshowcompression:
'picJPG','ON'
Related Information
Displays information about the settings of database options that control the priority of tasks and resource
usage for connections.
Syntax
sp_iqshowpsexe [ <connection-id> ]
Parameters
connection-id
application Information about the client application that opened the connection. Includes the AppInfo connection property information:
iqgovern_priority Value of the database option IQGOVERN_PRIORITY that assigns a priority to each query waiting in the -iqgovern queue. By default, this option has a value of 2 (MEDIUM). The values 1, 2, and 3 are shown as HIGH, MEDIUM, and LOW, respectively.
max_query_time Value of the database option MAX_QUERY_TIME that sets a limit, so that the optimizer can disallow very long queries. By default, this option is disabled and has a value of 0.
query_row_limit Value of the database option QUERY_ROWS_RETURNED_LIMIT that sets the row threshold for rejecting queries based on the estimated size of the result set. The default is 0, which means there is no limit.
query_temp_space_limit Value of the database option QUERY_TEMP_SPACE_LIMIT (in MB) that constrains the use of temporary IQ dbspace by user queries. The default value is 2000 MB.
max_cursors Value of the database option MAX_CURSOR_COUNT that specifies a resource governor to limit
the maximum number of cursors a connection can use at once. The default value is 50. A value of
0 implies no limit.
max_statements Value of the database option MAX_STATEMENT_COUNT that specifies a resource governor to
limit the maximum number of prepared statements that a connection can use at once. The default
value is 100. A value of 0 implies no limit.
Remarks
The sp_iqshowpsexe stored procedure displays information about the settings of database options that
control the priority of tasks and resource usage for connections, which is useful to database administrators for
performance tuning.
Note
The AppInfo property may not be available from Open Client or jConnect applications such as Interactive
SQL. If the AppInfo property is not available, the application column is blank.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● DROP CONNECTION System privileges GRANT System Privilege Statement [page 1511]
● MONITOR
● SERVER OPERATOR
Side Effects
None
Example
The following example displays information about the settings of database options that control the priority of
tasks and resource usage for connection ID 1:
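Following the syntax above, the connection ID is passed as the single optional parameter:

sp_iqshowpsexe 1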
Related Information
Displays the number of blocks used by each object in the current database and the name of the dbspace in
which the object is located.
Syntax
table-name
Remarks
For the current database, displays the object name, number of blocks used by each object, and the name of the
dbspace. sp_iqspaceinfo requires no parameters.
If run on a multiplex database, the default parameter is main, which returns the size of the shared IQ store.
If you supply no parameter, you must have at least one user-created object, such as a table, to receive results.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY DBSPACE System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
This output is from the sp_iqspaceinfo stored procedure run on the iqdemo database. Output for some
tables and indexes are removed from this example:
Contacts 19 IQ_SYSTEM_MAIN
Related Information
Shows information about space available and space used in the IQ store, IQ temporary store, RLV store, and IQ
global and local shared temporary stores.
Syntax
Returns
mainKBUsed The number of kilobytes of IQ main store space used by the database. Secondary multiplex nodes
return '(Null)'.
tempKBUsed The number of kilobytes of total IQ temporary store space in use by the database.
shTempLocalKBUsed The number of kilobytes of IQ local shared temporary store space in use by the database.
rlvLogKBUsed The number of kilobytes of RLV store space in use by the database.
Remarks
sp_iqspaceused returns several values as unsigned bigint out parameters. This system stored procedure can
be called by user-defined stored procedures to determine the amount of main, temporary, and RLV store space
in use.
sp_iqspaceused returns a subset of the information provided by sp_iqstatus, but allows the user to return
the information in SQL variables to be used in calculations.
If run on a multiplex database, this procedure applies to the server on which it runs. Also returns space used on
IQ_SHARED_TEMP.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● ALTER DATABASE System privileges GRANT System Privilege Statement [page 1511]
● MANAGE ANY DBSPACE
● MONITOR
Side Effects
None
sp_iqspaceused requires seven output parameters. This example creates a user-defined stored procedure
myspace that declares the seven output parameters, then calls sp_iqspaceused:
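A sketch of such a wrapper procedure follows. The parameter names and their order are assumptions inferred from the Returns list above; check the procedure definition on your server before relying on them:

create procedure dbo.myspace()
begin
  declare mainKB unsigned bigint;
  declare mainKBUsed unsigned bigint;
  declare tempKB unsigned bigint;
  declare tempKBUsed unsigned bigint;
  declare shTempTotalKB unsigned bigint;
  declare shTempLocalKBUsed unsigned bigint;
  declare rlvLogKB unsigned bigint;
  call sp_iqspaceused( mainKB, mainKBUsed, tempKB, tempKBUsed,
    shTempTotalKB, shTempLocalKBUsed, rlvLogKB );
  select mainKB, mainKBUsed, tempKB, tempKBUsed,
    shTempTotalKB, shTempLocalKBUsed, rlvLogKB;
end

The out parameters are unsigned bigint, so the declared variables can be used directly in further calculations.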
myspace
Related Information
Returns serial number, name, description, value, and unit specifier for each available statistic, or a specified
statistic.
Syntax
sp_iqstatistics [ <stat_name> ]
Parameter
stat_name
Returns
When stat_name is provided, sp_iqstatistics returns one row for the given statistic, or zero rows if the
name is invalid. When invoked without any parameter, sp_iqstatistics returns all statistics.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
MANAGE ANY STATISTICS System privilege GRANT System Privilege Statement [page 1511]
Side Effects
None
Example
● The following example displays a single statistic, the total CPU time:
sp_iqstatistics 'CpuTotalTime'
Syntax
sp_iqstatus
Remarks
Shows status information about the current database, including the database name, creation date, page size,
number of dbspace segments, block usage, buffer usage, I/O, backup information, and so on.
sp_iqstatus displays an out-of-space status for main and temporary stores. If a store runs into an out-of-
space condition, sp_iqstatus shows Y in the store’s out-of-space status display value.
Memory used by the row-level versioning (RLV) store can be monitored with sp_iqstatus. The RLV memory
limit row displays the memory limit as specified by the -iqrlvmem server option, or the sa_server_option
rlv_memory_mb. The RLV memory used row displays the amount of memory used by the RLV store.
Memory used by direct-attached storage devices in the cache dbspace can be monitored with sp_iqstatus:
Measurement Description
Number of Cache Dbspace Files The number of cache dbspace dbfiles in the database.
Cache Dbspace Block Identifies the cache dbspace blocks and the corresponding storage device dbfile
name.
Cache Dbspace IQ Blocks Used The number of IQ blocks used, compared to the total number of IQ blocks. Usage is
also shown as a percentage. If the percentage is high, consider adding more storage.
sp_iqspaceused returns a subset of the same information as provided by sp_iqstatus, but allows the user
to return the information in SQL variables to be used in calculations.
To display space that can be reclaimed by dropping connections, use sp_iqstatus and add the results from
the two returned rows:
The above example output shows that one active write transaction created 2175 MB and destroyed 2850 MB of data. The total data consumed in transactions and not yet released is 4818 MB (1968 MB + 2850 MB).
sp_iqstatus omits blocks that will be deallocated at the next checkpoint. These blocks do, however, appear in sp_iqdbspace output as type X.
In a multiplex, this procedure also lists information about the shared IQ store and IQ temporary store. If
sp_iqstatus shows a high percentage of main blocks in use on a multiplex server, run sp_iqversionuse to
see which versions are being used and the amount of space that can be recovered by releasing versions.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● ALTER DATABASE System privileges GRANT System Privilege Statement [page 1511]
● MANAGE ANY DBSPACE
● MONITOR
● SERVER OPERATOR
Side Effects
None
Example
Note
This example includes a sample user dbspace named iq_main, which may not be present in your own
databases.
SAP IQ (TM) Copyright (c) 1992-2016 by SAP AG or an SAP affiliate company. All rights
reserved.
Catalog Format: 2
DB Updated: 1
RLV Status: RW
The following is a key to understanding the Main IQ I/O and Temporary IQ I/O output codes:
● I – Input:
 ○ L – Logical pages read (“Finds”)
 ○ P – Physical pages read
● O – Output:
 ○ C – Pages created
 ○ D – Pages dirtied
 ○ P – Pages physically written
 ○ D – Pages destroyed
● C – Compression ratio
Monitors multiple components of SAP IQ, including the management of buffer cache, memory, threads, locks,
I/O functions, and CPU utilization.
Syntax
sp_iqsysmon start_monitor
start_monitor
Starts monitoring.
stop_monitor
See the Remarks [page 774] section for a complete list of abbreviations.
If you specify more than one section, separate the section abbreviations using spaces, and enclose the list
in single or double quotes. The default is to display all sections.
For sections related to the IQ main store, you can specify main or temporary store by prefixing the section
abbreviation with 'm' or 't', respectively. Without the prefix, both stores are monitored. For example, if you
specify 'mbufman', only the IQ main store buffer manager is monitored. If you specify 'mbufman tbufman'
or 'bufman', both the main and temporary store buffer managers are monitored.
start_monitor
Starts monitoring.
stop_monitor
Stops monitoring and writes the remaining output to the log file.
filemode
Specifies that sp_iqsysmon is running in file mode. In file mode, a sample of statistics appears for every interval in the monitoring period. By default, the output is written to a log file named <dbname>.<connid>-iqmon. Use the file_suffix option to change the suffix of the output file. See the <monitor_options> parameter for a description of the file_suffix option.
monitor_options
● -interval seconds – specifies the reporting interval, in seconds. A sample of monitor statistics is
output to the log file after every interval. The default is every 60 seconds, if the -interval option is not
specified. The minimum reporting interval is 2 seconds. If the interval specified for this option is invalid
or less than 2 seconds, the interval is set to 2 seconds.
The first display shows the counters from the start of the server. Subsequent displays show the
difference from the previous display. You can usually obtain useful results by running the monitor at
the default interval of 60 seconds during a query with performance problems or during a time of day
that generally has performance problems. A very short interval may not provide meaningful results.
The interval should be proportional to the job time; 60 seconds is usually more than enough time.
● -file_suffix suffix – creates a monitor output file named dbname.connid-suffix. If you do
not specify the -file_suffix option, the suffix defaults to iqmon. If you specify the -file_suffix option
and do not provide a suffix or provide a blank string as a suffix, no suffix is used.
● -append or -truncate – directs sp_iqsysmon to append to the existing output file or truncate the
existing output file, respectively. Truncate is the default. If both options are specified, the option
specified later in the string takes precedence.
● -section section(s) – specifies the abbreviation of one or more sections to write to the monitor
log file.
See the Remarks [page 774] section for a complete list of abbreviations.
The default is to write all sections. The abbreviations specified in the sections list in file mode are the
same abbreviations used in batch mode. When more than one section is specified, spaces must
separate the section abbreviations.
If the -section option is specified with no sections, none of the sections are monitored. An invalid
section abbreviation is ignored and a warning is written to the IQ message file.
Remarks
Note
sp_iqsysmon does not support the SAP IQ components Disk I/O and Lock Manager.
sp_iqsysmon in batch mode is similar to the SAP Adaptive Server Enterprise procedure
sp_sysmon.
File mode sp_iqsysmon writes the sample statistics in a log file for every interval period between
starting and stopping the monitor.
The first display in file mode shows the counters from the start of the server. Subsequent
displays show the difference from the previous display.
(temporary) – tbufalloc
(temporary) – tbufman
(temporary) – tbufpool
(temporary) – tfreelist
(temporary) – tprefetch
The sp_iqsysmon stored procedure monitors multiple components of SAP IQ, including the management of
buffer cache, memory, threads, locks, I/O functions, and CPU utilization.
STATS-NAME Definition
Large Memory Space Maximum Large Memory configured size (-iqlm value from params.cfg).
Large Memory Max Flexible Maximum memory granted for flexible operators. Example: Load Engine (hash sort
merge for hash or hash-range partitioned table and hash sort merge cursor).
Large Memory Num Flex Allocations The count of memory chunks allocated as flex memory.
Large Memory Flexible % Percentage of large memory used for flexible operators.
Large Memory Flexible used The total amount of memory allocated to flex users.
Large Memory Inflexible % Percentage of large memory used for inflexible operators (N-bit metadata structures, data buffer of column vector in load).
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
Batch Mode
● Starts the monitor in batch mode and displays all sections for the main and temporary stores:
sp_iqsysmon start_monitor
sp_iqsysmon stop_monitor
sp_iqsysmon start_monitor
sp_iqsysmon stop_monitor 'mbufman mbufpool'
sp_iqsysmon '00:10:00'
● Prints only the Memory Manager section of the sp_iqsysmon report after 5 minutes:
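The command for this example is not shown above; assuming memman is the section abbreviation for the Memory Manager, it would take this form:

sp_iqsysmon '00:05:00', 'memman'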
● Starts the monitor, executes two procedures and a query, stops the monitor, then prints only the Buffer
Manager section of the report:
sp_iqsysmon start_monitor
go
execute proc1
go
execute proc2
go
select sum(total_sales) from titles
go
sp_iqsysmon stop_monitor, bufman
go
● Prints only the Main Buffer Manager and Main Buffer Pool sections of the report after 2 minutes:
sp_iqsysmon '01:00:00','rlv'
● Runs the monitor in batch mode for 10 seconds and displays the consolidated statistics at the end of the
time period:
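Using the time-period form of the syntax shown in the earlier examples, a 10-second batch run would look like this:

sp_iqsysmon '00:00:10'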
File Mode
● Truncates and writes information to the log file every 2 seconds between starting the monitor and stopping
the monitor:
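A sketch of the corresponding commands, assuming the comma-separated option form used elsewhere for sp_iqsysmon; truncate is the default, so no -truncate option is required:

sp_iqsysmon start_monitor, 'filemode', '-interval 2'
sp_iqsysmon stop_monitor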
● Appends output for only the Main Buffer Manager and Memory Manager sections to an ASCII file with the
name dbname.connid-testmon. For the database iqdemo, writes results in the file iqdemo.2-testmon:
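A sketch of the start command, assuming mbufman and memman are the section abbreviations for the Main Buffer Manager and Memory Manager:

sp_iqsysmon start_monitor, 'filemode', '-append -file_suffix testmon -section mbufman memman'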
● Starts the monitor in file mode and writes statistics for Main Buffer Pool and Memory Manager to the log
file every 5 seconds:
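A sketch of the start command, assuming mbufpool and memman are the section abbreviations for the Main Buffer Pool and Memory Manager:

sp_iqsysmon start_monitor, 'filemode', '-interval 5 -section mbufpool memman'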
In this section:
Related Information
Example 1
The following example displays output for the Buffer Allocation (Main and Temporary) after 20 minutes:
==============================
Buffer Allocator (Main)
==============================
STATS-NAME VALUE
NActiveCommands 2
BufAllocMaxBufs 2275( 81.6% )
BufAllocAvailBufs 2115( 93.0% )
BufAllocReserved 160( 7.0% )
BufAllocAvailPF 750( 33.0% )
BufAllocSlots 100
BufAllocNPinUsers 0
BufAllocNPFUsers 2
BufAllocNPostedUsrs 0
BufAllocNUnpostUsrs 0
BufAllocPinQuota 0
Example 2
The following example displays output for the Buffer Manager (Main and Temporary) after 20 minutes:
==============================
Buffer Manager (Main)
==============================
STATS-NAME TOTAL NONE TXTPOS TXTDOC CMPACT BTREEV BTREEF BV VDO
DBEXT DBID SORT STORE GARRAY
Finds 80137 0 0 0 0 9046 3307 0
20829 0 0 0 0 275
Hits 80090 0 0 0 0 9015 3291 0
20829 0 0 0 0 275
Hit% 99.9 0 0 0 0 99.7 99.5 0
100 0 0 0 0 100
FalseMiss 26469 0 0 0 0 63 40 0
1097 0 0 0 0 0
UnOwnRR 48 0 0 0 0 31 16 0
1 0 0 0 0 0
Cloned 0 0 0 0 0 0 0 0
0 0 0 0 0 0
Creates 1557 0 0 0 0 60 179 0
256 0 0 0 0 58
Destroys 546 0 0 0 0 12 21 0
6 0 0 0 0 29
Dirties 7554 0 0 0 0 1578 585 0
0 0 0 0 0 0
RealDirties 2254 0 0 0 0 117 180 0
542 0 0 0 0 58
PrefetchReqs 80 0 0 0 0 0 0 0
74 0 0 0 0 0
PrefetchNotInMem 1 0 0 0 0 0 0 0
1 0 0 0 0 0
Example 3
The following example displays output for the Buffer Pool (Main and Temporary) after 20 minutes:
==============================
Buffer Pool (Main)
==============================
STATS-NAME TOTAL NONE TXTPOS TXTDOC CMPACT BTREEV BTREEF BV VDO DBEXT
DBID SORT STORE GARRAY
MovedToMRU 30514 0 0 0 0 0 0 0 0
0 0 1218 696 0
MovedToWash 258 0 0 0 0 0 0 0 0
0 0 0 256 0
RemovedFromLRU 30506 0 0 0 0 0 0 0 0
0 0 1218 694 0
RemovedFromWash 30503 0 0 0 0 0 0 0 0
0 0 1218 694 0
RemovedInScanMode 0 0 0 0 0 0 0 0 0
0 0 0 0 0
MovedToPSList 0 0 0 0 0 0 0 0 0
0 0 0 0 0
RemovedFromPSList 0 0 0 0 0 0 0 0 0
0 0 0 0 0
STATS-NAME (cont'd) BARRAY BLKMAP HASH CKPT BM TEST CMID RIDCA LOB
LVCRID FILE RIDMAP RVLOG
MovedToMRU 0 8575 124 0 19898 0 0 0
0 0 3 0 0
Example 4
The following example displays output for the Prefetch Manager (Main and Temporary) after 20 minutes:
==============================
Prefetch Manager (Main)
==============================
STATS-NAME VALUE
PFMgrNThreads 10
PFMgrNSubmitted 81
PFMgrNDropped 0
PFMgrNValid 0
PFMgrNRead 1
PFMgrNReading 0
PFMgrCondVar Locks 0 Lock-Waits 0 ( 0.0% ) Signals 0
Broadcasts 2 Waits 2
==============================
Example 5
The following example displays output for the IQ Store Free List (Main and Temporary) after 20 minutes:
==============================
IQ Store (Main) Free List
==============================
STATS-NAME VALUE
FLBitCount 74036
FLIsOutOfSpace NO
FLMutexLocks 0
FLMutexWaits 0 ( 0.0% )
==============================
IQ Store (Temporary) Free List
==============================
STATS-NAME VALUE
FLBitCount 4784
FLIsOutOfSpace NO
FLMutexLocks 0
FLMutexWaits 0 ( 0.0% )
Example 6
The following example displays output for Memory Manager, Thread Manager, CPU utilization, Transaction
Manager after 20 minutes:
==============================
Memory Manager
==============================
STATS-NAME VALUE
MemAllocated 67599536 ( 66015 KB )
MemAllocatedMax 160044816 ( 156293 KB )
MemAllocatedEver 1009672456 ( 986008 KB )
MemNAllocated 77309
MemNAllocatedEver 914028
MemNTimesLocked 0
MemNTimesWaited 0 ( 0.0 %)
==============================
Thread Manager
STATS-NAME VALUE
ThrNumOfCpus 4
ThreadLimit 99
ThrNumThreads 98 ( 99.0 %)
ThrReserved 15 ( 15.2 %)
ThrNumFree 55 ( 55.6 %)
NumThrUsed 44 ( 44.4 %)
UsedPerActiveCmd 22
ThrNTeamsInUse 5
ThrMaxTeams 7
NumTeamsAlloc 238
TeamThrAlloc 421
SingleThrAlloc 492
ThrMutexLocks 0
ThrMutexWaits 0 ( 0.0 %)
==============================
CPU time statistics
==============================
STATS-NAME VALUE
Elapsed Seconds 59.65 ( 25.0 %)
CPU User Seconds 37.79 ( 15.8 %)
CPU Sys Seconds 1.89 ( 0.8 %)
CPU Total Seconds 39.68 ( 16.6 %)
==============================
Transaction Manager
==============================
STATS-NAME VALUE
TxnMgrNPending 0
TxnMgrNBlocked 2
TxnMgrNWaiting 0
TxnMgrPCcondvar Locks 0 Lock-Wait 0 ( 0.0 %) Signals
0 Broadcasts 2 Waits 2
TxnMgrTxnIDseq 407
TxnMgrtxncblock Locks 0 Lock-Wait 0 ( 0.0 %)
TxnMgrVersionID 0
TxnMgrOAVI 0
TxnMgrVersionLock Locks 0 Lock-Wait 0 ( 0.0 %) Signals
0 Broadcasts 0 Waits 0
Example 7
The following example displays output for server context and catalog statistics after 20 minutes:
==============================
Context Server statistics
==============================
STATS-NAME VALUE
StCntxNumConns 1
StCntxNResource 16
StCntxNOrigResource 18
StCntxNWaiting 0
StCntxNWaited 0
StCntxNAdmitted 1116
StCntxLock Locks 0 Lock-Waits 0 ( 0.0 %)
StCntxCondVar Locks 0 Lock-Waits 0 ( 0.0 %)
==============================
Catalog, DB Log, and Repository statistics
==============================
Example 8
The following example displays output for IQ RLV In-Memory Store and Large Memory Allocator (LMA)
statistics after 20 minutes:
==============================
IQ In-Memory Store
==============================
STATS-NAME VALUE
RLV Memory Limit 2048 MB
RLV Memory Used 0 MB
RLV Chunks Used 0
==============================
Large Memory Allocator
==============================
STATS-NAME VALUE
Large Memory Space 2048 MB
Large Memory Max Fle 512 MB
Large Memory Num Fle 0
Large Memory Flexibl 0.5
Large Memory Flexibl 0 MB
Large Memory Inflexi 0.9
Large Memory Inflexi 0 MB
Large Memory Anti-St 0.5
Large Memory Num Con 0
Syntax
Syntax 1
<table_type> ::=
TEMP
| VIEW
| ALL
| <any_other_value>
sp_iqtable [ table_name='<tablename>' ],
[ table_owner='<tableowner>' ] , [ table_type='<tabletype>' ]
Go to:
● Returns
● Remarks
● Privileges
● Side Effects
● Examples
Parameters
table_name or tablename
Returns
Specifying one parameter returns only the tables that match that parameter. Specifying more than one
parameter filters the results by all of the parameters specified. Specifying no parameters returns all SAP IQ
tables in the database. There is no method for returning the names of local temporary tables.
● 'Y' – if the column belongs to a partitioned table and has one or more partitions whose
dbspace is different from the table partition's dbspace
● 'N' – if the column's table is not partitioned or each partition of the column resides in the
same dbspace as the table partition.
● Hash-range
● Range
● Hash
● None
Remarks
For Syntax 1, if you do not specify either of the first two parameters, but specify the next parameter in the
sequence, you must substitute NULL for the omitted parameters. For example, sp_iqtable
NULL,NULL,TEMP and sp_iqtable NULL,dbo,SYSTEM.
The <table_type> values ALL and VIEW must be enclosed in single quotes in Syntax 1.
For Syntax 2, the parameters can be specified in any order. Enclose them in single quotes.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following variations in syntax both return information about the table Departments:
sp_iqtable ('Departments')
sp_iqtable table_name='Departments'
IQ Main 16387
N contains the names and heads of the various departments in the (NULL)
sporting goods company
PartitionType isRlv
None F
sp_iqtable NULL,GROUPO
sp_iqtable table_owner='GROUPO'
16387 N contains the names and heads of the various departments (NULL)
in the sporting goods company
16387 N contains information such as names, salary, hire date and (NULL)
birthday
16387 N types of revenue and expenses that the sporting goods (NULL)
company has
16387 N sales orders that customers have submitted to the sport (NULL)
ing goods company
PartitionType isRlv
None F
None F
None F
None F
None F
None F
None F
None F
None F
Related Information
Syntax
sp_iqtablesize ( <table_owner>.<table_name> )
Parameters
table_owner
KBytes The physical table size, in kilobytes. If you divide the KBytes value by page size, you see the average on-disk page size.
Pages The number of IQ pages needed to hold the table in memory. Pages is the total number of IQ pages for the table. The unit of measurement for pages is IQ page size. All in-memory buffers (buffers in the IQ buffer cache) are the same size.
CompressedPages The number of IQ pages that are compressed, when the table is compressed (on disk). IQ pages
on disk are compressed. For example, if Pages is 1000 and CompressedPages is 992, this
means that 992 of the 1000 pages are compressed. CompressedPages divided by Pages is
usually near 100%, because most pages compress. An empty page is not compressed, since SAP
IQ does not write empty pages. IQ pages compress well, regardless of the fullness of the page.
NBlocks The number of IQ blocks. NBlocks is KBytes divided by IQ block size. Each IQ page on disk
uses 1 to 16 blocks. If the IQ page size is 128 KB, then the IQ block size is 8 KB. In this case, an
individual on-disk page could be 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, or 128
KB.
RlvLogPages The number of IQ pages needed to hold the RLV table log information on disk.
Remarks
Returns the total size of the table in KBytes and NBlocks (IQ blocks). Also returns the number of pages
required to hold the table in memory, and the number of IQ pages that are compressed when the table is
compressed (on disk). You must specify the <table_name> parameter with this procedure. If you are the
owner of <table_name>, then you do not have to specify the <table_owner> parameter.
Note
SAP IQ always reads and writes an entire page, not blocks. For example, if an individual page compresses to
88 KB, then IQ reads and writes the 88 KB in one I/O. The average page is compressed by a factor of 2 to 3.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
● MANAGE ANY DBSPACE System privileges GRANT System Privilege Statement [page 1511]
● ALTER ANY TABLE
● You own the table
Side Effects
None
Example
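The call that produces the output below can be sketched as follows, based on the owner and table name shown in the first output columns:

sp_iqtablesize ('DBA.t1')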
DBA t1 3
(Continued)
192 5 4
(Continued)
24 96 12288
Related Information
Syntax
sp_iqtableuse
UID Table unique identifier. UID is a number assigned by the system that uniquely identifies the instance of the table (where instance is defined when an object is created).
Remarks
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Related Information
Syntax
sp_iqtransaction
Returns
TxnID The transaction ID of this transaction control block. The transaction ID is assigned during begin transaction. It appears in the .iqmsg file in the BeginTxn, CmtTxn, and PostCmtTxn messages, and is the same as the Txn ID Seq that is logged when the database is opened.
CmtID The ID assigned by the transaction manager when the transaction commits. For active transactions, the CmtID is zero.
VersionID For an SAP IQ server and multiplex nodes, a value of 0 indicates that the transaction is unversioned, and the VersionID has not been assigned.
For the multiplex coordinator, the VersionID is assigned after the transaction establishes table locks. Multiplex secondary servers receive the VersionID from the coordinator. The VersionID is used internally by the SAP IQ in-memory catalog and the IQ transaction manager to uniquely identify a database version to all nodes within a multiplex database.
State The state of the transaction control block. This variable reflects internal SAP IQ implementation details and is subject to change in the future. Currently, transaction states are NONE, ACTIVE, ROLLING_BACK, ROLLED_BACK, COMMITTING, COMMITTED, and APPLIED.
NONE, ROLLING_BACK, ROLLED_BACK, COMMITTING, and APPLIED are transient states with a very small life span.
COMMITTED indicates that the transaction has completed and is waiting to be APPLIED, at which point a version that is invisible to any transaction is subject to garbage collection.
Once the transaction state is ROLLED_BACK, COMMITTED, or APPLIED, the transaction ceases to own any locks other than those held by open cursors.
IQConnID The 10-digit connection ID that is included as part of all messages in the .iqmsg file. This is a
monotonically increasing integer unique within a server session.
MainTableKBDr The number of kilobytes of IQ store space dropped by this transaction, but which persist on disk in the store because the space is visible in other database versions or other savepoints of this transaction.
TempTableKBCr The number of kilobytes of IQ temporary store space created by this transaction for storage of IQ temporary table data.
TempTableKBDr The number of kilobytes of IQ temporary table space dropped by this transaction, but which persist on disk in the IQ temporary store because the space is visible to IQ cursors or is owned by other savepoints of this transaction.
TempWorkSpaceKB For ACTIVE transactions, a snapshot of the work space in use at this instant by this transaction,
such as sorts, hashes, and temporary bitmaps. The number varies depending on when you run
sp_iqtransaction. For example, the query engine might create 60 MB in the temporary
cache but release most of it quickly, even though query processing continues. If you run
sp_iqtransaction after the query finishes, this column shows a much smaller number. When
the transaction is no longer active, this column is zero.
For ACTIVE transactions, this column is the same as the TempWorkSpaceKB column of
sp_iqconnection.
TxnCreateTime The time the transaction began. All SAP IQ transactions begin implicitly as soon as an active connection is established or when the previous transaction commits or rolls back.
CursorCount The number of open SAP IQ cursors that reference this transaction control block. If the transaction is ACTIVE, it indicates the number of open cursors created within the transaction. If the transaction is COMMITTED, it indicates the number of hold cursors that reference a database version owned by this transaction control block.
SpCount The number of savepoint structures that exist within the transaction control block. Savepoints
may be created and released implicitly. Therefore, this number does not indicate the number of
user-created savepoints within the transaction.
SpNumber The active savepoint number of the transaction. This is an implementation detail and might not
reflect a user-created savepoint.
MPXServerName Indicates whether an active transaction is from an internode communication (INC) connection. If it is from an INC connection, the value is the name of the multiplex server where the transaction originates. NULL if not from an INC connection. Always NULL if the transaction is not active.
GlobalTxnID The global transaction ID associated with the current transaction, 0 (zero) if none.
VersioningType The snapshot versioning type of the transaction; either table-level (the default), or row-level. Row-
level snapshot versioning (RLV) applies only to RLV-enabled tables. Once a transaction is started,
this value cannot change.
Blocking Indicates if connection blocking is enabled (True) or disabled (False). You set connection blocking using the BLOCKING database option. If true, the transaction blocks, meaning it waits for a conflicting lock to release before it attempts to retry the lock request.
BlockingTimeout Indicates the time, in milliseconds, a transaction waits for a locking conflict to clear. You set the timeout threshold using the BLOCKING_TIMEOUT database option. A value of 0 (default) indicates that the transaction waits indefinitely.
sp_iqtransaction returns a row for each transaction control block in the SAP IQ transaction manager. The
columns Name, Userid, and ConnHandle are the connection properties Name, Userid, and Number,
respectively. Rows are ordered by TxnID.
sp_iqtransaction output does not include connections without transactions in progress. To include all
connections, use sp_iqconnection.
Note
Although you can use sp_iqtransaction to identify users who are blocking other users from writing to a
table, sp_iqlocks is a better choice for this purpose.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
Related Information
sp_iqstatus Procedure
sp_iqversionuse Procedure [page 807]
Determining the Security Model Used by a Database [page 576]
Syntax
sp_iqunusedcolumn
Returns
Remarks
Columns from tables created in SYSTEM or local temporary tables are not reported.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
None
Example
Related Information
Reports IQ secondary (non-FP) indexes that were not referenced by the workload.
Syntax
sp_iqunusedindex
Remarks
Indexes from tables created in SYSTEM or local temporary tables are not reported.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
Related Information
Syntax
sp_iqunusedtable
Returns
Remarks
Tables created in SYSTEM and local temporary tables are not reported.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
The following table illustrates sample output from the sp_iqunusedtable procedure:
Related Information
Syntax
sp_iqversionuse
Returns
VersionID In SAP IQ databases, the VersionID is displayed as zero. For the multiplex coordinator, the
VersionID is the same as the TxnID of the active transaction and VersionID is the same as
the CmtID of a committed transaction. In multiplex secondary servers, the VersionID is the
CmtID of the transaction that created the database version on the multiplex coordinator. It is
used internally by the SAP IQ in-memory catalog and the SAP IQ transaction manager to uniquely
identify a database version to all nodes within a multiplex database.
WasReported Indicates whether the server has received usage information for this version.
MinKBRelease The minimum amount of space returned once this version is no longer in use.
MaxKBRelease The maximum amount of space returned once this version is no longer in use.
Remarks
The sp_iqversionuse system stored procedure helps troubleshoot situations where the database uses
excessive storage space due to multiple table versions.
If out-of-space conditions occur or sp_iqstatus shows a high percentage of main blocks in use on a multiplex
server, run sp_iqversionuse to find out which versions are being used and the amount of space that can be
recovered by releasing versions.
The procedure produces a row for each user of a version. Run sp_iqversionuse first on the coordinator to
determine which versions should be released and the amount of space in KB to be released when the version is
no longer in use. Connection IDs are displayed in the IQConn column for users connected to the coordinator.
Version usage due to secondary servers is displayed as the secondary server name with connection ID 0.
The amount of space is expressed as a range because the actual amount typically depends on which other
versions are released. The actual amount of space released can be anywhere between the values of
MinKBRelease and MaxKBRelease. The oldest version always has MinKBRelease equal to MaxKBRelease.
The WasReported column is used in a multiplex setting. WasReported indicates whether version usage
information has been sent from the secondary server to the coordinator. WasReported is 0 initially on a
coordinator, and changes to 1 after the version usage information has been received from the secondary servers.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
● The following displays sample output from the sp_iqversionuse system procedure:
MinKBRelease MaxKBRelease
============ ============
0 0
The following examples show multiplex output. The oldest version 42648 is in use by connection 108 on the
coordinator (<mpxw>). Committing or rolling back the transaction on connection 108 releases 7.9 MB of space.
Version 42686 is in use by secondary server (<mpxq>) according to output from the coordinator. Using the
secondary server output, the actual connection is connection 31. The actual amount of space returned from
releasing version 42686 depends on whether 42648 is released first.
WasReported is 0 for versions 42715 and 42728 on the coordinator because these are new versions that have
not yet been replicated. Since version 42728 does not appear on the secondary server output, it has not yet
been used by the secondary server.
call dbo.sp_iqversionuse
call dbo.sp_iqversionuse
42686 'mpxq' 31 1 0 0
42715 'mpxq' 0 1 0 0
Related Information
sp_iqstatus Procedure
sp_iqtransaction Procedure [page 799]
Determining the Security Model Used by a Database [page 576]
Syntax
Syntax 1
Syntax 2
sp_iqview [ view_name='<viewname>' ],
[ view_owner='<viewowner>' ] , [ view_type='<viewtype>' ]
Parameters
Returns
Specifying one of the parameters returns only the views with the specified view name or views that are owned
by the specified user. Specifying more than one parameter filters the results by all of the parameters specified.
Specifying no parameters returns all user views in a database.
Remarks
sp_iqview returns a view definition greater than 32K characters without truncation.
Syntax 1
For Syntax 1, if you do not specify either of the first two parameters, but do specify the next parameter in the
sequence, you must substitute NULL for the omitted parameters (for example, sp_iqview NULL,NULL,SYSTEM).
Note
Syntax 2
For Syntax 2, the parameters can be specified in any order, enclosed in single quotes.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
● The following variations in syntax both return information about the view deptview:
call sp_iqview('ViewSalesOrders')
sp_iqview view_name='ViewSalesOrders'
● The following variations in syntax both return all views that are owned by view owner GROUPO:
sp_iqview NULL,GROUPO
sp_iqview view_owner='GROUPO'
Related Information
Displays information about all current users and connections, or about a particular user or connection.
Syntax
Parameters
connhandle
An integer representing the connection ID. If this parameter is specified, sp_iqwho returns information
only about the specified connection. If the specified connection is not open, no rows are displayed in the
output.
user-name
A char(255) parameter representing a user login name. If this parameter is specified, sp_iqwho returns
information only about the specified user. If the specified user has not opened any connections, no rows
are displayed in the output. If the specified user name does not exist in the database, sp_iqwho returns
the error message "User <user-name> does not exist."
arg-type
(Optional) Can be specified only when the first parameter has been specified. The only value for
<arg-type> is "user". If the <arg-type> value is specified as "user", sp_iqwho interprets the first
parameter as a user name, even if the first parameter is numeric. If any value other than "user" is
specified for <arg-type>, sp_iqwho returns the error "Invalid parameter."
Returns
Userid The name of the user that opened the connection "ConnHandle".
BlockedOn The connection on which a particular connection is blocked; 0 if not blocked on any connection.
BlockUserid The owner of the blocking connection; NULL if there is no blocking connection.
ReqType The type of the request made through the connection; DO_NOTHING if no command is issued.
IQCmdType The type of SAP IQ command issued from the connection; NONE if no command is issued.
IQIdle The time in seconds since the last SAP IQ command was issued through the connection; in case of
no last SAP IQ command, the time since '01-01-2000' is displayed.
SAIdle The time in seconds since the last SA request was issued through the connection; in case of no
last SA command, the time since '01-01-2000' is displayed.
IQThreads The number of threads associated with the connection. At least one thread is started as soon as the
connection is opened, so the minimum value for IQThreads is 1.
TempTableSpaceKB The size of temporary table space in kilobytes; 0 if no temporary table space is used.
Remarks
The sp_iqwho stored procedure displays information about all current users and connections, or about a
particular user or connection.
loginame Userid
hostname Name of the host on which the server is running; currently not supported
blk_spid BlockedOn
dbname Omitted, as there is one server and one database for SAP IQ and they are the same for every
connection
block_xloid BlockUserid
If no parameters are specified, sp_iqwho displays information about all currently active connections and
users.
Either a connection handle or a user name can be specified as the first sp_iqwho parameter. The parameters
<connhandle> and <user-name> are exclusive and optional. Only one of these parameters can be specified
at a time. By default, if the first parameter is numeric, the parameter is assumed to be a connection handle. If
the first parameter is not numeric, it is assumed to be a user name.
sp_iqwho 1, "user"
When the <arg-type> "user" is specified, sp_iqwho interprets the first parameter 1 as a user name, not as a
connection ID. If a user named 1 exists in the database, sp_iqwho displays information about connections
opened by user 1.
Syntax Output
sp_iqwho 3, "user" Interprets 3 as a user name and displays connections opened by user 3. If
user 3 does not exist, returns the error "User 3 does not exist."
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
You must also have one of the following system privileges (see GRANT System Privilege Statement [page 1511]):
● DROP CONNECTION
● MONITOR
● SERVER OPERATOR
Side Effects
None
Standards
The SAP IQ sp_iqwho stored procedure incorporates the SAP IQ equivalents of columns displayed by the SAP
Adaptive Server Enterprise sp_who procedure.
Some SAP ASE columns are omitted, as they are not applicable to SAP IQ.
Controls collection of workload monitor usage information, and reports monitoring collection status.
sp_iqworkmon collects information for all SQL statements.
Syntax
<action> ::=
'start' | 'stop' | 'status' | 'reset'
<mode> ::=
'index' | 'table' | 'column' | 'all'
Parameters
action
Specifies the control action to apply by using one of the following values:
The statistics persist until they are cleared with the reset action, or until the server is restarted.
Statistics collection does not automatically resume after a server restart; it must be restarted using the
start action.
mode
Specifies the type of monitoring to control. The INDEX, TABLE, and COLUMN keywords individually control
monitoring of index usage, table usage, and column usage respectively. The default ALL keyword controls
monitoring of all usage monitoring features simultaneously.
Remarks
There is always a result set when you execute sp_iqworkmon. If you specify a specific mode (such as index),
only the row for that mode appears.
sp_iqworkmon 'stop'
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Example
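As a sketch of typical usage, assuming the action and mode values listed above (exact output columns may vary by release):

```sql
-- Start collecting usage statistics for indexes only
sp_iqworkmon 'start', 'index';

-- Report collection status; returns one row per mode,
-- or only the 'index' row if that mode is specified
sp_iqworkmon 'status';

-- Stop collection and clear the accumulated statistics
sp_iqworkmon 'stop';
sp_iqworkmon 'reset';
```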
Related Information
Syntax
Returns
Remarks
If no indexes are found with a ridmap version of 0, the message "No indexes require building" is returned.
Otherwise, the syntax to rebuild each identified column is returned.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
In this example, columns a, c, d, and f on table t1 are identified as having a ridmap version of 0 and require an
FP index rebuild to use the zone map feature.
Sample Code
In this example, no columns on the table t2 are identified as having a ridmap version of 0.
Sample Code
Catalog store stored procedures return result sets displaying database server, database, and connection
properties in tabular form.
These procedures are owned by the dbo user ID. The PUBLIC role has EXECUTE privilege on them.
In this section:
Returns information about the non-core SQL extensions used in a SQL statement.
Syntax
sa_ansi_standard_packages(
<standard>
, <statement>
)
Parameters
standard
Use this LONG VARCHAR parameter to specify the standard to use when determining the core extensions:
one of 'SQL:1999' or 'SQL:2003'.
statement
Use this LONG VARCHAR parameter to specify the SQL statement to evaluate.
Result set
Remarks
If there are no non-core extensions used for the statement, the result set is empty.
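For illustration, a hedged sketch of a call (the statement text is an arbitrary example; the result set lists any non-core extensions that the statement uses):

```sql
CALL sa_ansi_standard_packages(
    'SQL:2003',
    'SELECT TOP 3 * FROM Employees ORDER BY Surname'
);
```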
Privileges
None
Syntax
sa_audit_string( <string> )
Parameters
string
Use this parameter to specify the comment string to add to the audit information.
Remarks
If auditing is turned on, this system procedure adds a comment to the auditing information stored in the
transaction log. The string can be a maximum of 128 characters.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the MANAGE AUDITING system
privilege.
Side effects
None
Example
The following example uses sa_audit_string to add a comment to the transaction log:
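A minimal sketch (the comment text is arbitrary):

```sql
CALL sa_audit_string( 'Started audit testing.' );
```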
Breaks a CHAR string into terms and returns each term as a row along with its position.
Syntax
sa_char_terms(
<text>
[, <config_name>
[, <owner> ] ]
)
Parameters
text
Use this CHAR parameter to specify the string to break into terms.
config_name
Use this optional CHAR(128) parameter to specify the text configuration object to apply when processing
the string. The default value is 'default_char'.
owner
Use this optional CHAR(128) parameter to specify the owner of the text configuration object. The default
value is NULL. The current user is assumed if the owner is not specified or if NULL is specified.
Remarks
You can use this system procedure to find out how a string is interpreted when the settings for a text
configuration object are applied. This can be helpful when you want to know what terms would be dropped
during indexing or from a query string.
Privileges
None
Example
The following statement returns the terms in the CHAR string "It's a work-at-home day!" using the default
CHAR text configuration object, default_char:
SELECT * FROM sa_char_terms( 'It''s a work-at-home day!' );
term position
It 1
s 2
a 3
work 4
at 5
home 6
day 7
Syntax
sa_checkpoint_execute '<shell_commands>'
Parameters
shell_commands
One or more user commands to be executed in a system shell. The shell commands are specific to the
system shell. Commands are separated by a semicolon (;).
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have one of the following system privileges:
Remarks
Allows users to execute shell commands to copy a running database from the middle of a checkpoint
operation, when the server is quiescent. The copied database can be started and goes through normal
recovery, similar to recovery following a system failure.
sa_checkpoint_execute initiates a checkpoint, and then executes a system shell from the middle of the
checkpoint, passing the user commands to the shell. The server then waits for the shell to complete, creating
a time window of arbitrary size during which to copy database files. Most database activity stops while the
checkpoint is executing, so the duration of the shell commands should be limited to an acceptable user
response time.
Do not use sa_checkpoint_execute with interactive commands, as the server must wait until the
interactive command is killed. Supply override flags to disable prompting for any shell commands that might
become interactive; for example, the COPY, MOVE, and DELETE commands might prompt for confirmation.
The intended use of sa_checkpoint_execute is with disk mirroring, to split mirrored devices.
When using sa_checkpoint_execute on Windows to copy iqdemo.* files to another directory, all files are
copied except the .db and .log files, and error -910 is returned. This error is not a product defect but a
Windows limitation; the Windows copy command cannot copy the catalog files while they are open by the
database.
Side Effects
None
Example
Assuming you have created a subdirectory named backup, the following statement issues a checkpoint,
copies all of the iqdemo database files to the backup subdirectory, and completes the checkpoint:
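A hedged sketch of such a call, assuming a Unix shell and a backup subdirectory relative to the server's working directory (on Windows, a copy command with prompt-suppressing flags would be used instead):

```sql
CALL sa_checkpoint_execute( 'cp iqdemo.* backup/' );
```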
Related Information
Returns the most recently prepared SQL statement for each connection to the indicated database on the
server.
Syntax
sa_conn_activity( [ <connidparm> ] )
Parameters
connidparm
Use this optional INTEGER parameter to specify the connection ID number. The default is NULL.
Result set
If <connidparm> is less than zero, then information for the current connection is returned. If <connidparm>
is not supplied or is NULL, then information is returned for all connections to all databases running on the
database server.
The sa_conn_activity system procedure returns a result set consisting of the most recently prepared SQL
statement for the connection. Recording of statements must be enabled for the database server before calling
sa_conn_activity. To do this, specify the -zl option when starting the database server, or execute the following:
CALL sa_server_option('RememberLastStatement','ON');
This procedure is useful when the database server is busy and you want to obtain information about the last
SQL statement prepared for each connection. This feature can be used as an alternative to request logging.
Privileges
To obtain a list of all connection IDs, you must also have either the SERVER OPERATOR, MONITOR, or DROP
CONNECTION system privilege.
Side effects
None
Example
The following example uses the sa_conn_activity system procedure to display the most recently prepared
SQL statement for each connection.
CALL sa_conn_activity( );
Related Information
Syntax
sa_conn_info( [ <connidparm> ] )
Parameters
connidparm
This optional INTEGER parameter specifies the connection ID number. The default is NULL.
Result set
Remarks
If <connidparm> is less than zero, then a result set consisting of connection properties for the current
connection is returned. If <connidparm> is not supplied or is NULL, then connection properties are returned
for all connections to all databases running on the database server.
In a block situation, the BlockedOn value returned by this procedure allows you to check which users are
blocked, and who they are blocked on. The sa_locks system procedure can be used to display the locks held by
the blocking connection.
You can query any of these properties directly to investigate blocking.
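One such query, sketched under the assumption that Number and BlockedOn are among the result-set columns:

```sql
-- List each blocked connection and the connection it is blocked on
SELECT Number, BlockedOn
FROM sa_conn_info()
WHERE BlockedOn > 0;
```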
The value of LockRowID can be used to look up a lock in the output of the sa_locks procedure.
The value in LockIndexID can be used to look up a lock in the output of the sa_locks procedure. Also, the value
in LockIndexID corresponds to the primary key of the ISYSIDX system table, which can be viewed using the
SYSIDX system view.
Every lock has an associated table, so the value of LockTable can be used to unambiguously determine whether
a connection is waiting on a lock.
To obtain a list of all connection IDs, you must also have either the SERVER OPERATOR, MONITOR, or DROP
CONNECTION system privilege.
Side effects
None
Example
The following example uses the sa_conn_info system procedure to return a result set summarizing
connection properties for all connections to the server.
CALL sa_conn_info( );
The following example uses the sa_conn_info system procedure to return a result set showing which
connection created a temporary connection.
Connection 8 created the temporary connection that executed a CREATE DATABASE statement.
The following example uses the sa_conn_info system procedure to return the number of blocked connections.
Related Information
Syntax
Parameters
connidparm
Use this optional INTEGER parameter to specify the connection ID number. The default is NULL.
Result Set
Remarks
If <connidparm> is greater than zero, then information for the supplied connection is returned. If
<connidparm> is less than zero, then information for the current connection is returned. If <connidparm>
and <dbidparm> are not supplied or are NULL, then connection IDs for all connections to all databases
running on the database server are returned.
If <connidparm> is NULL and <dbidparm> is greater than or equal to zero, then connection IDs for only that
database are returned. If <connidparm> is NULL and <dbidparm> is less than zero, then connection IDs for
just the current database are returned.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have one of the following system privileges:
● SERVER OPERATOR
Side Effects
None
Example
The following example uses the sa_conn_list system procedure to display a list of connection IDs:
CALL sa_conn_list( );
Number
1,949
1,948
...
Related Information
Syntax
sa_conn_properties( [ <connidparm> ] )
Parameters
connidparm
Use this optional INTEGER parameter to specify the connection ID number. The default is NULL.
Remarks
Returns the connection ID as Number, and the PropNum, PropName, PropDescription, and Value for each
available connection property. Values are returned for all connection properties, database option settings
related to connections, and statistics related to connections. Valid properties with NULL values are also
returned.
If <connidparm> is less than zero, then property values for the current connection are returned. If
<connidparm> is not supplied or is NULL, then property values are returned for all connections to the current
database.
Privileges
To obtain a list of all connection IDs, you must also have either the SERVER OPERATOR, MONITOR, or DROP
CONNECTION system privilege.
Side effects
None
Example
The following example uses the sa_conn_properties system procedure to return a result set summarizing
connection property information for all connections.
CALL sa_conn_properties( );
79 37 ClientStmtCacheHits ...
79 38 ClientStmtCacheMisses ...
This example uses the sa_conn_properties system procedure to return a list of all connections, in
decreasing order by approximate CPU time:
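A hedged sketch of such a query, assuming an ApproxCPUTime connection property is exposed through the PropName and Value result-set columns:

```sql
-- Connections ordered by approximate CPU time, highest first;
-- Value is returned as a string, so cast it for numeric ordering
SELECT Number, Value
FROM sa_conn_properties()
WHERE PropName = 'ApproxCPUTime'
ORDER BY CAST( Value AS DOUBLE ) DESC;
```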
Syntax
sa_db_info( [ <dbidparm> ] )
Parameters
dbidparm
Use this optional INTEGER parameter to specify the database ID number. The default is NULL.
Result set
Remarks
If you specify a database ID, sa_db_info returns a single row containing the Number, Alias, File, ConnCount,
PageSize, and LogName for the specified database.
If <dbidparm> is greater than zero, then properties for the supplied database are returned. If <dbidparm> is
less than zero, then properties for the current database are returned. If <dbidparm> is not supplied or is NULL,
then properties for all databases running on the database server are returned.
Privileges
To execute this system procedure for other databases, you must also have either the SERVER OPERATOR or
MONITOR system privilege.
Side effects
None
Example
The following statement returns a row for each database that is running on the server:
CALL sa_db_info( );
Syntax
sa_db_option(
<opt>
, <val>
)
Parameters
opt
Use this LONG VARCHAR parameter to specify the name of the database option to set.
val
Use this LONG VARCHAR parameter to specify the new value for the database option.
Remarks
Database administrators can use this procedure to override some database options temporarily, without
restarting the database.
The option values that are changed using this procedure are reset to their default values when the database
shuts down. To change an option value every time the database is started, specify the corresponding database
option when the database is started (if one exists).
You must have EXECUTE privilege on the system procedure, as well as the SERVER OPERATOR system
privilege.
Side effects
None.
Example
For the following example to work, the database server must be started with the option -sk securefkey.
This example enables the SYSTEM secured feature key that includes MANAGE_KEYS, creates a new
secured feature key called SECURITY with case-sensitive authorization code NewSecurityCode, and then
uses the new secured feature key to enable the DiskSandbox option.
Syntax
sa_db_properties( [ <dbidparm> ] )
Parameters
dbidparm
Use this optional INTEGER parameter to specify the database ID number. The default is NULL.
Remarks
If you specify a database ID, the sa_db_properties system procedure returns the database ID number and the
PropNum, PropName, PropDescription, and Value for each available database property. Values are returned for
all database properties and statistics related to databases. Valid properties with NULL values are also returned.
If <dbidparm> is greater than zero, then database properties for the supplied database are returned. If
<dbidparm> is less than zero, then database properties for the current database are returned. If <dbidparm>
is not supplied or is NULL, then database properties for all databases running on the database server are
returned.
Privileges
To execute this system procedure for other databases, you must also have either the SERVER OPERATOR or
MONITOR system privilege.
Side effects
None
Example
The following example uses the sa_db_properties system procedure to return a result set summarizing
database properties for all databases when the invoker has SERVER OPERATOR or MONITOR system
privilege. Otherwise, database properties for the current database are returned.
CALL sa_db_properties( );
0 0 ConnCount ...
0 1 IdleCheck ...
0 2 IdleWrite ...
The following example uses the sa_db_properties system procedure to return a result set summarizing
database properties for a second database.
CALL sa_db_properties( 1 );
Related Information
Returns the list of all dependent views for a given table or view.
Syntax
sa_dependent_views(
[ <tbl_name>
[, <owner_name> ] ]
)
Parameters
tbl_name
Use this optional CHAR(128) parameter to specify the name of the table or view. The default is NULL.
owner_name
Use this optional CHAR(128) parameter to specify the owner for <tbl_name>. The default is NULL.
Remarks
Use this procedure to obtain the list of IDs of tables and their dependent views.
No errors are generated if no existing tables satisfy the specified criteria for table and owner names. The
following conditions also apply:
● If both <owner_name> and <tbl_name> are NULL, information is returned on all tables that have dependent
views.
● If <tbl_name> is NULL but <owner_name> is specified, information is returned on all tables owned by the
specified owner.
● If <tbl_name> is specified but <owner_name> is NULL, information is returned on any one of the tables with
the specified name.
Privileges
Side effects
None
Example
In this example, the sa_dependent_views system procedure is used to obtain the list of IDs for the views
that are dependent on the SalesOrders table. The procedure returns the table_id for SalesOrders, and the
dep_view_id for the dependent view, ViewSalesOrders.
In this example, the sa_dependent_views system procedure is used in a SELECT statement to obtain the
list of names of views dependent on the SalesOrders table. The procedure returns the ViewSalesOrders
view.
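Hedged sketches of the two calls described above; the join against the SYSTAB catalog view to resolve dep_view_id to a view name is an assumption:

```sql
-- Returns table_id/dep_view_id pairs for SalesOrders
CALL sa_dependent_views( 'SalesOrders' );

-- Resolve the dependent-view IDs to view names
SELECT t.table_name
FROM sa_dependent_views( 'SalesOrders' ) d
     JOIN SYSTAB t ON t.table_id = d.dep_view_id;
```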
Describes the names and types of columns contained in an ESRI shapefile. This system feature is for use with
the spatial data features.
Syntax
Parameters
shp_filename
A VARCHAR(512) parameter that identifies the location of the ESRI shapefile. The file name needs the .shp
extension and an associated .dbf file with the same base name located in the same directory. The path is
relative to the database server, not the client application.
srid
An INTEGER parameter that identifies the SRID for the geometries in the shapefile. Specify NULL to
indicate the column can store multiple SRIDs. Specifying NULL limits the operations that can be performed
on the geometry values.
encoding
(Optional) A VARCHAR(50) parameter that identifies the encoding to use when reading the shapefile. The
default is NULL. When encoding is NULL, the ISO-8859-1 character set is used.
Result Set
column_number INTEGER The ordinal position of the column described by this row,
starting at 1.
domain_name_with_size VARCHAR(160) The data type name, including size and precision (as used in
CREATE TABLE or CAST functions).
Remarks
The sa_describe_shapefile system procedure is used to describe the name and type of columns in an
ESRI shapefile. This information can be used to create a table to load data from a shapefile using the LOAD
TABLE or INPUT statements. Alternatively, this system procedure can be used to read a shapefile by specifying
the WITH clause for OPENSTRING...FORMAT SHAPEFILE.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. In addition:
● If the -gl database option is set to DBA, you must have one of the following system privileges:
○ ALTER ANY TABLE
○ ALTER ANY OBJECT
○ LOAD ANY TABLE
○ READ FILE
● If the -gl database option is set to ALL, no additional system privileges are needed.
● If the -gl database option is set to NONE, you must have the READ FILE system privilege.
Side Effects
None
Example
The following example displays a string that was used to create a table for storing shapefile data:
BEGIN
    DECLARE create_cmd LONG VARCHAR;
    SELECT 'create table if not exists esri_load( record_number int primary key, ' ||
        (SELECT list( name || ' ' || domain_name_with_size, ', ' ORDER BY column_number )
         FROM sa_describe_shapefile( 'c:\\esri\\tgr36069trt00.shp', 1000004326 )
         WHERE column_number > 1 ) || ' )'
    INTO create_cmd;
    SELECT create_cmd;
    EXECUTE IMMEDIATE create_cmd;
END
You can load the shapefile data into the table using the following statement (provided that you have the LOAD
ANY TABLE system privilege and that the -gl database option has not been set to NONE):
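A hedged sketch of such a load, assuming the LOAD TABLE ... FORMAT SHAPEFILE syntax and the esri_load table created in the example above:

```sql
LOAD TABLE esri_load
USING FILE 'c:\\esri\\tgr36069trt00.shp'
FORMAT SHAPEFILE;
```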
Syntax
sa_disable_auditing_type( <types> )
Parameters
types
Use this VARCHAR(128) parameter to specify a comma-delimited string containing one or more of the
following values:
all
Remarks
Use sa_disable_auditing_type to specify which types of auditing to exclude. This system procedure removes
the specified events from the current set of audit events. Use sa_enable_auditing_type to add events to the
current set of audit events. These system procedures set the PUBLIC auditing_options database option so the
setting is permanent.
Set the PUBLIC auditing database option to On or Off to enable or disable auditing.
If the set of events is empty and you set the PUBLIC auditing database option to On, no auditing information is
recorded. To re-establish auditing, you must use the sa_enable_auditing_type system procedure to specify
which types of information you want to audit.
If you set the PUBLIC auditing database option to Off, then no auditing information is recorded.
Specify the location where events are logged with the audit_log database option.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SET ANY SECURITY OPTION system
privilege.
Side effects
None
Example
The following example enables all auditing except for DDL and options auditing:
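A sketch, assuming 'DDL' and 'options' are valid values for the types parameter:

```sql
CALL sa_enable_auditing_type( 'all' );
CALL sa_disable_auditing_type( 'DDL,options' );
```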
Related Information
Reports information about space available for a transaction log, transaction log mirror, and/or temporary file.
Syntax
sa_disk_free_space( [ <p_dbspace_name> ] )
Parameters
p_dbspace_name
Use this VARCHAR(128) parameter to specify the name of a transaction log file, transaction log mirror file,
or temporary file. The default is NULL.
Specify SYSTEM to get information about the main database file, TEMPORARY or TEMP to get information
about the temporary file, TRANSLOG to get information about the transaction log, or TRANSLOGMIRROR
to get information about the transaction log mirror.
Result set
Remarks
If the <p_dbspace_name> parameter is not specified or is NULL, then the result set contains one row for each
of the transaction log, transaction log mirror, and temporary file, if they exist. If <p_dbspace_name> is
specified, then exactly one or zero rows are returned (zero if log or mirror is specified and there is no log or
mirror file).
You must have EXECUTE privilege on the system procedure, as well as the MANAGE ANY DBSPACE system
privilege.
Side effects
None
Example
The following example uses the sa_disk_free_space system procedure to return a result set containing
information about available space.
CALL sa_disk_free_space( );
Related Information
Syntax
sa_enable_auditing_type( <types> )
Parameters
types
all
Remarks
Use sa_enable_auditing_type to specify which types of auditing to include. This system procedure adds the
specified events to the current set of audit events. Use sa_disable_auditing_type to remove events from the
current set of audit events. These system procedures set the PUBLIC auditing_options database option so the
setting is permanent.
Set the PUBLIC auditing database option to On or Off to enable or disable auditing.
By default, all events are audited (types='all'). If you want a smaller set, use the sa_disable_auditing_type
system procedure to clear the events you are not interested in; or use the sa_disable_auditing_type system
procedure to clear all events and then use the sa_enable_auditing_type system procedure to specify which
types of auditing you want.
If the set of events is empty and you set the PUBLIC auditing database option to On, no auditing information is
recorded. To re-establish auditing, you must use the sa_enable_auditing_type system procedure to specify
which types of information you want to audit.
If you set the PUBLIC auditing database option to Off, then no auditing information is recorded.
Specify the location where events are logged with the audit_log database option.
You must have EXECUTE privilege on the system procedure, as well as the SET ANY SECURITY OPTION system
privilege.
Side effects
None
Example
The following example illustrates another way to enable only DDL and triggers auditing:
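A sketch, assuming 'DDL' and 'triggers' are valid values for the types parameter:

```sql
CALL sa_disable_auditing_type( 'all' );
CALL sa_enable_auditing_type( 'DDL,triggers' );
```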
Related Information
Syntax
sa_eng_properties( )
Remarks
Returns the PropNum, PropName, PropDescription, and Value for each available server property. Values are
returned for all database server properties and statistics related to database servers.
Privileges
Side effects
None
Example
CALL sa_eng_properties( );
1 IdleWrite ...
2 IdleChkPt ...
Related Information
Syntax
sa_external_library_unload ( [ '<external-library>' ] )
Parameters
external-library
(Optional) A LONG VARCHAR parameter that specifies the name of a library to be unloaded. If no library is
specified, all external libraries that are not in use are unloaded.
Remarks
If an external library is specified, but is in use or is not loaded, an error is returned. If no parameter is specified,
an error is returned if no loaded external libraries are found.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MANAGE ANY EXTERNAL OBJECT system privilege.
Side Effects
None
Example
● The following example unloads all libraries that are not currently in use:
CALL sa_external_library_unload();
Empties all pages for the current database in the database server cache.
Syntax
sa_flush_cache( )
Remarks
Database administrators can use this procedure to empty the contents of the database server cache for the
current database. This is useful in performance measurement to ensure repeatable results.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SERVER OPERATOR system
privilege.
Side effects
None
Example
The following example empties all pages for the current database in the database server cache.
CALL sa_flush_cache( );
Related Information
Syntax
sa_get_ldapserver_status()
Result Set
ldsrv_id UNSIGNED BIGINT A unique identifier for the LDAP server configuration object
that is the primary key and is used by the login policy to refer
to the LDAP server.
ldsrv_name CHAR(128) The name assigned to the LDAP server configuration object.
ldsrv_state The current state of the LDAP server configuration object:
● 1 – RESET
● 2 – READY
● 3 – ACTIVE
● 4 – FAILED
● 5 – SUSPENDED
ldsrv_last_state_change TIMESTAMP Indicates the time the last state change occurred. The value
is stored in Coordinated Universal Time (UTC), regardless of
the local time zone of the LDAP server.
Remarks
Use this procedure to see SYSLDAPSERVER column values before a checkpoint occurs and the contents of memory are written to the catalog on disk. The updates to the catalog columns ldsrv_state and ldsrv_last_state_change occur
asynchronously during checkpoint to the LDAP server object as the result of an event that changes the LDAP
server object state, such as a failed connection due to a failed LDAP directory server. The LDAP server object
state reflects the state of the LDAP directory server.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side effects
None
Syntax
sa_get_user_status( )
Result set
user_dn_cached_at TIMESTAMP The date and time that the user_dn column was last cached. This value is used to determine whether to purge an old DN. Regardless of the database server local time zone, the value is stored in Coordinated Universal Time (UTC). This value is not affected by simulated time zone.
password_change_first_user UNSIGNED INTEGER The user_id of the user who set the first part of a dual password; otherwise NULL.
password_change_second_user UNSIGNED INTEGER The user_id of the user who set the second part of a dual password; otherwise NULL.
Remarks
This procedure returns a result set that shows the current status of users. In addition to basic user information,
the procedure includes a column indicating if the user has been locked out and a column with a reason for the
lockout. Users can be locked out for the following reasons: locked due to policy, password expiry, or too many
failed attempts.
If the user is authenticated using LDAP User Authentication, the output includes the user's distinguished name
and the date and time that the distinguished name was found.
Privileges
To view information about other users, you must also have the MANAGE ANY USER system privilege.
Side effects
None
Example
The following example uses the sa_get_user_status system procedure to return the status of database
users.
CALL sa_get_user_status;
Related Information
Syntax
sa_http_header_info( [<header_parm>] )
Parameters
header_parm
Use this optional VARCHAR(255) parameter to specify an HTTP header name. The default is NULL.
Result set
Remarks
The sa_http_header_info system procedure returns header names and values. If you do not specify the header
name using the optional parameter, the result set contains values for all headers.
This procedure returns a non-empty result set if it is called while processing an HTTP request within a web
service.
Note
The sa_http_header_info system procedure may return multiple rows with the same name if the request
contains multiple HTTP headers with the same name.
Privileges
None
Example
The following procedure, which is called from a web service, illustrates the use of the
sa_http_header_info system procedure.
When the web service that calls this procedure is used, output similar to the following appears in the
database server messages window.
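A minimal sketch of such a procedure, assuming the result set exposes name and value columns and using a hypothetical procedure name:

CREATE PROCEDURE HTTPHeaderExample()
BEGIN
    -- Write each HTTP header of the current request to the console
    FOR hdr AS c CURSOR FOR
        SELECT name, value FROM sa_http_header_info()
    DO
        MESSAGE 'Header: ' || name || '=' || value TO CONSOLE;
    END FOR;
END;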
Syntax
sa_list_external_library( )
Returns a list of external libraries loaded in the engine along with their reference count.
The reference count is the number of instances of the library in the engine. An external library can be unloaded
by executing the sa_external_library_unload procedure only if its reference count is 0.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MANAGE ANY EXTERNAL OBJECT system privilege.
Side Effects
None
Example
The following example lists the external libraries and their reference count:
CALL sa_list_external_library()
Related Information
Syntax
sa_list_statements( )
num_client_prepares UNSIGNED INTEGER The number of times the client has prepared the identical statement text.
Remarks
The sa_list_statements system procedure can be used in a CALL statement or in the FROM clause of a SELECT
statement. The statement executing the sa_list_statements system procedure is not included in the result.
The SQLStatement column is rewritten from the original text due to semantic transform optimizations and
normalization in a manner similar to that of the REWRITE function. Sensitive information such as encryption
keys and passwords is replaced with ***. Because of these changes, the SQLStatement value requires
interpretation when comparing to statements in the application, which might have a different form.
Privileges
None
Side effects
None
The following example returns the list of statements for the connection:
CALL sa_list_statements();
The following example returns the list of statements that contribute to the max_statement_count resource
governor:
SELECT *
FROM sa_list_statements()
WHERE dropped_by_app=0;
Syntax
sa_locks(
[ <connection>
[, <creator>
[, <table_name>
[, <max_locks>
[, <object_type> ] ] ] ] )
Parameters
connection
(Optional) This INTEGER parameter specifies a connection ID number. The procedure returns lock
information only about the specified connection. The default value is 0 (or NULL), in which case
information is returned about all connections.
creator
(Optional) This CHAR(128) parameter specifies a user ID. The procedure returns information only about
the tables owned by the specified user. The default value for the creator parameter is NULL. When this
parameter is set to NULL, sa_locks returns the following information:
● If <table_name> is unspecified – locking information is returned for all tables in the database
● If <table_name> is specified – locking information is returned for tables with the specified name that
were created by the current user
table_name
(Optional) This CHAR(128) parameter specifies a table name. The procedure returns information only
about the specified tables. The default value is NULL, in which case information is returned about all tables.
max_locks
(Optional) This INTEGER parameter specifies the maximum number of locks for which to return
information.
object_type
(Optional) This CHAR(5) parameter limits your results to the type of object associated with the lock.
Specify ALL to return lock information for all object types. Specify TABLE to return lock information for
tables, global temporary tables, and materialized views. Specify MUTEX to return mutex information. If you
do not specify <object_type>, the procedure returns lock information for all object types.
Result Set
table_type CHAR(6) The type of table. This type is either BASE for a table, GLBTMP for a global temporary table, or MVIEW for a materialized view.
lock_class CHAR(8) The lock class. One of Schema, Row, Table, or Position.
lock_type CHAR(9) The lock type (this is dependent on the lock class).
row_identifier UNSIGNED BIGINT The identifier for the row. This is either an 8-byte row identifier or NULL.
Remarks
The sa_locks procedure returns a result set containing information about all the locks in the database. The
value in the lock_type column depends on the lock classification in the lock_class column. The following
values can be returned:
Schema
● Shared – shared schema lock
● Exclusive – (IQ catalog store tables only) exclusive schema lock
For schema locks, the row_identifier and index ID values are NULL.
Row
● Read – read lock
● Intent – intent lock
● ReadPK – read lock
● Write – write lock
● WriteNoPK – write lock
● Surrogate – surrogate lock
Row read locks can be short-term locks (scans at isolation level 1) or long-term locks at higher isolation levels. The lock_duration column indicates whether the read lock is of short duration because of cursor stability (Position) or long duration, held until COMMIT/ROLLBACK (Transaction). Row locks are always held on a specific row that has an 8-byte row identifier that is reported as a 64-bit integer value in the row_identifier column.
Position
● Phantom – (IQ catalog store tables only) phantom lock
● Insert – insert lock
Usually a position lock is also held on a specific row, and that row's 64-bit row identifier appears in the row_identifier column in the result set. However, Position locks can be held on entire scans (index or sequential), in which case the row_identifier column is NULL.
A position lock can be associated with a sequential table scan, or an index scan. The index_id column indicates
whether the position lock is associated with a sequential scan. If the position lock is held because of a
sequential scan, the index_id column is NULL. If the position lock is held as the result of a specific index scan,
the index identifier of that index is listed in the index_id column. The index identifier corresponds to the primary
key of the ISYSIDX system table, which can be viewed using the SYSIDX view. If the position lock is held for
scans over all indexes, the index ID value is -1.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MONITOR system privilege.
Side effects
None
Example
CALL sa_locks( );
Use the sa_locks system procedure to view the locks that are currently held in the database, including
information about the connection holding the lock, the lock duration, and the lock type. Execute a query that
joins the results of the sa_locks system procedure to a particular table by using the ROWID of the table in the
join predicate.
The result set of the sa_locks system procedure contains the row_identifier column that allows you to identify
the row in a table the lock refers to. It may not be necessary to specify the WITH NOLOCK clause; however, if
the query is issued at isolation levels other than 0, the query may block until the locks are released, which
reduces the usefulness of this method of checking.
Ensures that a skeletal instance of an object exists before executing an ALTER statement.
Syntax
sa_make_object(
<objtype>
, <objname>
[, <owner>
[, <tabname> ] ]
)
<objtype>:
'procedure'
| 'function'
| 'view'
| 'trigger'
| 'service'
| 'event'
objtype
Use this CHAR(30) parameter to specify the type of object being created.
objname
Use this CHAR(128) parameter to specify the name of the object to be created.
owner
Use this optional CHAR(128) parameter to specify the owner of the object to be created. The default is
NULL. If objtype is 'trigger', this argument specifies the owner of the table on which the trigger is to be
created.
tabname
This CHAR(128) parameter is required only if objtype is 'trigger', in which case you use it to specify the
name of the table on which the trigger is to be created. The default is NULL.
Remarks
This procedure can be used in scripts that are run repeatedly to create or modify a database schema; however,
its use is deprecated in favor of the CREATE OR REPLACE statement for the type of object you are creating
or modifying, wherever possible. Using CREATE OR REPLACE is more efficient and offers the correct behavior
when trying to create an object that already exists.
If you use the sa_make_object system procedure, you typically follow it by an ALTER statement that contains
the entire object definition.
Privileges
You must have EXECUTE privilege on the system procedure, as well as other privileges, as follows:
● Procedures or functions owned by other users – CREATE PROCEDURE, CREATE ANY PROCEDURE, or CREATE ANY OBJECT system privilege
● Views owned by other users – CREATE VIEW, CREATE ANY VIEW, or CREATE ANY OBJECT system privilege
If the trigger is on a table owned by you, you must have either the CREATE ANY TRIGGER or CREATE ANY
OBJECT system privilege.
If the trigger is on a table owned by another user, you must have either the CREATE ANY TRIGGER or the
CREATE ANY OBJECT system privilege. Additionally, you must have one of the following:
Side effects
Automatic commit
Example
The following statements ensure that a skeleton procedure definition is created, define the procedure, and
grant privileges on it. A script file containing these instructions could be run repeatedly against a database
without error.
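A sketch of such a script, using a hypothetical procedure name myproc (the procedure body and grantee are illustrative only):

CALL sa_make_object( 'procedure', 'myproc' );
ALTER PROCEDURE myproc()
BEGIN
    MESSAGE 'hello' TO CLIENT;
END;
GRANT EXECUTE ON myproc TO PUBLIC;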
The following example uses the sa_make_object system procedure to add a skeleton web service.
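A sketch, assuming a hypothetical service name my_service and a trivial RAW service definition:

CALL sa_make_object( 'service', 'my_service' );
ALTER SERVICE my_service
    TYPE 'RAW' AUTHORIZATION OFF USER DBA
    AS SELECT 'hello';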
Breaks an NCHAR string into terms and returns each term as a row along with its position.
Syntax
sa_nchar_terms
( '<char-string>' [, '<text-config-name>' [, '<owner>' ] ] )
Parameters
char-string
The NCHAR string to parse into terms.
text-config-name
(Optional) The text configuration object to apply when processing the string. The default value is
'default_nchar'.
owner
(Optional) The owner of the specified text configuration object. The default value is DBA.
Remarks
You can use sa_nchar_terms to find out how a string is interpreted when the settings for a text configuration
object are applied. This can be helpful when you want to know what terms would be dropped during indexing or
from a query string.
The syntax for sa_nchar_terms is similar to the syntax for the sa_char_terms system procedure.
Note
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Returns a summary of request timing information for all connections when the database server has request
timing logging enabled.
Syntax
sa_performance_diagnostics( )
Number INTEGER Returns the connection ID (a number) for the current connection.
● INT:ApplyRecovery
● INT:BackupDB
● INT:Checkpoint
● INT:Cleaner
● INT:CloseDB
● INT:CreateDB
● INT:CreateMirror
● INT:DelayedCommit
● INT:DiagRcvr
● INT:DropDB
● INT:EncryptDB
● INT:Exchange
● INT:FlushMirrorLog
● INT:FlushStats
● INT:HTTPReq
● INT:PromoteMirror
● INT:PurgeSnapshot
● INT:ReconnectMirror
● INT:RecoverMirror
● INT:RedoCheckpoint
● INT:RefreshIndex
● INT:ReloadTrigger
● INT:RenameMirror
● INT:RestoreDB
● INT:StartDB
● INT:VSS
LoginTime TIMESTAMP Returns the date and time the connection was established.
TransactionStartTime TIMESTAMP Returns a string containing the time the database was first modified after a COMMIT or ROLLBACK, or an empty string if no modifications have been made to the database since the last COMMIT or ROLLBACK.
LastReqTime TIMESTAMP Returns the time at which the last request for the specified
connection started. This property can return an empty string
for internal connections, such as events.
ReqType VARCHAR(255) Returns the type of the last request. If a connection has been cached by connection pooling, its ReqType value is CONNECT_POOL_CACHE.
ReqStatus VARCHAR(255) Returns the status of the request. It can be one of the following values:
ReqTimeUnscheduled DOUBLE Returns the amount of unscheduled time, or NULL if the -zt option was not specified.
ReqTimeBlockIO DOUBLE Returns the amount of time, in seconds, spent waiting for I/O to complete, or NULL if the -zt option was not specified.
ReqTimeBlockLock DOUBLE Returns the amount of time, in seconds, spent waiting for a lock, or NULL if the -zt option was not specified.
ReqTimeBlockContention DOUBLE Returns the amount of time, in seconds, spent waiting for atomic access, or NULL if the RequestTiming server property is set to Off.
ReqCountUnscheduled INTEGER Returns the number of times the connection waited for scheduling, or NULL if the -zt option was not specified.
ReqCountBlockIO INTEGER Returns the number of times the connection waited for I/O to complete, or NULL if the -zt option was not specified.
ReqCountBlockLock INTEGER Returns the number of times the connection waited for a lock, or NULL if the -zt option was not specified.
ReqCountBlockContention INTEGER Returns the number of times the connection waited for atomic access, or NULL if the -zt option was not specified.
CurrentProcedure VARCHAR(255) Returns the name of the procedure that a connection is currently executing. If the connection is executing nested procedure calls, the name is the name of the current procedure. If there is no procedure executing, an empty string is returned.
EventName VARCHAR(255) Returns the name of the associated event if the connection is running an event handler. Otherwise, an empty string is returned.
CurrentLineNumber INTEGER Returns the current line number of the procedure or compound statement a connection is executing. The procedure can be identified using the CurrentProcedure property. If the line is part of a compound statement from the client, an empty string is returned.
LastStatement LONG VARCHAR Returns the most recently prepared SQL statement for the current connection.
LastPlanText LONG VARCHAR Returns the long text plan of the last query executed on the connection. You control the remembering of the last plan by setting the RememberLastPlan option of the sa_server_option system procedure, or using the -zp server option.
AppInfo LONG VARCHAR Returns information about the client that made the connection. For HTTP connections, this includes information about the browser. For connections using older versions of jConnect or Open Client, the information may be incomplete.
Remarks
The sa_performance_diagnostics system procedure returns a result set consisting of a set of request
timing properties and statistics if the server has been told to collect the information. Recording of request
timing information must be turned on for the database server before calling sa_performance_diagnostics. To
do this, specify the -zt option when starting the database server or execute the following:
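Assuming the RequestTiming server option controls this setting, the statement would be similar to:

CALL sa_server_option( 'RequestTiming', 'YES' );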
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MONITOR system privilege.
Side Effects
None
Examples
● The following query identifies connections that have spent a long time waiting for database server requests
to complete.
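A sketch of such a query, using the timing columns described above (the 10-second threshold is arbitrary):

SELECT Number, ReqTimeUnscheduled, ReqTimeBlockIO,
       ReqTimeBlockLock, ReqTimeBlockContention
FROM sa_performance_diagnostics()
WHERE ReqTimeBlockLock > 10.0
ORDER BY ReqTimeBlockLock DESC;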
Reports information about the execution time for each line within procedures, functions, events, or triggers
that have been executed in a database.
Syntax
sa_procedure_profile(
[ <filename>
[, <save_to_file> ] ] )
Parameters
filename
(Optional) A LONG VARCHAR parameter that specifies the file to which the profiling information should be
saved, or from which file it should be loaded. The default is NULL. See the Remarks section below for more
about saving and loading the profiling information.
save_to_file
(Optional) An INTEGER parameter that specifies whether to save the profiling information to a file, or load
it from a previously stored file. The default is 0.
object_type CHAR(1) The type of object. The object_type column of the result set can be:
● P – stored procedure
● F – function
● E – event
● T – trigger
● C – ON UPDATE system trigger
● D – ON DELETE system trigger
object_name CHAR(128) The name of the stored procedure, function, event, or trigger. If the object_type is C or D, then this is the name of the foreign key for which the system trigger was defined.
table_name CHAR(128) The table associated with a trigger (the value is NULL for
other object types).
executions UNSIGNED INTEGER The number of times the line has been executed.
percentage DOUBLE The percentage of the total execution time required for the
specific line.
foreign_owner CHAR(128) The database user who owns the foreign table for a system
trigger.
foreign_table CHAR(128) The name of the foreign table for a system trigger.
Remarks
● Return detailed procedure profiling information – to do this, call the procedure without specifying any
arguments.
● Save detailed procedure profiling information to file – to do this, include the <filename> argument and
specify 1 for the <save_to_file> argument.
● Load detailed procedure profiling information from a previously saved file – to do this, include the
<filename> argument and specify 0 for the <save_to_file> argument. When using the procedure in
this way, the loaded file must have been created by the same database as the one from which you are
running the procedure; otherwise, the results may be unusable.
Since the result set includes information about the execution times for individual lines within procedures,
triggers, functions, and events, and what percentage of the total procedure execution time those lines use, you
can use this profiling information to fine-tune slower procedures that may decrease performance.
Before you can profile your database, you must enable profiling.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MONITOR or MANAGE PROFILING system privilege.
Side Effects
None
Examples
● The following statement returns the execution time for each line of every procedure, function, event, or
trigger that has been executed in the database:
CALL sa_procedure_profile( );
● The following statement returns the same detailed procedure profiling information as the example above,
and saves it to a file called detailedinfo.txt:
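Assuming the filename and save_to_file parameters described above, the call would be similar to:

CALL sa_procedure_profile( 'detailedinfo.txt', 1 );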
● Either of the following statements can be used to load detailed procedure profiling information from a file
called detailedinfo.txt:
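Assuming save_to_file defaults to 0 as described above, the two equivalent calls would be similar to:

CALL sa_procedure_profile( 'detailedinfo.txt' );
CALL sa_procedure_profile( 'detailedinfo.txt', 0 );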
Reports summary information about the execution times for all procedures, functions, events, or triggers that
have been executed in a database.
Syntax
sa_procedure_profile_summary (
Parameters
filename
(Optional) A LONG VARCHAR parameter that specifies the file to which the profiling information is saved,
or from which file it should be loaded. The default is NULL. See the Remarks section below for more about
saving and loading the profiling information.
save_to_file
(Optional) An INTEGER parameter that specifies whether to save the summary information to a file, or to
load it from a previously saved file. The default is 0.
Result Set
object_type CHAR(1) The type of object. The object_type column of the result set
can be:
● P – stored procedure
● F – function
● E – event
● T – trigger
● C – ON UPDATE system trigger
● D – ON DELETE system trigger
object_name CHAR(128) The name of the stored procedure, function, event, or trigger.
table_name CHAR(128) The table associated with a trigger (the value is NULL for
other object types).
executions UNSIGNED INTEGER The number of times each procedure has been executed.
foreign_owner CHAR(128) The database user who owns the foreign table for a system
trigger.
foreign_table CHAR(128) The name of the foreign table for a system trigger.
Remarks
Since the procedure returns information about the usage frequency and efficiency of stored procedures,
functions, events, and triggers, you can use this information to fine-tune slower procedures to improve
database performance.
Before you can profile your database, you must enable profiling.
If you want line by line details for each execution instead of summary information, use the
sa_procedure_profile procedure instead.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have either the MONITOR or MANAGE PROFILING system privilege.
Finally, you must also have the following privileges:
Side Effects
None
Examples
● The following statement returns the execution time for any procedure, function, event, or trigger that has
been executed in the database:
CALL sa_procedure_profile_summary( );
● The following statement returns the same summary information as the previous example, and saves it to a
file called summaryinfo.txt:
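Assuming the same parameter convention as sa_procedure_profile, the call would be similar to:

CALL sa_procedure_profile_summary( 'summaryinfo.txt', 1 );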
● Either of the following statements can be used to load stored summary information from a file called
summaryinfo.txt:
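Assuming save_to_file defaults to 0, the two equivalent calls would be similar to:

CALL sa_procedure_profile_summary( 'summaryinfo.txt' );
CALL sa_procedure_profile_summary( 'summaryinfo.txt', 0 );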
Retrieves information about deadlocks from an internal buffer created by the database server.
Syntax
sa_report_deadlocks( )
Result Set
who VARCHAR(128) The user ID associated with the connection that is waiting.
what LONG VARCHAR The command being executed by the waiting connection.
object_id UNSIGNED BIGINT The object ID of the table containing the row.
owner INT The connection handle of the connection owning the lock
being waited on.
rollback_operation_count UNSIGNED INT The number of uncommitted operations that may be lost if the transaction rolls back.
Remarks
When the log_deadlocks option is set to On, the database server logs information about deadlocks in an
internal buffer. You can view the information in the log using the sa_report_deadlocks system procedure.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MONITOR system privilege.
Side Effects
None
Returns a result set with rows between a specified start and end value.
Syntax
sa_rowgenerator(
[ <rstart>
[, <rend>
[, <rstep> ] ] ]
)
Parameters
rstart
Use this optional INTEGER parameter to specify the starting value. The default value is 0.
rend
Use this optional INTEGER parameter to specify the ending value that is greater than or equal to
<rstart>. The default value is 100.
rstep
Use this optional INTEGER parameter to specify the increment by which the sequence values are
increased. The default value is 1.
Remarks
The sa_rowgenerator procedure can be used in the FROM clause of a query to generate a sequence of
numbers. This procedure is an alternative to using the RowGenerator system table. You can use
sa_rowgenerator for such tasks as:
No rows are returned if you do not specify correct start and end values and a positive non-zero step value.
You can emulate the behavior of the RowGenerator table with the following statement:
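Assuming the result column is named row_num and that the RowGenerator table contains the values 1 through 255, the equivalent statement would be similar to:

SELECT row_num FROM sa_rowgenerator( 1, 255 );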
Privileges
Side effects
None
Example
The following query returns a result set containing one row for each day of the current month.
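A sketch of such a query, assuming the DATEADD and YMD functions and the row_num result column:

SELECT DATEADD( day, row_num - 1,
                YMD( YEAR( CURRENT DATE ), MONTH( CURRENT DATE ), 1 ) )
       AS day_of_month
FROM sa_rowgenerator( 1, 31 )
WHERE MONTH( day_of_month ) = MONTH( CURRENT DATE );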
The following query shows how many employees live in ZIP code ranges (0-9999), (10000-19999), ...,
(90000-99999). Some of these ranges have no employees, which causes a warning.
The following example generates 10 rows of data and inserts them into the NewEmployees table:
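A sketch, assuming a hypothetical NewEmployees table with id and name columns:

INSERT INTO NewEmployees( id, name )
SELECT row_num, 'Employee_' || row_num
FROM sa_rowgenerator( 1, 10 );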
The following example uses the sa_rowgenerator system procedure to create a view containing all integers.
The value 2147483647 in this example represents the maximum signed integer that is supported.
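A sketch, assuming a hypothetical view name and the row_num result column:

CREATE VIEW Integers AS
SELECT row_num AS n
FROM sa_rowgenerator( 0, 2147483647 );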
This example uses the sa_rowgenerator system procedure to create a view containing dates from
0001-01-01 to 9999-12-31. The value 3652058 in this example represents the number of days between
0001-01-01 and 9999-12-31, the earliest and latest dates that are supported.
The following query returns all years between 1900 and 2058 that have 54 weeks.
Syntax
opt
A CHAR(128) parameter that specifies the name of the server option to set.
val
A CHAR(128) parameter that specifies the new value for the server option.
The following table lists the valid values for <opt> and <val>:
AutoMultiProgrammingLevel YES (default); NO When set to YES, the database server automatically adjusts its multiprogramming level, which controls the maximum number of tasks that can be active at a time. If you choose to control the multiprogramming level manually by setting this option to NO, you can still set the initial, minimum, and maximum values for the multiprogramming level.
AutoMultiProgrammingLevelStatistics YES; NO (default) When set to YES, statistics for automatic multiprogramming level adjustments appear in the database server message log.
CacheSizingStatistics YES; NO (default) When set to YES, display cache information in the database server
messages window whenever the cache size changes.
CollectStatistics YES (default); NO When set to YES, the database server collects Performance Monitor statistics.
ConnsDisabled YES; NO (default) When set to YES, no other connections are allowed to any databases on the database server.
ConnsDisabledForDB YES; NO (default) When set to YES, no other connections are allowed to the current database.
ConsoleLogFile <filename>
The name of the file used to record database server message log
information. Specifying an empty string stops logging to the file.
Double any backslash characters in the path because this value is
a SQL string.
ConsoleLogMaxSize <file-size> (bytes) The maximum size, in bytes, of the file used to record database server message log information. When the database server message log file reaches the size specified by either this property or the -on server option, the file is renamed with the extension .old appended (replacing an existing file with the same name if one exists). The database server message log file is then restarted.
CurrentMultiProgrammingLevel Integer Sets the multiprogramming level of the database server. Default is
20.
DatabaseCleaner ON (default); OFF Do not change the setting of this option except on the recommendation of Technical Support.
DeadlockLogging ON; OFF (default); RESET; CLEAR Controls deadlock logging. The value deadlock_logging is also supported. The following values are supported:
DebuggingInformation YES; NO (default) Displays diagnostic messages and other messages for troubleshooting purposes. The messages appear in the database server messages window.
DiskSandbox ON; OFF (default) Sets the default disk sandbox settings for all databases started on the database server that do not have explicit disk sandbox settings. Changing the disk sandbox settings by using the sa_server_option system procedure does not affect databases already running on the database server. To use the sa_server_option system procedure to change disk sandbox settings, you must provide the secure feature key for the manage_disk_sandbox secure feature.
DropBadStatistics YES (default); NO Allows automatic statistics management to drop statistics that return bad estimates from the database.
DropUnusedStatistics YES (default); NO Allows automatic statistics management to drop statistics that have not been used for 90 consecutive days from the database.
IdleTimeout Integer (minutes) Disconnects TCP/IP connections that have not submitted a request for the specified number of minutes. This prevents inactive connections from holding locks indefinitely. The default is 240.
IPAddressMonitorPeriod Integer (seconds) The minimum value is 10 and the default is 0. For portable devices,
the default value is 120.
LivenessTimeout Integer (seconds) A liveness packet is sent periodically across a client/server TCP/IP network to confirm that a connection is intact. If the network server runs for a LivenessTimeout period without detecting a liveness packet, the communication is severed. The default is 120.
MessageCategoryLimit Integer Sets the minimum number of messages of each severity and category that can be retrieved using the sa_server_messages system procedure. The default is 400.
MinMultiProgrammingLevel Integer Default is the minimum of the value of the -gtc server option and the number of logical CPUs on the computer.
OptionWatchAction MESSAGE (default); ERROR Specifies the action that the database server takes when an attempt is made to set an option in the list. When OptionWatchAction is set to MESSAGE, and an option specified by OptionWatchList is set, a message appears in the database server messages window indicating that the option being set is on the options watch list. When OptionWatchAction is set to ERROR, an error is returned indicating that the option cannot be set because it is on the options watch list.
You can view the current setting for this property by executing:
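A sketch of the property query; PROPERTY is the documented server-property function, and the property name matching the option name is an assumption:

```sql
SELECT PROPERTY( 'OptionWatchAction' );
```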
ProcedureProfiling YES; NO (default); RESET; CLEAR Enables or disables procedure profiling, which provides information about the usage of stored procedures, user-defined functions, events, system triggers, and triggers by all connections.
ProfileFilterConn <connection-id>
Instructs the database server to capture profiling information for a
specific connection ID, without preventing other connections from
using the database. When connection filtering is enabled, the
value returned for SELECT
PROPERTY( 'ProfileFilterConn' ) is the connection ID
of the connection being monitored. If no ID has been specified, or
if connection filtering is disabled, the value returned is -1.
ProcessorAffinity Comma-delimited list of processor numbers and/or ranges. The default is that all processors are used, or the setting of the -gta option. Instructs the database server which logical processors to use on Windows or Linux. Specify a comma-delimited list of processor numbers and/or ranges. If the lower endpoint of a range is omitted, then it is assumed to be zero. If the upper endpoint of a range is omitted, then it is assumed to be the highest CPU known to the operating system. The in_use column returned by the sa_cpu_topology system procedure indicates the current processor affinity of the database server. In some cases, the database server might not use all of the specified logical processors.
ProfileFilterUser <user-id>
Instructs the database server to capture profiling information for a
specific user ID.
PropertyHistorySize <time>; <memory-size>; MAX; DEFAULT Specifies either the minimum amount of time to store tracked property values or the maximum amount of memory to use to store tracked property values. To set this property to a time, use the format '[HH:]MM:SS'. To set this property to a memory size, specify the memory size in bytes. For example, 1M. The default value is '00:10:00' (ten minutes), unless that amount of time violates the maximum size limit, in which case MAX is used as the default.
QuittingTime Valid date and time Instructs the database server to shut down at the specified time.
RememberLastPlan YES; NO (default) Instructs the database server to capture the long text plan of the last query executed on the connection. This setting is also controlled by the -zp server option. When RememberLastPlan is turned on, obtain the textual representation of the plan of the last query executed on the connection by querying the value of the LastPlanText connection property:
SELECT
CONNECTION_PROPERTY( 'LastPlanText' );
RememberLastStatement YES; NO (default) Instructs the database server to capture the most recently prepared SQL statement for each database running on the server. For stored procedure calls, only the outermost procedure call appears, not the statements within the procedure. When RememberLastStatement is turned on, you can obtain the current value of the LastStatement for a connection by querying the value of the LastStatement connection property:
SELECT
CONNECTION_PROPERTY( 'LastStatement' );
SELECT
CONNECTION_PROPERTY( 'LastStatement',
connection-id );
Note
When -zl is specified, or when the RememberLastStatement server setting is turned on, any user can call the sa_conn_activity system procedure or obtain the value of the LastStatement connection property to find out the most recently prepared SQL statement for any other user. Use this option with caution and turn it off when it is not required.
RequestFilterConn <connection-id>; -1 Filter the request logging information so that only information for a particular connection is logged. This filtering can reduce the size of the request log file when monitoring a database server with many active connections or multiple databases. You can obtain the connection ID by executing the following:
CALL sa_conn_info( );
CALL
sa_server_option( 'RequestFilterConn',
<connection-id> );
CALL
sa_server_option( 'RequestFilterConn',
-1 );
RequestFilterDB <database-id>; -1 Filter the request logging information so that only information for a particular database is logged. This can help reduce the size of the request log file when monitoring a server with multiple databases. You can obtain the database ID by executing the following statement when you are connected to the desired database:
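One way to query the database ID of the current database is through a connection property; the DBNumber property name is an assumption here:

```sql
SELECT CONNECTION_PROPERTY( 'DBNumber' );
```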
RequestLogFile <filename> The name of the file used to record request information. Specifying an empty string stops logging to the request log file. If request logging is enabled, but the request log file was not specified or has been set to an empty string, the server logs requests to the database server messages window. Double any backslash characters in the path because this value is a SQL string.
RequestLogging SQL; HOSTVARS; PLAN; PROCEDURES; TRIGGERS; OTHER; BLOCKS; REPLACE; ALL; YES; NONE (default); NO This call turns on logging of individual SQL statements sent to the database server for use in troubleshooting with the database server -zr and -zo options. Values can be combinations of the following, separated by either a plus sign (+), or a comma:
● PLAN – enables logging of execution plans (short form). If
logging of procedures (PROCEDURES) is enabled, execution
plans for procedures are also recorded.
● HOSTVARS – enables logging of host variable values. If you specify HOSTVARS, the information listed for SQL is also logged.
● PROCEDURES – enables logging of statements executed from within procedures.
● TRIGGERS – enables logging of statements executed from
within triggers.
● OTHER – enables logging of additional request types not included by SQL, such as FETCH and PREFETCH. However, if you specify OTHER but do not specify SQL, it is the equivalent of specifying SQL+OTHER. Including OTHER can cause the log file to grow rapidly and could negatively impact server performance.
● BLOCKS – enables logging of details showing when a connection is blocked and unblocked on another connection.
● REPLACE – at the start of logging, the existing request log is replaced with a new (empty) one of the same name. Otherwise, the existing request log is opened and new entries are appended to the end of the file.
● ALL – logs all supported information. This value is equivalent to specifying SQL+PLAN+HOSTVARS+PROCEDURES+TRIGGERS+OTHER+BLOCKS. This setting can cause the log file to grow rapidly and could negatively impact server performance.
● NO or NONE – turns off logging to the request log.
You can view the current setting for this property by executing:
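A sketch of the property query; the property name matching the option name is an assumption:

```sql
SELECT PROPERTY( 'RequestLogging' );
```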
RequestLogMaxSize <file-size> (bytes) The maximum size of the file used to record request logging information, in bytes. If you specify 0, then there is no maximum size for the request logging file, and the file is never renamed. This value is the default. When the request log file reaches the size specified by either the sa_server_option system procedure or the -zs server option, the file is renamed with the extension .old appended (replacing an existing file with the same name if one exists). The request log file is then restarted.
RequestLogNumFiles Integer The number of request log file copies to retain. If request logging is enabled over a long period, the request log file can become large. The -zn option allows you to specify the number of request log file copies to retain.
RequestTiming YES; NO (default) Instructs the database server to maintain timing information for each new connection. This feature is turned off by default. When it is turned on, the database server maintains cumulative timers for all new connections that indicate how much time the connection spent in the server in each of several states. The change is only effective for new connections, and lasts for the duration of each connection. You can use the sa_performance_diagnostics system procedure to obtain a summary of this timing information, or you can retrieve individual values by inspecting the following connection properties:
● ReqCountUnscheduled
● ReqTimeUnscheduled
● ReqCountActive
● ReqTimeActive
● ReqCountBlockIO
● ReqTimeBlockIO
● ReqCountBlockLock
● ReqTimeBlockLock
● ReqCountBlockContention
● ReqTimeBlockContention
rlv_memory_mb The minimum value is 1 MB. The maximum value is 2048. Any other value will set the amount of memory to 2048 MB. Specifies the maximum amount of memory (the RLV store), in MB, to reserve for row-level versioning. The default value is 2048 MB. If the value exceeds 2/3rds of the system virtual memory limit, the server generates an error.
SecureFeatures <feature-list> Allows you to manage secure features for a database server that is already running. The feature-list is a comma-separated list of feature names or feature sets. By adding a feature to the list, you limit its availability. To remove items from the list of secure features, specify a minus sign (-) before the secure feature name.
To call sa_server_option('SecureFeatures',...), the connection must have the ManageFeatures secure feature enabled on the connection. The -sf key (the system secure feature key) enables ManageFeatures, as well as all of the other features. So if you used the system secure feature key, then changing the set of SecureFeatures will not have any effect on the connection. But if you used another key (for example, a key that had been created using the create_secure_feature_key system procedure) then your connection may be immediately affected by the change, depending on what other features are included in the key.
CALL sa_server_option('SecureFeatures',
'CONSOLE_LOG,WEBCLIENT_LOG' );
After executing this statement, the list of secure features is set according to what has been changed.
StatisticsCleaner ON (default); OFF The statistics cleaner fixes statistics that give bad estimates by performing scans on tables. By default the statistics cleaner runs in the background and has a minimal impact on performance. Turning off the statistics cleaner does not disable the statistics governor, but when the statistics cleaner is turned off, statistics are only created or fixed when a query is run.
WebClientLogFile <filename> The name of the web service client log file. The web service client log file is truncated each time you use the -zoc server option or the WebClientLogFile property to set or reset the file name. Double any backslash characters in the path because this value is a string.
WebClientLogging ON; OFF (default) This option enables and disables logging of web service clients. The information that is logged includes HTTP requests and response data. Specify ON to start logging to the web service client log file, and specify OFF to stop logging to the file.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MANAGE PROFILING system privilege to use the following
options, which are related to application profiling or request logging:
● ProcedureProfiling
● ProfileFilterConn
● ProfileFilterUser
● RequestFilterConn
● RequestFilterDB
● RequestLogFile
● RequestLogging
● RequestLogMaxSize
● RequestLogNumFiles
Side Effects
None
Examples
● The following statement causes cache information to be displayed in the database server messages
window whenever the cache size changes:
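A sketch of one such statement; the option value shown is an assumption consistent with the DebuggingInformation row above:

```sql
CALL sa_server_option( 'DebuggingInformation', 'YES' );
```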
● The following statement enables logging of all SQL statements, procedure calls, plans, blocking and
unblocking events, and starts a new request log:
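A sketch of such a call, combining the RequestLogging values described above; the exact combination is illustrative:

```sql
CALL sa_server_option( 'RequestLogging',
    'SQL+PLAN+PROCEDURES+BLOCKS+REPLACE' );
```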
Related Information
Syntax
sa_set_http_header(
<fldname>
, <val>
[, <instance> ]
)
Parameters
fldname
Use this CHAR(128) parameter to specify a string containing the name of one of the HTTP header fields.
val
Use this LONG VARCHAR parameter to specify the value to which the named parameter should be set.
Setting a response header to NULL effectively removes it.
instance
Remarks
Setting the special header field @HttpStatus sets the status code returned with the request. The status code is
also known as the response code. For example, the following script sets the status code to 404 Not Found:
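A minimal sketch of such a script, using the @HttpStatus field described above:

```sql
CALL sa_set_http_header( '@HttpStatus', '404' );
```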
You can create a user-defined status message by specifying a three-digit status code with an optional colon-delimited text message. For example, the following script outputs a status code with the message "999 User Code":
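A hedged sketch, assuming the colon-delimited form described above:

```sql
CALL sa_set_http_header( '@HttpStatus', '999:User Code' );
```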
Note
A user-defined status text message is not translated into a database character set when logged using the LogOptions protocol option.
The body of the error message is inserted automatically. Only valid HTTP error codes can be used. Setting the
status to an invalid code causes a SQL error.
The sa_set_http_header procedure always overwrites the existing header value of the header field when called.
Response headers generated automatically by the database server can be removed. For example, the following
command removes the Expires response header:
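A sketch of such a command, relying on the NULL-removes-header behavior noted above:

```sql
CALL sa_set_http_header( 'Expires', NULL );
```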
Privileges
Side effects
None
Example
The following example sets the Set-Cookie header field to type=chocolate and specifies the third instance
of the header.
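A sketch of such a call; passing 3 as the instance parameter is an assumption based on the parameter list above:

```sql
CALL sa_set_http_header( 'Set-Cookie', 'type=chocolate', 3 );
```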
Syntax
sa_set_http_option(
<optname>
, <val>
)
Parameters
optname
Use this CHAR(128) parameter to specify a string containing the name of one of the HTTP options.
CharsetConversion
Use this option to control whether the result set is to be automatically converted from the character
set encoding of the database to the character set encoding of the client. The only permitted values are
ON and OFF. The default value is ON.
AcceptCharset
Use this option to specify the web server's preferences for a response character set encoding. One or
more character set encodings may be specified in order of preference. The syntax for this option
conforms to the syntax used for the HTTP Accept-Charset request-header field specification in
RFC2616 Hypertext Transfer Protocol.
An HTTP client such as a web browser may provide an Accept-Charset request header which specifies
a list of character set encodings ordered by preference. Optionally, each encoding may be given an
associated quality value (q=<qvalue>) which represents the client's preference for that encoding. By
default, the quality value is 1 (q=1). Here is an example:
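A hedged illustration of such a request header, with quality values in RFC2616 syntax:

```
Accept-Charset: iso-8859-5, unicode-1-1;q=0.8
```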
A plus sign (+) in the AcceptCharset HTTP option value may be used as a shortcut to represent the
current database character set encoding. The plus sign also indicates that the database character set
encoding should take precedence if the client also specifies the encoding in its list, regardless of the
quality value assigned by the client.
An asterisk (*) in the AcceptCharset HTTP option may be used to indicate that the web service should
use a character set encoding preferred by the client, as long as it is also supported by the server, when
client and server do not have an intersecting list.
When sending the response, the first character set encoding preferred by both client and web service
is used. The client's order of preference takes precedence. If no mutual encoding preference exists,
then the web service's most preferred encoding is used, unless an asterisk (*) appears in the web
service list in which case the client's most preferred encoding is used.
If a client does not send an Accept-Charset header, then one of the following actions is taken:
● If the AcceptCharset HTTP option has not been specified then the web server will use the
database character set encoding.
● If the AcceptCharset HTTP option has been specified then the web server will use its most
preferred character set encoding.
If a client does send an Accept-Charset header, then one of the following actions is taken:
● If the AcceptCharset HTTP option has not been specified then the web server will attempt to use
one of the client's preferred character set encodings, starting with the most preferred encoding. If
the web server does not support any of the client's preferred encodings, it will use the database
character set encoding.
● If the AcceptCharset HTTP option has been specified then the web server will attempt to use the
first preferred character set encoding common to both lists, starting with the client's most
preferred encoding. For example, if the client sends an Accept-Charset header listing, in order of
preference, encodings iso-a, iso-b, and iso-c and the web server prefers iso-b, then iso-a, and
finally iso-c, then iso-a will be selected.
If the intersection of the two lists is empty, then the web server's first preferred character set is used. For example, if the web service's preference list contains only encodings (such as iso-d) that the client did not list, then iso-d will be used.
If an asterisk ('*') was included in the AcceptCharset HTTP option, then emphasis would be placed
on the client's choice of encodings, resulting in iso-a being used. Essentially, the use of an asterisk
guarantees that the intersection of the two lists will not be empty.
The ideal situation occurs when both client and web service use the database character set encoding
since this eliminates the need for character set translation and improves the response time of the web
server.
If the CharsetConversion option has been set to OFF, then AcceptCharset processing is not performed.
SessionID
Use this option to create, delete or rename an HTTP session. The database connection is persisted
when a web service sets this option to create an HTTP session but sessions are not persisted across
server restarts. If already within a session context, this call will rename the session to the new session
ID. When called with a NULL value, the session will be deleted when the web service terminates.
The generated session keys are limited to 128 characters in length and unique across databases if
multiple databases are loaded.
SessionTimeout
Use this option to specify the amount of time, in minutes, that the HTTP session persists during inactivity. This timeout period is reset whenever an HTTP request uses the given session. The session is deleted when the timeout period expires.
val
Use this LONG VARCHAR parameter to specify the value to which the named option should be set.
Remarks
Use this procedure within statements or procedures that handle web services to set options.
When sa_set_http_option is called from within a procedure invoked through a web service, and either the
option or option value is invalid, an error is returned.
Privileges
Side effects
None
Example
The following example illustrates the use of sa_set_http_option to indicate the web service's preference for
database character set encoding. The UTF-8 encoding is specified as a second choice. The asterisk (*)
indicates that the web service is willing to use the character set encoding most preferred by the client,
provided that it is supported by the web server.
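A sketch of such a call; the '+' and '*' tokens follow the option syntax described above, and the exact list is illustrative:

```sql
CALL sa_set_http_option( 'AcceptCharset', '+, utf-8, *' );
```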
The following example illustrates the use of sa_set_http_option to correctly identify the character encoding
in use by the web service. In this example, the web server is connected to a 1251CYR database and is
prepared to serve HTML documents containing the Cyrillic alphabet to any web browser.
To illustrate the process of establishing the correct character set encoding to use, consider the following
Accept-Charset header delivered by a web browser such as Firefox to the web service. It indicates that the
browser prefers ISO-8859-1 and UTF-8 encodings but is willing to accept others.
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
The web service will not accept the ISO-8859-1 character set encoding since the web page to be
transmitted contains Cyrillic characters. The web service prefers ISO-8859-5 or UTF-8 encodings as
indicated by the call to sa_set_http_option. In this example, the UTF-8 encoding will be chosen since it is
agreeable to both parties. The database connection property CharSet indicates which encoding has been
selected by the web service. The sa_set_http_header procedure is used to indicate the HTML document's
encoding to the web browser.
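A hypothetical sketch of the sequence described; the option and header names are from this section, and the Content-Type value construction is an assumption:

```sql
-- Prefer ISO-8859-5, then UTF-8, for the 1251CYR database
CALL sa_set_http_option( 'AcceptCharset', 'iso-8859-5, utf-8' );
-- Report the encoding actually selected back to the browser
CALL sa_set_http_header( 'Content-Type', 'text/html; charset=' ||
    CONNECTION_PROPERTY( 'CharSet' ) );
```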
If the web browser does not specify an Accept-Charset, then the web service defaults to its first preference,
ISO-8859-5. The sa_set_http_header procedure is used to indicate the HTML document's encoding.
BEGIN
DECLARE sessionid VARCHAR(30);
DECLARE tm TIMESTAMP;
SET tm = NOW(*);
SET sessionid = 'MySessions_' ||
CONVERT( VARCHAR, SECONDS(tm)*1000 + DATEPART(millisecond,tm));
SELECT sessionid;
CALL sa_set_http_option('SessionID', sessionid);
END;
The following example sets the timeout for an HTTP session to 5 minutes:
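A sketch of that call, passing the timeout in minutes as described for SessionTimeout above:

```sql
CALL sa_set_http_option( 'SessionTimeout', '5' );
```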
Syntax
sa_stack_trace(
[ <stack_frames>
[, <detail_level>
[, <connection_id> ] ] ]
)
Parameters
stack_frames
'procedure'
Return procedures but not the outer-most statement. This is the default behavior.
'caller'
Return only the outer-most statement (the statement that arrived from the client).
'procedure+caller', 'caller+procedure'
Return both the procedures and the outer-most statement.
detail_level
'stack'
Include procedure names and line numbers. This is the default behavior.
'stack+sql', 'sql+stack'
Include the procedure names and line numbers, as well as the SQL text of the statement being
executed at each level.
connection_id
Use this optional UNSIGNED INTEGER parameter to filter the results returned to the specified connection
ID. If not specified, information for the current connection is returned.
Result set
LineNumber UNSIGNED INTEGER The line number of the call within the
procedure, trigger, or batch.
Remarks
Each record in the result set represents a single call on the stack. If the compound statement is not part of a
procedure, function, trigger, or event, then the type of batch (watcom_batch or tsql_batch) is returned instead
of the procedure name.
This function returns line numbers as found in the proc_defn column of the SYSPROCEDURE system table for
the procedure. These line numbers might differ from those of the source definition used to create the
procedure.
Privileges
Side effects
None.
Example
This example shows how to obtain the result set columns from the sa_stack_trace system procedure:
When this statement is executed outside of the context of a stored procedure, the result set is empty.
The following example shows the implementation of a general stack trace procedure that sends its results
to the client window:
CALL Proc1();
Results:
CALL Proc1();
-- Stack Trace: Snapshot from Proc3
-- 1 DBA proc3 3 call StackDump('Snapshot from Proc3')
-- 2 DBA proc2 3 call Proc3()
-- 3 DBA proc1 3 call Proc2()
-- Procedure completed
Related Information
Syntax
sa_table_page_usage( )
Result set
Remarks
The results include the same information provided by the Information utility. When the progress_messages
database option is set to Raw or Formatted, progress messages are sent from the database server to the client
while the sa_table_page_usage system procedure is running.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the MANAGE ANY DBSPACE system
privilege.
Side effects
None
Example
The following example obtains information about the page usage of the SalesOrderItems table.
Related Information
Syntax
sa_text_index_stats( )
Result Set
text_config_id UNSIGNED BIGINT ID of the text configuration referenced by the TEXT index
doc_count UNSIGNED BIGINT Total number of indexed column values in the TEXT index
Remarks
Use sa_text_index_stats to view statistical information for each TEXT index in the database.
The pending_length, deleted_length, and last_refresh values are NULL for IMMEDIATE REFRESH
TEXT indexes.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have one of the following system privileges:
Side Effects
None
Example
The following example returns statistical information for each TEXT index in the database:
CALL sa_text_index_stats( );
Lists all terms that appear in a TEXT index, and the total number of indexed values in which each term appears.
Syntax
sa_text_index_vocab (
'<text-index-name>',
'<table-name>',
'<table-owner>'
)
Parameters
text-index-name
A CHAR(128) parameter that specifies the name of the TEXT index.
table-name
A CHAR(128) parameter that specifies the name of the table on which the TEXT index is built.
table-owner
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have one of the following:
Remarks
sa_text_index_vocab returns all terms that appear in a TEXT index, and the total number of indexed values
in which each term appears (which is less than the total number of occurrences, if the term appears multiple
times in some indexed values).
Side Effects
None
Example
The following example executes sa_text_index_vocab to return all the terms that appear in the TEXT index
MyTextIndex on table Customers owned by GROUPO:
CALL sa_text_index_vocab( 'MyTextIndex', 'Customers', 'GROUPO' );
Term Frequency
a 1
Able 1
Acres 1
Active 5
Advertising 1
Again 1
... ...
Related Information
Syntax
sa_validate(
[ <tbl_name> ]
[, <owner_name> ]
[, <check_type> ]
[, <isolation_type> ]
)
Parameters
tbl_name
Use this optional CHAR(128) parameter to specify the name of a table or materialized view to validate. The
default is NULL, in which case the entire database is validated.
owner_name
Use this optional CHAR(128) parameter to specify an owner. When specified by itself, all tables and
materialized views owned by the owner are validated. The default is NULL.
check_type
Use this optional CHAR(10) parameter to specify the type of validation to perform. The possible values are:
EXPRESS
If this parameter is EXPRESS, each table is checked using a VALIDATE TABLE statement with the WITH EXPRESS CHECK clause.
NULL
If this parameter is NULL (the default), each table is checked using a VALIDATE TABLE statement.
isolation_type
Use this optional parameter when validating tables that have active transactions to prevent receiving false
errors about corrupt tables. The possible values are:
DATA LOCK
Prevents transactions from modifying the table schema or data by applying exclusive data locks on the specified tables. Concurrent transactions can read, but not modify, the table data or schema.
SNAPSHOT
Ensures that only committed data is checked by applying snapshot isolation. Transactions can read and modify the data. This clause requires that the database have snapshot isolation enabled (with the allow_snapshot_isolation database option). Because this clause uses snapshot isolation, performance is often affected.
Remarks
<tbl_name> and <owner_name> – The specified table or materialized view owned by the specified user, and all of its indexes, are validated.
Caution
If <isolation_type> is not specified, then only perform validation while no connections are making
changes to the database; otherwise, false errors may be reported indicating some form of database
corruption.
For databases with checksums enabled, a checksum is calculated for each database page and this value is
stored when the page is written to disk. You can use the Validation utility (dbvalid), the VALIDATE statement,
the sa_validate system procedure, or the Validate Database Wizard in SQL Central to perform checksum
validation, which consists of reading the database pages from disk and calculating the checksum for the page.
If the calculated checksum does not match the stored checksum for a page, the page has been modified or
corrupted while on disk or while writing to the page. If one or more pages have been corrupted, an error is returned and information about the invalid pages appears in the database server messages window.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the VALIDATE ANY OBJECT system
privilege.
Side effects
If <isolation type> is DATA LOCK, then exclusive data locks are applied to the specified table(s) or view(s).
Example
The following statement performs a validation of tables and materialized views owned by user pjones:
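A sketch of such a call, passing NULL for tbl_name so that all tables and materialized views owned by pjones are validated:

```sql
CALL sa_validate( NULL, 'pjones' );
```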
Related Information
Syntax
sa_verify_password( <curr_pswd> )
curr_pswd
Use this CHAR(128) parameter to specify the password of the current database user.
Returns
Remarks
This procedure is used by sp_password. If the password matches, 0 is returned and no error occurs. If the
password does not match, an error is diagnosed. The connection is not terminated if the password does not
match.
Privileges
Side effects
None
Example
The following example attempts to validate the current connection's password when the current user is
DBA or User1. An error occurs if the current password does not match.
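A minimal sketch; the password literal is a placeholder, and the user check mirrors the scenario described above:

```sql
IF CURRENT USER IN ( 'DBA', 'User1' ) THEN
    CALL sa_verify_password( 'MyCurrentPassword' );
END IF;
```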
Alters a previously-defined secure feature key by modifying the authorization key and/or the feature list.
Syntax
sp_alter_secure_feature_key (
<name>,
<auth_key>,
<features> )
Parameters
name
A VARCHAR (128) name for the secure feature key you want to alter. A key with the given name must
already exist.
auth_key
A CHAR(128) authorization key for the secure feature key. The authorization key must be either a non-empty string of at least six characters, or NULL, indicating that the existing authorization key is not to be changed.
features
A LONG VARCHAR, comma-separated list of secure features that the key can enable. The feature_list can
be NULL, indicating that the existing feature_list is not to be changed.
Remarks
This procedure allows you to alter the authorization key or feature list of an existing secure feature key.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. In addition, you must be the database server owner and have the manage_keys feature
enabled on the connection.
Side Effects
None
Generates a report that maps authorities to corresponding system roles and role IDs. This procedure returns a row for each authority.
Syntax
sp_auth_sys_role_info()
Result Set
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Syntax
sp_create_secure_feature_key (
<name>,
<auth_key>,
<features> )
name
A VARCHAR (128) name for the new secure feature key. This argument cannot be NULL or an empty string.
auth_key
A CHAR (128) authorization key for the secure feature key. The authorization key must be a non-empty
string of at least six characters.
features
A LONG VARCHAR comma-separated list of secure features that the new key can enable. Specifying "-"
before a feature means that the feature is not re-enabled when the secure feature key is set.
Remarks
This procedure creates a new secure feature key that can be given to any user. The system secure feature key is
created using the -sk database server option.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. In addition, you must be the database server owner and have the manage_keys feature
enabled on the connection.
Side Effects
None
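
As a sketch (the key name and authorization key are hypothetical), the following statement creates a secure feature key that can enable the manage_features and manage_keys secure features:
CALL sp_create_secure_feature_key( 'MyKey', 'MyAuthKey123', 'manage_features,manage_keys' );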
Displays all roles granted to a user-defined role or a user, or displays the entire hierarchical tree of roles.
Syntax
sp_displayroles(
[ <user_role_name> ],
[ <display_mode> ],
[ <grant_type> ] )
user_role_name
The name of the user or role for which granted roles and system privileges are displayed.
display_mode
● EXPAND_UP – shows all roles to which the input role or system privilege has been granted; that is, the role hierarchy tree for the parent levels.
● EXPAND_DOWN – shows all roles or system privileges granted to the input role or user; that is, the role
hierarchy tree for the child levels.
If no argument is specified (default), only the directly granted roles or system privileges appear.
grant_type
Result Set
For:
● Name = System privilege name – the results show the system privilege name instead of the system
privilege role name.
● Mode = Expand_down – parent_role_name is NULL for level 1 (directly granted roles). If no mode is
specified (default), role_level is 1 and parent_role_name is NULL, since only directly granted roles appear.
● Name = User name, with Mode = expand_up – no results are returned since a user resides at the top level
in any role hierarchy. Similarly, if Name = an immutable system privilege name, with Mode = Expand_down,
no results are returned because an immutable system privilege resides at the bottom level in any role
hierarchy.
● Default Mode – parent_role_name column is NULL and role_level is 1.
Side Effects
None
Examples
These examples assume that the following GRANT statements have been executed:
r7 NULL ADMIN 1
r1 r7 NO ADMIN 2
r2 r1 ADMIN 3
CHECKPOINT r1 NO ADMIN 3
r3 r2 NO ADMIN 4
MONITOR r2 NO ADMIN 4
r4 r3 ADMIN ONLY 5
r7 NULL ADMIN 1
r1 r7 NO ADMIN 2
r2 r1 ADMIN 3
CHECKPOINT r1 NO ADMIN 3
r3 r2 NO ADMIN 4
MONITOR r2 NO ADMIN 4
● In the following example, sp_displayroles( 'r3', 'expand_up', 'NO_ADMIN' ) produces output similar to:
r1 r7 NO ADMIN -2
r2 r1 ADMIN -1
r3 r2 NO ADMIN 0
r1 r7 NO ADMIN 0
Syntax
sp_drop_secure_feature_key ( <name> )
Parameters
name
Remarks
If the named key does not exist, an error is returned. If the named key exists, it is deleted as long as it is not the
last secure feature key that is allowed to manage secure features and secure feature keys. For example, the
system secure feature key cannot be dropped until there is another key that has the manage_features and
manage_keys secure features enabled.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. In addition, you must be the database server owner and have the manage_keys feature
enabled on the connection.
Side Effects
None
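
For example (the key name is hypothetical), the following statement drops a secure feature key:
CALL sp_drop_secure_feature_key( 'MyKey' );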
Syntax
Syntax 1
call sp_expireallpasswords
Syntax 2
sp_expireallpasswords
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. You must also have the MANAGE ANY USER system privilege.
Side Effects
None
Related Information
Syntax
Parameters
stmt_text
Use this optional LONG VARCHAR parameter to specify a SQL statement string. The default is NULL.
stmt_hash
Use this optional UNSIGNED BIGINT parameter to specify a statement hash. The default is NULL.
Result set
Remarks
This procedure returns one or more rows for each logged statement, with each row indicating the execution
plan that was used.
Specify the stmt_text parameter to see the hash for the specified statement, as well as any logged results for the statement. If there is no data for the hash, a single row containing the hash and NULL values is returned. If you already know the hash and want to fetch data for the statement, specify the stmt_hash parameter. Otherwise, specify neither parameter. The server does not permit specifying both parameters at once.
This system procedure returns all of the data collected by the server, unless you provide a parameter to refine
the results. By viewing these statistics, you can identify irregularities that can explain slow running statements.
Note
If the list of returned statements is long, then it is possible that not all of the data has been captured due to
space limitations.
You must have the MONITOR and MANAGE PROFILING privileges on the system procedure.
Side effects
None.
Example
The following query returns performance statistics for each logged statement that appears both in the procedure's result set and in the GTSYSPERFCACHESTMT view:
SELECT *
FROM dbo.sp_find_top_statements( ) TS
INNER JOIN SYS.GTSYSPERFCACHESTMT PS ON TS.stmt_hash = PS.stmt_hash
ORDER BY TS.stmt_hash;
Lists the HTTP and HTTPS connection listeners used for the specified database.
Syntax
sp_http_listeners( <database-ID> )
Parameters
database-ID
The ID of the database that the HTTP and HTTPS connection listeners are servicing. The default is the current database ID.
Result Set
uri_prefix LONG VARCHAR Returns the prefix of any URI that can be serviced by the connection listener. Includes the http:// or https:// identifier, the IP address, the port number (optional), and the database name if required.
Remarks
One row appears in the result set for each HTTP and HTTPS connection listener running. A row only appears if
a connection listener is available to execute web services on the specified database.
Privileges
To execute this system procedure for other databases, you must have any one of the following system
privileges:
● SERVER OPERATOR
● MONITOR
● MANAGE LISTENERS
Example
If you connect to database2 and run the same statement, then the database server returns the following
result set:
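
A minimal invocation, assuming you are connected to the database whose listeners you want to list, takes the form:
CALL dbo.sp_http_listeners();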
Returns information about all temporary and permanent mutexes and semaphores, including which
connection is holding each mutex and whether a semaphore is being waited for.
Syntax
sp_list_mutexes_semaphores( [<oid>] )
Parameters
oid
(For internal use only) The UNSIGNED BIGINT object ID parameter. Use the default parameter value NULL.
Result set
Remarks
None
Privileges
You must have EXECUTE privilege on the system procedure, and the MONITOR and UPDATE ANY MUTEX
SEMAPHORE system privileges.
Example
The following statement returns information about all of the mutexes and semaphores in the database:
CALL dbo.sp_list_mutexes_semaphores();
Syntax
sp_list_secure_feature_keys ( )
Result Set
features LONG VARCHAR The secure features enabled by the secure feature key.
Remarks
This procedure returns the names of existing secure feature keys, as well as the set of secure features that can
be enabled by each key.
If the user has the manage_features and manage_keys secure features enabled, then the procedure returns a
list of all secure feature keys.
If the user only has the manage_keys secure feature enabled, then the procedure returns keys that have the
same features or a subset of the same features that the current user has enabled.
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499]. In addition, you must be the database server owner and have the manage_keys feature
enabled on the connection.
Side Effects
None
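
The procedure takes no arguments, so a call is simply:
CALL sp_list_secure_feature_keys();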
Syntax
sp_login_environment( )
Remarks
Do not edit this procedure. Instead, to change the login environment, set the login_procedure option to point to
a different procedure.
Privileges
Side effects
None
Generates a report on the object privileges granted to the specified role or user, or on the object privileges granted on the specified object or dbspace.
Syntax
Parameters
object_name
(Optional) The name of an object or dbspace or a user or a role. If not specified, object privileges of the
current user are reported. Default value is NULL.
object_owner
(Optional) The name of the object owner for the specified object name. The object privileges of the
specified object owned by the specified object owner are displayed. This parameter must be specified to
obtain the object privileges of an object owned by another user or role. Default value is NULL.
object_type
If no value is specified, privileges on all object types are returned. Default value is NULL.
Result Set
Remarks
● If the input is an object (table, view, procedure, function, sequence, and so on), the procedure displays a list of all roles and users that have object privileges on the object.
● If the input is a role or user, the procedure displays a list of all object privileges granted to that role or user. When executing sp_objectpermission to display the object privileges of a user or role, the object privileges inherited through role grants are also displayed.
● If the input is a dbspace name, the procedure displays a list of all users or roles that have the CREATE privilege on the specified dbspace.
● By default, object type is NULL and the object privileges for all existing object types matching the specified
object name appear.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege Statement [page 1499]. Users can execute sp_objectpermission to obtain all the object privileges granted to them. Object owners can also execute this procedure to obtain the object privileges for self-owned objects.
Additional system privileges are needed to obtain object privileges for the following:
Object privileges granted to other users or granted on objects owned by other users
You must also have the MANAGE ANY OBJECT PRIVILEGE system privilege
Object privileges that are granted on objects owned by a role or granted to a role
You must also have the MANAGE ANY OBJECT PRIVILEGE system privilege or be a role administrator on
the role
Object privileges of a dbspace
Side Effects
None
● r5 owns a table named test_tab and a procedure named test_proc in the database.
● u5, which has administrative rights over r5, grants the following privileges:
○ GRANT SELECT ON r5.test_tab TO r2 WITH GRANT OPTION;
○ GRANT SELECT (c1), UPDATE (c1) ON r5.test_tab TO r6 WITH GRANT OPTION;
○ GRANT EXECUTE ON r5.test_proc TO r3;
● u6, which has administrative rights over r6, grants the following privileges:
○ GRANT SELECT (c1), REFERENCES (c1) ON r5.test_tab TO r3;
u5 r2 test_tab
u6 r3 test_tab
u6 r3 test_tab
u6 r3 test_proc
r5 TABLE u5
r5 COLUMN u6
r5 COLUMN u6
r5 PROCEDURE u6
Y NULL SELECT
N c1 SELECT
Y c1 REFERENCES
N NULL EXECUTE
u5 r2 test_tab
u5 r6 test_tab
u5 r6 test_tab
u6 r3 test_tab
u6 r3 test_tab
r5 TABLE u5
r5 COLUMN u5
r5 COLUMN u5
r5 COLUMN u6
r5 COLUMN u6
NULL SELECT Y
c1 SELECT Y
c1 UPDATE Y
c1 SELECT N
c1 REFERENCES N
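
Based on the scenario above, a call of the following form (a sketch; the parameters follow the object_name, object_owner, and object_type order described earlier) would report the privileges granted on the test_tab table:
CALL sp_objectpermission( 'test_tab', 'r5', 'TABLE' );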
Generates a report of the minimum system privileges required to run a stored procedure and pass the privilege
check for the procedure.
Syntax
sp_proc_priv ( [ <proc_name> ] )
If multiple system privileges, separated by commas, are displayed for a stored procedure, any one of them suffices to execute the stored procedure. If multiple rows are displayed for a stored procedure, one system privilege from each row is required to execute it.
This procedure lists only those system privileges for a stored procedure that always pass the privilege check for the procedure. There may be other system privileges that would pass the privilege check under certain conditions, but these are not listed by this procedure.
Result Set
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
Side Effects
None
Examples
If sp_proc_priv is invoked without any parameter, the procedure displays all the stored procedures and the system privileges required to execute each. Stored procedures that do not require any system privileges for their execution are not displayed.
proc_name privileges
sp_iqrowdensity MONITOR, MANAGE ANY DBSPACE, CREATE ANY INDEX, ALTER ANY INDEX, CREATE
ANY OBJECT, ALTER ANY OBJECT
sp_iqworkmon MONITOR
sp_iqindexsize MANAGE ANY DBSPACE, ALTER ANY INDEX, ALTER ANY OBJECT
sp_iqemptyfile INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE, ALTER ANY TABLE,
LOAD ANY TABLE, TRUNCATE ANY TABLE, ALTER ANY OBJECT
... ...
If sp_proc_priv is invoked with a procedure name parameter, it returns the system privileges required to
execute that procedure. If no system privileges are required, it lists "No Privilege Required" against the
procedure.
proc_name privileges
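
As an illustrative call using one of the procedures listed above, the following statement reports the privileges required for a single procedure:
CALL sp_proc_priv( 'sp_iqworkmon' );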
Returns values for all database server properties tracked by the database.
Syntax
Parameters
property
Use this VARCHAR(255) parameter to specify the name of the database server property to report. If NULL, then all currently monitored properties are reported. The default is NULL.
min_ticks
Specify a tick value to return all recorded property values with a ticks value equal to or greater than the specified value. The default is NULL.
Result set
time_recorded TIMESTAMP WITH TIME ZONE The system time when this value was
recorded.
Remarks
This system procedure returns results for database server properties being tracked by any database running on the database server, as well as those specified by the -phl database server option. The database server uses ticks, measured by your computer's system clock, to track the chronological order in which property values are recorded. Each recorded value has a monotonically increasing tick value, along with an associated timestamp measured in GMT.
If <property-name> is NULL, then all database server property values are returned.
If <min_ticks> is NULL, then all property values for the selected properties (or all properties if <property-
name> is NULL) are returned.
If the database is restarted, then property history data is only kept for properties currently being tracked by
another running database.
If the database server is restarted, then property history data and tracking settings are lost. Desired tracking
settings must be re-supplied.
Database-specific property tracking settings are also lost if all of the following are true:
Tip
To maintain database-specific tracking settings, create a database start-up event to mimic the persistence
of these settings.
Privileges
Side effects
None
Example
To list all of the recorded database server property values in descending order, execute the following
statement:
Produces a list of the columns in a remote table, and a description of their data types.
Syntax
sp_remote_columns(
<@server_name>
, <@table_name>
[, <@table_owner>
[, <@table_qualifier> ] ]
)
Parameters
@server_name
Use this CHAR(128) parameter to specify a string containing the server name as specified by the CREATE
SERVER statement.
@table_name
Use this CHAR(128) parameter to specify the name of the remote table.
@table_owner
Use this optional CHAR(128) parameter to specify the owner of <table_name>. The default is '%'.
@table_qualifier
Use this optional CHAR(128) parameter to specify the name of the database in which <table_name> is located. The default is '%'.
Remarks
The server must be defined with the CREATE SERVER statement to use this system procedure.
If you are entering a CREATE EXISTING TABLE statement and you are specifying a column list, it may be helpful
to get a list of the columns that are available on a remote table. sp_remote_columns produces a list of the
columns on a remote table and a description of their data types. If you specify a database, you must either
specify an owner or provide the value NULL.
If the table does not exist on the remote server, the procedure returns an empty result set.
Privileges
Side effects
None
N/A
Example
The following example returns information about the columns in the ULProduct table on the remote SAP IQ
database server named RemoteSA. The table owner is DBA.
The following example returns information about the columns in the SYSOBJECTS table in the Adaptive
Server Enterprise database Production using the remote server named RemoteASE. The table owner is
unspecified.
The following example returns information about the columns in the Customers table in the Microsoft
Access database c:\users\me\documents\MyAccesDB.accdb using the remote server MyAccessDB.
The Microsoft Access database does not have a table owner so NULL is specified.
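
The first and third examples described above could be expressed as calls of the following form (a sketch; the server, table, and owner names are those given in the descriptions):
CALL sp_remote_columns( 'RemoteSA', 'ULProduct', 'DBA' );
CALL sp_remote_columns( 'MyAccessDB', 'Customers', NULL );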
Provides information about tables with foreign keys on a specified primary table.
Syntax
sp_remote_exported_keys(
<@server_name>
, <@table_name>
[, <@table_owner>
[, <@table_qualifier> ] ]
)
Parameters
@server_name
Use this CHAR(128) parameter to specify the server the primary table is located on.
@table_name
Use this CHAR(128) parameter to specify the table containing the primary key.
@table_owner
Use this optional CHAR(128) parameter to specify the primary table's owner. The default is '%'.
@table_qualifier
Use this optional CHAR(128) parameter to specify the database containing the primary table. The default is '%'.
Result set
Remarks
The server must be defined with the CREATE SERVER statement to use this system procedure.
This procedure provides information about the remote tables that have a foreign key on a particular primary
table. The result set for the sp_remote_exported_keys system procedure includes the database, owner, table,
column, and name for both the primary and the foreign key, and the foreign key sequence for the foreign key
columns. The result set may vary because of the underlying ODBC and JDBC calls, but information about the
table and column for a foreign key is always returned.
Privileges
None
Example
The following example returns information about the foreign key relationships in the ULEmployee table on
the remote server named RemoteSA:
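
The example described above could take the following form (a sketch using the server and table names given):
CALL sp_remote_exported_keys( 'RemoteSA', 'ULEmployee' );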
Provides information about remote tables with primary keys that correspond to a specified foreign key.
Syntax
sp_remote_imported_keys(
<@server_name>
, <@table_name>
[, <@table_owner>
[, <@table_qualifier> ] ]
)
Parameters
@server_name
Use this CHAR(128) parameter to specify the server the foreign key table is located on. A value is required
for this parameter.
@table_name
Use this CHAR(128) parameter to specify the table containing the foreign key. A value is required for this
parameter.
@table_owner
Use this optional CHAR(128) parameter to specify the foreign key table's owner. The default is '%'.
@table_qualifier
Use this optional CHAR(128) parameter to specify the database containing the foreign key table. The
default is '%'.
Remarks
The server must be defined with the CREATE SERVER statement to use this system procedure.
Foreign keys reference a row in a separate table that contains the corresponding primary key. This procedure lets you obtain a list of the remote tables with primary keys that correspond to the foreign keys on a particular table.
The sp_remote_imported_keys result set includes the database, owner, table, column, and name for both the
primary and the foreign key, and the foreign key sequence for the foreign key columns. The result set may vary
because of the underlying ODBC and JDBC calls, but information about the table and column for a primary key
is always returned.
Privileges
Side effects
None
The following example returns the tables with primary keys that correspond to a foreign key on the ULOrder
table on the remote server named RemoteSA:
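
The example described above could take the following form (a sketch using the server and table names given):
CALL sp_remote_imported_keys( 'RemoteSA', 'ULOrder' );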
Provides primary key information about remote tables using remote data access.
Syntax
sp_remote_primary_keys(
<@server_name>
, <@table_name>
[, <@table_owner>
[, <@table_qualifier> ] ]
)
Parameters
@server_name
Use this CHAR(128) parameter to specify the server on which the remote table is located.
@table_name
Use this CHAR(128) parameter to specify the name of the remote table.
@table_owner
Use this optional CHAR(128) parameter to specify the owner of the remote table. The default is '%'.
@table_qualifier
Use this optional CHAR(128) parameter to specify the name of the remote database. The default is '%'.
Result set
Remarks
This system procedure provides primary key information about remote tables using remote data access. Because of differences in the underlying ODBC calls, the catalog/database information returned differs slightly depending upon the remote data access class specified for the server.
Privileges
Standards
N/A
Side effects
None
Example
The following example returns information about the primary keys in tables owned by DBA on an SAP IQ remote server named RemoteSA.
To get a list of the primary keys in all the tables owned by Fred in the production database in an Adaptive
Server Enterprise server named RemoteASE:
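
The two examples described above could take the following form (a sketch; using '%' as a wildcard for the table name is an assumption based on the parameter defaults):
CALL sp_remote_primary_keys( 'RemoteSA', '%', 'DBA' );
CALL sp_remote_primary_keys( 'RemoteASE', '%', 'Fred', 'production' );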
Syntax
sp_remote_tables(
<@server_name>
[, <@table_name>
[, <@table_owner>
[, <@table_qualifier>
[, <@with_table_type> ] ] ] ]
)
Parameters
@server_name
Use this CHAR(128) parameter to specify the server name as specified by the CREATE SERVER statement.
@table_name
Use this optional CHAR(128) parameter to specify the name of the remote table. The default is '%'.
@table_owner
Use this optional CHAR(128) parameter to specify the owner of the remote table. The default is '%'.
@table_qualifier
Use this optional CHAR(128) parameter to specify the database in which <table_name> is located. The
default is '%'.
@with_table_type
Use this optional BIT parameter to specify the inclusion of remote table types. The default is 0. Specify 1 if
you want the result set to include a column that lists table types or specify 0 if you do not.
Result set
Remarks
The server must be defined with the CREATE SERVER statement to use this system procedure.
It may be helpful when you are configuring your database server to get a list of the remote tables available on a
particular server. This procedure returns a list of the tables on a server.
The procedure accepts five parameters. If a table, owner, or database name is given, the list of tables will be
limited to only those that match the arguments.
Privileges
Side effects
None
Standards
N/A
Example
The following example returns information about the tables owned by DBA on an SAP IQ remote server named RemoteSA.
To get a list of all the tables owned by Fred in the production database in an Adaptive Server Enterprise
server named RemoteASE:
To get a list of all the Microsoft Excel worksheets available from an ODBC data source referenced by a
server named RemoteExcel:
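
The three examples described above could take the following form (a sketch; NULL for @table_name relies on the '%' default described earlier):
CALL sp_remote_tables( 'RemoteSA', NULL, 'DBA' );
CALL sp_remote_tables( 'RemoteASE', NULL, 'Fred', 'production' );
CALL sp_remote_tables( 'RemoteExcel' );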
Syntax
sp_servercaps( <@server_name> )
Parameters
@server_name
Use this CHAR(128) parameter to specify a server defined with the CREATE SERVER statement.
<@server_name> is the same server name used in the CREATE SERVER statement.
Results
Remarks
The server must be defined with the CREATE SERVER statement to use this system procedure.
This procedure displays information about a remote server's capabilities. The capability information is used to
determine how much of a SQL statement can be forwarded to a remote server. The ISYSCAPABILITY system
table, which lists the server capabilities, is not populated until a connection is made to the first remote server.
Standards
N/A
Side effects
None
Example
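
A call takes the following form (the server name is taken from the earlier remote data access examples):
CALL sp_servercaps( 'RemoteSA' );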
Syntax
sp_start_listener(
<type>
, <address>
[ , <options> ]
)
Parameters
type
Use this VARCHAR (12) parameter to specify the type of connection listener to start. The value is one of
sharedmemory, shmem, tcpip, tcp, http, or https.
address
Use this VARCHAR (100) parameter to specify the address of the connection listener to start. The address
is a numeric IP address with a port number (for example, 0.0.0.0:9998) separated by a colon (:) or an IP
address without a port number. For IPv6 addresses with a port number, enclose the address in parentheses
and then append the colon and port number. If you do not specify a port number, then the default port
(TCPIP:2638, HTTP:80, HTTPS:443) is used.
For TCP/IP and HTTP(S), the address parameter can also be just a port number between 1 and 65535. In this case, listeners are started on all available IP addresses using that port number, and the database server acts as though the port number was supplied as the ServerPort (PORT) protocol option to the -x TCPIP or -xs HTTP(S) database server options.
The personal database server only accepts loopback IP addresses, for example 127.0.0.1.
This parameter is ignored for shared memory. For shared memory, specify NULL.
options
Use this LONG VARCHAR parameter to specify a semicolon-delimited list of network protocol options. This
parameter is ignored if you are starting shared memory or TCP/IP connection listeners.
Note
You cannot specify either the ServerPort (PORT) protocol option or the MyIP (ME) protocol option
when using the <options> parameter.
Remarks
The new connection listener uses whichever available port number is found first from the following list:
TCP/IP connection listeners use the encryption setting specified by the -ec database server option when the
database server is started.
Shared memory connection listeners can be created regardless of whether or not the -es database server
option was specified when the database server was started. Shared memory connection listeners started with
the sp_start_listener system procedure always allow unencrypted connections to the database server.
Privileges
Example
Assume that a database server is started allowing local connections only. A problem occurs that is
convenient to debug remotely. To allow remote connections to the database server using port 9998, a user
connects to the database server by using shared memory and executes the following statement:
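
The statement described above could take the following form (a sketch; 0.0.0.0 denotes all available IPv4 addresses, as noted in the address parameter description):
CALL sp_start_listener( 'tcpip', '0.0.0.0:9998' );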
Syntax
sp_stop_listener(
<type>
, <address>
[ , <force> ]
)
Parameters
type
Use this VARCHAR (12) parameter to specify the type of connection listener to stop. The value is one of sharedmemory, shmem, tcpip, tcp, http, or https.
address
Use this VARCHAR (100) parameter to specify the address of the connection listener to stop. The address
is an IP address with a port number separated by a colon (:) or an IP address without a port number. For
IPv6 addresses with a port number, enclose the address in parentheses and then append the colon and
port number. If a port number is not specified, the default port (TCPIP:2638, HTTP:80, HTTPS:443) is
used. For TCP/IP and HTTP(S), the address parameter can be a port number between 1 and 65535. If you
only specify a port number, then the database server stops any listeners of the specified type using that
port.
To indicate all available IPv4 or IPv6 addresses, specify an IP address of "0.0.0.0" or "(::)".
The personal database server only accepts loopback IP addresses, for example 127.0.0.1.
This parameter is ignored for shared memory. For shared memory, specify NULL.
force
Specify 1 to force the connection listener to stop if it is the last network driver listener running. The default
is 0.
Remarks
The sp_stop_listener system procedure only stops new connections from being started on the connection
listener. Existing connections are not changed.
● The connection listener is the last TCP/IP listener and shared memory is not enabled.
● The connection listener is shared memory and there are no TCP/IP listeners running.
● The connection listener is the last HTTP listener and there are no HTTPS listeners running.
Privileges
Example
Assume that a database server is started allowing local connections only. A problem occurs that is
convenient to debug remotely. To allow remote connections to the database server using port 9998, a user
connects to the database server by using shared memory and executes the following statement:
Once the problem has been solved, shut down the connection listener by executing the following
statement:
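
The stop statement described above could take the following form (a sketch mirroring the listener address used when it was started):
CALL sp_stop_listener( 'tcpip', '0.0.0.0:9998' );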
Generates a report to map a system privilege to the corresponding system role. A single row is returned for
each system privilege.
Syntax
sp_sys_priv_role_info()
Result Set
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
None
Returns a specified number of statement/plan combinations with the highest maximum runtimes.
Syntax
sp_top_k_statements( [ <k> ] )
Parameters
k
Use this optional UNSIGNED INTEGER parameter to specify the number of records to return. The default value is 1000.
Result set
Remarks
Use this system procedure to determine which statement/plan combination is taking the longest to run.
Specify the number of longest running statements returned with the (<k>) parameter.
Note
If the list of returned statements is long, then it is possible that not all of the data has been captured due to
space limitations.
Privileges
You must have the MONITOR and MANAGE PROFILING privileges on the system procedure.
Side effects
None.
Example
The following query returns the top statements with the longest maximum observed runtime:
SELECT *
FROM dbo.sp_top_k_statements( ) TS
LEFT OUTER JOIN SYS.GTSYSPERFCACHESTMT PS ON TS.stmt_hash = PS.stmt_hash
ORDER BY TS.stmt_hash;
Sets connection options when users connect from jConnect or Open Client applications.
Syntax
sp_tsql_environment( )
Remarks
The sp_login_environment procedure is the default procedure specified by the login_procedure database
option. For each new connection, the procedure specified by login_procedure is called. If the connection uses
the TDS communications protocol (that is, if it is an Open Client or jConnect connection), then
sp_login_environment in turn calls sp_tsql_environment.
This procedure sets database options so that they are compatible with default Adaptive Server Enterprise
behavior.
To change the default behavior, create new procedures and alter your login_procedure option to point to these
new procedures.
Privileges
None
Example
CALL dbo.sp_tsql_environment();
Syntax
Parameter
name
A CHAR (128) authorization key for the secure feature key being enabled. The authorization key must be at
least six characters.
Remarks
This procedure enables the secure features that are turned on by the specified secure feature key.
Privileges
To run this procedure, you must have EXECUTE privilege on the procedure. See GRANT EXECUTE Privilege
Statement [page 1499].
None
Syntax
xp_cmdshell(
<command>
[, <redir_output> | 'no_output' ]
)
Parameters
command
Use this VARCHAR(8000) parameter to specify a system command. The default is NULL.
redir_output
Use this optional CHAR(254) parameter to specify whether to display output in a command window. The
default behavior is to display output in a command window. If you specify 'no_output', output is not
displayed in a command window. The default value is ' '.
Returns
Remarks
xp_cmdshell executes a system command and then returns control to the calling environment. The value
returned by xp_cmdshell is the exit code from the executed shell process. The return value is 2 if an error
occurs when the child process is started.
The second parameter affects only command line applications on Windows operating systems. For Unix, no
command window appears, regardless of the setting for the second parameter.
Use the sa_enable_auditing_type and sa_disable_auditing_type system procedures to enable and disable auditing of the xp_cmdshell system procedure (using the xp_cmdshell type). When auditing is enabled for this type, calls to xp_cmdshell are recorded in the audit log.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SERVER OPERATOR system
privilege.
Example
The following statement lists the files in the current directory in the file c:\temp.txt:
The following statement carries out the same operation, but does so without displaying a Command
window.
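
The two statements described above could be written as follows (a sketch of the Windows calls described; the output file path is the one given in the description):
CALL xp_cmdshell( 'dir > c:\temp.txt' );
CALL xp_cmdshell( 'dir > c:\temp.txt', 'no_output' );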
Syntax
xp_get_mail_error_code( )
Returns
This function returns an INTEGER value representing the SMTP or MAPI error code.
Remarks
When the return value of a mail procedure (xp_startmail, xp_startsmtp, xp_sendmail, xp_stopmail, and
xp_stopsmtp) is -1, use this function to retrieve the SMTP or MAPI error code.
When the return value of a mail procedure is 5, 6, or 7, use this function to retrieve the error number for the
most recent socket error.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Side effects
None
Example
This example gets the most recent SMTP or MAPI error code.
SELECT dbo.xp_get_mail_error_code( )
This example uses SMTP to initiate the sending of a plain text message.
BEGIN
DECLARE err_smtp INTEGER;
DECLARE err_code INTEGER;
DECLARE err_msg LONG VARCHAR;
SELECT dbo.xp_startsmtp( 'doe@sample.com', 'corporatemail.sample.com' )
INTO err_smtp;
SELECT dbo.xp_get_mail_error_code( ), xp_get_mail_error_text( ) INTO
err_code, err_msg;
SELECT err_smtp, err_code, err_msg;
END;
Syntax
xp_get_mail_error_text( )
Returns
This function returns a LONG VARCHAR value representing the SMTP or MAPI error or status message text. If
no error text is available, an empty string or NULL is returned.
Remarks
Use this function to obtain the error or status message text for any of the mail procedures (xp_startmail,
xp_startsmtp, xp_sendmail, xp_stopmail, and xp_stopsmtp).
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Side effects
None
Example
This example gets the most recent SMTP or MAPI message text.
SELECT xp_get_mail_error_text( )
This example uses SMTP to initiate the sending of a plain text message.
BEGIN
DECLARE err_smtp INTEGER;
DECLARE err_code INTEGER;
DECLARE err_msg LONG VARCHAR;
SELECT dbo.xp_startsmtp( 'doe@sample.com', 'corporatemail.sample.com' )
INTO err_smtp;
SELECT dbo.xp_get_mail_error_code( ), xp_get_mail_error_text( ) INTO
err_code, err_msg;
SELECT err_smtp, err_code, err_msg;
END;
Syntax
xp_getenv( <environment_variable> )
environment_variable
Use this VARCHAR(8000) parameter to specify the environment variable. This parameter is case
insensitive on Windows operating systems and case sensitive on all other operating systems, independent
of the case sensitivity of the database. The default value is NULL.
Returns
Remarks
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SERVER OPERATOR system
privilege.
The GETENV feature must be enabled for the connection (-sf server option).
Side effects
None
Example
The following example uses the xp_getenv system procedure to return the value of the environment
variable PATH.
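A sketch of this call:
SELECT dbo.xp_getenv( 'PATH' );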
The following example uses the xp_getenv and sa_split_list system procedures to return the value of the
Windows environment variable PATH as a list. Use ':' as the separator character on Unix operating systems.
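A sketch of this call, assuming the sa_split_list system procedure with ';' as the Windows separator character:
SELECT row_value
FROM dbo.sa_split_list( dbo.xp_getenv( 'PATH' ), ';' );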
Syntax
xp_msver( <the_option> )
Parameters
the_option
Use this CHAR(254) parameter to specify a string. The string must be one of the following, enclosed in
string delimiters.
Argument Description
Returns
Privileges
Example
The following statement requests the version and operating system description:
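A sketch of this call; the argument names ProductVersion and Platform are assumptions:
SELECT dbo.xp_msver( 'ProductVersion' ) AS Version,
       dbo.xp_msver( 'Platform' ) AS Description;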
Sample output is as follows. The value for Version will likely be different on your system.
Version Description
Reads a file and returns the contents of the file as a LONG BINARY variable.
Syntax
xp_read_file(
<filename>
[, <lazy> ]
)
Parameters
filename
Use this LONG VARCHAR parameter to specify the name of the file for which to return the contents.
lazy
When you specify this optional INTEGER parameter and its value is not 0, the contents of the file are not
read until they are requested. Reads only occur when the LONG BINARY value is accessed and only on the
portion of the file that is requested. The default is 0, or non-lazy.
Returns
This function returns the contents of the named file as a LONG BINARY value. If the file does not exist or cannot
be read, NULL is returned.
Remarks
This function can be useful for inserting entire documents or images stored in files into tables.
If the data file is in a different character set, you can use the CSCONVERT function to convert it. You can also
use the CSCONVERT function to address other character set conversion requirements you have when using
the xp_read_file system procedure.
If disk sandboxing is enabled, the file referenced in <filename> must be in an accessible location.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the READ FILE system privilege.
Example
The following statement inserts an image into a column named Photo of the Products table.
UPDATE Products
SET Photo=dbo.xp_read_file( 'c:\\sqlany\\scripts\\adata\
\HoodedSweatshirt.jpg' )
WHERE Products.ID=600;
The following statement reads a text file and displays each line with a line number.
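A sketch of such a statement, assuming the sa_split_list system procedure to split the file contents on newlines; the file path is an assumption:
SELECT row_num, row_value
FROM dbo.sa_split_list( xp_read_file( 'c:\\temp\\input.txt' ), '\n' );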
Syntax
xp_scanf(
<input_buffer>
, <format>
[ , <param1> [ , <param2> ... ] ]
)
Parameters
input_buffer
Use this CHAR(254) parameter to specify the input string.
format
Use this CHAR(254) parameter to specify the format of the input string, using place holders (%s) for each
<param> argument. There can be up to fifty place holders in the <format> argument, and there must be
the same number of place holders as <param> arguments.
param1, param2, ...
Use one or more of these CHAR(254) parameters to store the substrings extracted from
<input_buffer>. There can be up to 50 of these parameters.
Privileges
Remarks
The xp_scanf system procedure extracts substrings from an input string using the specified format, and puts
the results in the specified parameter values.
Only the %s string format is supported. Other format specifiers such as %d and %f are not supported and
scanning the input string stops if they are encountered.
Example
The following statements extract the substrings Hello and World! from the input buffer Hello World!, and
puts them into variables string1 and string2, and then selects them:
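A sketch of these statements:
CREATE VARIABLE string1 CHAR(254);
CREATE VARIABLE string2 CHAR(254);
CALL dbo.xp_scanf( 'Hello World!', '%s %s', string1, string2 );
SELECT string1, string2;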
The following statements show how to take a date string and split it into its year, month, and day
components:
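A sketch of these statements; the sample date string and '/' separators are assumptions:
CREATE VARIABLE yyyy CHAR(254);
CREATE VARIABLE mm CHAR(254);
CREATE VARIABLE dd CHAR(254);
CALL dbo.xp_scanf( '2018/11/20', '%s/%s/%s', yyyy, mm, dd );
SELECT yyyy, mm, dd;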
Sends an email message to the specified recipients once a session has been started with xp_startmail or
xp_startsmtp. The procedure accepts messages of any length.
Syntax
xp_sendmail(
recipient = <mail-address>
[, subject = <subject> ]
[, cc_recipient = <mail-address> ]
[, bcc_recipient = <mail-address> ]
[, query = <sql-query> ]
[, "message" = <message-body> ]
[, attachname = <attach-name> ]
[, attach_result = <attach-result> ]
[, echo_error = <echo-error> ]
[, include_file = <filename> ]
[, no_column_header = <no-column-header> ]
[, no_output = <no-output> ]
[, width = <width> ]
[, separator = <separator-char> ]
[, dbuser = <user-name> ]
[, dbname = <db-name> ]
[, type = <type> ]
[, include_query = <include-query> ]
[, content_type = <content-type> ]
)
Parameters
Some arguments supply fixed values and exist only to ensure Transact-SQL compatibility, as noted
below.
recipient
This LONG VARCHAR parameter specifies the recipient mail address. When specifying multiple recipients,
each mail address must be separated by a semicolon.
subject
This LONG VARCHAR parameter specifies the subject field of the message. The default is NULL.
cc_recipient
This LONG VARCHAR parameter specifies the cc recipient mail address. When specifying multiple cc
recipients, each mail address must be separated by a semicolon. The default is NULL.
bcc_recipient
This LONG VARCHAR parameter specifies the bcc recipient mail address. When specifying multiple bcc
recipients, each mail address must be separated by a semicolon. The default is NULL.
query
This LONG VARCHAR is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
NULL.
"message"
This LONG VARCHAR parameter specifies the message contents. The default is NULL. The "message"
parameter name requires double quotes around it because MESSAGE is a reserved word.
attachname
This LONG VARCHAR parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The
default is NULL.
attach_result
This INTEGER parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
0.
echo_error
This INTEGER parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
1.
include_file
This LONG VARCHAR parameter specifies an attachment file. The default is NULL.
no_column_header
This INTEGER parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
0.
no_output
This INTEGER parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
0.
width
This INTEGER parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
80.
separator
This CHAR(1) parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
CHAR(9).
dbuser
This LONG VARCHAR parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The
default is guest.
dbname
This LONG VARCHAR parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The
default is master.
type
This LONG VARCHAR parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The
default is NULL.
include_query
This INTEGER parameter is provided for Transact-SQL compatibility. It is not used by SAP IQ. The default is
0.
content_type
This LONG VARCHAR parameter specifies the content type for the "message" parameter (for example,
text/html, ASIS, and so on). The default is NULL. The value of content_type is not validated; setting an
invalid content type results in an invalid or incomprehensible email being sent.
To set headers manually, set the content_type parameter to ASIS. When you do this, the xp_sendmail
procedure assumes that the data passed to the message parameter is a properly formed email with
headers, and does not add any additional headers. When specifying ASIS, you must set all the headers
manually in the message parameter, even headers that would normally be filled in by passing data to the
other parameters.
Returns
Remarks
The argument values for xp_sendmail are strings. The length of each argument is limited to the amount of
available memory on your system.
The content_type argument is intended for users who understand the requirements of MIME email.
xp_sendmail accepts ASIS as a content_type. When content_type is set to ASIS, xp_sendmail assumes that the
message body ("message") is a properly formed email with headers, and does not add any additional headers.
Specify ASIS to send multipart messages containing more than one content type.
Any attachment specified by the include_file parameter is sent as application/octet-stream MIME type, with
base64 encoding, and must be present on the database server.
Email sent with an SMTP email system is encoded if the subject line contains characters that are not 7-bit
ASCII. Also, email sent to an SMS-capable device may not be decoded properly if the subject line contains
characters that are not 7-bit ASCII.
You must have executed xp_startmail to start an email session using MAPI, or xp_startsmtp to start an email
session using SMTP.
If you are sending mail using MAPI, the content_type parameter is not supported.
If <message-body> contains lines that are longer than 998 characters, the SMTP server may insert newline
characters as well as ! characters into the body of the email. To avoid these extra characters, ensure
<message-body> does not contain lines longer than 998 characters.
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Example
This example uses SMTP to send an HTML formatted message with an attachment.
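A sketch of such a call; the addresses, attachment path, and message text are assumptions:
BEGIN
    CALL dbo.xp_startsmtp( 'doe@sample.com', 'corporatemail.sample.com' );
    CALL dbo.xp_sendmail( recipient='jane.smith@sample.com',
        subject='Report',
        content_type='text/html',
        "message"='<html><body><b>The report is attached.</b></body></html>',
        include_file='c:\\temp\\report.pdf' );
    CALL dbo.xp_stopsmtp( );
END;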
This example uses SMTP to send an inline HTML formatted message with an attachment.
This example uses SMTP to send an inline HTML formatted message with a signature and two
attachments, one of which is a ZIP file.
BEGIN
DECLARE content LONG VARCHAR;
SET content =
'Content-Type: multipart/mixed; boundary="xxxxx"\n\n' ||
'This part of the email should not be shown. If this ' ||
'is shown then the email client is not MIME compatible\n\n' ||
'--xxxxx\n' ||
'Content-Type: text/html;\n' ||
'Content-Disposition: inline;\n\n' ||
'Plain text.<BR><BR><B>Bold text.</B><BR><BR>' ||
'<a href="www.sap.com">SAP Home Page</a>\n\n' ||
xp_read_file( '\\temp\\johndoe.sig.html' ) ||
'--xxxxx\n' ||
'Content-Type: application/zip; name="sendmail4.zip"\n' ||
'Content-Transfer-Encoding: base64\n' ||
'Content-Disposition: attachment; filename="sendmail4.zip"\n\n' ||
base64_encode( xp_read_file( '\\temp\\sendmail4.zip' ) ) ||
'\n\n' ||
'--xxxxx--\n';
CALL dbo.xp_startsmtp( 'doe@sample.com', 'corporatemail.sample.com' );
CALL dbo.xp_sendmail( recipient='jane.smith@sample.com',
    -- The remainder of this call is reconstructed from context; the subject text is illustrative.
    subject='Mixed-content test message',
    content_type='ASIS',
    "message"=content );
CALL dbo.xp_stopsmtp( );
END;
Syntax
xp_sprintf(
<buffer>
, <format>
[ , <param1> [, <param2> ... ] ]
)
Parameters
buffer
This is a CHAR(254) OUT parameter that is filled in with the formatted result.
format
Use this CHAR(254) parameter to specify how to format the result string, using place holders (%s) for
each <param> argument. There can be up to fifty place holders in the <format> argument, and there
should be the same number of place holders as <param> arguments. Only the %s string format is
supported.
param1, param2
The input strings that are used in the result string. You can specify up to 50 of these CHAR(254)
arguments.
Remarks
Privileges
The following statements put the string Hello World! into the result variable.
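A sketch of these statements:
CREATE VARIABLE result CHAR(254);
CALL dbo.xp_sprintf( result, '%s %s', 'Hello', 'World!' );
SELECT result;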
The following statements format the year, month, and day into a date string.
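A sketch of these statements; the sample year, month, and day values are assumptions:
CREATE VARIABLE datestr CHAR(254);
CALL dbo.xp_sprintf( datestr, '%s/%s/%s', '2018', '11', '20' );
SELECT datestr;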
Syntax
xp_startmail(
[ mail_user = <mail-login-name>
[, mail_password = <mail-password> ] ]
)
Parameters
mail_user
Use this LONG VARCHAR parameter to specify the MAPI login name. The default is NULL.
mail_password
Use this LONG VARCHAR parameter to specify the MAPI password. The default is NULL.
Returns
Remarks
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Syntax
xp_startsmtp(
smtp_sender = <email-address>
, smtp_server = <smtp-server>
[, smtp_port = <port-number> ]
[, timeout = <timeout> ]
[, smtp_sender_name = <username> ]
[, smtp_auth_username = <auth-username> ]
[, smtp_auth_password = <auth-password> ]
[, trusted_certificates = { <public-certificate> | * }
[, secure = { 1 | 0 } ]
[, certificate_company = <organization> ]
[, certificate_unit = <organization-unit> ]
[, certificate_name = <common-name> ]
[, skip_certificate_name_check= { 1 | 0 } ]
)
Parameters
smtp_sender
This LONG VARCHAR parameter specifies the email address of the sender.
smtp_server
This LONG VARCHAR parameter specifies the SMTP server to use, given as the SMTP server name or IP
address.
smtp_port
This optional INTEGER parameter specifies the port number to connect to on the SMTP server. The default
is 25.
timeout
This optional INTEGER parameter specifies how long to wait, in seconds, for a response from the SMTP
server before aborting the current call to xp_sendmail. The default is 60 seconds.
smtp_sender_name
This optional LONG VARCHAR parameter specifies an alias for the sender's email address. For example,
JSmith instead of <email-address>. The default is NULL.
smtp_auth_username
This optional LONG VARCHAR parameter specifies the user name to provide to SMTP servers requiring
authentication. The default is NULL.
smtp_auth_password
This optional LONG VARCHAR parameter specifies the password to provide to SMTP servers requiring
authentication. The default is NULL.
trusted_certificates
This optional LONG VARCHAR parameter is a list of keyword=value pairs separated by semicolons. The
default is NULL. When this parameter is NULL, a standard SMTP connection is made. The possible keys are
listed below. Only one of the file, certificate, and cert_name options should be specified.
The file specified with the file= key contains a list of PEM-encoded X.509 trusted
root certificates.
The trusted certificate can be a server's self-signed certificate, a public root certificate, or a certificate
belonging to a commercial Certificate Authority. Generate your certificates using RSA.
To use a certificate from the operating system's certificate store, specify file=*.
To make secure SMTP (SMTPS) connections, which use TLS authentication and encryption, specify
SMTPS=YES.
To accept root certificates and database server certificates that are either expired or not yet valid,
specify allow_expired_certs=yes.
The secure and trusted_certificates options can be used together to indicate how to connect to the server.
The following table describes the different possibilities.
Key Value
file= The path and file name of a file that contains one or more
trusted certificates.
SMTPS= YES | NO
allow_expired_certs= YES | NO
secure
This optional parameter specifies whether the connection is secure and whether to use a specified trusted
certificate or a certificate from the operating system's certificate store. The default is NULL.
trusted_certificate='*' | Secure. Uses operating system certificate store. | Returns an error. | Secure. Uses operating system certificate store.
trusted_certificate=<filename> | Secure. Uses specified certificate. | Returns an error. | Secure. Uses specified certificate.
certificate_company
This optional LONG VARCHAR parameter specifies that the client accepts server certificates only when the
Organization field of the certificate matches this value. This parameter is ignored when the
trusted_certificates value is NULL. The default is NULL.
certificate_unit
This optional LONG VARCHAR parameter specifies that the client accepts server certificates only when the
Organization Unit field of the certificate matches this value.
certificate_name
This optional LONG VARCHAR parameter specifies that the client accepts server certificates only when the
Common Name field on the certificate matches this value. This parameter is ignored when the
trusted_certificates value is NULL. The default is NULL.
skip_certificate_name_check
This optional BIT parameter controls whether the SMTP server's host is checked against the SMTP server
certificate. Specifying 1 enables this option. The default is 0. This parameter is ignored when the
trusted_certificates value is NULL, or when any of the following parameters are specified:
certificate_company, certificate_unit, or certificate_name.
Note
Setting this parameter to 1 is not recommended because this setting prevents the database server
from fully authenticating the SMTP server.
Returns
Remarks
xp_startsmtp is a system procedure that starts a mail session for a specified email address by connecting to an
SMTP server. This connection can time out. You should call xp_startsmtp just before executing xp_sendmail.
The database server supports CRAM-MD5 authentication, as well as PLAIN authentication. When you use the
xp_startsmtp system procedure with the smtp_auth_username and smtp_auth_password parameters, the
database server uses CRAM-MD5 authentication. If the SMTP server does not support CRAM-MD5
authentication, the database server falls back to PLAIN authentication.
CRAM-MD5 authentication is more secure than PLAIN authentication, but neither encrypts what is sent to the
SMTP server. To encrypt what is sent to the SMTP server, including email messages, use secure SMTP. Secure
SMTP uses TLS encryption to encrypt and can be used with CRAM-MD5 or PLAIN authentication.
Virus scanners can affect xp_startsmtp, causing it to return error code 100. For McAfee VirusScan version 8.0.0
and later, settings for preventing mass mailing of email worms also prevent xp_sendmail from executing
properly. If your virus scanning software allows you to specify processes that can bypass the mass mailing
protections, specify dbeng16.exe and start_iq.exe. For example, with McAfee VirusScan you can allow mass
mailing for these two processes by adding them to the list of Excluded Processes in the Properties area.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Syntax
xp_stopmail( )
Returns
Remarks
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Example
CALL dbo.xp_stopmail( );
Syntax
xp_stopsmtp( )
Returns
Remarks
Privileges
You must have EXECUTE privilege on the system procedure, as well as the SEND EMAIL system privilege.
Example
CALL dbo.xp_stopsmtp( );
Syntax
xp_write_file(
<filename>
, <file_contents>
)
Parameters
filename
Use this LONG VARCHAR parameter to specify the name of the file to write to.
file_contents
Use this LONG BINARY parameter to specify the contents to write to the file.
Returns
Remarks
The function writes <file_contents> to the file <filename>. It returns 0 if successful, and non-zero if it
fails.
The <filename> value can be prefixed by either an absolute or a relative path. If <filename> is prefixed by a
relative path, then the file name is relative to the current working directory of the database server. If the file
already exists, its contents are overwritten.
This function can be useful for unloading long binary data into files.
You can also use the CSCONVERT function to address character set conversion requirements you have when
using the xp_write_file system procedure.
If disk sandboxing is enabled, the file referenced in <filename> must be in an accessible location.
Privileges
You must have EXECUTE privilege on the system procedure, as well as the WRITE FILE system privilege.
This example uses xp_write_file to create a file accountnum.txt containing the data 123456:
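A sketch of this call:
SELECT dbo.xp_write_file( 'accountnum.txt', '123456' );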
This example queries the Contacts table of the sample database, and then creates a text file for each
contact living in New Jersey. Each text file is named using a concatenation of the contact's first name
(GivenName), last name (Surname), and then the string .txt (for example, Reeves_Scott.txt), and
contains the contact's street address (Street), city (City), and state (State), on separate lines.
SELECT dbo.xp_write_file(
Surname || '_' || GivenName || '.txt',
Street || '\n' || City || '\n' || State )
FROM Contacts WHERE State = 'NJ';
This example uses xp_write_file to create an image file (JPG) for every product in the Products table. Each
value of the ID column becomes a file name for a file with the contents of the corresponding value of the
Photo column:
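A sketch of such a statement; the img_ file-name prefix is an assumption:
SELECT ID, dbo.xp_write_file( 'img_' || ID || '.jpg', Photo )
FROM Products;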
In the example above, ID is a column with a UNIQUE constraint. This is important to ensure that a file isn't
overwritten with the contents of a subsequent row. Also, you must specify the file extension applicable to the
data stored in the column; in this case, the Products.Photo column stores image data (JPEG).
System procedures are built-in stored procedures used for getting reports from and updating system tables.
Catalog stored procedures retrieve information from the system tables in tabular form.
Note
While these procedures perform the same functions as they do in SAP ASE, they are not identical. If you
have preexisting scripts that use these procedures, you might want to examine the procedures. To see the
text of a stored procedure, run:
sp_helptext '<owner.procedure_name>'
For all system stored procedures delivered by SAP, the owner is dbo. To see the text of a stored procedure
of the same name owned by a different user, you must specify that user, for example:
sp_helptext 'myname.myprocedure'
In this section:
sp_addlogin <userid>, <password> [, <defdb> [, <deflanguage> [, <fullname> ]]]
Adds a new user account to a database. Requires the MANAGE ANY USER system privilege.
sp_addtype <typename>, <data-type>, [ "<identity>" | <nulltype> ]
Creates a user-defined data type. SAP IQ does not support IDENTITY columns. Requires the CREATE DATATYPE or CREATE ANY OBJECT system privilege.
sp_adduser <userid> [, <name_in_db> [, <grpname> ]]
Adds a new user to a database. Requires the MANAGE ANY USER system privilege to create a new user. Requires the MANAGE ANY USER and MANAGE ROLES system privileges to create a new user and add the user to the role specified.
sp_droplogin <userid>
Drops a user from a database. Requires the MANAGE ANY LOGIN POLICY system privilege.
sp_dropmessage <message-number> [, <language> ]
Drops user-defined messages. Requires the DROP MESSAGE system privilege.
sp_droptype <typename>
Drops a user-defined data type. Requires the DROP DATATYPE system privilege.
sp_dropuser <userid>
Drops a user from a database. Requires the MANAGE ANY USER system privilege.
Note
Procedures like sp_dropuser provide minimal compatibility with SAP ASE stored procedures. If you are
accustomed to SAP ASE, compare their text with SAP IQ procedures before using the procedure in
Interactive SQL. To compare, use the command:
sp_helptext '<owner.procedure_name>'
For system stored procedures delivered by SAP IQ, the owner is always dbo. To see the text of a stored
procedure of the same name owned by a different user, you must specify that user, for example:
sp_helptext 'myname.myprocedure'
Related Information
SAP IQ implements most of the SAP Adaptive Server Enterprise catalog procedures with the exception of the
sp_column_privileges procedure.
SAP IQ also has similar customized stored procedures for some of these SAP ASE catalog procedures.
SAP ASE Catalog Procedure / Description / SAP IQ Procedure
● sp_column_privileges
● sp_databases
● sp_datatype_info
● sp_server_info
SAP IQ supports system tables, system views, consolidated views, compatibility views, and SAP Adaptive
Server Enterprise T-SQL compatibility views.
In this section:
The structure of every SAP IQ database is described in a number of system tables. The system tables are
designed for internal use.
The DUMMY system table is the only system table you are permitted to access directly. For all other system
tables, listed below, you access their underlying data through their corresponding views:
In this section:
The DUMMY system table is provided as a table that always has exactly one row.
This can be useful for extracting information from the database, as in the following example that gets the
current user ID and the current date from the database:
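A minimal sketch of such a query; the SYS.DUMMY qualification and the CURRENT USER and CURRENT DATE expressions are assumptions:
SELECT CURRENT USER, CURRENT DATE
FROM SYS.DUMMY;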
Queries using the DUMMY table are run by SAP SQL Anywhere (the catalog store), rather than by SAP IQ. You
can create a dummy table in the SAP IQ database, such as the following example:
The example statement allows you to use the following table explicitly:
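One possible shape for such a table; the name iq_dummy is an assumption:
CREATE TABLE iq_dummy ( dummy_col INT NOT NULL );
INSERT INTO iq_dummy VALUES ( 1 );
-- The table can then be referenced explicitly:
SELECT CURRENT DATE FROM iq_dummy;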
In this section:
The DUMMY table is provided as a read-only table that always has exactly one row.
This can be useful for extracting information from the database, as in the following example that gets the
current user ID and the current date from the database.
dummy_col
This column is not used. It is present because a table cannot be created with no columns.
The cost of reading from the DUMMY table is less than the cost of reading from a similar user-created table
because there is no lock placed on the table page of DUMMY.
Access plans are not constructed with scans of the DUMMY table. Instead, references to DUMMY are
replaced with a Row Constructor algorithm, which virtualizes the table reference. This eliminates
contention associated with the use of DUMMY. DUMMY still appears as the table and/or correlation name
in short, long, and graphical plans.
This table indicates the database characteristics as defined when the SAP IQ database was created using
CREATE DATABASE. It always contains only one row.
create_time TIMESTAMP NOT NULL The date and time when the database was
created.
update_time TIMESTAMP NOT NULL The date and time of the last update.
file_format_version UNSIGNED INT NOT NULL The file format number of files for this database.
cat_format_version UNSIGNED INT NOT NULL The catalog format number for this database.
sp_format_version UNSIGNED INT NOT NULL The stored procedure format number for this database.
block_size UNSIGNED INT NOT NULL The block size specified for the database.
chunk_size UNSIGNED INT NOT NULL The number of blocks per chunk as determined by the block size and page size specified for the database.
file_format_date CHAR(10) NOT NULL The date when the file format number was last changed.
last_multiplex_mode TINYINT NULL The mode of the server that last opened
the catalog read-write. One of the following
values.
● 0 – Single Node.
● 1 – Reader.
● 2 – Coordinator.
● 3 – Writer.
ISYSIQLOGICALSERVER stores logical server and the correspondence between logical server and associated
logical server policy information.
ISYSIQLSLOGINPOLICYOPTION stores the login policy option values that have logical server level settings.
A number of predefined system views are provided that present the information in the system tables in a
readable format.
The definitions for the system views are included with their descriptions. Some of these definitions are
complicated, but you do not need to understand them to use the views.
In this section:
Consolidated views differ from system views in that they are not just a straightforward view of the raw data
in an underlying system table. For example, consolidated views often provide commonly needed joins.
Compatibility views are deprecated views provided for compatibility with earlier versions of SAP SQL Anywhere
and SAP IQ.
Where possible, use system views and consolidated views instead of compatibility views, as support for
compatibility views may be eliminated in future versions of SAP IQ.
In this section:
SAP IQ provides a set of views owned by the special user DBO, which correspond to the SAP Adaptive Server
Enterprise system tables and views.
Related Information
System tables are hidden; however, there is a system view for each table. To ensure compatibility with future
versions of the IQ main store, make sure your applications use system views and not the underlying system
tables, which may change.
In this section:
Each row in the GTSYSPERFCACHEPLAN system view contains a graphical plan string for an execution plan of
the specified statement.
Remarks
A statement can have multiple execution plans represented in the system view. If statement performance
summary data is not collected, then no statement plans are reported.
Plans are not recorded for statements with short (0.005 seconds or less) execution times.
Privileges
You must have the MONITOR system privilege to access this view.
Related Information
Each row in the GTSYSPERFCACHESTMT system view represents SQL text for a statement with the constants
removed.
Remarks
The statement performance summary feature uses the SQL statement stored in this view.
The SQL for short running statements (0.005 seconds or less) is not recorded.
Privileges
You must have the MONITOR system privilege to access this view.
Related Information
Each row of the ST_GEOMETRY_COLUMNS system view describes a spatial column defined in the database.
Note
Spatial data, spatial references systems, and spatial units of measure can be used only in the catalog store.
table_id UNSIGNED INT The numeric identifier for the table containing the column.
Each row of the ST_SPATIAL_REFERENCE_SYSTEMS system view describes an SRS defined in the database.
This view offers a slightly different amount of information than the SYSSPATIALREFERENCESYSTEM system
view.
Note
Spatial data, spatial references systems, and spatial units of measure can be used only in the catalog store.
srs_id INTEGER The numeric identifier (SRID) for the spatial reference system.
srs_type CHAR(11) The type of SRS as defined by the SQL/MM standard.
round_earth CHAR(1) Whether the SRS type is ROUND EARTH (Y) or PLANAR (N).
axis_order CHAR(12) Describes how the database server interprets points with regards to latitude and longitude (for example, when using the ST_Lat and ST_Long methods). For non-geographic spatial reference systems, the axis order is x/y/z/m. For geographic spatial reference systems, the default axis order is long/lat/z/m; lat/long/z/m is also supported.
snap_to_grid DOUBLE Defines the size of the grid used when performing calculations.
semi_major_axis DOUBLE Distance from the center of the ellipsoid to the equator for a ROUND EARTH SRS.
semi_minor_axis DOUBLE Distance from the center of the ellipsoid to the poles for a ROUND EARTH SRS.
inv_flattening DOUBLE The inverse flattening used for the ellipsoid in a ROUND EARTH SRS. This is a ratio created by the following equation: 1/f = (semi-major-axis) / (semi-major-axis - semi-minor-axis)
organization LONG VARCHAR The name of the organization that created the coordinate system used by the spatial reference system.
organization_coordsys_id INTEGER The ID given to the coordinate system by the organization that created it.
polygon_format LONG VARCHAR The orientation of the rings in a polygon. One of CounterClockwise, Clockwise, or EvenOdd.
storage_format LONG VARCHAR Whether the data is stored in normalized format (Internal), unnormalized format (Original), or both (Mixed).
transform_definition LONG VARCHAR Transform definition settings for use when transforming data from this SRS to another.
Each row of the ST_UNITS_OF_MEASURE system view describes a unit of measure defined in the database.
This view offers more information than the SYSUNITOFMEASURE system view.
Note
Spatial data, spatial reference systems, and spatial units of measure can be used only in the catalog store.
Each row of the SYSARTICLE system view describes an article in a publication. The underlying system table for
this view is ISYSARTICLE.
Related Information
Each row of the SYSARTICLECOL system view identifies a column in an article. The underlying system table for
this view is ISYSARTICLECOL.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSCAPABILITIES view specifies the status of a capability for a remote database server. This
view gets its data from the ISYSCAPABILITY system table.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row of the SYSCAPABILITY system view specifies the status of a capability on a remote database server.
The underlying system table for this view is ISYSCAPABILITY.
Each row in the SYSCAPABILITYNAME system view provides a name for each capability ID in the
SYSCAPABILITY system view.
Remarks
The SYSCAPABILITYNAME system view is defined using a combination of sa_rowgenerator and the following
server properties:
● RemoteCapability
● MaxRemoteCapability
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row of the SYSCERTIFICATE system view stores a certificate in text PEM-format. The underlying system
table for this view is ISYSCERTIFICATE.
update_time TIMESTAMP The local date and time of the last create or replace.
update_time_utc TIMESTAMP WITH TIME ZONE The UTC date and time of the last create or replace.
The SYSCOLLATION compatibility view contains the collation sequence information for the database. It is obtainable via built-in functions and is not kept in the catalog. The following is the definition for this view:
The SYSCOLLATIONMAPPINGS compatibility view contains only one row with the database collation mapping. It is obtainable via built-in functions and is not kept in the catalog. The following is the definition for this view:
The GRANT statement can give UPDATE, SELECT, or REFERENCES privileges to individual columns in a table.
Each column with UPDATE, SELECT, or REFERENCES privileges is recorded in one row of the SYSCOLPERM
system view. The underlying system table for this view is ISYSCOLPERM.
table_id UNSIGNED INT The table number for the table containing the column.
grantee UNSIGNED INT The ID of the user that has been given the privilege on the column. If the user ID is the PUBLIC role, then all users have the privilege on the column.
grantor UNSIGNED INT The ID of the user that granted the privilege.
update_time_utc TIMESTAMP WITH TIME ZONE The UTC time of the last update of the
column statistics.
Note
For databases created using SAP IQ 16 or later, the underlying system table for this view is always
encrypted to protect the data from unauthorized access.
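As a sketch of how this view can be read, the query below joins SYSCOLPERM to the SYSTAB and SYSUSER catalog views to show each grant with readable names. The SYSTAB and SYSUSER column names (table_id, table_name, user_id, user_name) are assumptions based on the usual catalog layout and should be verified against the column lists for your version.

```sql
-- Hypothetical example: list column-privilege grants with readable names.
-- SYSTAB (table_id, table_name) and SYSUSER (user_id, user_name) are
-- assumed here; check the catalog reference for your version.
SELECT t.table_name,
       ug.user_name AS grantee,
       uo.user_name AS grantor
FROM SYSCOLPERM cp
JOIN SYSTAB  t  ON t.table_id = cp.table_id
JOIN SYSUSER ug ON ug.user_id = cp.grantee
JOIN SYSUSER uo ON uo.user_id = cp.grantor;
```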
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The SYSCOLUMN view is provided for compatibility with older versions of the software that offered a
SYSCOLUMN system table.
However, the previous SYSCOLUMN table has been replaced by the ISYSTABCOL system table, and its
corresponding SYSTABCOL system view. Use the SYSTABCOL system view instead.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSCOLUMNS view describes one column of each table and view in the catalog.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
This view is owned by user DBO. syscolumns contains one row for every column in every table and view, and a
row for each parameter in a procedure.
Related Information
syscomments contains entries for each view, rule, default, trigger, table constraint, partition, procedure,
computed column, function-based index key, and other forms of compiled objects.
The text column contains the original definition statements. If the text column is longer than 255 bytes, the
entries span rows. Each object can occupy as many as 65,025 rows.
Related Information
Each row in the SYSCONSTRAINT system view describes a named constraint in the database. The underlying
system table for this view is ISYSCONSTRAINT.
● T – table constraint
● P – primary key
● F – foreign key
● U – unique constraint
Each row in the SYSDATABASEVARIABLE system view describes one database-scope variable in the database.
The underlying system table for this view is ISYSDATABASEVARIABLE.
Remarks
Updates to database-scope variable values, for example using the SET statement, do not persist after a
database restart. Also, updated values are not reflected in this view; only the initial/default value is visible in
this view.
Privileges
None.
Each row in the SYSDBFILE system view describes a dbspace file. The underlying system table for this view is
ISYSDBFILE.
server_id UNSIGNED INT (nullable) The ID of the physical server where the DAS dbfile exists. Applies to shared-nothing multiplex architecture.
Each row in the SYSDBSPACE system view describes a dbspace. The underlying system table for this view is ISYSDBSPACE.
Each row in the SYSDBSPACEPERM system view describes a privilege on a dbspace. The underlying system table for this view is ISYSDBSPACEPERM.
grantee UNSIGNED INT The user ID of the user getting the privilege.
Each row in the SYSDEPENDENCY system view describes a dependency between two database objects. The
underlying system table for this view is ISYSDEPENDENCY.
A dependency exists between two database objects when one object references another object in its definition.
For example, if the query specification for a view references a table, the view is dependent on the table. The
database server tracks dependencies of views on tables, views, materialized views, and columns.
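As an illustration of how such dependencies can be inspected, the sketch below joins SYSDEPENDENCY to SYSOBJECT. The column names dep_object_id, ref_object_id, object_id, and object_type are assumptions based on the classic catalog layout, so verify them against the column lists for your version.

```sql
-- Hypothetical example: list the objects a given view depends on.
-- dep_object_id / ref_object_id on SYSDEPENDENCY and object_id /
-- object_type on SYSOBJECT are assumed column names.
SELECT d.ref_object_id, o.object_type
FROM SYSDEPENDENCY d
JOIN SYSOBJECT o ON o.object_id = d.ref_object_id
WHERE d.dep_object_id = 12345;  -- object_id of the dependent view (placeholder)
```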
The SYSDOMAIN system view records information about built-in data types (also called domains). The contents of this view do not change during normal operation. The underlying system table for this view is ISYSDOMAIN.
Each row in the SYSEVENT system view describes an event created with CREATE EVENT. The underlying
system table for this view is ISYSEVENT.
source LONG VARCHAR The original source for the event; this
column comes from ISYSSOURCE.
The SYSEVENTTYPE system view defines the system event types that can be referenced by CREATE EVENT.
Remarks
The SYSEVENTTYPE system view is defined using a combination of sa_rowgenerator and the following server
properties:
● EventTypeName
● EventTypeDesc
● MaxEventType
Each row in the SYSEXTERNENV system view describes the information needed to identify and launch each of
the external environments. The underlying system table for this view is ISYSEXTERNENV.
Each row in the SYSEXTERNENVOBJECT system view describes an installed external object. The underlying
system table for this view is ISYSEXTERNENVOBJECT.
update_time_utc TIMESTAMP WITH TIME ZONE This column identifies the last UTC time
the object was modified (or installed).
Each row in the SYSEXTERNLOGIN system view describes an external login for remote data access. The
underlying system table for this view is ISYSEXTERNLOGIN.
remote_login VARCHAR(128) The login name for the user, for the remote server.
Previous versions of the catalog contained a SYSEXTERNLOGINS system table. That table has been renamed
to be ISYSEXTERNLOGIN (without an 'S'), and is the underlying table for this view.
Each row in the SYSFILE system view describes a dbspace for a database. Every database consists of one or
more dbspaces; each dbspace corresponds to an operating system file.
Dbspaces are automatically created for the main database file, temporary file, transaction log file, and transaction log mirror file. Information about the transaction log and transaction log mirror dbspaces does not appear in the SYSFILE system view.
Each row of SYSFKCOL describes the association between a foreign column in the foreign table of a
relationship and the primary column in the primary table. This view is deprecated; use the SYSIDX and
SYSIDXCOL system views instead.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSFKEY system view describes a foreign key constraint in the system. The underlying system
table for this view is ISYSFKEY.
foreign_index_id UNSIGNED INT The index number for the foreign key.
● 1 – SIMPLE
● 2 – FULL
● 129 – SIMPLE UNIQUE
● 130 – FULL UNIQUE
The SYSFOREIGNKEY view is provided for compatibility with older versions of the software that offered a
SYSFOREIGNKEY system table. However, the previous SYSFOREIGNKEY system table has been replaced by
the ISYSFKEY system table, and its corresponding SYSFKEY system view, which you should use instead.
A foreign key is a relationship between two tables: the foreign table and the primary table. Every foreign key is
defined by one row in SYSFOREIGNKEY and one or more rows in SYSFKCOL. SYSFOREIGNKEY contains
general information about the foreign key while SYSFKCOL identifies the columns in the foreign key and
associates each column in the foreign key with a column in the primary key of the primary table.
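The relationship described above can be sketched with a join between the two views. The column names used here (foreign_table_id, foreign_key_id, foreign_column_id, primary_column_id) are assumptions based on the classic catalog layout and may differ by version.

```sql
-- Hypothetical example: pair each foreign-key column with its primary-key
-- column, using the assumed join keys between SYSFOREIGNKEY and SYSFKCOL.
SELECT fk.foreign_table_id,
       fk.primary_table_id,
       fkc.foreign_column_id,
       fkc.primary_column_id
FROM SYSFOREIGNKEY fk
JOIN SYSFKCOL fkc
  ON  fkc.foreign_table_id = fk.foreign_table_id
  AND fkc.foreign_key_id   = fk.foreign_key_id;
```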
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSFOREIGNKEYS view describes one foreign key for each table in the catalog.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
There is one row in the SYSGROUP system view for each member of each group. This view describes the many-
to-many relationship between groups and members. A group may have many members, and a user may be a
member of many groups.
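Because each membership is one row, aggregate queries over the view fall out naturally. A minimal sketch, assuming the view exposes group_id and group_member columns (verify against the view definition for your version):

```sql
-- Hypothetical example: count members per group via SYSGROUP.
SELECT group_id, COUNT(*) AS member_count
FROM SYSGROUP
GROUP BY group_id
ORDER BY member_count DESC;
```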
There is one row in the SYSGROUPS view for each member of each group. This view describes the many-to-
many relationship between groups and members. A group may have many members, and a user may be a
member of many groups.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSHISTORY system view records a system operation on the database, such as a database
start, a database calibration, and so on. The underlying system table for this view is ISYSHISTORY.
INIT
object_id UNSIGNED INT For any operation other than DTT and LAST_DTT, the value in this column will be 0. For DTT and LAST_DTT operations, this is the dbspace_id of the dbspace as defined in the SYSDBSPACE system view.
DTT_SET
last_time TIMESTAMP The most recent local date and time the database was started on a particular operating system with a particular version of the software.
first_time_utc TIMESTAMP WITH TIME ZONE The UTC date and time the database was first started on a particular operating system with a particular version of the software.
last_time_utc TIMESTAMP WITH TIME ZONE The most recent UTC date and time the database was started on a particular operating system with a particular version of the software.
Each row in the SYSIDX system view defines a logical index in the database. The underlying system table for
this view is ISYSIDX.
● 1 – Primary key
● 2 – Foreign key
● 3 – Text indexes
Each row in the SYSIDXCOL system view describes one column of an index described in the SYSIDX system
view. The underlying system table for this view is ISYSIDXCOL.
The SYSINDEX view is provided for compatibility with older versions of the software that offered a SYSINDEX
system table. However, the SYSINDEX system table has been replaced by the ISYSIDX system table, and its
corresponding SYSIDX system view, which you should use instead.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSINDEXES view describes one index in the database. As an alternative to this view, you can also use the SYSIDX and SYSIDXCOL system views.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
sysindexes contains one row for each clustered index, one row for each nonclustered index, one row for each
table that has no clustered index, and one row for each table that contains text or image columns.
This table also contains one row for each function-based index or index created on a computed column.
Related Information
The SYSINFO view indicates the database characteristics, as defined when the database was created. It always contains only one row. This view is obtainable via built-in functions and is not kept in the catalog. The following is the definition for the SYSINFO view:
This view presents group information from ISYSIQBACKUPHISTORY in a readable format. Each row in this view
describes a particular backup operation that finished successfully.
Column Name Column Type Constraint Description
bu_id UNSIGNED BIGINT NOT NULL Transaction identifier of the checkpoint of the operation. Backup ID for backup operations.
bu_time TIMESTAMP NOT NULL Time of the backup operation that is recorded in the backup record.
● 0 = FULL
● 1 = INCREMENTAL
● 2 = INCREMENTAL SINCE FULL
● 0 = NONE
● 1 = DECOUPLED
● 2 = ENCAPSULATED
Remarks
The view SYSIQBACKUP projects equivalent string values for columns type, subtype, and bkp_virtual.
Related Information
This view describes all the dbfile records present in the database at backup time. Each row in this view
describes a particular backup operation that finished successfully.
dbfile_id SMALLINT The dbfile ID present in the dbspace during an ongoing backup operation.
dbspace_createid UNSIGNED BIGINT The transaction ID of the transaction that created the
dbspace.
dbspace_alterid UNSIGNED BIGINT Transaction ID that marked the dbspace RO. If not marked,
then the create ID.
dbfile_createid UNSIGNED BIGINT The transaction ID of the transaction that created this dbfile.
dbfile_alterid UNSIGNED BIGINT The transaction ID of the transaction that last altered the
read-write status of this dbfile.
Remarks
Related Information
Note
data_offset UNSIGNED INT Identifies the byte location of where the SAP IQ data starts,
relative to the beginning of the raw partition.
last_modified TIMESTAMP Date and time the file was last modified.
Related Information
last_modified TIMESTAMP Time at which the dbspace's read-write status was last
modified.
● Main
● Temp
● Msg
● 'T' – online
● 'F' – offline
● 'T' – on
● 'F' – off
stripe_size_kb UNSIGNED INT Number of kilobytes written to each file of the dbspace before the disk striping algorithm moves to the next dbfile.
Related Information
Presents group information from ISYSIQIDX in a readable format. Each row in the SYSIQIDX view describes
an IQ index.
Note
table_id UNSIGNED INT The table number uniquely identifies the table to which this index applies.
index_id UNSIGNED INT Each index for one particular table is assigned a unique index number.
delimited_by VARCHAR(1024) (WD indexes only) List of separators used to parse a column's string into the words to be stored in that column's WD index.
limit UNSIGNED INT (WD indexes only) Maximum word length for the WD index.
The ISYSIQINFO system table indicates the database characteristics as defined when the SAP IQ database was
created using CREATE DATABASE. It always contains only one row.
create_time TIMESTAMP NOT NULL Date and time that the database was created
update_time TIMESTAMP NOT NULL Date and time of the last update
file_format_version UNSIGNED INT NOT NULL File format number of files for this database
cat_format_version UNSIGNED INT NOT NULL Catalog format number for this database
sp_format_version UNSIGNED INT NOT NULL Stored procedure format number for this database
block_size UNSIGNED INT NOT NULL Block size specified for the database
chunk_size UNSIGNED INT NOT NULL Number of blocks per page as determined by the block size and page size specified for the database
file_format_date CHAR(10) NOT NULL Date when the file format number was last changed
multiplex_name CHAR(128) NULL Name of the multiplex that this database is a member of
● 0 – single node
● 1 – reader
● 2 – coordinator
● 3 – writer
Column Name Column Type Description
ls_id UNSIGNED BIGINT NOT NULL The ID number of the logical server.
ls_object_id UNSIGNED BIGINT NOT NULL The logical server object ID number.
ls_policy_id UNSIGNED BIGINT NOT NULL The ID number of the logical server policy.
Remarks
The ISYSIQLOGICALSERVER system table stores logical server information and associated logical server policy
information.
Primary key(ls_id)
login_policy_id UNSIGNED BIGINT NOT NULL The ID number of the login policy.
ls_id UNSIGNED BIGINT NOT NULL The ID number of the logical server.
Remarks
The ISYSIQLOGINPOLICYLSINFO system table stores the login policy logical server assignment information.
Describes all the logical server assignments from the login policies.
login_policy_id UNSIGNED BIGINT NOT NULL The ID number of the login policy.
login_policy_id UNSIGNED BIGINT NOT NULL The ID number of the login policy.
login_option_name CHAR(128) NOT NULL The name of the login policy option.
login_option_value LONG VARCHAR NOT NULL The value of the login policy option.
Remarks
The ISYSIQLSLOGINPOLICYOPTION table stores the logical server level settings for login policy option values.
Presents group information from the ISYSIQLSMEMBER table, which stores logical server membership
information.
ls_id UNSIGNED BIGINT NOT NULL The ID number of the logical server.
mpx_server_id UNSIGNED INT NOT NULL The ID number of the multiplex server.
Remarks
ISYSIQLSMEMBER stores the logical servers and their corresponding multiplex servers.
ls_id UNSIGNED BIGINT NOT NULL The ID number of the logical server.
ls_policy_id UNSIGNED BIGINT NOT NULL The ID number of the logical server policy.
ls_policy_name CHAR(128) NOT NULL UNIQUE The logical server policy name.
Primary key(ls_policy_id)
ls_policy_id UNSIGNED BIGINT NOT NULL The ID number of the logical server policy.
ls_policy_option_name CHAR(128) NOT NULL The logical server policy option name.
ls_policy_option_value LONG VARCHAR NOT NULL The logical server policy option value.
Presents a readable version of the table ISYSIQMPXSERVER. The ISYSIQMPXSERVER system table stores
membership properties and version status data for the given multiplex node.
active_version LONG BINARY NULL The list of active versions on the server (encoded).
connection_info LONG VARCHAR NULL String containing host name and port pairs for public
domain connections, delimited by semicolons.
db_path LONG VARCHAR NOT NULL Full path to the database file for the server.
private_connection_info LONG VARCHAR NULL String containing host name and port pairs for private
network connections, delimited by semicolons.
● 0 – disabled
● 1 – enabled
Primary key(server_id)
Presents a readable version of the table ISYSIQMPXSERVERAGENT, which stores agent connection definitions
for the specified multiplex server.
agent_connection_info LONG VARCHAR NOT NULL String containing host name and port pairs for SAP IQ Cockpit agent connections on each multiplex node, separated by semicolons.
agent_user_name LONG VARCHAR NOT NULL String containing user name for the SAP IQ Cockpit
agent.
agent_pwd VARBINARY(1024) NOT NULL String containing encrypted password for the SAP IQ
Cockpit agent.
Primary key(server_id)
sysiqobjects presents one row for each system table, user table, view, procedure, trigger, event, constraint,
domain (sysdomain), domain (sysusertype), column, and index. This view is owned by user DBO.
Related Information
Each row in the SYSIQPARTITIONCOLUMN view describes a column in a partition described in the
SYSIQPARTITION view in a partitioned table described in the SYSPARTITIONSCHEME view.
SYSIQPARTITIONCOLUMN only describes partitions of columns that are not stored on the dbspace of the
partition.
Presents group information from ISYSIQRVLOG in a readable format. Each row in the SYSIQRVLOG view corresponds to a log for an RLV-enabled table. The row with table_id 0 represents the server-wide commit log.
table_id UNSIGNED INT Indicates the table the log stream belongs to. NULL indicates a commit log stream.
A log entry is added for each row-level versioning (RLV) enabled table each time a merge between the RLV
store and the IQ main store begins. Log entries are updated when the merge is complete.
● STARTED
● COMPLETED
● FAILED
● AUTOMATIC
● DML
● DDL
● SHUTDOWN
● USER
● BLOCKING
● NON-BLOCKING
Presents group information from ISYSIQTAB in a readable format. Each row in the SYSIQTAB view describes
an IQ table.
Note
Related Information
Presents group information from ISYSIQTABCOL in a readable format. Each row in the SYSIQTABCOL view
describes a column in an IQ table.
Note
max_length UNSIGNED INT Indicates the maximum length allowed by the column.
cardinality ROWID The actual number of unique values (cardinality) of this column.
is_nbit CHAR(1) Indicates whether the column is NBit (T) or Flat FP (F).
Remarks
Related Information
Related Information
Related Information
Each row of the SYSIXCOL view describes a column in an index, and is provided for compatibility with older versions of the software that offered a SYSIXCOL system table.
The SYSIXCOL system table has been replaced by the ISYSIDXCOL system table and its corresponding SYSIDXCOL system view. Use the SYSIDXCOL system view instead.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSJAR system view defines a JAR file stored in the database. The underlying system table for
this view is ISYSJAR.
object_id UNSIGNED BIGINT The internal ID for the JAR file, uniquely identifying it in the database.
update_time TIMESTAMP The local time the JAR file was last updated.
update_time_utc TIMESTAMP WITH TIME ZONE The UTC time the JAR file was last updated.
Each row in the SYSJARCOMPONENT system view defines a JAR file component, which includes class files,
manifest files, and any other JAR resource. The underlying system table for this view is ISYSJARCOMPONENT.
Each row in the SYSJAVACLASS system view describes one Java class stored in the database. The underlying
system table for this view is ISYSJAVACLASS.
update_time_utc TIMESTAMP WITH TIME ZONE The UTC last update time of the class.
The ISYSLDAPSERVER system table defines a set of attributes for the LDAP server.
ldsrv_id UNSIGNED BIGINT NOT NULL A unique identifier for the LDAP server that is the primary key and is used by the login policy to refer to the LDAP server.
ldsrv_name CHAR(128) NOT NULL The name assigned to the LDAP server.
● 1 – RESET
● 2 – READY
● 3 – ACTIVE
● 4 – FAILED
● 5 – SUSPENDED
Note
A numeric value is stored in the system table; a corresponding text value appears in the system view.
Valid range:
● 1 – ON
● 0 – OFF (default)
Default value: 3
ldsrv_timeout UNSIGNED INT NOT NULL Controls the timeout value (in milliseconds) for
connections or searches.
ldsrv_last_state_change TIMESTAMP NOT NULL Indicates the time the last state change occurred. The value is stored in Coordinated Universal Time (UTC), regardless of the local time zone of the LDAP server.
ldsrv_search_url CHAR(1024) NULL The LDAP URL to be used to find the Distinguished Name (DN) for a user based on their user ID.
ldsrv_auth_url CHAR(1024) NULL The LDAP search string to be used to find the DN for a user given their user ID.
ldsrv_access_dn CHAR(1024) NULL The DN used to access the LDAP server for searches to obtain the DN for a user ID.
ldsrv_access_dn_pwd VARBINARY(1024) NULL The password for the access account. The
password is symmetrically encrypted when
stored on disk.
The SYSLOGINMAP system view contains one row for each user that can connect to the database using either
an integrated login, or Kerberos login. For that reason, access to this view is restricted. The underlying system
table for this view is ISYSLOGINMAP.
login_option_value LONG VARCHAR The value of the login policy at the time
it was created.
This view is owned by user DBO. SYSLOGINS contains one row for each valid SAP Adaptive Server Enterprise
user account.
Each row in the SYSMUTEXSEMAPHORE system view provides information about a user-defined mutex or
semaphore in the database. The underlying system table for this view is ISYSMUTEXSEMAPHORE.
You must have the SELECT ANY TABLE privilege to access this view.
Each row in the SYSMVOPTION system view describes the setting of one option value for a materialized view or
text index at the time of its creation. The name of the option can be found in the SYSMVOPTIONNAME system
view. The underlying system table for this view is ISYSMVOPTION.
option_value LONG VARCHAR The value of the option when the materialized view was created.
Each row in the SYSMVOPTIONNAME system view gives the name of one option for a materialized view or text index at the time of its creation. The value for the option can be found in the SYSMVOPTION system view. The underlying system table for this view is ISYSMVOPTIONNAME.
Each row in the SYSOBJECT system view describes a database object. The underlying system table for this
view is ISYSOBJECT.
1 (valid)
creation_time TIMESTAMP The local date and time when the object
was created.
creation_time_utc TIMESTAMP WITH TIME ZONE The UTC date and time when the object
was created.
sysobjects contains one row for each table, view, stored procedure, extended stored procedure, log, rule, default, trigger, check constraint, referential constraint, computed column, function-based index key, temporary object, and other forms of compiled objects.
It also contains one row for each partition condition ID when object type is N.
Related Information
The SYSOPTION system view contains one row for each option setting stored in the database. Each user can have their own setting for a given option. In addition, settings for the PUBLIC role define the default settings to be used for users that do not have their own setting. The underlying system table for this view is ISYSOPTION.
Each row in the SYSOPTIONS view describes one option created using the SET command. Each user can have
their own setting for each option. In addition, settings for the PUBLIC user define the default settings to be
used for users that do not have their own setting.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
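To see how user settings sit alongside the PUBLIC defaults, a query along these lines can be used. It assumes the view exposes user_name, "option", and setting columns, which should be verified against the view definition for your version.

```sql
-- Hypothetical example: show option settings per user, so that a user's
-- override sorts next to the PUBLIC default for the same option name.
SELECT user_name, "option", setting
FROM SYSOPTIONS
ORDER BY "option", user_name;
```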
The SYSOPTSTAT system view stores the cost model calibration information as computed by the ALTER
DATABASE CALIBRATE statement. The contents of this view are for internal use only and are best accessed via
the sa_get_dtt system procedure. The underlying system table for this view is ISYSOPTSTAT.
partitioned_object_id UNSIGNED BIGINT Unique number assigned to each partitioned object (table).
partition_object_id UNSIGNED BIGINT Each table partition is an object itself and is assigned a
unique number from the table object or index object.
partition_values LONG VARCHAR Contains partitioning criteria for range or list partitioning:
position UNSIGNED INT Ordinal number of the partition. For a range partition, for position 2 and above, the partition at (position-1) contains its exclusive lower bound.
Remarks
Each row in the SYSPARTITION view describes a partitioned object (table or index) in the database. The
underlying system table for this view is ISYSPARTITION.
partitioned_object_id UNSIGNED BIGINT Each partitioned object (table) is assigned a unique object
number.
column_id UNSIGNED INT The column ID identifies the table column as part of the partitioning key.
Remarks
Each row in the SYSPARTITIONKEY view describes a partitioned object (table or index) in the database.
Presents group information from the ISYSPARTITIONS system table in a readable format.
table_id UNSIGNED INT The object ID of the table to which the index corresponds.
partition_object_id UNSIGNED BIGINT Each table partition is an object itself and is assigned a
unique number from the table object or index object.
partition_values LONG VARCHAR Contains the upper bound for this range partition.
Remarks
Each row in the SYSPARTITIONS view describes a partitioned object (table or index) in the database. The
underlying system table for this view is ISYSPARTITIONS.
partitioned_object_id UNSIGNED BIGINT Each partitioned object (table) is assigned a unique number.
● 1 – for range
● 3 – for hash (2 is unused)
● NULL - no subpartitioning
● 1 – for range partitioning
● 3 – for hash partitioning (2 is unused)
Remarks
Each row in the SYSPARTITIONSCHEME view describes a partitioned object (table or index) in the database.
Each row in the SYSPHYSIDX system view defines a physical index in the database. The underlying system
table for this view is ISYSPHYSIDX.
Each row in the SYSPROCAUTH view describes a set of privileges granted on a procedure. As an alternative,
you can also use the SYSPROCPERM system view.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSPROCEDURE system view describes one procedure in the database. The underlying system
table for this view is ISYSPROCEDURE.
Each row in the SYSPROCPARM system view describes one parameter, result set column, or return value of a
procedure or function in the database. The underlying system table for this view is ISYSPROCPARM.
Remarks
The SYSPROCPARM system view is updated when a procedure or function is created or altered, including using
the ALTER PROCEDURE...RECOMPILE statement.
Additionally, SYSPROCPARM is updated whenever a checkpoint is run if the out-of-date procedure or function
meets the following conditions:
Each row in the SYSPROCPARMS view describes a parameter to a procedure in the database.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row of the SYSPROCPERM system view describes a user who has been granted EXECUTE privilege on a
procedure. The underlying system table for this view is ISYSPROCPERM.
The SYSPROCS view shows the procedure or function name, the name of its creator, and any comments recorded for the procedure or function.
The tables and columns that make up this view are provided in the SQL statement below.
Each row of the SYSPROXYTAB system view describes the remote parameters of one proxy table. The
underlying system table for this view is ISYSPROXYTAB.
srvid UNSIGNED INT The unique ID for the remote server associated with the proxy table.
Each row in the SYSPUBLICATION system view describes a publication. The underlying system table for this
view is ISYSPUBLICATION.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSREMARK system view describes a remark (or comment) for an object. The underlying
system table for this view is ISYSREMARK.
object_id UNSIGNED BIGINT The internal ID for the object that has an associated remark.
Each row in the SYSREMOTEOPTION system view describes the value of a message link parameter. The
underlying system table for this view is ISYSREMOTEOPTION.
Some columns in this view contain potentially sensitive data. The SYSREMOTEOPTION2 view provides public
access to the data in this view except for the potentially sensitive columns.
Joins together, and presents in a more readable format, the columns from SYSREMOTEOPTION and
SYSREMOTEOPTIONTYPE system views.
Values in the setting column are hidden from users that do not have the SELECT ANY TABLE system privilege.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row of the SYSREMOTEOPTIONS view describes the value of a message link parameter.
Values in the setting column are hidden from users that do not have the SELECT ANY TABLE system privilege.
The SYSREMOTEOPTION2 view provides public access to the insensitive data.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSREMOTEOPTIONTYPE system view describes one of the message link parameters. The
underlying system table for this view is ISYSREMOTEOPTIONTYPE.
The SYSREMOTETYPE system view contains information about remote message types. The underlying system
table for this view is ISYSREMOTETYPE.
Each row of the SYSREMOTETYPES view describes one remote message type, including the publisher address.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSREMOTEUSER system view describes a user ID with the REMOTE system privilege (a
subscriber), together with the status of messages that were sent to and from that user. The underlying system
table for this view is ISYSREMOTEUSER.
user_id UNSIGNED INT The user number of the user with REMOTE privilege.
log_sent UNSIGNED BIGINT The log offset for the most recently sent operation.
confirm_sent UNSIGNED BIGINT The log offset for the most recently confirmed operation from this subscriber.
time_sent_utc TIMESTAMP WITH TIME ZONE The UTC time the most recent message was sent to this subscriber.
time_received_utc TIMESTAMP WITH TIME ZONE The UTC time when the most recent message was received from this subscriber.
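Using the columns described above, the following query sketch finds subscribers whose sent operations have not yet been confirmed, by comparing the sent and confirmed log offsets:

```sql
-- Subscribers whose most recently sent operation is not yet confirmed
SELECT user_id, log_sent, confirm_sent
  FROM SYS.SYSREMOTEUSER
 WHERE log_sent > confirm_sent;
```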
Each row of the SYSREMOTEUSERS view describes a user ID with the REMOTE system privilege (a subscriber),
together with the status of messages that were sent to and from that user.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The SYSROLEGRANT system view contains one row for each grant of a system or user-defined role. The
underlying system table for this view is ISYSROLEGRANT.
grantee UNSIGNED INT ID of the user being granted the role, as per ISYSUSER.
grant_type TINYINT Describes the type of grant using three digits. The first digit indicates whether the privilege has been granted. The second digit indicates whether administration rights have been given. The third digit indicates whether system privileges are inheritable.
grant_scope TINYINT Used by SET USER and CHANGE PASSWORD to set the scope of the grant. Values can be one or more of the following:
grantor CHAR (128) The unique identifier of the grantor of the role.
The SYSROLEGRANTEXT system view contains syntax extensions pertaining to the SET USER and CHANGE
PASSWORD system privileges and is related to the SYSROLEGRANT system view.
Remarks
When you grant or revoke the SET USER or CHANGE PASSWORD privilege, either with the user-list option or
with the ANY WITH ROLES role-list option, this view is updated with the values from the extended syntax.
The SYSROLEGRANTS system view is the same as the SYSROLEGRANT system view but includes two
additional columns: the name of the role (not just the role ID) and the name of the grantee (not just user ID).
grant_id UNSIGNED INT A unique identifier for each grant statement issued.
role_id UNSIGNED INT The unique identifier for the role granted to a user (as defined in the
ISYSUSER table).
role_name CHAR(128) The name of the role corresponding to the role_id value.
grantee UNSIGNED INT The unique identifier for the user granted the role.
grantee_name CHAR(128) The name of the grantee corresponding to the grantee value.
grant_type TINYINT Identifies how the role and its underlying privileges were granted. Values:
Note
This value is applicable to all legacy, non-inheritable roles except SYS_AUTH_DBA_ROLE and SYS_AUTH_REMOVE_DBA_ROLE.
Note
This value is applicable only to the legacy, non-inheritable roles SYS_AUTH_DBA_ROLE and SYS_AUTH_REMOVE_DBA_ROLE.
grant_scope TINYINT Defines the range to which the grant applies. Values include:
● 1 – User list
● 2 – Any users granted membership in the specified roles
● 3 – All users
Note
This value is applicable to the SET USER and CHANGE PASSWORD system privileges only and can store any valid combination of these values.
grantor CHAR (128) The unique identifier of the grantor of the role.
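Because SYSROLEGRANTS resolves role and grantee IDs to names, a query sketch such as the following lists the roles granted to a given user. The user name here is a placeholder:

```sql
-- 'some_user' is a placeholder user name
SELECT role_name, grant_type, grant_scope, grantor
  FROM SYS.SYSROLEGRANTS
 WHERE grantee_name = 'some_user';
```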
Each row in the SYSSCHEDULE system view describes a time at which an event is to fire, as specified by the
SCHEDULE clause of CREATE EVENT. The underlying system table for this view is ISYSSCHEDULE.
● x01 = Sunday
● x02 = Monday
● x04 = Tuesday
● x08 = Wednesday
● x10 = Thursday
● x20 = Friday
● x40 = Saturday
● HH = hours
● NN = minutes
● SS = seconds
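The day-of-week values above form a bitmask, so individual days can be tested with a bitwise AND. A sketch, assuming the column holding the bitmask is named days_of_week:

```sql
-- 0x41 = 0x01 (Sunday) | 0x40 (Saturday): schedules that fire on a weekend day
-- days_of_week is an assumed column name
SELECT *
  FROM SYS.SYSSCHEDULE
 WHERE days_of_week & 0x41 <> 0;
```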
Each row in the SYSSERVER system view describes a remote server. The underlying system table for this view
is ISYSSERVER.
Note
Previous versions of the catalog contained a SYSSERVERS system table. That table has been renamed to
be ISYSSERVER (without an 'S'), and is the underlying table for this view.
Each row in the SYSSOURCE system view contains the source code, if applicable, for an object listed in the
SYSOBJECT system view. The underlying system table for this view is ISYSSOURCE.
Each row of the SYSSPATIALREFERENCESYSTEM system view describes an SRS defined in the database. The
underlying system table for this view is ISYSSPATIALREFERENCESYSTEM.
srs_id INTEGER The numeric identifier (SRID) for the spatial reference system.
round_earth CHAR(1) Whether the SRS type is ROUND EARTH (Y) or PLANAR (N).
axis_order CHAR(12) Describes how the database server interprets points with regard to latitude and longitude (for example, when using the ST_Lat and ST_Long methods). For non-geographic spatial reference systems, the axis order is x/y/z/m. For geographic spatial reference systems, the default axis order is long/lat/z/m; lat/long/z/m is also supported.
snap_to_grid DOUBLE Defines the size of the grid used when performing calculations.
semi_major_axis DOUBLE Distance from the center of the ellipsoid to the equator for a ROUND EARTH SRS.
semi_minor_axis DOUBLE Distance from the center of the ellipsoid to the poles for a ROUND EARTH SRS.
inv_flattening The inverse flattening used for the ellipsoid in a ROUND EARTH SRS.
organization LONG VARCHAR The name of the organization that created the coordinate system used by the spatial reference system.
organization_coordsys_id INTEGER The ID given to the coordinate system by the organization that created it.
srs_type CHAR(11) The type of SRS as defined by the SQL/MM standard. Values can be one of:
linear_unit_of_measure UNSIGNED BIGINT The linear unit of measure used by the spatial reference system.
angular_unit_of_measure UNSIGNED BIGINT The angular unit of measure used by the spatial reference system.
polygon_format LONG VARCHAR The orientation of the rings in a polygon. One of CounterClockwise, ClockWise, or EvenOdd.
storage_format LONG VARCHAR Whether the data is stored in normalized format (Internal), unnormalized format (Original), or both (Mixed).
definition LONG VARCHAR The WKT definition of the spatial reference system in the format defined by the OGC standard.
transform_definition LONG VARCHAR Transform definition settings for use when transforming data from this SRS to another.
Remarks
This view offers slightly different information than the ST_SPATIAL_REFERENCE_SYSTEMS system view.
Note
Spatial data, spatial reference systems, and spatial units of measure can be used only in the catalog store.
The SYSSQLSERVERTYPE system view contains information relating to compatibility with Adaptive Server
Enterprise. The underlying system table for this view is ISYSSQLSERVERTYPE.
Presents group information from the ISYSSUBPARTITIONKEY system table in a readable format.
partitioned_object_id UNSIGNED BIGINT Unique number assigned to each partitioned object (table or index).
column_id UNSIGNED INT Identifies which column of the table is part of the partitioning key. Together, partitioned_object_id and column_id identify one column described in the SYSTABCOL system view.
position SMALLINT Position of the column in the partitioning key. A value of 0 indicates
the 1st column in the partitioning key. A value of 1 indicates the 2nd
column in the partitioning key.
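Since partitioned_object_id and column_id together identify a row in SYSTABCOL, a join sketch like the following resolves partitioning-key columns to names. The SYSTABCOL column names (table_id, column_id, column_name) are assumptions here, and the join on table_id holds only when the partitioned object is a table:

```sql
-- SYSTABCOL column names in the join are assumptions
SELECT k.partitioned_object_id, k.position, c.column_name
  FROM SYS.SYSSUBPARTITIONKEY k
  JOIN SYS.SYSTABCOL c
    ON c.table_id = k.partitioned_object_id
   AND c.column_id = k.column_id
 ORDER BY k.partitioned_object_id, k.position;
```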
The SYSSUBPARTITIONKEY system view contains one row for each column of a partition (described in the
ISYSPARTITION view) in a partitioned table (described in the ISYSPARTITIONSCHEME view).
Each row in the SYSSUBSCRIPTION system view describes a subscription from one user ID (which must have
the REMOTE system privilege) to one publication. The underlying system table for this view is
ISYSSUBSCRIPTION.
Each row describes a subscription from one user ID (which must have the REMOTE system privilege) to one
publication.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The "option" and server_connect columns of the underlying table, ISYSSYNC, contain sensitive information
such as passwords. You must have the SELECT ANY TABLE and ACCESS USER PASSWORD system privileges
to select from this view. The SYSSYNC2 consolidated view provides public access to the same data without the
sensitive data.
progress UNSIGNED BIGINT The log offset of the last successful upload.
The SYSSYNC2 view provides public access to the data found in the SYSSYNC system view (information
related to synchronization) without exposing potentially sensitive data.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular column, use the links provided beneath the view definition.
The server_connect and option columns display three asterisks (***) if a value is present in the database and
NULL if no value is present.
The SYSSYNCPUBLICATIONDEFAULTS view provides the default synchronization settings associated with
publications involved in synchronization.
The tables and columns that make up this view are provided in the SQL statement below.
The server_connect and option columns display three asterisks (***) if a value is present in the database and
NULL if no value is present.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The server_connect and option columns display three asterisks (***) if a value is present in the database and
NULL if no value is present.
Each row in the SYSSYNCSCRIPT system view identifies a stored procedure for scripted upload. This view is
almost identical to the SYSSYNCSCRIPTS view, except that the values in this view are in their raw format.
Each row in the SYSSYNCSCRIPTS view identifies a stored procedure for scripted upload. This view is almost
identical to the SYSSYNCSCRIPT system view, except that the values are in human-readable format, as
opposed to raw data.
The SYSSYNCSUBSCRIPTIONS view contains the synchronization settings associated with synchronization
subscriptions.
The tables and columns that make up this view are provided in the SQL statement below.
The server_connect and option columns display three asterisks (***) if a value is present in the database and
NULL if no value is present.
The tables and columns that make up this view are provided in the SQL statement below.
The server_connect and option columns display three asterisks (***) if a value is present in the database and
NULL if no value is present.
Each row of the SYSTAB system view describes one table or view in the database. Additional information for
views can be found in the SYSVIEW system view. The underlying system table for this view is ISYSTAB.
creator UNSIGNED INT The user number of the owner of the table or view.
● 1 – Base table
● 2 – Materialized view
● 3 – View
● 1 – Local server
● 2 – IQ table
● 3 – Remote server
tab_page_list LONG VARBIT For internal use only. The set of pages that contain information for the table, expressed as a bitmap.
ext_page_list LONG VARBIT For internal use only. The set of pages that contain row extensions and large object (LOB) pages for the table, expressed as a bitmap.
clustered_index_id UNSIGNED INT The ID of the clustered index for the table. If none of the indexes are clustered, then this field is NULL.
● BASE – Base table
● MAT VIEW – Materialized view
● GBL TEMP – Global temporary table
● VIEW – View
last_modified_at_utc TIMESTAMP WITH TIME ZONE The UTC time at which the data in the
table was last modified. This column is
only updated at checkpoint time.
The SYSTABAUTH view contains information from the SYSTABLEPERM system view, but in a more readable
format.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The SYSTABCOL system view contains one row for each column of each table and view in the database. The
underlying system table for this view is ISYSTABCOL.
"default" LONG VARCHAR The default value for the column. This
value, if specified, is only used when an
INSERT statement does not specify a
value for the column.
The SYSTABLE view is provided for compatibility with older versions of the software that offered a SYSTABLE
system table. The SYSTABLE system table has been replaced by the ISYSTAB system table and its
corresponding SYSTAB system view; use SYSTAB instead.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Privileges granted on tables and views by the GRANT statement are stored in the SYSTABLEPERM system view.
Each row in this view corresponds to one table, one user ID granting the privilege (grantor) and one user ID
granted the privilege (grantee). The underlying system table for this view is ISYSTABLEPERM.
Remarks
There are several types of privileges that can be granted. Each privilege can have one of the following three values:
● N – No, the grantee has not been granted this privilege by the grantor.
● Y – Yes, the grantee has been given this privilege by the grantor.
● G – The grantee has been given this privilege and can grant the same privilege to another user.
Note
The grantee might have been given the privilege for the same table by another grantor. If so, this
information would be found in a different row of the SYSTABLEPERM system view.
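A query sketch that looks for table privileges granted with grant option (value G). The column name selectauth is an assumption, since the view's authority columns are not listed above:

```sql
-- selectauth is an assumed column name; N/Y/G values as described above
SELECT grantee, grantor, selectauth
  FROM SYS.SYSTABLEPERM
 WHERE selectauth = 'G';
```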
Each row in the SYSTEXTCONFIG system view describes one text configuration object, for use with the full text
search feature. The underlying system table for this view is ISYSTEXTCONFIG.
prefilter LONG VARCHAR The function and library name for an external prefilter library.
external_term_breaker LONG VARCHAR The function and library name for an external term breaker library.
Each row in the SYSTEXTIDX system view describes one text index. The underlying system table for this view is
ISYSTEXTIDX.
pending_length UNSIGNED BIGINT The total size of indexed values that will
be added to the text index at the next
refresh.
● 1 – MANUAL
● 2 – AUTO
● 3 – IMMEDIATE
last_refresh_utc TIMESTAMP WITH TIME ZONE The UTC time of the last refresh.
Each row in the SYSTEXTIDXTAB system view describes a generated table that is part of a text index. The
underlying system table for this view is ISYSTEXTIDXTAB.
Each row in the SYSTRIGGER system view describes one trigger in the database. This view also contains
triggers that are automatically created for foreign key definitions which have a referential triggered action (such
as ON DELETE CASCADE). The underlying system table for this view is ISYSTRIGGER.
object_id UNSIGNED BIGINT The object ID for the trigger in the database.
● A – INSERT, DELETE
● B – INSERT, UPDATE
● C – UPDATE COLUMNS
● D – DELETE
● E – DELETE, UPDATE
● I – INSERT
● M – UPDATE
● S – RESOLVE
foreign_key_id UNSIGNED INT The ID of the foreign key for the table referenced by foreign_table_id. The foreign_key_id value reflects the value of ISYSIDX.index_id.
● C – CASCADE
● D – SET DEFAULT
● N – SET NULL
● R – RESTRICT
source LONG VARCHAR The SQL source for the trigger. This value is stored in the ISYSSOURCE system table.
Each row in the SYSTRIGGERS view describes one trigger in the database. This view also contains triggers that
are automatically created for foreign key definitions which have a referential triggered action (such as ON
DELETE CASCADE).
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
The SYSTYPEMAP system view contains the compatibility mapping values for entries in the
SYSSQLSERVERTYPE system view. The underlying system table for this view is ISYSTYPEMAP.
systypes contains one row for each system-supplied and user-defined datatype. Domains (defined by rules)
and defaults are given, if they exist.
This view is owned by user DBO. You cannot alter the rows that describe system-supplied datatypes.
Related Information
Each row in the SYSUSER system view describes a user in the system.
Standalone roles are also stored in this view, but only the user_id, object_id, user_name, and user_type
columns are meaningful for these roles. The underlying system table for this view is ISYSUSER.
failed_login_attempts UNSIGNED INT The number of times that a user can fail
to log in before the account is locked.
last_login_time TIMESTAMP The local time that the user last logged
in.
password_creation_time_utc TIMESTAMP WITH TIME ZONE The UTC time that the password was
created for the user.
last_login_time_utc TIMESTAMP WITH TIME ZONE The UTC time that the user last logged
in.
Each row of the SYSUSERAUTH view describes a user, without exposing their user ID and password hash.
Instead, each user is identified by their user name.
You must have the SELECT ANY TABLE system privilege to access this view.
The SYSUSERAUTH view is provided for compatibility with older versions of the software. Use the
SYSROLEGRANTS consolidated view instead.
The password column displays three asterisks (***) if a value is present in the database and NULL if no value is
present.
Although the title of this view contains the word auth (for authorities), the security model is based on roles and
privileges. The data in the view is therefore compiled using role information from the tables and views
mentioned in the view definition.
The SYSUSERAUTHORITY view is provided for compatibility with older versions of the software. Use the
SYSROLEGRANTS consolidated view instead.
Each row of SYSUSERAUTHORITY system view describes an authority granted to one user ID.
Although the title of this view contains the word authority, the security model is based on roles and privileges.
The data in the view is therefore compiled using role information from the tables and views mentioned in the
view definition.
The SYSUSERAUTH view is provided for compatibility with older versions of the software.
Each row of the SYSUSERLIST view describes a user, without exposing their user_id and password. Each user is
identified by their user name.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSUSERMESSAGE system view holds a user-defined message for an error condition. The
underlying system table for this view is ISYSUSERMESSAGE.
Previous versions of the catalog contained a SYSUSERMESSAGES system table. That table has been renamed
to be ISYSUSERMESSAGE (without an 'S'), and is the underlying table for this view.
uid UNSIGNED INT The user number that defined the message.
The SYSUSEROPTIONS view contains the option settings that are in effect for each user. If a user has no
setting for an option, this view displays the public setting for the option.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
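A query sketch retrieving the effective option settings for one user. The column names user_name, "option", and setting are assumptions here; "option" is quoted because it is a keyword:

```sql
-- Column names are assumptions; "option" quoted as a reserved word
SELECT "option", setting
  FROM SYS.SYSUSEROPTIONS
 WHERE user_name = 'DBA';
```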
This view is deprecated because it only shows the authorities and permissions available in previous versions.
Change your application to use the SYSROLEGRANTS consolidated view.
You must have the SELECT ANY TABLE system privilege to access this view.
The password column displays three asterisks (***) if a value is present in the database and NULL if no value is
present. To see actual password information, see the SYSUSERPASSWORD system view.
The tables and columns that make up this view are provided in the SQL statement below.
Each row of the SYSUSERPERMS view describes one user ID. However, password information is not included.
All users are allowed to read from this view.
This view is deprecated because it only shows the authorities and permissions available in previous versions.
Change your application to use the SYSROLEGRANTS consolidated view.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
SYSUSERPERM.scheduleauth, SYSUSERPERM.user_group, SYSUSERPERM.publishauth,
SYSUSERPERM.remotedbaauth, SYSUSERPERM.remarks
from SYS.SYSUSERPERM
sysusers contains one row for each user allowed in the database, and one row for each group or role.
Related Information
Each row in the SYSUSERTYPE system view holds a description of a user-defined data type. The underlying
system table for this view is ISYSUSERTYPE.
"default" LONG VARCHAR The default value for the data type.
"check" LONG VARCHAR The CHECK condition for the data type.
Each row in the SYSVIEW system view describes a view in the database.
You can find additional information about views in the SYSTAB system view. The underlying system table for
this view is ISYSVIEW.
You can also use the sa_materialized_view_info system procedure for a readable format of the
information for materialized views. Materialized views are only supported for SAP SQL Anywhere tables in the
IQ catalog store.
mv_last_refreshed_at_utc TIMESTAMP WITH TIME ZONE Indicates the UTC date and time that the materialized view was last refreshed.
mv_known_stale_at_utc TIMESTAMP WITH TIME ZONE The UTC time at which the materialized view became stale. This value corresponds to the time at which one of the underlying base tables was detected as having changed. A value of 0 indicates that the view is either fresh, or that it has become stale but the database server has not marked it as such because the view has not been used since it became stale. Use the sa_materialized_view_info system procedure to determine the status of a materialized view. This column contains 0 when mv_last_refreshed_at is 0 and NULL when mv_last_refreshed_at is NULL.
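As noted above, the sa_materialized_view_info system procedure returns materialized view status in a readable form. A minimal call looks like this; the result set columns are not described here:

```sql
-- Returns status information for materialized views
CALL sa_materialized_view_info();
```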
Each row of the SYSVIEWS view describes one view, including its view definition.
The tables and columns that make up this view are provided in the SQL statement below. To learn more about a
particular table or column, use the links provided beneath the view definition.
Each row in the SYSWEBSERVICE system view holds a description of a web service. The underlying system
table for this view is ISYSWEBSERVICE.
In SAP ASE, there is a single master database containing a set of system tables holding information that
applies to all databases on the server. Many databases may exist on the server, and each has
additional system tables associated with it.
In SAP IQ, each database exists independently, and contains its own system tables. There is no master
database that contains system information on a collection of databases. Each server may run several
databases at a time, dynamically loading and unloading each database as needed.
The SAP ASE and SAP IQ system catalogs are different. The SAP ASE system tables and views are owned by
the special user dbo, and exist partly in the master database, partly in the sybsecurity database, and partly
in each individual database; the SAP IQ system tables and views are owned by the special user SYS and exist
separately in each database.
To assist in preparing compatible applications, SAP IQ provides a set of views owned by the special user dbo,
which correspond to the SAP ASE system tables and views. Where architectural differences make the contents
of a particular SAP ASE table or view meaningless in an SAP IQ context, the view is empty, containing only the
column names and data types.
These topics list the SAP ASE system tables and their implementation in the SAP IQ system catalog. The owner
of all tables is dbo in each DBMS.
In this section:
Related Information
Not all SAP Adaptive Server Enterprise system tables are implemented in the SAP IQ system catalog.
● syscolumns – One row for each column in a table or view, and for each parameter in a procedure. In SAP IQ, use the owner name dbo when querying, i.e. dbo.syscolumns. (Data: Yes. Supported by SAP IQ: Yes.)
● syscomments – One or more rows for each view, rule, default, and procedure, giving the SQL definition statement. (Data: Yes. Supported by SAP IQ: Yes.)
● sysconstraints – One row for each referential and check constraint associated with a table or column. (Data: No. Supported by SAP IQ: No.)
● sysdepends – One row for each procedure, view, or table that is referenced by a procedure or view. (Data: No. Supported by SAP IQ: No.)
● sysindexes – One row for each clustered or nonclustered index, one row for each table with no indexes, and an additional row for each table containing text or image data. In SAP IQ, use the owner name dbo when querying, i.e. dbo.sysindexes. (Data: Yes. Supported by SAP IQ: Yes.)
● sysiqobjects – One row for each system table, user table, view, procedure, trigger, event, constraint, domain (sysdomain), domain (sysusertype), column, and index. (Data: Yes. Supported by SAP IQ: Yes.)
● syskeys – One row for each primary, foreign, or common key; set by user (not maintained by SAP ASE). (Data: No. Supported by SAP IQ: No.)
● sysobjects – One row for each table, view, procedure, rule, default, log, and (in tempdb only) temporary object. (Data: Contains compatible data only. Supported by SAP IQ: Yes.)
● sysprocedures – One row for each view, rule, default, and procedure, giving the internal definition. (Data: No. Supported by SAP IQ: No.)
● sysreferences – One row for each referential integrity constraint declared on a table or column. (Data: No. Supported by SAP IQ: No.)
● syssegments – One row for each segment (named collection of disk pieces). (Data: No. Supported by SAP IQ: No.)
● systhresholds – One row for each threshold defined for the database. (Data: No. Supported by SAP IQ: No.)
● systypes – One row for each system-supplied and user-defined data type. (Data: Yes. Supported by SAP IQ: Yes.)
● sysusers – One row for each user allowed in the database. (Data: Yes. Supported by SAP IQ: Yes.)
Related Information
Not all SAP Adaptive Server Enterprise master database tables are implemented in the SAP IQ system catalog.
● sysconfigures – One row for each configuration parameter that can be set by a user. (Data: No. Supported by SAP IQ: No.)
● sysdevices – One row for each tape dump device, disk dump device, disk for databases, and disk partition for databases. (Data: No. Supported by SAP IQ: No.)
● syslanguages – One row for each language (except U.S. English) known to the server. (Data: No. Supported by SAP IQ: No.)
● sysloginroles – One row for each server login that possesses a system-defined role. (Data: No. Supported by SAP IQ: No.)
● syslogins – One row for each valid user account. (Data: Yes. Supported by SAP IQ: Yes.)
No SAP Adaptive Server Enterprise sybsecurity database tables are implemented in the SAP IQ system
catalog.
Descriptions of the SQL statements available in SAP IQ, including some that can be used only from Embedded
SQL or Interactive SQL.
In this section:
Language elements that are found in the syntax of many SQL statements.
42
-4.038
.001
3.4e10
1e-10
Related Information
Keywords
All SQL keywords appear in UPPERCASE; however, SQL keywords are case-insensitive, so you can type
keywords in any case. For example, SELECT is the same as Select, which is the same as select.
Placeholders
Items that must be replaced with appropriate identifiers or expressions are shown in <italics>.
Continuation
Lines beginning with an ellipsis ( … ) are a continuation from the previous line.
Optional portions
This example indicates that the <savepoint-name> is optional. Do not type the square brackets.
Repeating items
The example indicates that you can specify <column-name> more than once, separated by commas. Do
not type the square brackets.
Alternatives
When one option must be chosen, the alternatives are enclosed in curly braces. For example:
[ QUOTES { ON | OFF } ]
The example indicates that if you choose the QUOTES option, you must provide one of ON or OFF. Do not
type the braces.
One or more options
If you choose more than one, separate your choices by commas. For example:
If two sets of brackets are used, the statement can be used in both environments. For example, [ESQL] [SP]
means a statement can be used either in Embedded SQL or in stored procedures.
In this section:
Syntax
Parameters
Lets you specify the number of variables within the descriptor area. The default size is 1.
Remarks
You must declare the following in your C code prior to using this statement:
You must still call fill_sqlda to allocate space for the actual data items before doing a fetch or any
statement that accesses the data within a descriptor area.
Privileges
None
Standards
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
EXEC SQL INCLUDE SQLCA;
#include <sqldef.h>
EXEC SQL BEGIN DECLARE SECTION;
int x;
short type;
int numcols;
char string[100];
a_sql_statement_number stmt = 0;
EXEC SQL END DECLARE SECTION;
int main(int argc, char * argv[])
{
struct sqlda * sqlda1;
if( !db_init( &sqlca ) ) {
return 1;
}
db_string_connect(&sqlca, "UID=dba;PWD=<password>;DBF=d:\\IQ-16_1\\sample.db");
EXEC SQL ALLOCATE DESCRIPTOR sqlda1 WITH MAX 25;
EXEC SQL PREPARE :stmt FROM
'select * from Employees';
EXEC SQL DECLARE curs CURSOR FOR :stmt;
EXEC SQL OPEN curs;
EXEC SQL DESCRIBE :stmt into sqlda1;
EXEC SQL GET DESCRIPTOR sqlda1 :numcols=COUNT;
// how many columns?
if( numcols > 25 ) {
// reallocate if necessary
EXEC SQL DEALLOCATE DESCRIPTOR sqlda1;
EXEC SQL ALLOCATE DESCRIPTOR sqlda1
WITH MAX :numcols;
}
type = DT_STRING; // change the type to string
EXEC SQL SET DESCRIPTOR sqlda1 VALUE 2 TYPE = :type;
fill_sqlda( sqlda1 ); // allocate space for the variables
EXEC SQL FETCH ABSOLUTE 1 curs USING DESCRIPTOR sqlda1;
EXEC SQL GET DESCRIPTOR sqlda1 VALUE 2 :string = DATA;
printf("name = %s", string );
EXEC SQL DEALLOCATE DESCRIPTOR sqlda1;
EXEC SQL CLOSE curs;
EXEC SQL DROP STATEMENT :stmt;
db_string_disconnect( &sqlca, "" );
db_fini( &sqlca );
return 0;
}
Related Information
Syntax
<alter-options> ::=
{PORT <portnum>
| USER <username> IDENTIFIED BY PASSWORD <agentpwd>, ... }
Parameters
alter-options
Specifies the port, user, and password for an SAP IQ agent.
Remarks
The SYS.ISYSIQMPXSERVERAGENT system table stores the agent connection definitions for the server.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
The following example alters the agent for server mpxdemo_svr2 by changing the password and port number
for user smith:
ALTER AGENT FOR MULTIPLEX SERVER mpxdemo_svr2 USER smith IDENTIFIED BY smith_pwd
PORT 1112
Upgrades a database created with a previous version of the software, adds or removes jConnect for JDBC
support, or defines management of system procedure execution. Run this statement from Interactive SQL
(DBISQL).
Syntax
Parameters
PROCEDURE ON
Drops and re-creates all dbo- and sys-owned procedures in the database. When executed via SQL and
passed to the connection as a batch command, the existing connection terminates and the next statement
in the batch is not executed. You must re-establish the connection after the command is executed and
execute any further SQL in the batch.
JCONNECT { ON | OFF }
Specify ON to allow the SAP IQ jConnect JDBC driver to access system catalog information. This installs
jConnect system tables and procedures. To exclude the jConnect system objects, specify OFF. You can still
use JDBC, as long as you do not access system catalog information. The default is to include jConnect
support (JCONNECT ON).
RESTART { ON | OFF }
When you specify ON (default) and the AutoStop connection parameter is set to NO, the database restarts
after it is upgraded. Otherwise, the database is stopped after an upgrade.
SYSTEM PROCEDURE AS DEFINER { ON | OFF }
Defines whether a privileged system procedure runs with the privileges of the invoker (the person
executing the procedure) or the definer (the owner of the procedure):
● OFF – all privileged system procedures execute with the privileges of the invoker. Use
sp_proc_priv() to identify the system privileges required to run a system procedure.
● ON (default), or not specified:
○ When upgrading a pre-16.0 database – pre-16.0 privileged system procedures execute with the
privileges of the definer and 16.0 or later privileged system procedures execute with the privileges
of the invoker.
Note
Changing the execution model after upgrade may result in loss of functionality on custom stored
procedures and applications that explicitly grant EXECUTE privilege on system procedures. It may also
impact the ability to run system procedures. See System Procedures [page 572].
Remarks
The ALTER DATABASE statement upgrades databases created with earlier versions of the software. This
applies to maintenance releases as well as major releases.
You can also use ALTER DATABASE UPGRADE simply to add jConnect features, if the database was created
with the current version of the software.
Note
● See the SAP IQ Installation and Update Guide for backup recommendations before you upgrade.
● Be sure to start the server in a way that restricts user connections before you run ALTER DATABASE
UPGRADE. For instructions and other upgrade caveats, see the SAP IQ Installation and Update Guide for
your platform.
● Use the iqunload utility to upgrade databases created in versions earlier than 15.0. See the SAP IQ
Installation and Update Guide for your platform.
Note
For parameters that accept variable names, an error is returned if one of the following conditions is true:
Privileges
Requires the ALTER DATABASE system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
Automatic commit
Standards
Examples
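The example statements are not shown above; the following is a hedged sketch of the documented options
(run from Interactive SQL):

```sql
-- Upgrade a database created with an earlier version of the software
ALTER DATABASE UPGRADE;

-- Upgrade and exclude the jConnect system tables and procedures
ALTER DATABASE UPGRADE JCONNECT OFF;

-- Upgrade and force all privileged system procedures to run with invoker privileges
ALTER DATABASE UPGRADE SYSTEM PROCEDURE AS DEFINER OFF;
```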
Related Information
Changes the read/write mode, changes the size, or extends an existing dbspace.
Syntax
<iq-file-opts> ::=
[ [ SIZE ] <file-size> [ KB | MB | GB | TB ] ]
[ RESERVE <reserve-size> [ KB | MB | GB | TB ] ]
Parameters
ADD new-file-spec
Adds one or more files to the specified dbspace. The dbfile name and the physical file path are required for
each file and must be unique. You can add files to IQ main, IQ shared temporary, IQ temporary, or cache
dbspaces. You may add a file to a read-only dbspace, but the dbspace remains read-only. You can add files
to multiplex shared temporary dbspaces only in read-only mode (the default for ADD FILE).
A catalog dbspace may contain only one file, so ADD FILE may not be used on catalog dbspaces.
● An RLV dbspace – use ADD FILE on SAP IQ servers only. You cannot add a file to a multiplex RLV
dbspace.
● A cache dbspace – use ADD FILE on multiplex or SAP IQ servers.
When used in the ALTER FILE clause, extends the size of the file in units of pages, kilobytes (KB),
megabytes (MB), gigabytes (GB), or terabytes (TB). The default is MB. You can ADD only if the free list (an
allocation map) has sufficient room and if the dbspace has sufficient reserved space.
DROP FILE logical-file-name
Removes the specified file from a dbspace. The file must be empty. You cannot drop the last file from the
specified dbspace. Instead use DROP DBSPACE if the dbspace contains only one file.
RENAME TO newname
When used with the DROP FILE clause, renames the pathname of the dbspace that contains a single file. It
is semantically equivalent to the RENAME PATH clause. An error is returned if the dbspace contains more
than one file. You cannot rename IQ_SYSTEM_MAIN, IQ_SYSTEM_MSG, IQ_SYSTEM_TEMP,
IQ_SHARED_TEMP, or SYSTEM.
When used with the ALTER FILE clause, renames the specified file's logical name to a new name. The new
name must be unique in the database.
READONLY
When used with the ALTER DBSPACE clause, changes any dbspace except IQ_SYSTEM_MAIN, IQ_SYSTEM_TEMP,
IQ_SHARED_TEMP, or SYSTEM to read-only.
When used with the ALTER FILE clause, changes the specified file to read-only. The file must be associated
with an IQ main dbspace. You cannot change files in IQ_SYSTEM_MSG, IQ_SHARED_TEMP, and SYSTEM to
read-only. Disallows DML modifications to any object currently assigned to the dbspace. Can only be used
for the cache dbspace, and dbspaces in the IQ main store.
When used with the ALTER FILE clause, changes the specified file to READONLY status.
READWRITE
When used with the ALTER FILE clause, changes the specified cache dbspace, IQ main, or temporary store
dbfile to read-write. The file must be associated with a cache dbspace, IQ main, or temporary dbspace.
ONLINE
Puts an offline dbspace and all associated files online if both the online value of the file's associated
dbspace and the online value of the file in SYS.ISYSIQDBFILE are true. Can only be used for dbspaces in
the cache dbspace and IQ main store.
OFFLINE
Puts an online read-only dbspace and all associated files offline. (Returns an error if the dbspace is
read-write, already offline, or not in the cache dbspace or IQ main store.) Can only be used for dbspaces in
the cache dbspace or IQ main store.
STRIPING
Changes the disk striping on the dbspace as specified. When disk striping is set ON, data is allocated from
each file within the dbspace in a round-robin fashion. For example, the first database page written goes to
the first file, the second page written goes to the next file within the given dbspace, and so on. Read-only
dbspaces are skipped.
STRIPESIZEKB size-in-KB
Specifies the number of kilobytes (KB) to write to each file before the disk striping algorithm moves to the
next stripe for the specified dbspace.
FORCE READWRITE
When used with the ALTER FILE clause, changes the status of the specified shared temporary store dbfile
to read-write, although there may be known file status problems on secondary nodes. The file may be
associated with an IQ main, shared temporary, or temporary dbspace, but because new dbfiles in
IQ_SYSTEM_MAIN and user main are created read-write, this clause only affects shared temporary
dbspaces.
SIZE
Specifies the new size of the file in units of kilobytes (KB), megabytes (MB), gigabytes (GB), or terabytes
(TB). The default is megabytes. You can increase the size of the dbspace only if the free list (an allocation
map) has sufficient room and if the dbspace has sufficient reserved space. You can decrease the size of the
dbspace only if the portion to be truncated is not in use.
RENAME PATH
When used with the ALTER FILE clause, renames the file pathname associated with the specified file. This
clause merely associates the file with the new file path instead of the old path. The clause does not actually
change the operating system file name. You must change the file name through your operating system.
The dbspace must be OFFLINE to rename the file path. The new path is used when the dbspace is altered
ONLINE or when the database is restarted.
Note
The renamed file path must be on the same server. If you rename the file path to a location on another
server, you will not be able to alter the dbspace ONLINE.
Enclose the physical file path to the dbfile in single quotation marks.
Remarks
ALTER DBSPACE changes the read-write mode, changes the online/offline state, alters the file size, renames
the dbspace name, file logical name or file path, or sets the dbspace striping parameters. For details about
existing dbspaces, run the sp_iqdbspace, sp_iqdbspaceinfo, sp_iqfile, sp_iqdbspaceobjectinfo, and
sp_iqobjectinfo procedures. Dbspace and dbfile names are always case-insensitive.
The physical file paths are case-sensitive if the database is CASE RESPECT and the operating system supports
case-sensitive file names. Otherwise, the file paths are case-insensitive.
You may optionally delimit dbspace and dbfile names with double quotation marks.
In Windows, if you specify a path, any backslash characters (\) must be doubled if they are followed by an n or
an x. This prevents them being interpreted as a newline character (\n) or as a hexadecimal number (\x),
according to the rules for strings in SQL. It is safer to always double the backslash.
Privileges
Requires the MANAGE ANY DBSPACE system privilege. See GRANT System Privilege Statement [page 1511]
for assistance with granting privileges.
Side Effects
● Automatic commit
● Automatic checkpoint
● A mode change to READONLY causes immediate relocation of the internal database structures on the
dbspace to one of the read-write dbspaces.
Standards
Examples
● The following example adds 500 MB to the dbspace DspHist by adding the file FileHist3 of size 500
MB:
● On a UNIX system, the following example adds two 500 MB files to the dbspace DspHist:
● The following example increases the size of the dbspace IQ_SYSTEM_TEMP by 2 GB:
● The following example removes two files from dbspace DspHist (both files must be empty):
● The following example increases the size of the dbspace IQ_SYSTEM_MAIN by 1000 pages. (ADD clause
defaults to pages):
● The following example removes dbfile iqdas2 from the cache dbspace myDAS:
● The following example makes the myDAS cache dbspace dbfile iqdas2 read-only:
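The statements for the examples described above are not shown; hedged sketches of three of them follow
(the file path is a placeholder, and the dbfile and dbspace names are taken from the descriptions):

```sql
-- Add a 500 MB file FileHist3 to the dbspace DspHist
ALTER DBSPACE DspHist ADD FILE FileHist3 '/history/data/file3.iq' SIZE 500 MB;

-- Remove the empty dbfile iqdas2 from the cache dbspace myDAS
ALTER DBSPACE myDAS DROP FILE iqdas2;

-- Make the myDAS cache dbspace dbfile iqdas2 read-only
ALTER DBSPACE myDAS ALTER FILE iqdas2 READONLY;
```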
Related Information
Syntax
Parameters
user-type
Remarks
The ALTER DOMAIN statement updates the name of the user-defined domain or data type in the
SYSUSERTYPE system table.
Re-create any procedures, views, or events that reference the user-defined domain or data type, so they do
not continue to reference the former name.
Privileges
If you are the database user who created the domain, no further privileges are required. If you are not the
creator, you require one of the following privileges:
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
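No example survives in this section; a minimal hedged sketch follows, assuming a user-defined domain
named address_type (both names are placeholders):

```sql
-- Rename the user-defined domain address_type to street_address_type
ALTER DOMAIN address_type RENAME street_address_type;
```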
Related Information
Changes the definition of an event or its associated handler for automating predefined actions. Also alters the
definition of scheduled actions.
Syntax
<event-type> ::=
BackupEnd
| "Connect"
| ConnectFailed
| DatabaseStart
| DBDiskSpace
| "Disconnect"
| GlobalAutoincrement
| GrowDB
| GrowLog
| GrowTemp
| IQMainDBSpaceFree
| IQTempDBSpaceFree
| LogDiskSpace
| "RAISERROR"
| ServerIdle
| TempDiskSpace
<trigger-condition> ::=
event_condition( <condition-name> )
{ =
| <
| >
| !=
| <=
| >= } <value>
Parameters
DELETE TYPE
Changes the definition of a schedule. Only one schedule can be altered in any one ALTER EVENT
statement.
WHERE
Determines the condition under which an event is fired. The WHERE NULL option deletes a condition. You
can specify a variable name for the event_condition value.
Note
Remarks
ALTER EVENT lets you alter an event definition created with CREATE EVENT. Possible uses include:
When you alter an event using ALTER EVENT, specify the event name and, optionally, the schedule name.
Each event has a unique event ID. Use the event_id columns of SYSEVENT and SYSSCHEDULE to match the
event to the associated schedule.
Note
For required parameters that accept variable names, an error is returned if one of the following conditions
is true:
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
Examples
● The following example lists event names by querying the system table SYSEVENT:
● The following example lists schedule names by querying the system table SYSSCHEDULE:
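The queries described above can be sketched as follows (the column names assume the SYSEVENT and
SYSSCHEDULE catalog layouts and should be verified against your catalog):

```sql
-- List event names
SELECT event_id, event_name FROM SYS.SYSEVENT;

-- List schedule names and the events they belong to
SELECT event_id, sched_name FROM SYS.SYSSCHEDULE;
```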
Related Information
Modifies an existing function. Include the entire modified function in the ALTER FUNCTION statement.
Syntax
Syntax 1
Parameters
SET HIDDEN
Scrambles the definition of the associated function and causes it to become unreadable. The function can
be unloaded and reloaded into other databases.
Caution
The SET HIDDEN clause setting is irreversible. If you need the original source again, you must maintain
it outside the database.
RECOMPILE
Recompiles a user-defined function. When you recompile a function, the definition stored in the catalog is
re-parsed and the syntax is verified. The preserved source for a function is not changed by recompiling.
When you recompile a function, the definitions scrambled by the SET HIDDEN clause remain scrambled
and unreadable.
Remarks
Syntax 1
Syntax 1 is identical in syntax to the CREATE FUNCTION statement except for the first word. Either version of
the CREATE FUNCTION statement can be altered. Existing permissions on the function are maintained and do
not have to be reassigned. If a DROP FUNCTION and CREATE FUNCTION were carried out, execute permissions
must be reassigned.
Note
For required parameters that accept variable names, an error is returned if one of the following conditions
is true:
Privileges
The privilege required varies by function type. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Alter an external C/C++ scalar or aggregate, or external Java function – Requires the CREATE EXTERNAL
REFERENCE system privilege. For external C/C++ scalar or aggregate, or external Java functions owned by
others, you also require one of:
Side Effects
Automatic commit
Standards
Examples
The following example creates and then alters a function using a variable in the NAMESPACE clause. The
following statement creates a function named FtoC that uses a variable in the NAMESPACE clause:
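A simplified hedged sketch of the pattern (the NAMESPACE clause and its variable are omitted here; the
Fahrenheit-to-Celsius body is illustrative):

```sql
CREATE FUNCTION FtoC( temperature NUMERIC(10,2) )
RETURNS NUMERIC(10,2)
BEGIN
    RETURN ( temperature - 32 ) * 5.0 / 9.0;
END;

-- Replace the function in place; existing permissions on it are kept
ALTER FUNCTION FtoC( temperature NUMERIC(10,2) )
RETURNS NUMERIC(10,2)
BEGIN
    RETURN ROUND( ( temperature - 32 ) * 5.0 / 9.0, 1 );
END;
```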
Related Information
Renames indexes in base or global temporary tables, foreign key role names of indexes and foreign keys
explicitly created by a user, or changes the clustered nature of an index on a catalog store table. You cannot
rename indexes created to enforce key constraints.
Syntax
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
ON [owner.]table-name
Specifies the name of the table that contains the index or foreign key to rename.
RENAME TO | AS new-name
Renames the specified index or foreign key role name to new-name.
MOVE TO dbspace-name
Moves the specified index, unique constraint, foreign key, or primary key to the specified dbspace. For a
unique constraint or foreign key, you must specify its unique index name.
cluster-clause
Specifies whether the index should be changed to CLUSTERED or NONCLUSTERED. Applies to catalog
store tables only and only one index on a table can be clustered.
Remarks
(back to top)
You must have CREATE privilege on the new dbspace and be the table owner or have the MANAGE ANY
DBSPACE system privilege.
Note
Attempts to alter an index in a local temporary table return the error index not found. Attempts to alter
a nonuser-created index, such as a default index (FP), return the error Cannot alter index. Only
indexes in base tables or global temporary tables with an owner type of USER can
be altered.
Privileges
(back to top)
The privilege required varies by clause. See GRANT System Privilege Statement [page 1511] or GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
Side Effects
(back to top)
Automatic commit. Clears the Results tab in the Results pane in Interactive SQL. Closes all cursors for the
current connection.
Standards
(back to top)
Examples
(back to top)
● The following example moves the primary key, HG for c5, from dbspace Dsp4 to Dsp8:
● The following example renames the index COL1_HG_OLD in the table jal.mytable to COL1_HG_NEW:
● The following example renames the foreign key role name ky_dept_id in table dba.Employees to
emp_dept_id:
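The rename examples above can be sketched as follows (a hedged reconstruction of the statements):

```sql
-- Rename index COL1_HG_OLD in table jal.mytable to COL1_HG_NEW
ALTER INDEX COL1_HG_OLD ON jal.mytable RENAME TO COL1_HG_NEW;

-- Rename foreign key role name ky_dept_id in table dba.Employees to emp_dept_id
ALTER INDEX FOREIGN KEY ky_dept_id ON dba.Employees RENAME TO emp_dept_id;
```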
Related Information
Any changes to an LDAP server configuration object are applied on subsequent connections. Any connection
already started when the change is applied does not immediately reflect the change.
Syntax
<ldapua-server-attribs> ::=
SEARCH DN URL { '<URL_string>' | NULL }
| ACCESS ACCOUNT { '<DN_string>' | NULL }
| IDENTIFIED BY { '<password>' | NULL }
| IDENTIFIED BY ENCRYPTED { <encrypted-password> | NULL }
| AUTHENTICATION URL { '<URL_string>' | NULL }
| CONNECTION TIMEOUT <timeout_value>
| CONNECTION RETRIES <retry_value>
| TLS { ON | OFF }
Parameters
SEARCH DN URL { 'URL_string' | NULL }
Identifies the host (by name or by IP address), port number, and the search to be performed for the DN
lookup for a given user ID. This value is validated for correct LDAP URL syntax before it is stored in the
ISYSLDAPSERVER system table. The maximum size for this string is 1024 bytes.
ACCESS ACCOUNT { 'DN_string' | NULL }
User created in the LDAP server for use by SAP IQ, not a user within SAP IQ. The distinguished name (DN)
for this user is used to connect to the LDAP server. This user has permissions within the LDAP server to
search for DNs by user ID in the locations specified by the SEARCH DN URL. The maximum size for this
string is 1024 bytes.
IDENTIFIED BY { 'password' | NULL }
Provides the password associated with the ACCESS ACCOUNT user. The password is stored using
symmetric encryption on disk. Use the value NULL to clear the password and set it to none. The maximum
size of a clear text password is 255 bytes.
IDENTIFIED BY ENCRYPTED { encrypted-password | NULL }
Configures the password associated with the ACCESS ACCOUNT distinguished name in an encrypted
format. The binary value is the encrypted password and is stored on disk as is. Use the value NULL to clear
the password and set it to none. The maximum size of the binary is 289 bytes. The encrypted key should
be a valid varbinary value. Do not enclose the encrypted key in quotation marks.
AUTHENTICATION URL { 'URL_string' | NULL }
Identifies the host (by name or IP address) and the port number of the LDAP server to use for
authentication of the user. This is the value defined for URL_string and is validated for correct LDAP URL
syntax before it is stored in the ISYSLDAPSERVER system table. The DN of the user obtained from a prior
DN search is used to authenticate the user.
CONNECTION TIMEOUT timeout_value
Specifies the connection timeout from SAP IQ to the LDAP server for both DN searches and
authentication. This value is in milliseconds, with a default value of 10 seconds.
CONNECTION RETRIES retry_value
Specifies the number of retries on connections from SAP IQ to the LDAP server for both DN searches and
authentication. The valid range of values is 1– 60, with a default value of 3.
TLS { ON | OFF }
Defines whether the TLS or Secure LDAP protocol is used for connections to the LDAP server for both DN
searches and authentication. When set to ON, the TLS protocol is used and the URL begins with
"ldap://". When set to OFF (or not specified), the Secure LDAP protocol is used and the URL begins with
"ldaps://". When using the TLS protocol, specify the database security option
TRUSTED_CERTIFICATES_FILE with a file name containing the certificate of the Certificate Authority (CA)
that signed the certificate used by the LDAP server.
WITH ACTIVATE
Activates the LDAP server configuration object for immediate use upon creation. This permits the
definition and activation of LDAP User Authentication in one statement. The LDAP server configuration
object state changes to READY when WITH ACTIVATE is used.
Remarks
In addition to resetting LDAP server configuration object values for attributes, the ALTER LDAP SERVER
statement allows an administrator to make manual adjustments to a server's state and behavior by putting the
LDAP server configuration object in maintenance mode and returning it to service from maintenance mode.
Privileges
Requires the MANAGE ANY LDAP SERVER system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
Standards
Examples
● The following example suspends the LDAP server configuration object named apps_primary:
● The following example changes the LDAP server configuration object named apps_primary to use a
different URL for authentication on host fairfax, sets the port number to 1066, sets the number of
connection retries to 10, and finally activates the LDAP server configuration object:
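Hedged sketches of the two examples (host, port, and retry values as described above; exact clause order
may vary):

```sql
-- Suspend the LDAP server configuration object apps_primary
ALTER LDAP SERVER apps_primary SUSPEND;

-- Point authentication at host fairfax, port 1066, set retries, and activate
ALTER LDAP SERVER apps_primary
    AUTHENTICATION URL 'ldap://fairfax:1066/'
    CONNECTION RETRIES 10
    WITH ACTIVATE;
```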
Related Information
Modifies configuration for the existing user-defined logical server in the database. This statement enforces
consistent shared system temporary store settings across physical nodes shared by logical servers.
Syntax
<alter-ls-clause> ::=
{ ADD MEMBERSHIP '(' { <ls-member>, ... } ')'
| DROP MEMBERSHIP '(' { <ls-member>, ... } ')'
| POLICY <policy-name> }
Parameters
logical-server-name
The name of the user-defined logical server being altered.
WITH STOP SERVER
Automatically shuts down all servers in the logical server when the TEMP_DATA_IN_SHARED_TEMP
database option is changed directly or indirectly.
The SYS.ISYSIQLSMEMBER system table stores definitions for the logical server memberships.
A member node that is added to or dropped from a logical server starts or stops accepting logical server
connections only after the TLV log corresponding to ALTER LOGICAL SERVER is played on that node. Existing
connections of a logical server continue to run on a node when that node is dropped from the logical server,
however, distributed processing is stopped for these connections.
An error is returned if any of the following conditions is true:
● Any ls-member specified with the ADD MEMBERSHIP clause is already a member of the logical server.
● Any ls-member specified with the DROP MEMBERSHIP clause is not an existing member of the logical
server.
● A logical server membership change causes a node to belong to multiple logical servers assigned to a
single login policy. Logical server membership in a login policy cannot overlap.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
● The following example alters a user-defined logical server by adding multiplex nodes n1 and n2 to logical
server ls1:
● The following example adds the logical membership of COORDINATOR and drops the named membership of
the current coordinator node n1 from logical server ls1:
● The following example changes the logical server policy for logical server ls2 to policy lsp1:
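The statements for the examples above can be sketched as follows (a hedged reconstruction; the second
pair is written as two single-clause statements):

```sql
-- Add multiplex nodes n1 and n2 to logical server ls1
ALTER LOGICAL SERVER ls1 ADD MEMBERSHIP ( n1, n2 );

-- Add the logical COORDINATOR membership, then drop the named membership n1
ALTER LOGICAL SERVER ls1 ADD MEMBERSHIP ( COORDINATOR );
ALTER LOGICAL SERVER ls1 DROP MEMBERSHIP ( n1 );

-- Change the logical server policy for ls2 to lsp1
ALTER LOGICAL SERVER ls2 POLICY lsp1;
```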
Related Information
Syntax
Syntax 1
<ls-assignment-list> ::=
{ { <ls-name>, ...}
| ALL
| COORDINATOR
| SERVER
| NONE
| DEFAULT }
<ls-override-list> ::=
{ <ls-name>, … }
<ls-name> ::=
{ OPEN | <user-defined-ls-name> }
Syntax 2
<policy-option> ::=
<policy-option-name> = <policy-option-value>
<policy-option-name> ::=
AUTO_UNLOCK_TIME
| CHANGE_PASSWORD_DUAL_CONTROL
| DEFAULT_LOGICAL_SERVER
| LOCKED
| MAX_CONNECTIONS
| MAX_DAYS_SINCE_LOGIN
| MAX_FAILED_LOGIN_ATTEMPTS
| MAX_NON_DBA_CONNECTIONS
| PAM_FAILOVER_TO_STD
| PAM_SERVICENAME
| PASSWORD_EXPIRY_ON_NEXT_LOGIN
| PASSWORD_GRACE_TIME
| PASSWORD_LIFE_TIME
| ROOT_AUTO_UNLOCK_TIME
| LDAP_PRIMARY_SERVER
| LDAP_SECONDARY_SERVER
| LDAP_AUTO_FAILBACK_PERIOD
| LDAP_FAILOVER_TO_STD
| LDAP_REFRESH_DN
<policy-option-value> ::=
{ UNLIMITED | DEFAULT | <value> }
Parameters
policy-name
The name of the login policy. Specify root to modify the root login policy.
policy-option-value
The value assigned to the login policy option. If you specify UNLIMITED, no limits are used. If you specify
DEFAULT, the default limits are used. See Login Policy Options and LDAP Login Policy Options for supported
values for each option.
policy-option-name
The name of the policy option. See Login Policy Options and LDAP Login Policy Options for details about
each option.
Remarks
If you do not specify a policy option, values for this login policy come from the root login policy. New policies do
not inherit the MAX_NON_DBA_CONNECTIONS and ROOT_AUTO_UNLOCK_TIME policy options.
All new databases include a root login policy. You can modify the root login policy values, but you cannot delete
the policy.
For details on available login policy options for root and user-defined logins, LDAP user authentication, and
multiplex servers, see CREATE LOGIN POLICY Statement.
Privileges
Requires the MANAGE ANY LOGIN POLICY system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
Examples
The following example sets the password_life_time value to UNLIMITED and the max_failed_login_attempts
value to 5 in the Test1 login policy:
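The statement itself is not shown above; a hedged sketch consistent with the description:

```sql
ALTER LOGIN POLICY Test1
    password_life_time = UNLIMITED
    max_failed_login_attempts = 5;
```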
In this section:
Assume that the root login policy allows access to logical servers ls4 and ls5 and login policy lp1 exists with
no logical server assignment. The statement effectively assigns login policy lp1 to logical servers ls4 and ls5.
This statement allows access of logical servers ls2 and ls3 from login policy lp1:
Modify login policy lp1 to allow access to ls3 and ls4 only:
Alternatively:
Drop current logical server assignments of login policy lp1 and allow it to inherit the logical server
assignments of the root login policy:
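The assignment examples described above can be sketched as follows (hedged; note that only one ADD,
DROP, or SET clause is allowed per statement, so the "alternatively" case is two statements):

```sql
-- Allow access to logical servers ls2 and ls3 from login policy lp1
ALTER LOGIN POLICY lp1 ADD LOGICAL SERVER ls2, ls3;

-- Allow access to ls3 and ls4 only
ALTER LOGIN POLICY lp1 SET LOGICAL SERVER ls3, ls4;

-- Alternatively, reach the same state incrementally
ALTER LOGIN POLICY lp1 ADD LOGICAL SERVER ls4;
ALTER LOGIN POLICY lp1 DROP LOGICAL SERVER ls2;

-- Drop all assignments and inherit from the root login policy
ALTER LOGIN POLICY lp1 SET LOGICAL SERVER DEFAULT;
```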
ADD, DROP, or SET clauses let you configure the logical server assignments of a login policy:
Use only one ADD, DROP, or SET clause. Use SERVER, NONE, and DEFAULT clauses only with the SET clause.
Specify a particular logical server name only once per ls-assignment list or ls-override list.
An error is returned if any of the following conditions is true:
● Any logical server specified with the ADD clause is already assigned to the login policy.
● Any logical server specified with the DROP clause is currently not assigned to the login policy.
● Logical server assignment change may cause a membership overlap among assigned logical servers.
Modifies some or all option values for the root logical server policy or a user-created logical server policy. This
statement enforces consistent shared system temporary store settings across physical nodes shared by logical
servers.
Syntax
<ls-option-value-list> ::=
{ <ls-option-name> = <ls-policy-option-value> } ...
<ls-option-name> ::=
ALLOW_COORDINATOR_AS_MEMBER
| DQP_ENABLED
| ENABLE_AUTOMATIC_FAILOVER
| LOGIN_REDIRECTION
| REDIRECTION_WAITERS_THRESHOLD
| TEMP_DATA_IN_SHARED_TEMP
Parameters
ls-policy-name
The name of the logical server policy. Specify root to modify the root logical server policy.
ls-option-value-list
The name of the logical server policy option. See Remarks for list of options.
ls-policy-option-value
Any unspecified option inherits its value from the root logical server policy. See Remarks.
WITH STOP SERVER
Automatically shuts down all servers in the logical server when the TEMP_DATA_IN_SHARED_TEMP option
is changed directly or indirectly.
If you want a smaller IQ_SYSTEM_TEMP dbspace, set TEMP_DATA_IN_SHARED_TEMP to ON, which writes
temporary data to IQ_SHARED_TEMP instead of IQ_SYSTEM_TEMP. In a distributed query processing
environment, however, setting both DQP_ENABLED and TEMP_DATA_IN_SHARED_TEMP to ON may saturate
your SAN with additional data in IQ_SHARED_TEMP, where additional I/O operations against IQ_SHARED_TEMP
may adversely affect DQP performance.
ALLOW_COORDINATOR_AS_MEMBER
Can only be set for the ROOT logical server policy. When ON (the default), the coordinator can be a
member of any user-defined logical server. OFF prevents the coordinator from being used as a member of
any user-defined logical servers. Values: ON, OFF. Default: ON.
DQP_ENABLED
When set to 0, query processing is not distributed. When set to 1 (the default), query processing is
distributed as long as a writable shared temporary file exists. When set to 2, query processing is
distributed over the network, and the shared temporary store is not used. Values: 0, 1, 2. Default: 1.
ENABLE_AUTOMATIC_FAILOVER
Can only be set for the ROOT logical server policy. When ON, enables automatic failover for logical servers
governed by the specified login policy. When OFF (the default), disables automatic failover at the logical
server level, allowing manual failover. Specify DEFAULT to set back to the default value. Values: ON, OFF,
DEFAULT. Default: OFF.
LOGIN_REDIRECTION
When ON, enables login redirection for logical servers governed by the specified login policy. When OFF
(the default), disables login redirection at the logical server level, allowing external connection
management. Values: ON, OFF. Default: OFF.
REDIRECTION_WAITERS_THRESHOLD
Specifies how many connections can queue before SAP IQ redirects a connection to this logical server to
another server. Can be any integer value. Values: integer. Default: 5.
TEMP_DATA_IN_SHARED_TEMP
When ON, all temporary table data and eligible scratch data writes to the shared temporary store, provided
that the shared temporary store has at least one read-write file added. You must restart all multiplex nodes
after setting this option or after adding a read-write file to the shared temporary store. (If the shared
temporary store contains no read-write file, or if you do not restart nodes, data is written to
IQ_SYSTEM_TEMP instead.) Values: ON, OFF. Default: OFF.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
● The following example alters the logical server policy and causes servers to shut down automatically when
the option value changes:
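A hedged sketch of the statement described (the policy name lsp1 is a placeholder):

```sql
-- Change TEMP_DATA_IN_SHARED_TEMP and shut down affected servers automatically
ALTER LS POLICY lsp1 TEMP_DATA_IN_SHARED_TEMP = ON WITH STOP SERVER;
```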
Related Information
Renames the multiplex and stores the multiplex name in SYS.ISYSIQINFO system table.
Syntax
Remarks
When a multiplex is created, it is named after the coordinator. This statement is automatically committed.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
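No example appears in this section; a minimal hedged sketch (the new multiplex name is a placeholder):

```sql
-- Rename the multiplex
ALTER MULTIPLEX RENAME mpx_sales;
```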
Related Information
Changes the name, catalog file path, role, or status of the given server.
Syntax
Syntax 1
<host-port-list> ::=
{ HOST '<hostname>' PORT <port-number> } ...
| { PRIVATE HOST '<hostname>' PORT <port-number> } ...
Syntax 2
Parameters
RENAME new-server-name
Changes the name of the given server. The server automatically shuts down and the next restart requires
the new name.
DATABASE 'dbfile'
Changes the catalog file path for the given server. The server automatically shuts down and the next restart
requires the new catalog path. The user must relocate the catalog file.
ROLE { WRITER | READER | COORDINATOR }
Changes the role of the given server.
STATUS { INCLUDED | EXCLUDED }
Changes the status of the given server. A failover node cannot be excluded unless it is the last node to be
excluded. The server automatically shuts down after exclusion. After including a node, you synchronize and
restart it.
ASSIGN AS FAILOVER
Designates the given server as the new failover server. The node should not be in the excluded state. The
ASSIGN AS FAILOVER clause is a standalone clause; do not use it with any other ALTER MULTIPLEX
SERVER clause.
The coordinator must be running, but you can run the ALTER MULTIPLEX SERVER statement from any
server in the multiplex. (Run all DDL statements on the coordinator.) In all cases except when altering role
from reader to writer, the named server is automatically shut down.
{ ENABLE | DISABLE } RLV STORE
Allows the coordinator to use an in-memory store for high-performance row-level updates.
host-port-list
Changes the host and port used to connect to the given server.
Shut down the target server before you exclude it. If you do not, an excluded server automatically shuts
down and requires ALTER MULTIPLEX SERVER <server-name> STATUS INCLUDED and a synchronize
to rejoin the multiplex.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
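For instance, sketches of typical statements (the server name r2 and new name r2_new are hypothetical):

```sql
ALTER MULTIPLEX SERVER r2 RENAME r2_new;    -- server shuts down; restart with new name
ALTER MULTIPLEX SERVER r2 STATUS EXCLUDED;  -- shut the server down first
ALTER MULTIPLEX SERVER r2 STATUS INCLUDED;  -- then synchronize and restart r2
```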
Related Information
Replaces an existing procedure with a modified version. Include the entire modified procedure in the ALTER
PROCEDURE statement, and reassign user permissions on the procedure.
Syntax
Syntax 1
Syntax 2
<result-type> ::=
<table-name> TABLE | <result-col-type> [, ...]
<external-call> ::=
[<column-name>:]<function-name@library>; ...
<environment-name> ::=
DISALLOW | ALLOW SERVER SIDE REQUESTS
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
[owner.]procedure-name
Specifies the name of the procedure you are replacing. The <owner> clause is optional.
REPLICATE { ON | OFF }
If a procedure needs to be replicated to other sites using SAP Replication Server, use the REPLICATE ON
clause.
SET HIDDEN
Obfuscates the definition of the procedure so that it cannot be read from the catalog.
Caution
This setting is irreversible. You should retain the original procedure definition outside of the database.
RECOMPILE
Recompiles a stored procedure. When you recompile a procedure, the definition stored in the catalog is re-
parsed and the syntax is verified. The procedure definition is not changed by recompiling. You can
recompile procedures with definitions hidden with the SET HIDDEN clause, but their definitions remain
hidden.
RESULT
For procedures that generate a result set but do not include a RESULT clause, the database server
attempts to determine the result set characteristics for the procedure and stores the information in the
catalog. This can be useful if a table referenced by the procedure has been altered to add, remove, or
rename columns since the procedure was created.
environment-name
DISALLOW is the default. ALLOW indicates that server-side connections are allowed.
Note
● Do not specify ALLOW unless necessary. Use of the ALLOW clause slows down certain types of
SAP IQ table joins.
● Do not use UDFs with both ALLOW SERVER SIDE REQUESTS and DISALLOW SERVER SIDE
REQUESTS clauses in the same query.
Remarks
(back to top)
The ALTER PROCEDURE statement must include the entire new procedure. You can use PROC as a synonym
for PROCEDURE. Both Watcom and Transact-SQL dialect procedures can be altered with ALTER
PROCEDURE. Existing permissions on the procedure are not changed. If you instead execute DROP
PROCEDURE followed by CREATE PROCEDURE, execute permissions must be reassigned.
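As a minimal sketch (the procedure name and body are hypothetical), a full redefinition followed by a recompile-only variant:

```sql
-- The entire procedure body must be restated; existing permissions are kept.
ALTER PROCEDURE show_employees()
BEGIN
    SELECT Surname, GivenName FROM Employees;
END;

-- Re-parse and verify the stored definition without changing it.
ALTER PROCEDURE show_employees RECOMPILE;
```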
When using the ALTER PROCEDURE statement for table UDFs, the same set of restrictions apply as for the
CREATE PROCEDURE Statement (External Procedures).
Note
For required parameters that accept variable names, an error is returned if one of the following conditions
is true:
Privileges
(back to top)
The privilege required varies by procedure type. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Alter an external C/C++ scalar or aggregate, or external Java procedure – Requires the CREATE EXTERNAL
REFERENCE system privilege. For external C/C++ scalar or aggregate, or external Java procedures owned
by others, you also require one of:
Standards
(back to top)
Examples
(back to top)
This example creates and then alters a procedure using a variable in the NAMESPACE clause:
2. The following statement creates a procedure named FtoC that uses a variable in the NAMESPACE clause:
3. The following statement alters the procedure FtoC so that the temperature parameter accepts a FLOAT
data type:
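A sketch of what these steps might look like; the URL, TYPE, and namespace values are assumptions, not the manual's actual example:

```sql
-- 1. A variable holding the namespace URI (value is hypothetical):
CREATE VARIABLE ns_uri LONG VARCHAR = 'http://example.com/temperature';

-- 2. Create the web-client procedure FtoC using the variable in NAMESPACE:
CREATE PROCEDURE FtoC( temperature INT )
    URL 'http://localhost:8082/FtoC'
    TYPE 'SOAP:DOC'
    NAMESPACE ns_uri;

-- 3. Alter FtoC so the parameter accepts FLOAT (full definition restated):
ALTER PROCEDURE FtoC( temperature FLOAT )
    URL 'http://localhost:8082/FtoC'
    TYPE 'SOAP:DOC'
    NAMESPACE ns_uri;
```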
Related Information
Migrates a compatibility role to a user-defined system role, then automatically drops the compatibility role.
Note
You cannot use the ALTER ROLE statement to migrate SYS_AUTH_SA_ROLE or SYS_AUTH_SSO_ROLE.
These roles are automatically migrated when SYS_AUTH_DBA_ROLE is migrated.
Syntax
Parameters
predefined_sys_role_name
The name of a compatibility role that still exists (has not already been dropped) in the database.
new_role_name
The name of the new role. It cannot begin with the prefix SYS_ or end with the suffix _ROLE.
new_sa_role_name
Required only when migrating SYS_AUTH_DBA_ROLE. The new role to which the underlying system
privileges of SYS_AUTH_SSO_ROLE are to be migrated. It cannot already exist in the database, and the
new role name cannot begin with the prefix SYS_ or end with the suffix _ROLE.
Remarks
Since no role administrator was specified during the migration process, only global role administrators can
manage the new role. Use the CREATE ROLE statement to add role administrators with appropriate
administrative rights to the role.
Privileges
Requires the MANAGE ROLES system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
● The following example migrates SYS_AUTH_DBA_ROLE to the new roles Custom_DBA, Custom_SA, and
Custom_SSO respectively. It then automatically migrates all users, underlying system privileges, and roles
granted to SYS_AUTH_DBA_ROLE to the applicable new roles. Finally, it drops SYS_AUTH_DBA_ROLE,
SYS_AUTH_SA_ROLE, and SYS_AUTH_SSO_ROLE:
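Based on that description, the statement would take roughly this shape (the MIGRATE TO clause order is assumed):

```sql
ALTER ROLE SYS_AUTH_DBA_ROLE
    MIGRATE TO Custom_DBA, Custom_SA, Custom_SSO;
```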
Related Information
Alters a sequence. This statement applies to SAP IQ catalog store tables only.
Syntax
Parameters
INCREMENT BY clause
Defines the amount the next sequence value is incremented from the last value assigned. The default is 1.
Specify a negative value to generate a descending sequence. An error is returned if the INCREMENT BY
value is 0.
MINVALUE clause
Defines the smallest value generated by the sequence. The default is 1. An error is returned if MINVALUE is
greater than ( 2^63-1) or less than -(2^63-1). An error is also returned if MINVALUE is greater than
MAXVALUE.
MAXVALUE clause
Defines the largest value generated by the sequence. The default is 2^63-1. An error is returned if
MAXVALUE is greater than 2^63-1 or less than -(2^63-1).
CYCLE clause
Specifies whether values should continue to be generated after the maximum or minimum value is
reached.
Remarks
Privileges
If you own the sequence, no additional privilege is required. For sequences owned by others, you require one of
the following:
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
None
Standards
The ALTER SEQUENCE statement is part of optional ANSI/ISO SQL Language Feature T176. The CACHE
clause is not in the standard.
Example
The following example sets a new maximum value for a sequence named Test:
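A sketch of such a statement (the new maximum value shown is an assumption):

```sql
ALTER SEQUENCE Test MAXVALUE 100000;
```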
Modifies the attributes of a remote server. Changes made by ALTER SERVER do not take effect until the next
connection to the remote server.
Syntax
<server-class> ::=
{ SAODBC
| ASEODBC
| DB2ODBC
| MSSODBC
| ORAODBC
| ODBC }
<connection-info> ::=
{ <machine-name>:<port-number> [ /<dbname> ] | <data-source-name> }
Go to:
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
CLASS 'server-class'
Changes the server's class.
USING 'connection-info'
If an ODBC-based server class is used, the USING clause specifies the <data-source-name>, which is the
ODBC Data Source Name.
CAPABILITY 'cap-name' { ON | OFF }
Turns a server capability ON or OFF. Server capabilities are stored in the system table SYSCAPABILITY.
The names of these capabilities are stored in the system table SYSCAPABILITYNAME. The
SYSCAPABILITY table contains no entries for a remote server until the first connection is made to that
server. At the first connection, SAP IQ interrogates the server about its capabilities and then populates
SYSCAPABILITY. For subsequent connections, the server’s capabilities are obtained from this table.
In general, you need not alter a server’s capabilities. It might be necessary to alter capabilities of a generic
server of class ODBC.
CONNECTION CLOSE [ <connection-id> | ALL ]
When a user creates a connection to a remote server, the remote connection is not closed until the user
disconnects from the local database. The CONNECTION CLOSE clause allows you to explicitly close
connections to a remote server. You may find this useful when a remote connection becomes inactive or is
no longer needed.
These SQL statements are equivalent and close the current connection to the remote server:
You can close both ODBC and JDBC connections to a remote server using this syntax. You do not need the
SERVER OPERATOR system privilege to execute either of these statements.
You can also disconnect a specific remote ODBC connection by specifying a connection ID, or disconnect
all remote ODBC connections by specifying the ALL keyword. If you attempt to close a JDBC connection by
specifying the connection ID or the ALL keyword, an error occurs. When the connection identified by
<connection-id> is not the current local connection, the user must have the SERVER OPERATOR
system privilege to be able to close the connection.
Privileges
(back to top)
Requires the SERVER OPERATOR system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
(back to top)
Automatic commit
Standards
(back to top)
Examples
(back to top)
● The following example changes the server class of the SAP ASE server named ase_prod so its connection
to SAP IQ is ODBC-based. The Data Source Name is ase_prod:
● The following example closes all connections to the remote server named rem_test:
● The following example closes the connection to the remote server named rem_test that has the
connection ID 142536:
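Hedged sketches of the three statements described (the exact clause forms are assumptions):

```sql
-- Make the connection to ase_prod ODBC-based; the DSN is ase_prod:
ALTER SERVER ase_prod CLASS 'ASEODBC' USING 'ase_prod';

-- Close all connections to rem_test:
ALTER SERVER rem_test CONNECTION CLOSE ALL;

-- Close the rem_test connection with connection ID 142536:
ALTER SERVER rem_test CONNECTION CLOSE 142536;
```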
Related Information
Syntax
<service-type-string> ::=
{ 'RAW'
| 'HTML'
| 'XML'
| 'SOAP'
| 'DISH' }
<attributes> ::=
[ AUTHORIZATION { ON | OFF } ]
[ SECURE { ON | OFF } ]
[ USER { <user-name> | NULL } ]
[ URL [ PATH/ ] { ON | OFF | ELEMENTS } ]
[ USING { <SOAP-prefix> | NULL } ]
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
TYPE service-type-string
Identifies the type of the service. The type must be one of the listed service types. There is no default value.
● RAW – Sends the result set to the client without any further formatting. You can produce formatted
documents by generating the required tags explicitly within your procedure.
● HTML – Formats the result set of a statement or procedure into an HTML document that contains a
table.
● XML – Assumes the result set is in XML format. If it is not, it is automatically converted to XML RAW
format.
● SOAP – Formats the result set as a Simple Object Access Protocol (SOAP) response. The request must
be a valid SOAP request. For more information about the SOAP standards, see www.w3.org/TR/SOAP
.
● DISH – A Determine SOAP Handler (DISH) service acts as a proxy for one or more SOAP services. In
use, it acts as a container that holds and provides access to a number of SOAP services. A Web
Services Description Language (WSDL) file is automatically generated for each of the included SOAP
services.
Web service names may be any sequence of alphanumeric characters or “/”, “-”, “_”, “.”, “!”, “~”, “*”, “'”, “(”,
or “)”, except that the first character cannot be a slash (/) and the name cannot contain two or
more consecutive slash characters.
attributes
AUTHORIZATION { ON | OFF }
Determines whether users must specify a user name and password when connecting to the service.
The default value is ON.
● If authorization is OFF, the AS clause is required and a single user must be identified by the USER
clause. All requests are run using that user’s account and permissions.
● If authorization is ON, all users must provide a user name and password. Optionally, you can limit
the users that are permitted to use the service by providing a user or role name using the USER
clause. If the user name is NULL, all known users can access the service.
Run production systems with authorization turned on. Grant permission to use the service by adding
users to a role.
SECURE { ON | OFF }
Indicates whether unsecure connections are accepted. ON indicates that only HTTPS connections are
to be accepted. Service requests received on the HTTP port are automatically redirected to the HTTPS
port. If set to OFF, both HTTP and HTTPS connections are accepted. The default value is OFF.
USER { user-name | NULL }
If authorization is disabled, this parameter becomes mandatory and specifies the user ID used to
execute all service requests. If authorization is enabled (the default), this optional clause identifies the
user or role permitted access to the service. The default value is NULL, which grants access to all
users.
URL [ PATH/ ] { ON | OFF | ELEMENTS }
Determines whether URI paths are accepted and, if so, how they are processed. OFF indicates that
nothing must follow the service name in a URI request. ON indicates that the remainder of the URI is
interpreted as the value of a variable named <url>. ELEMENTS indicates that the remainder of the
URI path is to be split at the slash characters into a list of up to 10 elements. The values are assigned to
variables named url plus a numeric suffix between 1 and 10; for example, the first three variable
names are url1, url2, and url3. If fewer than 10 values are supplied, the remaining variables are set to
NULL. If the service name ends with the character /, then URL must be set to OFF. The default value
is OFF.
USING { SOAP-prefix | NULL }
Applies only to DISH services. The parameter specifies a name prefix. Only SOAP services whose
names begin with this prefix are handled.
AS 'statement'
If the statement is NULL, the URI must specify the statement to be executed. Otherwise, the specified
SQL statement is the only one that can be executed through the service. The statement is mandatory for
SOAP services, and ignored for DISH services. The default value is NULL.
Remarks
(back to top)
You cannot rename Web services.
Privileges
(back to top)
Requires MANAGE ANY WEB SERVICE system privilege. See GRANT System Privilege Statement [page 1511]
for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
The following example sets up a Web server quickly: start a database server with the -xs switch, then execute
these statements:
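A sketch of such a quick-start sequence (the service name, user, and query are assumptions):

```sql
-- Create an HTML service that runs as user DBA without authorization:
CREATE SERVICE tables TYPE 'HTML'
    AUTHORIZATION OFF
    USER DBA
    AS SELECT * FROM SYS.SYSTAB;

-- Later, require HTTPS connections for the service:
ALTER SERVICE tables SECURE ON;
```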
Related Information
Syntax
<srs-attribute> ::=
SRID <srs-id>
| DEFINITION { <definition-string> | NULL }
| ORGANIZATION { <organization-name> IDENTIFIED BY <organization-srs-id>
| NULL }
| TRANSFORM DEFINITION { <transform-definition-string> | NULL }
| LINEAR UNIT OF MEASURE <linear-unit-name>
| ANGULAR UNIT OF MEASURE { <angular-unit-name> | NULL }
| TYPE { ROUND EARTH | PLANAR }
| COORDINATE <coordinate-name> { UNBOUNDED | BETWEEN <low-number> AND
<high-number> }
| ELLIPSOID SEMI MAJOR AXIS <semi-major-axis-length>
{ SEMI MINOR AXIS <semi-minor-axis-length> | INVERSE FLATTENING
<inverse-flattening-ratio> }
| TOLERANCE { <tolerance-distance> | DEFAULT }
| SNAP TO GRID { <grid-size> | DEFAULT }
| AXIS ORDER <axis-order>
| POLYGON FORMAT <polygon-format>
| STORAGE FORMAT <storage-format>
<grid-size> ::=
DOUBLE : usually between 0 and 1
<axis-order> ::=
{ 'x/y/z/m' | 'long/lat/z/m' | 'lat/long/z/m' }
<polygon-format> ::=
{ 'CounterClockWise' | 'Clockwise' | 'EvenOdd' }
<storage-format> ::=
{ 'Internal' | 'Original' | 'Mixed' }
Parameters
DEFINITION { definition-string | NULL }
Sets, or overrides, default coordinate system settings. If any attribute is set in a clause other than the
DEFINITION clause, it takes the value specified in that clause regardless of what is specified in the
DEFINITION clause.
In Interactive SQL, if you double-click the value returned, an easier-to-read version of the value appears.
When the DEFINITION clause is specified, definition-string is parsed and used to choose default values for
attributes. For example, definition-string may contain an AUTHORITY element that defines the
organization-name and <organization-srs-id>.
Parameter values in definition-string are overridden by values explicitly set using the SQL statement
clauses. For example, if the ORGANIZATION clause is specified, it overrides the value for ORGANIZATION in
<definition-string>.
ORGANIZATION organization-name
Identifies the organization that created the spatial reference system definition on which this spatial
reference system is based.
IDENTIFIED BY organization-srs-id
The SRID (<srs-id>) for the spatial reference system. If the spatial reference system is defined by an
organization with an <organization-srs-id>, then <srs-id> should be set to that value.
TRANSFORM DEFINITION { transform-definition-string | NULL }
A description of the transform to use for the spatial reference system. Currently, only the PROJ.4 transform
is supported. The transform definition is used by the ST_Transform method when transforming data
between spatial reference systems. Some transforms may still be possible even if there is no transform-
definition-string defined.
LINEAR UNIT OF MEASURE linear-unit-name
The linear unit of measure for the spatial reference system. The value you specify must match a linear unit
of measure defined in the ST_UNITS_OF_MEASURE system view.
If this clause is not specified, and is not defined in the DEFINITION clause, the default is METRE. To add
predefined units of measure to the database, use the sa_install_feature system procedure.
To add custom units of measure to the database, use the CREATE SPATIAL UNIT OF MEASURE statement.
Note
While both METRE and METER are accepted spellings, METRE is preferred, as it conforms to the
SQL/MM standard.
ANGULAR UNIT OF MEASURE { angular-unit-name | NULL }
The angular unit of measure for the spatial reference system. The value you specify must match an angular
unit of measure defined in the ST_UNITS_OF_MEASURE system view.
If this clause is not specified, and is not defined in the DEFINITION clause, the default is DEGREE for
geographic spatial reference systems and NULL for non-geographic spatial reference systems.
The angular unit of measure must be non-NULL for geographic spatial reference systems and it must be
NULL for non-geographic spatial reference systems.
To add custom units of measure to the database, use the CREATE SPATIAL UNIT OF MEASURE statement.
TYPE { ROUND EARTH | PLANAR }
Control how the SRS interprets lines between points. For geographic spatial reference systems, the TYPE
clause can specify either ROUND EARTH (the default) or PLANAR. The ROUND EARTH model interprets
lines between points as great elliptic arcs. Given two points on the surface of the Earth, a plane is selected
that intersects the two points and the center of the Earth. This plane intersects the Earth, and the line
between the two points is the shortest distance along this intersection.
For two points that lie directly opposite each other, there is no single unique plane that intersects the two
points and the center of the Earth. Line segments connecting these antipodal points are not valid and give
an error in the ROUND EARTH model.
The ROUND EARTH model treats the Earth as a spheroid and selects lines that follow the curvature of the
Earth. In some cases, it may be necessary to use a planar model where a line between two points is
interpreted as a straight line in the equirectangular projection where x=long, y=lat.
In the following example, the blue line shows the line interpretation used in the ROUND EARTH model and
the red line shows the corresponding PLANAR model.
The PLANAR model may be used to match the interpretation used by other products. The PLANAR model
may also be useful because some methods are not supported in the ROUND EARTH model (such as
ST_Area and ST_ConvexHull), and some are only partially supported (ST_Distance is supported only
between point geometries). Geometries based on circularstrings are not supported in ROUND EARTH
spatial reference systems.
For non-geographic SRSs, the type must be PLANAR (and that is the default if the TYPE clause is not
specified and either the DEFINITION clause is not specified or it uses a non-geographic definition).
COORDINATE coordinate-name { UNBOUNDED | BETWEEN low-number AND high-number }
The bounds on the spatial reference system's dimensions. <coordinate-name> is the name of the
coordinate used by the spatial reference system: x, y, z, or m for non-geographic coordinate systems, and
LATITUDE or LONGITUDE for geographic coordinate systems. Specify UNBOUNDED to place no bounds
on a dimension, or use the BETWEEN clause to set low and high bounds.
The X and Y coordinates must have associated bounds. For geographic spatial reference systems, the
longitude coordinate is bounded between -180 and 180 degrees and the latitude coordinate is bounded
between -90 and 90 degrees by default, unless the COORDINATE clause overrides these settings. For non-
geographic spatial reference systems, the CREATE statement must specify bounds for both X and Y
coordinates.
LATITUDE and LONGITUDE are used for geographic coordinate systems. The bounds for LATITUDE and
LONGITUDE default to the entire Earth, if not specified.
ELLIPSOID SEMI MAJOR AXIS semi-major-axis-length { SEMI MINOR AXIS semi-minor-axis-length |
INVERSE FLATTENING inverse-flattening-ratio }
The values to use for representing the Earth as an ellipsoid for spatial reference systems of type ROUND
EARTH. If the DEFINITION clause is present, it can specify a default ellipsoid definition. If the ELLIPSOID
clause is specified, it overrides this default ellipsoid.
The Earth is not a perfect sphere because the rotation of the Earth causes a flattening so that the distance
from the center of the Earth to the North or South pole is less than the distance from the center to the
equator. For this reason, the Earth is modeled as an ellipsoid with different values for the semi-major axis
(distance from center to equator) and semi-minor axis (distance from center to the pole). It is most
common to define an ellipsoid using the semi-major axis and the inverse flattening, but it can instead be
specified using the semi-minor axis (for example, this approach must be used when a perfect sphere is
used to approximate the Earth). The semi-major and semi-minor axes are defined in the linear units of the
spatial reference system, and the inverse flattening (1/f) is a ratio:
SAP IQ uses the ellipsoid definition when computing distance in geographic spatial reference systems.
TOLERANCE { tolerance-distance | DEFAULT }
For flat-Earth (planar) spatial reference systems, use the TOLERANCE clause to specify the precision to
use when comparing points. If the distance between two points is less than <tolerance-distance>, the two
points are considered equal. Setting <tolerance-distance> lets you control the tolerance for imprecision in
the input data or limited internal precision. By default, <tolerance-distance> is set equal to <grid-size>.
SNAP TO GRID { grid-size | DEFAULT }
For flat-Earth (planar) spatial reference systems, use the SNAP TO GRID clause to define the size of the
grid SAP IQ uses when performing calculations. By default, SAP IQ selects a grid size so that 12 significant
digits can be stored at all points in the space bounds for X and Y. For example, if a spatial reference system
bounds X between -180 and 180 and Y between -90 and 90, then a grid size of 0.000000001 (1E-9) is
selected.
POLYGON FORMAT polygon-format
Internally, SAP IQ interprets polygons by looking at the orientation of the constituent rings. As one travels a
ring in the order of the defined points, the inside of the polygon is on the left side of the ring. The same
rules are applied in PLANAR and ROUND EARTH spatial reference systems.
● CounterClockwise – input follows SAP IQ's internal interpretation: the inside of the polygon is on the
left side while following ring orientation.
● Clockwise – input follows the opposite of SAP IQ's approach: the inside of the polygon is on the right
side while following ring orientation.
● EvenOdd – (default) the orientation of rings is ignored; the inside of the polygon is instead determined
by the nesting of the rings, with the exterior ring being the largest ring and interior rings being smaller
rings inside it. A ray is traced from a point within a ring, radiating outward and crossing all enclosing
rings. If the number of rings crossed is even, the ring is an outer ring; if odd, it is an inner ring.
STORAGE FORMAT storage-format
Control what is stored when spatial data is loaded into the database. Possible values are:
● Internal – SAP IQ stores only the normalized representation. Specify this when the original input
characteristics do not need to be reproduced. This is the default for planar spatial reference systems
(TYPE PLANAR).
● Original – SAP IQ stores only the original representation. The original input characteristics can be
reproduced, but all operations on the stored values must repeat normalization steps, possibly slowing
down operations on the data.
● Mixed – SAP IQ stores the internal version and, if it is different from the original version, stores the
original version as well. By storing both versions, the original representation
characteristics can be reproduced and operations on stored values do not need to repeat
normalization steps. However, storage requirements may increase significantly because potentially
two representations are being stored for each geometry. Mixed is the default format for round-Earth
spatial reference systems (TYPE ROUND EARTH).
Remarks
You cannot alter a spatial reference system if there is existing data that references it. For example, if you have a
column declared as ST_Point(SRID=8743), you cannot alter the spatial reference system with SRID 8743. This
is because many spatial reference system attributes, such as storage format, impact the storage format of the
data. If you have data that references the SRID, create a new spatial reference system and transform the data
to the new SRID.
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Standards
Examples
The following example changes the polygon format of a fictitious spatial reference system named mySpatialRef
to EvenOdd:
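As a sketch:

```sql
ALTER SPATIAL REFERENCE SYSTEM mySpatialRef
    POLYGON FORMAT 'EvenOdd';
```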
Related Information
Syntax
Syntax 2
<alter-clause> ::=
ADD <create-clause>
| ALTER <column-name> <column-alteration>
| ALTER [ CONSTRAINT <constraint-name> ] CHECK ( <condition> )
| DROP <drop-object>
<create-clause> ::=
<column-name> <column-definition> [ <column-constraint> ]
| <table-constraint>
| [ PARTITION BY ] <range-partitioning-scheme>
<column-constraint> ::=
[ CONSTRAINT <constraint-name> ]
{ UNIQUE
| PRIMARY KEY
| REFERENCES <table-name> [ (<column-name> ) ] [ <actions> ]
| CHECK ( <condition> )
| IQ UNIQUE ( <integer> ) }
<table-constraint> ::=
[ CONSTRAINT <constraint-name> ]
{ UNIQUE ( <column-name> [ , … ] )
| PRIMARY KEY ( <column-name> [ , … ] )
| <foreign-key-constraint>
| CHECK ( <condition> ) }
<foreign-key-constraint> ::=
FOREIGN KEY [ <role-name> ] [ ( <column-name> [ , … ] ) ]
... REFERENCES <table-name> [ ( <column-name> [ , … ] ) ]
... [ <actions> ]
<alterable-column-attribute> ::=
[ NOT ] NULL
| DEFAULT <default-value>
| [ CONSTRAINT <constraint-name> ] CHECK { NULL | ( <condition> )
}
<partition-key> ::=
<column-name>
Parameters
ENABLE RLV STORE
Registers this table with the RLV store for real-time in-memory updates. Not supported for IQ temporary
tables. This value overrides the value of the database option BASE_TABLES_IN_RLV. In a multiplex, the
RLV store can only be enabled on the coordinator.
ADD create-clause
Adds a new column or column constraint to the table object.
● ADD <column-definition> – Adds a new column to the table. The table must be empty to specify
NOT NULL. The table might contain data when you add an IDENTITY or DEFAULT AUTOINCREMENT
column. If the column has a default IDENTITY value, all rows of the new column are populated with
sequential values. You can also add a FOREIGN KEY constraint as a column constraint for a single-
column key. The value of the IDENTITY/DEFAULT AUTOINCREMENT column uniquely identifies every
row in a table.
The IDENTITY/DEFAULT AUTOINCREMENT column stores sequential numbers that are automatically
generated during inserts and updates. DEFAULT AUTOINCREMENT columns are also known as
IDENTITY columns. When using IDENTITY/DEFAULT AUTOINCREMENT, the column must be one of
the integer data types, or an exact numeric type, with scale 0. See CREATE TABLE Statement for more
about column constraints and IDENTITY/DEFAULT AUTOINCREMENT columns.
The database option IDENTITY_INSERT must be set to the table name to perform an explicit insert
or update into an IDENTITY or AUTOINCREMENT column. For information on identity columns, see
The IDENTITY or AUTOINCREMENT Default in SAP IQ Administration: Database. For information on
IDENTITY_INSERT, see SAP IQ SQL Reference.
Note
○ Consider memory usage when specifying high IQ UNIQUE values. If machine resources are
limited, avoid loads with FP_NBIT_ENFORCE_LIMITS='OFF' (default).
Prior to SAP IQ 16.1, an IQ UNIQUE <n> value > 16777216 would roll over to Flat FP. In 16.1,
larger IQ UNIQUE values are supported for tokenization, but may have significant memory
requirements depending on cardinality and column width.
○ BIT, BLOB, and CLOB data types do not support NBit dictionary compression. If
FP_NBIT_IQ15_COMPATIBILITY='OFF', a non-zero IQ UNIQUE column specification in
a CREATE TABLE or ALTER TABLE statement that includes these data types returns an
error.
Note
You cannot MODIFY a table or column constraint. To change a constraint, DELETE the old
constraint and ADD the new constraint.
● SET DEFAULT <default-value> – changes the default value of an existing column in a table. You
can also use the MODIFY clause for this task, but ALTER is ISO/ANSI SQL compliant, and MODIFY is
not. Modifying a default value does not change any existing values in the table.
● DROP DEFAULT – removes the default value of an existing column in a table. You can also use the
MODIFY clause for this task, but ALTER is ISO/ANSI SQL compliant, and MODIFY is not. Dropping a
default does not change any existing values in the table.
● DROP <column-name> – drops the column from the table. If the column is contained in any
multicolumn index, uniqueness constraint, foreign key, or primary key, then the index, constraint, or
key must be deleted before the column can be deleted. This does not delete CHECK constraints that
refer to the column. An IDENTITY/DEFAULT AUTOINCREMENT column can only be deleted if
IDENTITY_INSERT is turned off and the table is not a local temporary table.
● DROP CHECK – drops all check constraints for the table. This includes both table check constraints and
column check constraints.
● DROP CONSTRAINT <constraint-name> – drops the named constraint for the table or specified
column.
● DROP UNIQUE ( <column-name, ...> ) – drops the unique constraints on the specified
column(s). Any foreign keys referencing the unique constraint (rather than the primary key) are also
deleted. Reports an error if there are associated foreign-key constraints. Use ALTER TABLE to delete
all foreign keys that reference the primary key before you delete the primary key constraint.
● DROP PRIMARY KEY – drops the primary key. All foreign keys referencing the primary key for this
table are also deleted. Reports an error if there are associated foreign key constraints. If the primary
key is unenforced, DELETE returns an error if associated unenforced foreign key constraints exist.
● DROP FOREIGN KEY <role-name> – drops the foreign key constraint for this table with the given
role name. Retains the implicitly created non-unique HG index for the foreign key constraint. Users can
explicitly remove the HG index with the DROP INDEX statement.
● DROP PARTITION <partition-name> – drops the specified partition. The rows in the partition are
deleted and the partition definition is dropped. You cannot drop the last partition, because doing so
would transform a partitioned table into a non-partitioned table. (To merge a partitioned table, use an
UNPARTITION clause instead.) For example:
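A sketch (the table name Sales and partition name P1 are hypothetical):

```sql
ALTER TABLE Sales DROP PARTITION P1;
```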
RENAME rename-object
● RENAME <new-table-name> – changes the name of the table to the <new-table-name>. Any
applications using the old table name must be modified. Also, any foreign keys that were automatically
assigned the same name as the old table name do not change names.
● RENAME <column-name> TO <new-column-name> – changes the name of the column to <new-
column-name>. Any applications using the old column name must be modified.
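A sketch of the two RENAME forms described above (the table and column names are assumptions):

```sql
-- Rename a column; applications using the old name must be updated
ALTER TABLE Departments RENAME dept_name TO department_name;

-- Rename the table itself
ALTER TABLE Departments RENAME Depts;
```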
● MOVE TO – moves all table objects, including columns, indexes, unique constraints, the primary key,
foreign keys, and metadata that reside in the same dbspace as the table, to the new dbspace. The
ALTER <column> MOVE TO clause cannot be requested on a partitioned table.
Note
A table object can only reside in one dbspace. Any type of ALTER MOVE blocks any modification to the
table for the entire duration of the move.
A BIT data type column cannot be explicitly placed in a dbspace; these move clauses are not supported
for BIT data types.
● MOVE TABLE METADATA – moves the metadata of the table to a new dbspace. For a partitioned table,
MOVE TABLE METADATA also moves metadata that is shared among partitions.
● MOVE PARTITION – moves the specified partition to the new dbspace.
PARTITION BY
● Partitions share the same logical attributes of the parent table, but can be placed in separate dbspaces
and managed individually. SAP IQ supports several table partitioning schemes:
○ Hash-partitions
○ Range-partitions
○ Composite-partitions
● A partition-key is the column or columns that contain the table partitioning keys. Partition keys can
contain NULL and DEFAULT values, but cannot contain:
○ LOB (BLOB or CLOB) columns
○ BINARY, or VARBINARY columns
○ CHAR or VARCHAR columns longer than 255 bytes
○ BIT columns
○ FLOAT/DOUBLE/REAL columns
PARTITION BY RANGE
● Range partitioning is restricted to a single partition key column and a maximum of 1024 partitions. In a
range-partitioning-scheme, the partition-key is the column that contains the table partitioning keys:
range-partition-decl:
<partition-name> is required, and is the name of a new partition on which table rows are stored.
Partition names must be unique within the set of partitions on a table.
● <VALUE> – specifies the inclusive upper bound for each partition (in ascending order). The user must
specify the partitioning criteria for each range partition to guarantee that each row is distributed to
only one partition. NULLs are allowed for the partition column and rows with NULL as partition key
value belong to the first table partition. However, NULL cannot be the bound value.
There is no lower bound (MIN value) for the first partition. Rows with a NULL value in the first column
of the partition key go to the first partition. For the last partition, you can either specify an inclusive
upper bound or MAX. If the upper bound value for the last partition is not MAX, loading or inserting any
row with a partition key value larger than the upper bound value of the last partition generates an error.
● MAX – denotes the infinite upper bound and can only be specified for the last partition.
● IN – specifies the dbspace in the <partition-decl> on which rows of the partition should reside.
● These restrictions affect partition keys and bound values for range-partitioned tables:
○ You can only range partition a non-partitioned table if all existing rows belong to the first partition.
○ Partition bounds must be constants, not constant expressions.
○ Partition bounds must be in ascending order according to the order in which the partitions were
created. That is, the upper bound for the second partition must be higher than for the first
partition, and so on.
In addition, partition bound values must be compatible with the corresponding partition-key
column data type. For example, VARCHAR is compatible with CHAR.
○ If a bound value has a different data type than that of its corresponding partition key column, SAP
IQ converts the bound value to the data type of the partition key column, with these exceptions:
○ Explicit conversions are not allowed. This example attempts an explicit conversion from INT to
VARCHAR and generates an error:
○ Implicit conversions that result in data loss are not allowed. In this example, the partition bounds
are not compatible with the partition key type. Rounding assumptions may lead to data loss and an
error is generated:
CREATE TABLE emp_id (id INT)
PARTITION BY RANGE(id)
( p1 VALUES <= (10.5), p2 VALUES <= (100.5) )
○ In this example, the partition bounds and the partition key data type are compatible. The bound
values are directly converted to float values. No rounding is required, and conversion is supported:
○ Conversions from non-binary data types to binary data types are not allowed. For example, this
conversion is not allowed and returns an error:
PARTITION BY HASH
Maps data to partitions based on partition-key values processed by an internal hashing function.
● Hash partition keys are restricted to a maximum of eight columns with a combined declared column
width of 5300 bytes or less. For hash partitions, the table creator determines only the partition key
columns; the number and location of the partitions are determined internally.
In a hash-partitioning declaration, the partition-key is a column or group of columns, whose composite
value determines the partition where each row of data is stored:
hash-partitioning-scheme:
HASH ( <partition-key> [ , <partition-key>, … ] )
● Restrictions:
○ You can only hash partition a base table. Attempting to partition a global temporary table or a
local temporary table raises an error.
○ You can only hash partition a non-partitioned table that is empty.
○ You cannot add, drop, merge, or split a hash partition.
○ You cannot add or drop a column from a hash partition key.
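Within these restrictions, a minimal hash-partitioned table might be created empty as follows (table and column names are assumptions; the server determines the number and placement of partitions internally):

```sql
CREATE TABLE orders (
    order_id   INT,
    cust_id    INT,
    order_date DATE
)
PARTITION BY HASH ( cust_id, order_date );
```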
PARTITION BY HASH RANGE
hash-range-partitioning-scheme:
PARTITION BY HASH ( partition-key [ , partition-key, … ] )
[ SUBPARTITION BY RANGE ( range-partition-decl [ , range-partition-decl ... ] ) ]
The hash partition specifies how the data is logically distributed and colocated; the range subpartition
specifies how the data is physically placed. The new range subpartition is logically partitioned by hash
with the same hash partition keys as the existing hash-range partitioned table. The range subpartition
key is restricted to one column.
● Restrictions:
○ You can only hash partition a base table. Attempting to partition a global temporary table or a
local temporary table raises an error.
○ You can only subpartition a hash-partitioned table by range if the table is empty.
○ You cannot add, drop, merge, or split a hash partition.
○ You cannot add or drop a column from a hash partition key.
Note
Range-partitions and composite partitioning schemes, like hash-range partitions, require the
separately licensed VLDB Management option.
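A minimal sketch of a hash-range partitioned table, assuming the VLDB Management option is licensed (table, column, and dbspace names are assumptions; note the single-column range subpartition key):

```sql
CREATE TABLE sales (
    txn_id   INT,
    store_id INT,
    txn_date DATE
)
PARTITION BY HASH ( store_id )
SUBPARTITION BY RANGE ( txn_date )
( p1 VALUES <= ('2017-12-31') IN dbsp1,  -- dbsp1/dbsp2 are hypothetical dbspaces
  p2 VALUES <= (MAX) IN dbsp2 );
```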
MERGE PARTITION
Merges the data of one partition into an adjacent partition.
UNPARTITION
Removes partitioning from a partitioned table. Each column is placed in a single dbspace. Note that the
server does not check CREATE privilege on the dbspace to which data of all partitions is moved. ALTER
TABLE UNPARTITION blocks all database activities.
Remarks
The ALTER TABLE statement changes table attributes (column definitions and constraints) in a table that was
previously created. The syntax allows a list of alter clauses; however, only one table constraint or column
constraint can be added, modified, or deleted in each ALTER TABLE statement. ALTER TABLE is prevented
whenever the statement affects a table that is currently being used by another connection. ALTER TABLE can
be time consuming, and the server does not process requests referencing the same table while the statement
is being processed.
Note
You cannot alter local temporary tables, but you can alter global temporary tables when they are in use by
only one connection.
If the table is in a SAN dbspace, altering the table to add these components in a DAS dbspace results in an
error:
● Column
● Primary key
● Foreign key
● Range partition (adding and splitting)
Table subcomponents cannot be created on DAS dbspaces if the parent table is not a DAS dbspace table.
SAP IQ enforces REFERENCES and CHECK constraints. Table and column check constraints added in an
ALTER TABLE statement are evaluated only if they are defined on one of the new columns added as part of
that ALTER TABLE operation. For details about CHECK constraints, see CREATE TABLE Statement [page 1377].
If SELECT * is used in a view definition and you alter a table referenced by that SELECT *, then you must
run ALTER VIEW <viewname> RECOMPILE to ensure that the view definition is correct and to prevent
unexpected results when querying the view.
Syntax 1
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Syntax 2
The system privileges required for syntax 2 vary depending on the clause. See GRANT System Privilege
Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502] for assistance with granting
privileges.
FOREIGN KEY column constraint requires above along with one of:
MERGE PARTITION, UNPARTITION – Table owned by self requires no additional privilege. Table owned by
others requires one of:
Side Effects
● Automatic commit. The ALTER and DROP options close all cursors for the current connection. The
Interactive SQL data window is also cleared.
● A checkpoint is carried out at the beginning of the ALTER TABLE operation.
● Once you alter a column or table, any stored procedures, views or other items that refer to the altered
column no longer work.
Standards

Examples
● The following example adds a new column to the Employees table showing which office they work in:
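A minimal sketch of such a statement (the column name and type are assumptions):

```sql
ALTER TABLE Employees
ADD Office CHAR(20);
```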
● The following example drops the office column from the Employees table:
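Assuming the column is named Office, this could look like:

```sql
ALTER TABLE Employees
DROP Office;
```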
● The following example adds a column to the Customers table assigning each customer a sales contact:
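One plausible form, assuming the contact is stored as a foreign key into the Employees table (the column and key names are assumptions):

```sql
ALTER TABLE Customers
ADD SalesContact INT REFERENCES Employees ( EmployeeID );
```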
● The following example adds a new column CustomerNum to the Customers table and assigns a default
value of 88:
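A sketch, assuming an integer column:

```sql
ALTER TABLE Customers
ADD CustomerNum INT DEFAULT 88;
```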
● The following example moves FP indexes for c2, c4, and c5, from dbspace Dsp3 to Dsp6. FP index for c1
remains in Dsp1. FP index for c3 remains in Dsp2. The primary key for c5 remains in Dsp4. DATE index
c4_date remains in Dsp5:
● The following example moves only FP index c1 from dbspace Dsp1 to Dsp7:
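Assuming the column-level ALTER <column> MOVE TO form described earlier (the table name is an assumption):

```sql
-- Move the FP index for column c1 from Dsp1 to Dsp7
ALTER TABLE mytable
ALTER c1 MOVE TO Dsp7;
```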
● The following example uses many ALTER TABLE clauses to move, split, rename, and merge partitions.
Create a partitioned table:
○ This example reports an error, as it requires data movement: not all existing rows are in the same
partition after the split.
This error is reported, as a merge from a higher boundary value partition into a lower boundary value
partition is not allowed:
Partition 'p2' is not adjacent to or before partition 'p3'.
○ This example merges partition p2 into p3:
○ This example partitions table bar. This command reports an error, because all rows must be in the first
partition:
● The following example changes a table tab1 so that it is no longer registered for in-memory real-time
updates in the RLV store.
Related Information
Note
Syntax
<external-call> ::=
[ <system-configuration>:]<function-name>@<library-file-prefix>
[ .{ so | dll} ]
<generic-operating-system> ::=
{ UNIX | Windows }
<specific-operating-system> ::=
{ AIX | HPUX | Linux | OSX | Solaris | WindowsNT }
<processor-architecture> ::=
{ 32 | 64 | ARM | IA64 | PPC | SPARC | X86 | X86_64 }
Go to:
● Remarks
● Privileges
● Side Effects
● Examples
Parameters
(back to top)
STOPLIST
A string expression used to create or replace the list of terms to ignore when building a TEXT index. Terms
specified in this list are also ignored in a query. Separate stoplist terms with spaces.
Stoplist terms cannot contain whitespace and should not contain non-alphanumeric characters. Non-
alphanumeric characters are interpreted as spaces and break the term into multiple terms. For example,
“and/or” is interpreted as the two terms “and” and “or”. The maximum number of stoplist terms is 7999.
MINIMUM TERM LENGTH
Specifies the minimum length, in characters, of a term to include in the TEXT index. The value specified in
the MINIMUM TERM LENGTH clause is ignored when using NGRAM TEXT indexes. Terms that are shorter
than this setting are ignored when building or refreshing the TEXT index. The value of this option must be
greater than 0. If you set this option to be higher than MAXIMUM TERM LENGTH, the value of MAXIMUM
TERM LENGTH is automatically adjusted to be the same as the new MINIMUM TERM LENGTH value.
MAXIMUM TERM LENGTH
With GENERIC TEXT indexes, specifies the maximum length, in characters, of a term to include in the TEXT
index. Terms that are longer than this setting are ignored when building or refreshing the TEXT index. The
value of MAXIMUM TERM LENGTH must be less than or equal to 60. If you set this option to be lower than
MINIMUM TERM LENGTH, the value of MINIMUM TERM LENGTH is automatically adjusted to be the same
as the new MAXIMUM TERM LENGTH value.
PREFILTER EXTERNAL NAME
Specifies the entry_point and the library name of the external pre-filter library provided by external
vendors.
DROP PREFILTER
Drops the external prefilter and sets NULL to the prefilter columns in ISYSTEXTCONFIG table.
Remarks
(back to top)
TEXT indexes are dependent on a text configuration object. SAP IQ TEXT indexes use immediate refresh, and
cannot be truncated; you must drop the indexes before you can alter the text configuration object. To view the
settings for text configuration objects, query the SYSTEXTCONFIG system view.
Privileges
(back to top)
The privilege required varies by clause. See GRANT System Privilege Statement [page 1511] or GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
All other clauses require the ALTER ANY TEXT CONFIGURATION system privilege, regardless of object
ownership.
Side Effects
(back to top)
Automatic commit
Examples
(back to top)
● The following example creates a text configuration object, maxTerm16, and then changes the maximum
term length to 16:
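A sketch of those two steps (the base configuration default_char is an assumption):

```sql
CREATE TEXT CONFIGURATION maxTerm16 FROM default_char;
ALTER TEXT CONFIGURATION maxTerm16
    MAXIMUM TERM LENGTH 16;
```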
● The following example adds stoplist terms to the maxTerm16 configuration object:
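For example, a hypothetical space-separated stoplist might be added as:

```sql
ALTER TEXT CONFIGURATION maxTerm16
    STOPLIST 'because about therefore only';
```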
● The following example updates the text configuration object, my_text_config, to use the entry point
my_term_breaker in the external library mytermbreaker.dll for breaking the text:
● The following example updates the text configuration object, my_text_config, to use the entry point
my_prefilter in the external library myprefilter.dll for prefiltering the documents:
Related Information
Note
<alter-clause> ::=
<rename-object> | <move-object>
<rename-object> ::=
RENAME { AS | TO } <new-name>
<move-object> ::=
MOVE TO <dbspace-name>
Parameters
RENAME
Privileges
The privilege required varies by clause. See GRANT System Privilege Statement [page 1511] or GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
Side Effects
Automatic commit
Examples
The following example creates a TEXT index, MyTextIndex, defining it as IMMEDIATE REFRESH, renames the
TEXT index to Text_index_daily, and moves the TEXT index to a dbspace named tispace:
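A sketch of that sequence (the underlying table, column, and text configuration are assumptions):

```sql
CREATE TEXT INDEX MyTextIndex ON Customers ( CompanyName )
    IMMEDIATE REFRESH;
ALTER TEXT INDEX MyTextIndex ON Customers
    RENAME TO Text_index_daily;
ALTER TEXT INDEX Text_index_daily ON Customers
    MOVE TO tispace;
```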
Related Information
Replaces a trigger definition with a modified version. You must include the entire new trigger definition in the
ALTER TRIGGER statement. This statement applies to SAP IQ catalog store tables only.
Syntax
Remarks
The ALTER TRIGGER statement is identical in syntax to the CREATE TRIGGER statement except for the
first word.
Either the Transact-SQL or Watcom SQL form of the CREATE TRIGGER syntax can be used.
Obfuscate a trigger definition
Note
Privileges
The privilege required varies by clause. See GRANT System Privilege Statement [page 1511] or GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
Automatic commit.
Standards
Related Information
Syntax
Parameters
user-name
The name of the user.
IDENTIFIED BY <password>
The password for the user. This clause is not supported (returns an error) when the
CHANGE_PASSWORD_DUAL_CONTROL option is enabled in a user's login policy.
LOGIN POLICY policy-name
Name of the login policy to assign to the user. No change is made if the LOGIN POLICY clause is not
specified.
FORCE PASSWORD CHANGE { ON | OFF }
Controls whether the user must specify a new password upon logging in. This setting overrides the
PASSWORD_EXPIRY_ON_NEXT_LOGIN option setting in the user's login policy.
Note
This functionality is not currently implemented when logging in to SAP IQ Cockpit. However, when
logging in to SAP IQ outside of SAP IQ Cockpit (for example, using Interactive SQL), users are then
prompted to enter a new password.
REFRESH DN
Clears the saved DN and timestamp for a user, which is used during LDAP authentication.
RESET LOGIN POLICY
Reverts the settings of the user's login to the original values in the login policy. This usually clears all locks
that are implicitly set due to the user exceeding the failed logins or exceeding the maximum number of
days since the last login. When you reset a login policy, a user can access an account that has been locked
for exceeding a login policy option limit such as MAX_FAILED_LOGIN_ATTEMPTS or
MAX_DAYS_SINCE_LOGIN.
IDENTIFIED [ FIRST | LAST ] BY
You do not have to specify a password for the user. A user without a password cannot connect to the
database. This is useful if you are creating a role and do not want anyone to connect to the database using
the role user ID. A user ID must be a valid identifier. User IDs and passwords cannot:
A password can be either a valid identifier, or a string (maximum 255 characters) placed in single quotes.
Passwords are case-sensitive. The password should be composed of 7-bit ASCII characters, as other
characters may not work correctly if the database server cannot convert them from the client's character
set to UTF-8.
You can use the VERIFY_PASSWORD_FUNCTION option to specify a function to implement password rules
(for example, passwords must include at least one digit). If you do use a password verification function, you
cannot specify more than one user ID and password in the GRANT CONNECT statement.
The encryption algorithm used for hashing user passwords is FIPS-certified and supports the following:
Remarks
If you set the PASSWORD_EXPIRY_ON_NEXT_LOGIN value to ON, the passwords of all users assigned to this
login policy expire immediately the next time they log in. You can use the ALTER USER and LOGIN POLICY
clauses to force users to change their passwords at the next login.
If the CHANGE_PASSWORD_DUAL_CONTROL login policy option is disabled (OFF) during the dual password
change process:
● The target user will be unable to log in with the single password part already defined. The ALTER USER
statement must be reissued using single password control syntax.
● If the option is disabled after the dual password change process is complete, but before the target user
logs in, there is no impact on the target user. The target user must log in using both password parts.
If the target user is already logged in when the dual password change process occurs, the user cannot change
their password in the current session until both parts of the new password are set. Once the dual password
change process is complete, the target user can use GRANT CONNECT, ALTER USER, sp_password, or
sp_iqpassword to change the password without first logging out. When prompted to enter the current
password, use the new dual control password, not the password originally entered for the current session.
The GRANT CONNECT statement is not supported during the dual password change process to set either
password part. However, once the dual password change process is complete, the target user can use the
GRANT CONNECT statement, ALTER USER, sp_password, or sp_iqpassword to change their password
without first logging out.
The encryption algorithm used for hashing user passwords is FIPS-certified and supports the
following:
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example alters a user named SQLTester. The password is set to welcome. The SQLTester
user is assigned to the Test1 login policy and the password does not expire on the next login:
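A sketch of this statement:

```sql
ALTER USER SQLTester
    IDENTIFIED BY welcome
    LOGIN POLICY Test1
    FORCE PASSWORD CHANGE OFF;
```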
● The following example clears the distinguished name (DN) and timestamp for a user named Mary used for
LDAP authentication:
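Using the REFRESH DN clause described above, this could be:

```sql
ALTER USER Mary REFRESH DN;
```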
● The following example sets the password for user3 to PassPart1PassPart2. This assumes that user1
and user2 have the CHANGE PASSWORD system privilege and the change_password_dual_control
option is enabled (ON) in the login policy for user3:
2. User2 enters:
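Assuming the IDENTIFIED FIRST/LAST BY forms described earlier, the full two-step sequence might look like:

```sql
-- 1. User1 enters the first password part:
ALTER USER user3 IDENTIFIED FIRST BY PassPart1;

-- 2. User2 enters the second password part:
ALTER USER user3 IDENTIFIED LAST BY PassPart2;
```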
Related Information
Syntax
ALTER VIEW
… [<owner>.]<view-name> [ ( <column-name> [ , … ] ) ]
… AS <select-statement>
… [ WITH CHECK OPTION ]
ALTER VIEW
… [<owner>.]<view-name>
… { SET HIDDEN | RECOMPILE | DISABLE | ENABLE }
AS select-statement
The SELECT statement on which the view is based must not contain an ORDER BY clause, a subquery in
the SELECT list, or a TOP or FIRST qualification. It may have a GROUP BY clause and may be a UNION.
WITH CHECK OPTION
Rejects any updates and inserts to the view that do not meet the criteria of the views as defined by its
SELECT statement. However, SAP IQ currently ignores this option (it supports the syntax for compatibility
reasons).
SET HIDDEN
Obfuscates the definition of the view and causes the view to become hidden, for example in SAP IQ
Cockpit. Explicit references to the view still work.
Caution
When you use SET HIDDEN, you can unload and reload the view into other databases. Debugging using the
debugger does not show the view definition, nor is it available through procedure profiling. If you need to
change the definition of a hidden view, you must drop the view and create it again using the CREATE VIEW
statement.
RECOMPILE
Re-creates the column definitions for the view. Identical in functionality to the ENABLE clause, except you
can use it on a view that is not disabled.
DISABLE
When you use the DISABLE clause, the view is no longer available for use by the database server to answer
queries. Disabling a view is similar to dropping one, except that the view definition remains in the database.
Disabling a view also disables any dependent views. Therefore, the DISABLE clause requires exclusive
access, not only to the view being disabled, but to any dependent views, which are also disabled.
ENABLE
Enables a disabled view, which causes the database server to re-create the column definitions for the view.
Before you enable a view, you must enable any views on which it depends.
Remarks
When you alter a view, existing permissions on the view are maintained and do not require reassignment.
Instead of using the ALTER VIEW statement, you could also drop the view and re-create it using DROP VIEW
and CREATE VIEW, respectively. If you do this, view permissions must be reassigned.
After completing the view alteration using Syntax 1, the database server recompiles the view. Depending on the
type of change you made, if there are dependent views, the database server attempts to recompile them. If you
made changes that impact a dependent view, that view may become invalid, requiring you to alter the definition
for the dependent view.
If the SELECT statement defining the view contains an asterisk (*), the number of the columns in the view
could change if columns were added or deleted from the underlying tables. The names and data types of
the view columns could also change.
Altering the structure of a view requires that you replace the entire view definition with a new definition, much
as you would when creating the view using the CREATE VIEW statement.
Privileges
The privilege required varies by clause. See GRANT System Privilege Statement [page 1511] or GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
Side Effects
● Automatic commit
● All procedures and triggers are unloaded from memory, so that any procedure or trigger that references
the view reflects the new view definition. The unloading and loading of procedures and triggers can have a
performance impact if you regularly alter views.
Standards
In this section:
Related Information
Check for, and correct, any dependent views that become invalid due to changes to their underlying tables.
Context
Under most circumstances the database server automatically recompiles views to keep them valid if the
underlying tables change. However, if your table alteration removes or materially changes something
referenced by the view definition, then the dependent view becomes invalid. For example, if you remove a
column referenced in the view definition, then the dependent view is no longer valid. Correct the view definition
and manually recompile the view.
Procedure
Results
The sa_dependent_views system procedure returns the list of all dependent views for a given table or view.
Related Information
Syntax
BACKUP DATABASE
[ <backup-option> … ]
TO <archive_device> [ <archive-option>... ]
… [ WITH COMMENT <string> ]
<backup-option> ::=
{ READWRITE FILES ONLY |
READONLY <dbspace-or-file> [, … ] }
CRC { ON | OFF }
ATTENDED { ON | OFF }
BLOCK FACTOR <integer>
{ FULL | INCREMENTAL | INCREMENTAL SINCE FULL }
VIRTUAL { DECOUPLED |
ENCAPSULATED '<shell_command>' }
POINT IN TIME RECOVERY LOGS ONLY
WITH COMMENT <comment>
<dbspace-or-file> ::=
{ DBSPACES <identifier-list> | FILES <identifier-list> | <archive-root> }
<identifier-list> ::=
<identifier> [, … ]
<archive-option> ::=
SIZE <integer> STACKER <integer>
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
TO archive_device
Specifies the name of the <archive_device> to be used for backup, delimited with single quotation
marks. The <archive_device> is a file name or tape drive device name for the archive file. If you use
multiple archive devices, specify them using separate TO clauses; a comma-separated list is not allowed.
Archive devices must be distinct. The number of TO clauses determines the amount of parallelism SAP IQ
attempts with regard to output devices.
READWRITE FILES ONLY
Restricts FULL, INCREMENTAL, and INCREMENTAL SINCE FULL backups to only the set of read-write files
in the database. The read-write dbspaces/files must be SAP IQ dbspaces.
If READWRITE FILES ONLY clause is used with an INCREMENTAL or INCREMENTAL SINCE FULL backup,
the backup will not back up data on read-only dbspaces or dbfiles that has changed since the depends-on
backup. If READWRITE FILES ONLY is not specified for an INCREMENTAL or INCREMENTAL SINCE FULL
backup, the backup backs up all database pages that have changed since the depends-on backup, both on
read-write and read-only dbspaces.
CRC { ON | OFF }
Activates 32-bit cyclical redundancy checking on a per block basis (in addition to whatever error detection
is available in the hardware). When you specify this clause, the numbers computed on backup are verified
during any subsequent restore operation, affecting performance of both commands. The default is ON.
ATTENDED { ON | OFF }
Applies only when backing up to a tape device. If ATTENDED ON clause (the default) is used, a message is
sent to the application that issued the BACKUP DATABASE statement if the tape drive requires
intervention. This might happen, for example, when a new tape is required. If you specify OFF, BACKUP
DATABASE does not prompt for new tapes. If additional tapes are needed and OFF has been specified, SAP
IQ gives an error and aborts the BACKUP DATABASE command. However, a short delay is included to
account for the time an automatic stacker drive requires to switch tapes.
BLOCK FACTOR integer
Specifies the number of blocks to write at one time. The value must be greater than 0, or SAP IQ generates
an error message. Its default is 25 for UNIX systems and 15 for Windows systems (to accommodate the
smaller fixed tape block sizes). This clause effectively controls the amount of memory used for buffers. The
actual amount of memory is this value times the block size times the number of threads used to extract
data from the database. Set BLOCK FACTOR to at least 25.
FULL | INCREMENTAL | INCREMENTAL SINCE FULL
● FULL – specifies a full backup; all blocks in use in the database are saved to the archive devices. This is
the default action.
● INCREMENTAL – specifies an incremental backup; all blocks changed since the last backup of any kind
are saved to the archive devices. The keyword INCREMENTAL is not allowed with READONLY FILES.
● INCREMENTAL SINCE FULL – specifies an incremental backup; all blocks changed since the last full
backup are saved to the archive devices.
VIRTUAL DECOUPLED
Specifies a decoupled virtual backup. For the backup to be complete, copy the SAP IQ dbspaces after the
decoupled virtual backup finishes, and then perform a nonvirtual incremental backup.
VIRTUAL ENCAPSULATED 'shell_command'
Specifies an encapsulated virtual backup. The ‘shell-command’ argument can be a string or variable
containing a string that is executed as part of the encapsulated virtual backup. The shell commands
execute a system-level backup of the IQ store as part of the backup operation. For security reasons, it is
recommended that an absolute path be specified in the 'shell-command,' and file protections on that
directory be in place to prevent execution of an unintended program.
POINT IN TIME RECOVERY LOGS ONLY
BACKUP DATABASE
POINT IN TIME RECOVERY LOGS ONLY TO ' PITR-archive-directory '
The PITR archive directory is set with the ALTER DBSPACE IQ_SYSTEM_LOG RENAME statement.
POINT IN TIME RECOVERY LOGS ONLY supports only one TO clause, which must point to the PITR archive
directory. No other options are allowed.
SIZE integer
Specifies maximum tape or file capacity per output device (some platforms do not reliably detect end-of-
tape markers). No volume used on the corresponding device should be shorter than this value. This value
applies to both tape and disk files but not third-party devices. Units are kilobytes (KB), although in general,
less than 1 GB is inappropriate. For example, for a 3.5 GB tape, specify 3500000. Defaults are by platform
and medium. The final size of the backup file will not be exact, because backup writes in units of large
blocks of data.
If a size less than 1 GB is specified, a SIZE warning message appears. The backup proceeds but uses the
minimum default file size instead of the specified value. For example, if you specify a file size of 1000000
KB, a default file size of 2 GB (UNIX) or 1.5 GB (Windows) is used instead.
The SIZE parameter is per output device. SIZE does not limit the number of bytes per device; SIZE limits
the file size. Each output device can have a different SIZE parameter. During backup, when the amount of
information written to a given device reaches the value specified by the SIZE parameter, BACKUP
DATABASE does one of the following:
● If the device is a file system device, BACKUP DATABASE closes the current file and creates another file
of the same name, with the next ascending number appended to the file name, for example,
bkup1.dat1.1, bkup1.dat1.2, bkup1.dat1.3.
● If the device is a tape unit, BACKUP DATABASE closes the current tape and you need to mount another
tape.
STACKER integer
Specifies that the device is automatically loaded, and specifies the number of tapes with which it is loaded.
This value is not the tape position in the stacker, which could be zero. When ATTENDED is OFF and
STACKER is ON, SAP IQ waits for a predetermined amount of time to allow the next tape to be autoloaded.
The number of tapes supplied, together with the SIZE clause, is used to determine whether there is enough
space to store the backed-up data. Do not use this clause with third-party media management devices.
Remarks
(back to top)
The SAP IQ database might be open for use by many readers and writers when you execute a BACKUP
DATABASE command. It acts as a read-only user and relies on the Table Level Versioning feature of SAP IQ to
achieve a consistent set of data.
BACKUP DATABASE implicitly issues a CHECKPOINT prior to commencing, and then it backs up the catalog
tables that describe the database (and any other tables you have added to the catalog store). During this first
phase, SAP IQ does not allow any metadata changes to the database (such as adding or dropping columns and
tables). Correspondingly, a later RESTORE DATABASE of the backup restores only up to that initial
CHECKPOINT.
The BACKUP DATABASE command lets you specify full or incremental backups. You can choose two types of
incremental backups:
● INCREMENTAL backs up only those blocks that have changed and committed since the last backup of any
type (incremental or full).
● INCREMENTAL SINCE FULL backs up all of the blocks that have changed since the last full backup.
The first type of incremental backup can be smaller and faster to do for BACKUP DATABASE commands, but
slower and more complicated for RESTORE DATABASE commands. The opposite is true for the other type of
incremental backup. The reason is that the first type generally results in N sets of incremental backup archives
for each full backup archive. If a restore is required, a user with the SERVER OPERATOR system privilege must
restore the full backup archive first, and then each incremental archive in the proper order. (SAP IQ keeps track
of which ones are needed.) The second type requires the user with the SERVER OPERATOR system privilege to
restore only the full backup archive and the last incremental archive.
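For example, a nightly incremental backup to a file system archive might look like this sketch (the archive path is illustrative):
BACKUP DATABASE
INCREMENTAL
TO '/backup/iqdemo.inc'
Each such backup captures only the blocks changed and committed since the previous backup of any type.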
Incremental virtual backup is supported using the VIRTUAL DECOUPLED and VIRTUAL ENCAPSULATED
parameters of the BACKUP DATABASE statement.
Although you can perform an OS-level copy of tablespaces to make a virtual backup of one or more read-only
dbspaces, use the virtual backup statement, because it records the backup in the SAP IQ system tables.
BACKUP DATABASE and RESTORE DATABASE write your SAP IQ data in parallel to or from all of the archive
devices you specify. The catalog store is written serially to the first device. Faster backups and restores result
from greater parallelism.
SAP IQ supports a maximum of 36 hardware devices for backup. For faster backups, specifying one or two
devices per core helps to avoid hardware and I/O contention. Set the SIZE parameter on the BACKUP
DATABASE command to avoid creating multiple files per backup device, and consider the value used in the
BLOCK FACTOR clause on the BACKUP DATABASE command.
BACKUP DATABASE overwrites existing archive files unless you move the old files or use a different
<archive_device> name or path.
The backup API DLL implementation lets you specify arguments to pass to the DLL when opening an archive
device. For third-party implementations, the archive_device string has this format:
'DLLidentifier::vendor_specific_information'
A specific example:
'spsc::workorder=12;volname=ASD002'
Note
Only certain third-party products are certified with SAP IQ using this syntax. Before using any third-party
product to back up your SAP IQ database in this way, make sure it is certified. See the Release Bulletin for
additional usage instructions or restrictions.
For the SAP IQ implementation of the backup API, you need to specify only the tape device name or file name.
For disk devices, you should also specify the SIZE value, or SAP IQ assumes that each created disk file is no
larger than 2 GB on UNIX, or 1.5 GB on Windows.
An example of an archive device for the SAP API DLL that specifies a tape device for certain UNIX systems is:
'/dev/rmt/0'
It is your responsibility to mount additional tapes if needed, or to ensure that the disk has enough space to
accommodate the backup.
When multiple devices are specified, BACKUP DATABASE distributes the information across all devices. Other
issues for BACKUP DATABASE include:
Caution
For backup (and for most other situations) SAP IQ treats the leading backslash in a string as an escape
character, when the backslash precedes an n, an x, or another backslash. For this reason, when you
specify backup tape devices, you must double each backslash required by the Windows naming
convention. For example, indicate the first Windows tape device you are backing up to as '\\\\.\\tape0',
the second as '\\\\.\\tape1', and so on. If you omit the extra backslashes, or otherwise
misspell a tape device name, and write a name that is not a valid tape device on your system, SAP IQ
interprets this name as a disk file name.
● SAP IQ does not rewind tapes before using them. You must ensure the tapes used for backup and restore
are at the correct starting point before putting them in the tape device. SAP IQ does rewind tapes after
using them on rewinding devices.
● During backup and restore operations, if SAP IQ cannot open the archive device (for example, when it
needs the media loaded) and the ATTENDED clause is ON, it waits for ten seconds and tries again. It
continues these attempts indefinitely until either it is successful or the operation is terminated with a Ctrl
+ C.
● If you enter Ctrl + C , BACKUP DATABASE fails and returns the database to the state it was in before the
backup started.
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
(back to top)
Automatic commit
Standards
(back to top)
Examples
(back to top)
● (UNIX) This example backs up the iqdemo database onto tape devices /dev/rmt/0 and /dev/rmt/2 on
an Oracle Solaris platform. On Solaris, the letter n after the device name specifies the “no rewind on close”
feature. Always specify this feature with BACKUP DATABASE, using the naming convention appropriate for
your UNIX platform (Windows does not support this feature). This example backs up all changes to the
database since the last full backup:
BACKUP DATABASE
INCREMENTAL SINCE FULL
TO '/dev/rmt/0n' SIZE 10000000
TO '/dev/rmt/2n' SIZE 15000000
Note
Size units are kilobytes (KB), although in most cases, sizes of less than 1 GB are inappropriate. In this
example, the specified sizes are 10 GB and 15 GB.
Related Information
Syntax
[ <statement-label> : ]
… BEGIN [ [ NOT ] ATOMIC ]
… [ <local-declaration> ; … ]
… <statement-list>
… [ EXCEPTION [ <exception-case> … ] ]
… END [ <statement-label> ]
<local-declaration> ::=
{ <variable-declaration>
| <cursor-declaration>
| <exception-declaration>
| <temporary-table-declaration> }
<variable-declaration> ::=
DECLARE <variable-name> [ , … ] <data-type>
[{ = | DEFAULT} <initial-value>]
<initial-value> ::=
<special-value>
| <string>
| [ - ] <number>
| ( <constant-expression> )
| <built-in-function> ( <constant-expression> )
| NULL
<special-value> ::=
CURRENT {
DATABASE
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
statement-label
If specified, it must match the beginning <statement-label>. You can use the LEAVE statement to
resume execution at the first statement after the compound statement. The compound statement that is
the body of a procedure has an implicit label that is the same as the name of the procedure.
initial-value
If specified, the variable is set to that value and the data type must match the type defined by <data-
type>. If you do not specify an initial-value, the variable contains the NULL value until a SET statement
assigns a different value.
Remarks
(back to top)
The body of a procedure is a compound statement. Compound statements can also be used in control
statements within a procedure.
A compound statement allows one or more SQL statements to be grouped together and treated as a unit. A
compound statement starts with BEGIN and ends with END. Immediately after BEGIN, a compound statement
can have local declarations that exist only within the compound statement. A compound statement can have a
local declaration for a variable, a cursor, a temporary table, or an exception. Local declarations can be
referenced by any statement in that compound statement, or in any compound statement nested within it.
Local declarations are invisible to other procedures that are called from within a compound statement.
An atomic statement is a statement executed completely or not at all. For example, an UPDATE statement that
updates thousands of rows might encounter an error after updating many rows. If the statement does not
complete, all changes revert back to their original state. Similarly, if you specify that the BEGIN statement is
atomic, the statement is executed either in its entirety or not at all.
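A minimal sketch of an atomic compound statement with a local variable (the table and variable names are illustrative):
BEGIN ATOMIC
DECLARE cnt INT DEFAULT 0;
SELECT COUNT(*) INTO cnt FROM Employees;
UPDATE Employees SET Salary = Salary * 1.05;
END
If any statement in the block fails, the entire block is rolled back.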
Privileges
(back to top)
None
Standards
(back to top)
Examples
(back to top)
CustomerLoop:
LOOP
FETCH NEXT curThisCust
INTO ThisCompany, ThisValue ;
IF SQLSTATE = err_notfound THEN
CLOSE curThisCust ;
LEAVE CustomerLoop ;
END IF ;
END LOOP CustomerLoop
Related Information
Groups CREATE INDEX statements together for execution at the same time.
Syntax
Parameters
statement-list
Remarks
The BEGIN PARALLEL IQ … END PARALLEL IQ statement lets you execute a group of CREATE INDEX
statements as though they are a single DDL statement, creating indexes on multiple IQ tables at the same time.
While this statement is executing, you and other users cannot issue other DDL statements.
This statement does not support CREATE INDEX statements on:
● RLV-enabled tables
● TEXT indexes
Privileges
None
Side Effects
Automatic commit
Standards
Examples
The following statement executes atomically. If one command fails, the entire statement rolls back:
BEGIN PARALLEL IQ
CREATE HG INDEX c1_HG on table1 (col1);
CREATE HNG INDEX c12_HNG on table1 (col12);
CREATE HNG INDEX c2_HNG on table1 (col2);
END PARALLEL IQ
Related Information
Syntax
Parameters
transaction-name
(Optional) The name assigned to this transaction. It must be a valid identifier. Use transaction names only
on the outermost pair of nested BEGIN/COMMIT or BEGIN/ROLLBACK statements.
Remarks
Note
BEGIN TRANSACTION is a T-SQL construct and must contain only valid T-SQL commands. You cannot mix
T-SQL and non-T-SQL commands.
When executed inside a transaction, the BEGIN TRANSACTION statement increases the nesting level of
transactions by one. The nesting level is decreased by a COMMIT statement. When transactions are nested, only
the outermost COMMIT makes the changes to the database permanent.
The default SAP ASE transaction mode, called unchained mode, commits each statement individually, unless
an explicit BEGIN TRANSACTION statement is executed to start a transaction. In contrast, the ISO SQL/2003
compatible chained mode only commits a transaction when an explicit COMMIT is executed or when a
statement that carries out an autocommit (such as data definition statements) is executed.
You can control the mode by setting the chained database option. The default setting for ODBC and embedded
SQL connections in SAP IQ is On, in which case SAP IQ runs in chained mode. (ODBC users should also
check the AutoCommit ODBC setting). The default for TDS connections is Off.
In unchained mode, a transaction is implicitly started before any data retrieval or modification statement.
These statements include: DELETE, INSERT, OPEN, FETCH, SELECT, and UPDATE. You must still explicitly end
the transaction with a COMMIT or ROLLBACK statement.
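For example, to place the current connection in unchained mode, set the chained option off (a sketch; SET TEMPORARY OPTION limits the change to the current connection):
SET TEMPORARY OPTION chained = 'Off'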
Note
When calling a stored procedure, you should ensure that it operates correctly under the required
transaction mode.
A ROLLBACK statement without a transaction or savepoint name always rolls back statements to the
outermost BEGIN TRANSACTION (explicit or implicit) statement, and cancels the entire transaction.
Privileges
None
Standards
Examples
The following example reports successive values of @@trancount as 0, 1, 2, 1, 0 and prints the values on the
server window:
PRINT @@trancount
BEGIN TRANSACTION
PRINT @@trancount
BEGIN TRANSACTION
PRINT @@trancount
COMMIT TRANSACTION
PRINT @@trancount
COMMIT TRANSACTION
PRINT @@trancount
Do not rely on the value of @@trancount for more than keeping track of the number of explicit BEGIN
TRANSACTION statements that have been issued.
When SAP ASE starts a transaction implicitly, the @@trancount variable is set to 1. SAP IQ does not set the
@@trancount value to 1 when a transaction is started implicitly. So, the SAP IQ @@trancount variable has a
value of zero before any BEGIN TRANSACTION statement (even though there is a current transaction), while in
SAP ASE (in chained mode) it has a value of 1.
For transactions starting with a BEGIN TRANSACTION statement, @@trancount has a value of 1 in both SAP
IQ and SAP ASE after the first BEGIN TRANSACTION statement. If a transaction is implicitly started with a
different statement, and a BEGIN TRANSACTION statement is then executed, @@trancount has a value of 2 in
both SAP IQ, and SAP ASE after the BEGIN TRANSACTION statement.
Syntax
Syntax 1
Syntax 2
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
AS USER userid IDENTIFIED BY password
(Optional) Calls a procedure or function as a different user. The database server verifies that the user ID
and password provided are valid, and then executes the procedure or function as the specified user. The
invoker of the procedure is the specified user. Upon exiting the procedure or function, the user context is
restored to its original state.
Note
All string values must be enclosed in single quotes; otherwise the database server interprets them as
variable names.
(back to top)
CALL invokes a procedure that has been previously created with a CREATE PROCEDURE statement. When the
procedure completes, any INOUT or OUT parameter values are copied back.
Note
The AS USER ... IDENTIFIED BY clause only applies to the CALL statement and is not supported for
procedures in the FROM clause or functions in the select list.
You can specify the argument list by position or by using keyword format. By position, arguments match up
with the corresponding parameter in the parameter list for the procedure. By keyword, arguments match the
named parameters.
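For example, for a hypothetical procedure with parameters named customer_id and product_id, the following two calls are equivalent; the first matches arguments by position, the second by parameter name:
CALL sp_count_orders( 101, 300 )
CALL sp_count_orders( product_id = 300, customer_id = 101 )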
Procedure arguments can be assigned default values in the CREATE PROCEDURE statement, and missing
parameters are assigned the default value, or, if no default is set, NULL.
Inside a procedure, CALL can be used in a DECLARE statement when the procedure returns result sets.
Note
Procedures can return an integer value (as a status indicator, say) using the RETURN statement. You can save
this return value in a variable using the equality sign as an assignment operator:
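A sketch, using a hypothetical procedure sp_count_orders:
BEGIN
DECLARE result INT;
result = CALL sp_count_orders( 101 );
END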
Note
Use of this statement to invoke a function is deprecated. To call functions, use an assignment statement to
invoke the function and assign its result to a variable. For example:
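A sketch, using a hypothetical function ComputeTotal:
BEGIN
DECLARE total INT;
SET total = ComputeTotal( 42 );
END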
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
(back to top)
Examples
(back to top)
● The following example calls the sp_customer_list procedure. This procedure has no parameters, and
returns a result set:
CALL sp_customer_list()
● The following example creates a procedure to return the number of orders placed by the customer whose
ID is supplied, creates a variable to hold the result, calls the procedure, and displays the result:
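A sketch of such a sequence (the procedure and variable names are illustrative; SalesOrders and CustomerID are from the demo database):
CREATE PROCEDURE OrderCount( IN customer_id INT, OUT order_count INT )
BEGIN
SELECT COUNT(*) INTO order_count
FROM SalesOrders
WHERE CustomerID = customer_id;
END;
CREATE VARIABLE cnt INT;
CALL OrderCount( 101, cnt );
SELECT cnt;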
Related Information
The CASE statement is a control statement that lets you choose a list of SQL statements to execute based on
the value of an expression.
Syntax
CASE <value-expression>
… WHEN [ <constant> | NULL ] THEN <statement-list> …
… [ WHEN [ <constant> | NULL ] THEN <statement-list> ] …
… ELSE <statement-list>
… END
Remarks
If a WHEN clause exists for the value of <value-expression>, the <statement-list> in the WHEN clause
is executed. If no appropriate WHEN clause exists, and an ELSE clause exists, the <statement-list> in the
ELSE clause is executed. Execution resumes at the first statement after the END.
Note
The ANSI standard allows two forms of CASE statements. Although SAP IQ allows both forms, when CASE is
in the predicate, for best performance you must use the form shown here.
If you require the other form (also called ANSI syntax) for compatibility with SAP SQL Anywhere, use this
syntax:
CASE
WHEN [ search-condition | NULL] THEN statement-list ...
[ WHEN [ search-condition | NULL] THEN statement-list ] ...
[ ELSE statement-list ]
END [ CASE ]
With this ANSI syntax form, the statements are executed for the first satisfied search-condition in the CASE
statement. The ELSE clause is executed if none of the <search-conditions> are met. If the expression
can be NULL, use the following syntax for the first <search-condition>:
Do not confuse the syntax of the CASE statement with that of the CASE expression.
Privileges
None
Examples
The following example classifies the products listed in the Products table of the demo database into one of
shirt, hat, shorts, or unknown:
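A sketch of the classification logic such a procedure might contain (product names and variables are illustrative):
CASE product_name
WHEN 'Tee Shirt' THEN
SET product_type = 'Shirt' ;
WHEN 'Baseball Cap' THEN
SET product_type = 'Hat' ;
WHEN 'Shorts' THEN
SET product_type = 'Shorts' ;
ELSE
SET product_type = 'Unknown' ;
END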
Related Information
Syntax
CHECKPOINT
CHECKPOINT forces the database server to execute a checkpoint. Checkpoints are also performed
automatically by the database server according to an internal algorithm. Applications do not normally need to
issue CHECKPOINT.
SAP IQ uses checkpoints differently than OLTP databases such as SAP SQL Anywhere. OLTP databases tend
to have short transactions that affect only a small number of rows. Writing entire pages to disk would be very
expensive for them. Instead, OLTP databases generally write to disk at checkpoints, and write only the changed
data rows. SAP IQ is an OLAP database. A single OLAP transaction can change thousands or millions of rows of
data. For this reason, the database server does not wait for a checkpoint to occur to perform physical writes. It
writes updated data pages to disk after each transaction commits. For an OLAP database, writing full pages of
data to disk is much more effective than writing small amounts of data at arbitrary checkpoints.
Adjusting the checkpoint time or issuing explicit checkpoints may be unnecessary. Controlling checkpoints is
less important in SAP IQ than in OLTP database products, because SAP IQ writes the actual data pages after
each transaction commits.
Privileges
Requires the CHECKPOINT system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Related Information
Syntax
CLEAR
Closes any open result sets and leaves the contents of the SQL Statements pane unchanged.
Privileges
None
Side Effects
The CLEAR statement closes the cursor that is associated with the data being cleared.
Standards
Related Information
Syntax
Remarks
Privileges
None
Standards
Examples
Syntax
COMMENT ON
{ COLUMN [<owner>.]<table-name>.<column-name>
| DBSPACE <dbspace-name>
| EVENT <event-name>
| EXTERNAL [ENVIRONMENT] OBJECT <object-name>
| EXTERNAL ENVIRONMENT <environment-name>
| EXTERNAL OBJECT <object-name>
| FOREIGN KEY [<owner>.]<table-name>.<role-name>
| INDEX [ [<owner>.]<table>.]<index-name>
| INTEGRATED LOGIN <integrated-login-id>
| JAVA CLASS <java-class-name>
| JAVA JAR <java-jar-name>
| KERBEROS LOGIN "<client-Kerberos-principal>"
| LDAP SERVER <ldap-server-name>
| LOGICAL SERVER <logical-server-name>
| LOGIN POLICY <policy-name>
| LS POLICY <ls-policy-name>
| MATERIALIZED VIEW [<owner>.]<materialized-view-name>
| PRIMARY KEY ON [<owner>.]<table-name>
| PROCEDURE [<owner>.]<procedure-name>
| ROLE <role-name>
| SERVICE <web-service-name>
| SEQUENCE [<owner>.]<sequence-name>
| SPATIAL REFERENCE SYSTEM <srs-name>
| SPATIAL UNIT OF MEASURE <uom-identifier>
| TABLE [<owner>.]<table-name>
| TEXT CONFIGURATION [<owner>.]<text-config-name>
| TEXT INDEX <text-index-name>
| TRIGGER [[<owner>.]<table-name>.]<trigger-name>
| USER <userid>
| VIEW [<owner>.]<view-name> }
IS <comment>
<environment-name> ::=
JAVA | PERL | PHP | C_ESQL32 | C_ESQL64 | C_ODBC32 | C_ODBC64
<comment> ::=
{ <string> | NULL }
Go to:
● Privileges
Remarks
(back to top)
The COMMENT statement updates remarks in the ISYSREMARK system table. You can remove a comment by
setting it to NULL. The owner of a comment on an index or trigger is the owner of the table on which the index
or trigger is defined.
The COMMENT ON DBSPACE, COMMENT ON JAVA JAR, and COMMENT ON JAVA CLASS statements allow you
to set the Remarks column in the SYS.ISYSREMARK system table.
Note
Materialized views are supported only for SAP SQL Anywhere tables in the IQ catalog store.
Privileges
(back to top)
The privilege required varies by clause. See GRANT System Privilege Statement [page 1511] or GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
JAVA CLASS or JAVA JAR – requires the MANAGE ANY EXTERNAL OBJECT system privilege.
ROLE – requires administrative privilege over the role being commented on.
Standards
(back to top)
Examples
(back to top)
● The following example adds a comment to the Employees table:
COMMENT
ON TABLE Employees
IS 'Employee information'
● The following example removes the comment from the Employees table:
COMMENT
ON TABLE Employees
IS NULL
Related Information
Syntax
COMMIT [ WORK ]
Remarks
Syntax 1
Data definition statements carry out commits automatically. For information, see the Side Effects listing for
each SQL statement.
COMMIT fails if the database server detects any invalid foreign keys. This makes it impossible to end a
transaction with any invalid foreign keys. Usually, foreign key integrity is checked on each data manipulation
operation. However, if the database option WAIT_FOR_COMMIT is set ON or a particular foreign key was defined
with a CHECK ON COMMIT clause, the database server delays integrity checking until the COMMIT statement is
executed.
Syntax 2
Nested transactions are similar to savepoints. When executed as the outermost of a set of nested transactions,
the statement makes changes to the database permanent. When executed inside a transaction, COMMIT
TRANSACTION decreases the nesting level of transactions by one. When transactions are nested, only the
outermost COMMIT makes the changes to the database permanent.
The optional parameter <transaction-name> is the name assigned to this transaction. It must be a valid
identifier. Use transaction names only on the outermost pair of nested BEGIN/COMMIT or BEGIN/ROLLBACK
statements.
You can use a set of options to control the detailed behavior of the COMMIT statement. See
COOPERATIVE_COMMIT_TIMEOUT Option, COOPERATIVE_COMMITS Option, DELAYED_COMMITS Option,
Privileges
None
Side Effects
Standards
Examples
COMMIT
● The following example shows how the Transact-SQL batch reports successive values of @@trancount as
0, 1, 2, 1, 0:
PRINT @@trancount
BEGIN TRANSACTION
PRINT @@trancount
BEGIN TRANSACTION
PRINT @@trancount
COMMIT TRANSACTION
PRINT @@trancount
COMMIT TRANSACTION
PRINT @@trancount
go
Syntax
CONFIGURE
Remarks
The dbisql configuration window displays the current settings of all dbisql options. It does not display or let
you modify database options.
If you select Permanent, the options are written to the SYSOPTION table in the database and the database
server performs an automatic COMMIT. If you do not choose Permanent, and instead click OK, options are set
temporarily and remain in effect only for the current database connection.
Privileges
None
Related Information
Establishes a connection to the database identified by <database-name> running on the server identified by
<engine-name>.
Syntax
Syntax 1
CONNECT
…[ TO <engine-name> ]
…[ DATABASE <database-name> ]
…[ AS <connection-name> ]
…[ USER ] <userid> [ IDENTIFIED BY <password> ]
Syntax 2
Go to:
● Remarks
● Standards
● Privileges
● Examples
Parameters
(back to top)
AS connection-name
The connection can optionally be named by specifying the AS clause. This allows multiple connections to
the same database, or multiple connections to the same or different database servers, all simultaneously.
Each connection has its own associated transaction. You might even get locking conflicts between your
connections.
connect-string
A list of parameter settings of the form keyword=<value>, which must be enclosed in single quotes.
Remarks
(back to top)
If no <engine-name> is specified, the default local database server is assumed (the first database server
started). If no <database-name> is specified, the first database on the given server is assumed.
The user ID and password are used for permission checks on all dynamic SQL statements. By default, the
password is case-sensitive; the user ID is not. You can connect without a password by using a host variable
for the password and setting the value of the host variable to be the null pointer.
Dbisql behavior
If no database or server is specified in the CONNECT statement, dbisql remains connected to the current
database, rather than to the default server and database. If a database name is specified without a server
name, dbisql attempts to connect to the specified database on the current server. You must specify the
database name defined in the -n database switch, not the database file name. If a server name is specified
without a database name, dbisql connects to the default database on the specified server. For example, if
this batch is executed while connected to a database, the two tables are created in the same database:
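A sketch of such a batch (table names are illustrative; the CONNECT statement names neither a server nor a database, so dbisql stays connected to the current database):
CREATE TABLE t1( c1 INT );
CONNECT USER "DBA" IDENTIFIED BY <password>;
CREATE TABLE t2( c1 INT );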
No other database statements are allowed until a successful CONNECT statement has been executed.
The user ID and password check the permissions on SQL statements. If the password or the user ID and
password are not specified, the user is prompted to type the missing information. By default, the password
is case-sensitive; the user ID is not.
Multiple connections are managed through the concept of a current connection. After a successful connect
statement, the new connection becomes the current one. To switch to a different connection, use SET
CONNECTION. Executing a CONNECT statement does not close the existing connection (if any). Use
DISCONNECT to drop connections.
Static SQL statements use the user ID and password specified with the -l option on the SQLPP command line.
If no -l option is given, the user ID and password of the CONNECT statement are used for static SQL
statements also.
Privileges
(back to top)
Standards
(back to top)
Examples
(back to top)
● The following example connects to the default database using dbisql without specifying credentials. You
are prompted for a user ID and password:
CONNECT
● The following example connects to the default database as user DBA. You are prompted for the
password:
● The following example connects to the demo database as user DBA using dbisql, where
<machine_iqdemo> is the engine name:
CONNECT
TO <machine_iqdemo>
USER "DBA"
IDENTIFIED BY <password>
● The following example connects to the demo database from dbisql using a connect string:
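A sketch of such a connect string (the engine name and credentials are placeholders; ENG, DBN, UID, and PWD are standard connection parameters):
CONNECT USING 'ENG=<machine_iqdemo>;DBN=iqdemo;UID=DBA;PWD=<password>'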
Related Information
Associates an SAP IQ agent for SAP IQ Cockpit with the named server to support high availability.
Syntax
Remarks
The SYS.ISYSIQMPXSERVERAGENT system table stores the agent connection definitions for the server.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
Automatic commit
Examples
The following example creates an agent for the SAP IQ server named mpx_writer1. The user login is
"sqltester" and the port number is 1138:
Related Information
Syntax
<algorithm-key-spec> ::=
ON
| [ ON ] KEY <key> [ ALGORITHM <AES-algorithm> ]
| [ ON ] ALGORITHM <AES-algorithm> KEY <key>
| [ ON ] ALGORITHM 'SIMPLE'
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
(back to top)
By default, passwords must be a minimum length of 6 characters unless the MINIMUM PASSWORD
LENGTH clause is specified and set to a different value. Passwords should be composed of 7-bit ASCII
characters. Other characters may not work correctly if the server cannot convert from the client character
set to UTF-8.
TRANSACTION LOG
A file where the database server logs all changes made to the database. The transaction log plays a key role
in system recovery. If you do not specify any TRANSACTION LOG clause, or if you omit a path for the file
name, it is placed in the same directory as the .db file. However, you should place it on a different physical
device from the .db and .iq files. It cannot be created on a raw partition.
MIRROR mirror-file-name
An identical copy of a transaction log, usually maintained on a separate device, for greater protection of
your data. By default, SAP IQ does not use a mirrored transaction log. If you do want to use a transaction
log mirror, you must provide a file name. If you use a relative path, the transaction log mirror is created
relative to the directory of the catalog store (db-name.db). Tip: Always create a mirror copy of the
transaction log.
CASE { RESPECT | IGNORE }
For databases created with CASE RESPECT, all affected values are case-sensitive in comparisons and
string operations. Database object names such as columns, procedures, or user IDs, are unaffected.
Dbspace names are always case-insensitive, regardless of the CASE specification. The default (RESPECT)
is that all comparisons are case-sensitive. CASE RESPECT provides better performance than CASE
IGNORE.
PAGE SIZE catalog-page-size
Page size for the SQL Anywhere segment of the database (containing the catalog tables) can be 4096,
8192, 16384, or 32768 bytes. Normally, use the default, 4096 (4 KB). Large databases might need a larger
page size than the default and may see performance benefits as a result. The smaller values might limit the
number of columns your database can support. If you specify a page size smaller than 4096, SAP IQ uses a
page size of 4096.
COLLATION collation-label [ ( collation-tailoring-string ) ]
The collation sequence used for sorting and comparison of character data types in the database. The
collation provides character comparison and ordering information for the encoding (character set) being
used. If the COLLATION clause is not specified, SAP IQ chooses a collation based on the operating system
language and encoding. For most operating systems, the default collation sequence is ISO_BINENG, which
provides the best performance. In ISO_BINENG, the collation order is the same as the order of characters
in the ASCII character set. All uppercase letters precede all lowercase letters (for example, both 'A' and 'B'
precede 'a').
You can choose the collation from a list of supported collations. For SAP SQL Anywhere databases created
on an SAP IQ server, the collation can also be the Unicode Collation Algorithm (UCA). If UCA is specified,
also specify the ENCODING clause. SAP IQ does not support any of the UCA-based collations for SAP IQ
databases. If a UCA-based collation is specified in the CREATE DATABASE statement for an SAP IQ
database, an error is reported.
Optionally, you can specify collation tailoring options (<collation-tailoring-string>) for additional
control over the sorting and comparing of characters. These options take the form of keyword=value pairs,
assembled in parentheses, following the collation name.
This table contains the supported keyword, allowed alternate forms, and allowed values for the collation
tailoring option (<collation-tailoring-string>) for an SAP IQ database:
Keyword: CaseSensitivity
Collations: all supported collations
Alternate forms: CaseSensitive, Case
Allowed values:
● respect – respect case differences between letters. For the UCA collation, this is equivalent to
UpperFirst. For other collations, the value of respect depends on the collation itself.
● ignore – ignore case differences between letters.
● UpperFirst – always sort uppercase first (Aa).
● LowerFirst – always sort lowercase first (aA).
Note
Several collation tailoring options are supported when you specify the UCA collation for an SAP SQL
Anywhere database created on an SAP IQ server. For all other collations and for SAP IQ, only case
sensitivity tailoring is supported. Also, databases created with collation tailoring options cannot be
started using a pre-15.0 database server.
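For example, following the COLLATION syntax shown above, a clause requesting the default binary collation with case differences ignored might look like this sketch (the rest of the CREATE DATABASE statement is omitted):
COLLATION 'ISO_BINENG' ( CaseSensitivity = ignore )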
Makes the data stored in your physical database file unreadable. Use the CREATE DATABASE ENCRYPTED
keyword without the TABLE keyword to encrypt the entire database. Use the ENCRYPTED TABLE clause to
enable only table encryption for SQL Anywhere tables. Table-level encryption is not supported for SAP IQ
tables. Enabling table encryption means that the tables that are subsequently created or altered using the
ENCRYPTED clause are encrypted using the settings you specified at database creation.
● Simple encryption is equivalent to obfuscation. The data is unreadable, but someone with
cryptographic expertise could decipher the data. For simple encryption, specify the CREATE
DATABASE clause ENCRYPTED ON ALGORITHM 'SIMPLE', ENCRYPTED ALGORITHM 'SIMPLE', or
specify the ENCRYPTED ON clause without specifying an algorithm or key.
● Strong encryption is achieved through the use of a 128-bit algorithm and a security key. The data is
unreadable and virtually undecipherable without the key. For strong encryption, specify the CREATE
DATABASE clause ENCRYPTED ON ALGORITHM with a 128-bit or 256-bit AES algorithm and use the
KEY clause to specify an encryption key. You should choose a value for your key that is at least 16
characters long, contains a mix of uppercase and lowercase, and includes numbers, letters, and
special characters.
This encryption key is required each time you start the database.
You can specify encryption only during database creation. To introduce encryption to an existing database
requires a complete unload, database re-creation, and reload of all data. If the ENCRYPTED clause is used
but no algorithm is specified, the default is AES. By default, encryption is OFF.
Protect your encryption key! Store a copy of your key in a safe location. A lost key results in a
completely inaccessible database from which there is no recovery.
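A minimal sketch of the strong-encryption clauses described above (the database name and key are hypothetical; AES, the ALGORITHM clause, and the KEY clause come from the text, but verify the exact clause ordering against the full syntax):

```sql
-- Hypothetical example: strong encryption with AES and a key that is
-- at least 16 characters of mixed case, digits, and special characters.
CREATE DATABASE 'secure.db'
IQ PATH 'secure.iq'
ENCRYPTED ON ALGORITHM 'AES' KEY 'Ls9c%aQ4#nX8vE2p';
```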
BLANK PADDING ON
Trailing blanks are ignored for comparison purposes (BLANK PADDING ON), and Embedded SQL programs
pad strings that are fetched into character arrays. This option is provided for compatibility with the ISO/
ANSI SQL standard. CREATE DATABASE no longer supports BLANK PADDING OFF.
JCONNECT { ON | OFF }
To use the SAP jConnect for JDBC driver to access system catalog information, install jConnect support.
Set JCONNECT to OFF to exclude the jConnect system objects (the default is ON). You can still use JDBC,
as long as you do not access system information.
IQ PATH iq-file-name
The path name of the main segment file containing the SAP IQ data. You can specify an operating system
file or a raw partition of an I/O device. (IQ PATH Parameter Guidelines in SAP IQ Administration: Database
describes the format for specifying a raw partition.)
SAP IQ automatically detects the type based on the path name you specify. If you use a relative path, the
file is created relative to the directory of the catalog store (the .db file).
If you omit the IQ PATH clause, specifying any of these options generates an error: IQ SIZE, IQ PAGE SIZE,
BLOCK SIZE, MESSAGE PATH, TEMPORARY PATH, and TEMPORARY SIZE.
IQ SIZE iq-file-size
The size in MB of either the raw partition or the operating system file you specify with the IQ PATH clause.
For raw partitions, you should always take the default by not specifying IQ SIZE, which allows SAP IQ to use
the entire raw partition; if you specify a value for IQ SIZE, the value must match the size of the I/O device or
SAP IQ returns an error. For operating system files, you can specify a value from the minimum in the
following table up to a maximum of 100 TB.
The default size for an operating system file depends on IQ PAGE SIZE:
IQ PAGE SIZE
The page size, in bytes, for the SAP IQ segment of the database (containing the IQ tables and indexes). The
value must be a power of 2, from 65536 to 524288 bytes. The default is 131072 (128 KB). Other values for
the size are changed to the next larger size. The IQ page size determines the default I/O transfer block size
and maximum data compression for your database.
BLOCK SIZE
The I/O transfer block size, in bytes, for the SAP IQ segment of the database. The value must be less than
IQ PAGE SIZE, and must be a power of two between 4096 and 32768. Other values for the size are changed
to the next larger size. The default value depends on the value of the IQ PAGE SIZE clause. For most
applications, the default value is optimum.
IQ RESERVE sizeMB
Size, in megabytes, of space to reserve for the main IQ store (IQ_SYSTEM_MAIN dbspace), so that the
dbfile can be increased in size in the future. The sizeMB parameter can be any number greater than 0. You
cannot change the reserve after the dbspace is created. When IQ RESERVE is specified, the database uses
more space for internal (free list) structures. If reserve size is too large, the space needed for the internal
structures can be larger than the specified size, which results in an error.
TEMPORARY RESERVE sizeMB
Size, in megabytes, of space to reserve for the temporary IQ store (IQ_SYSTEM_TEMP dbspace), so that
the dbfile can be increased in size in the future. The sizeMB parameter can be any number greater than 0.
You cannot change the reserve after the dbspace is created. When TEMPORARY RESERVE is specified, the
database uses more space for internal (free list) structures. If reserve size is too large, the space needed
for the internal structures can be larger than the specified size, which results in an error.
Note
Reserve and mode for temporary dbspaces are lost if the database is restored from a backup.
TEMPORARY SIZE
Size, in megabytes, of either the raw partition or the operating system file you specify with the
TEMPORARY PATH clause. For raw partitions, always use the default by not specifying TEMPORARY SIZE,
which allows SAP IQ to use the entire raw partition. The default for operating system files is always
one-half the value of IQ SIZE. If the IQ store is on a raw partition and the temporary store is an
operating system file, the default TEMPORARY SIZE is half the size of the IQ store raw partition.
MESSAGE PATH
The path name of the messages trace file. You must specify an operating system file; the message file
cannot be on a raw partition. If you use a relative path or omit the path, the message file is created
relative to the directory of the .db file.
SYSTEM PROCEDURE AS DEFINER { ON | OFF }
Defines whether a privileged system procedure runs with the privileges of the invoker (the person
executing the procedure) or the definer (the owner of the procedure). When set to ON, pre-16.0 privileged
system procedures execute with the privileges of the definer; when set to OFF (the default), or when the
clause is not specified, they execute with the privileges of the invoker. 16.0 or later privileged system
procedures always execute with the privileges of the invoker.
Remarks
Creates a database with the supplied name and attributes. The IQ PATH clause is required for creating the SAP
IQ database; otherwise, you create a standard SAP SQL Anywhere database.
When SAP IQ creates a database, it automatically generates four database files to store different types of data
that constitute a database. Each file corresponds to a dbspace, the logical name by which SAP IQ identifies
database files:
● <db-name.db> is the file that holds the catalog dbspace, SYSTEM. It contains the system tables and
stored procedures describing the database and any standard SAP SQL Anywhere database objects you
add. If you do not include the .db extension, SAP IQ adds it. This initial dbspace contains the catalog store,
and you can later add dbspaces to increase its size. It cannot be created on a raw partition.
● <db-name.iq> is the default name of the file that holds the main data dbspace, IQ_SYSTEM_MAIN, which
contains the IQ tables and indexes. You can specify a different file name with the IQ PATH clause. This initial
dbspace contains the IQ store.
Caution
IQ_SYSTEM_MAIN is a special dbspace that contains all structures necessary for the database to open:
the IQ db_identity blocks, the IQ checkpoint log, the IQ rollforward/rollback bitmaps of each committed
transaction and each active checkpointed transaction, the incremental backup bitmaps, and the
freelist root pages. IQ_SYSTEM_MAIN is always online when the database is open.
The administrator can allow user tables to be created in IQ_SYSTEM_MAIN, especially if these tables
are small, important tables. However, it is more common that immediately after creating the database,
the administrator creates a second main dbspace, revokes create privilege in dbspace
IQ_SYSTEM_MAIN from all users, grants create privilege on the new main dbspace to selected users,
and sets PUBLIC.default_dbspace to the new main dbspace.
● <db-name.iqtmp> is the default name of the file that holds the initial temporary dbspace,
IQ_SYSTEM_TEMP. It contains the temporary tables generated by certain queries. The required size of this
file can vary depending on the type of query and amount of data. You can specify a different name using
the TEMPORARY PATH clause. This initial dbspace contains the temporary store.
● <db-name.iqmsg> is the default name of the file that contains the messages trace dbspace,
IQ_SYSTEM_MSG. You can specify a different file name using the MESSAGE PATH clause.
In addition to these files, a database has a transaction log file (db-name.log), and might have a transaction
log mirror file.
The dbbackup utility truncates the database name to 70 characters and creates a target file with a truncated
name. SAP IQ uses dbbackup when synchronizing secondary servers. Due to dbbackup restrictions, database
names must be less than 70 characters long.
In Windows, if you specify a path, any backslash characters (\) must be doubled if they are followed by an n or
an x. This prevents them from being interpreted as a newline character (\n) or as a hexadecimal number (\x),
according to the rules for strings in SQL. It is safer to always double the backslash.
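The doubling rule can be sketched as follows (the directory path is hypothetical):

```sql
-- '\n' in 'c:\newdir' would otherwise be read as a newline, so the
-- backslash must be doubled; doubling every backslash is safest.
CREATE DATABASE 'c:\\newdir\\mydb.db'
IQ PATH 'c:\\newdir\\mydb.iq';
```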
● The catalog store file (<db-name.db>) is created relative to the working directory of the server.
● The IQ store, temporary store, and message log files are created in the same directory as, or relative to, the
catalog store.
Caution
The database file, temporary dbspace, and transaction log file must be located on the same physical
machine as the database server. Do not place database files and transaction log files on a network drive.
The transaction log should be on a separate device from its mirror, however.
On UNIX-like operating systems, you can create symbolic links, which are indirect pointers that contain the
path name of the file to which they point. You can use symbolic links as relative path names. There are several
advantages to creating a symbolic link for the database file name:
● Symbolic links to raw devices can have meaningful names, while the actual device name syntax can be
obscure.
● A symbolic name might eliminate problems restoring a database file that was moved to a new directory
since it was backed up.
ln -s /disk1/company/iqdata/company.iq company_iq_store
Once you create this link, you can specify the symbolic link in commands like CREATE DATABASE or RESTORE
DATABASE instead of the fully qualified path name.
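For instance, once the company_iq_store link above exists, its use might be sketched like this (other clauses omitted):

```sql
-- The symbolic link stands in for the raw device or absolute path.
CREATE DATABASE 'company.db'
IQ PATH 'company_iq_store';
```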
When you create a database or a dbspace, the path for every dbspace file must be unique. If your CREATE
DATABASE command specifies the identical path and file name for these two stores, you receive an error.
● Specify a different extension for each file (for example, mydb.iq and mydb.iqtmp)
● Specify a different file name (for example, mydb.iq and mytmp.iq)
● Specify a different path name (for example, /iqfiles/main/iq and /iqfiles/temp/iq) or different
raw partitions
Caution
To maintain database consistency on UNIX-like operating systems, you must specify file names that are
links to different files. SAP IQ cannot detect the target where linked files point. Even if the file names in the
command differ, make sure they do not point to the same operating system file.
Character strings inserted into tables are always stored in the case they are entered, regardless of whether the
database is case-sensitive or not. If the string Value is inserted into a character data type column, the string is
always stored in the database with an uppercase V and the remainder of the letters lowercase. SELECT
statements return the string as Value. If the database is not case-sensitive, however, all comparisons make
Value the same as value, VALUE, and so on. The SAP IQ server may return results in any combination of
lowercase and uppercase, so you cannot expect case-sensitive results in a database that is case-insensitive
(CASE IGNORE).
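A short sketch of this behavior (the table and column names are hypothetical):

```sql
-- In a CASE IGNORE database, the comparison below matches the stored
-- row even though the letter cases differ.
CREATE TABLE T ( c VARCHAR(10) );
INSERT INTO T VALUES ( 'ONE' );
SELECT c FROM T WHERE c = 'oNe';  -- matches; the returned case may vary
```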
The result of the SELECT can be “oNe” (as specified in the WHERE clause) and not necessarily “ONE” (as
stored in the database).
All databases are created with at least one user ID (DBA) and password (<password>).
In new databases, all passwords are case-sensitive, regardless of the case-sensitivity of the database. The user
ID is unaffected by the CASE RESPECT setting.
When you start a database, its page size cannot be larger than the page size of the current server. The server
page size is taken from the first set of databases started or is set on the server command line using the -gp
command line option.
Command line length for any statement is limited to the catalog page size. The 4 KB default is large enough in
most cases; however, in a few cases, a larger PAGE SIZE value is needed to accommodate very long
commands, such as RESTORE DATABASE commands that reference numerous dbspaces. A larger page size
might also be needed to execute queries involving large numbers of tables or views.
Because the default catalog page size is 4 KB, this is a problem only when the connection is to a database such
as utility_db, which has a page size of 1024. This restriction may cause RESTORE DATABASE commands
that reference numerous dbspaces to fail. To avoid the problem, make sure the length of SQL command lines is
less than the catalog page size.
Alternatively, start the engine with -gp 32768 to increase catalog page size.
The permissions required to execute this statement are set using the -gu server command line option.
The account under which the server is running must have write permissions on the directories where files are
created.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
Standards
Examples
● (Windows) This example creates an SAP IQ database named mydb with its corresponding mydb.db,
mydb.iq, mydb.iqtmp, and mydb.iqmsg files in the C:\s1\data directory:
● (UNIX) This example creates an SAP IQ database with raw devices for IQ PATH and TEMPORARY PATH.
The default IQ page size of 128 KB applies:
● (Windows) This example creates an SAP IQ database with a raw device for IQ PATH. Note the doubled
backslashes in the raw device name (a Windows requirement):
● (UNIX) This example creates a strongly encrypted SAP IQ database using the AES encryption algorithm
with the key “is!seCret.”
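The first example above might be sketched as follows (the file locations come from the text; spelling out the TEMPORARY PATH and MESSAGE PATH clauses is an assumption, since those names would also be derived by default):

```sql
-- Hypothetical sketch: all four database files in C:\s1\data.
CREATE DATABASE 'C:\\s1\\data\\mydb.db'
IQ PATH 'C:\\s1\\data\\mydb.iq'
TEMPORARY PATH 'C:\\s1\\data\\mydb.iqtmp'
MESSAGE PATH 'C:\\s1\\data\\mydb.iqmsg';
```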
Related Information
Creates a new dbspace and the associated dbfiles for the IQ main store, cache dbspace, catalog store, or RLV
store.
Syntax
<file-specification> ::=
{ <single-path-spec> | <new-file-spec> [, ...] }
<single-path-spec> ::=
'<file-path>' [ <iq-file-opts> ]
<new-file-spec> ::=
FILE <logical-file-name> '<file-path>' [ <iq-file-opts> ]
<iq-file-opts> ::=
[ [ SIZE ] <file-size> [ KB | MB | GB | TB ] ]
[ RESERVE <size> [ KB | MB | GB | TB ] ]
<iq-dbspace-opts> ::=
[ NOPREALLOCATE ]
[ STRIPING { ON | OFF } ] [ STRIPESIZEKB <sizeKB> ]
Parameters
new-file-spec
Creates a dbspace for the IQ main store. You can specify one or more dbfiles for the IQ main store. The
dbfile name and physical file path are required for each file, and must be unique.
RESERVE
Specifies the size in kilobytes (KB), megabytes (MB), gigabytes (GB), or terabytes (TB) of space to reserve,
so that the dbspace can be increased in size in the future. The size parameter can be any number greater
than 0; megabytes is the default. You cannot change the reserve after the dbspace dbfile is created. When
RESERVE is specified, the database uses more space for internal (free list) structures. If reserve size is too
large, the space needed for the internal structures can be larger than the specified size, which results in an
error.
dbspace-name and dbfile-name
Note
SAP IQ supports IQ_SYSTEM_MAIN plus one user dbspace in the base product license. You must be
licensed for the IQ_VLDBMGMT option in order to create additional dbspaces.
file-path
The actual operating system file name of the dbfile, with a preceding path where necessary. <file-path>
without an explicit directory is created in the same directory as the catalog store of the database. Any
relative directory is relative to the catalog store.
SIZE
Specifies the size, from 0 to 4 terabytes, of the operating system file specified in <file-path>. The
default depends on the store type and block size. For the IQ main store, the default number of bytes equals
1000 times the block size. You cannot specify the SIZE clause for the catalog store. A SIZE value of 0 creates a
dbspace of minimum size, which is 8 MB for the IQ main store. For raw partitions, do not explicitly specify
SIZE. SAP IQ automatically sets this parameter to the maximum raw partition size, and returns an error if
you attempt to specify another size.
NOPREALLOCATE
Instructs SAP IQ to bypass preallocation of dbspace files on cooked (not raw) file systems. Preallocation
can take an excessive amount of time if allocating large files to the dbspace on a cooked file system. You
cannot change the NOPREALLOCATE value. You must drop the dbspace in order to change the allocation.
NOPREALLOCATE is not available on IQ_SYSTEM_MAIN or IQ_SYSTEM_TEMP dbspaces.
STRIPESIZEKB
Specifies the number of kilobytes (KB) to write to each file before the disk striping algorithm moves to the
next stripe for the specified dbspace. If you do not specify striping or stripe size, the default values of the
options DEFAULT_DISK_STRIPING and DEFAULT_KB_PER_STRIPE apply.
IQ CACHE STORE
Remarks
CREATE DBSPACE creates a new dbspace for the IQ main store, cache dbspace, catalog store, or RLV store.
The dbspace you add can be on a different disk device than the initial dbspace, allowing you to create stores
that are larger than one physical device.
Syntax 1 creates a dbspace for the catalog store, where both dbspace and dbfile have the same logical name.
Each dbspace in the catalog store has a single file.
The dbspace name and dbfile names are always case-insensitive. The physical file paths have the case
sensitivity of the operating system if the database is CASE RESPECT, and are case-insensitive if the database is
CASE IGNORE.
Note
Creating an RLV dbspace containing a minimum of one file is a prerequisite for RLV storage. Before enabling
RLV storage on an SAP IQ server, check that the RLV dbspace exists.
You can create only one cache dbspace on an SAP IQ server or multiplex node. Attempting to create a second
cache dbspace results in an error.
Caution
(UNIX platforms) To maintain database consistency, specify file names that are links to different files. SAP
IQ cannot detect the target where linked files point. Even if the file names in the command differ, make sure
they do not point to the same operating system file.
Privileges
Requires the MANAGE ANY DBSPACE system privilege. See GRANT System Privilege Statement [page 1511]
for assistance with granting privileges.
Side Effects
● Automatic commit
● Automatic checkpoint
Standards
Examples
● The following example creates a dbspace called DspHist for the IQ main store with two dbfiles on a UNIX
system. Each dbfile is 1 GB in size and can grow 500 MB:
● The following example creates an IQ main dbspace called EmpStore1 for the IQ store (three alternate
syntax examples):
● The following example creates an RLV dbspace called d1 with a single dbfile:
CREATE DBSPACE d1
USING FILE f1
'f1.iq' SIZE 1000 IQ RLV STORE;
● The following example creates a cache dbspace called myDAS with a 200 GB dbfile:
● The following example bypasses preallocation of dbspace files on a cooked file system:
CREATE DBSPACE BigDB USING FILE BigDB 'BigDB.iq' SIZE 500000 IQ STORE
NOPREALLOCATE;
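One of the three alternate syntaxes for the EmpStore1 example above might be sketched as follows (the file name and size are assumptions):

```sql
-- Hypothetical sketch: an IQ main dbspace with one dbfile.
CREATE DBSPACE EmpStore1
USING FILE EmpStore1 'EmpStore1.iq' SIZE 100
IQ STORE;
```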
Related Information
Syntax
<default-value> ::=
<special-value>
| <string>
| <global variable>
| [ - ] <number>
| ( <constant-expression> )
| <built-in-function>( <constant-expression> )
| AUTOINCREMENT
| CURRENT DATABASE
| CURRENT REMOTE USER
| NULL
| TIMESTAMP
| LAST USER
<special-value> ::=
CURRENT
{ DATE
| TIME
| TIMESTAMP
| USER
| PUBLISHER }
| USER
Parameters
data-type
Remarks
User-defined data types are aliases for built-in data types, including precision and scale values, where
applicable. They improve convenience and encourage consistency in the database.
Note
Use CREATE DOMAIN, rather than CREATE DATATYPE, as CREATE DOMAIN is the ANSI/ISO SQL3 term.
The user who creates a data type is automatically made the owner of that data type. No owner can be specified
in the CREATE DATATYPE statement. The user-defined data type name must be unique, and all users can
access the data type without using the owner as prefix.
User-defined data types are objects within the database. Their names must conform to the rules for identifiers.
User-defined data type names are always case-insensitive, as are built-in data type names.
By default, user-defined data types allow NULLs unless the allow_nulls_by_default database option is set
to OFF. In this case, new user-defined data types by default do not allow NULLs. The nullability of a column
created on a user-defined data type depends on the setting of the definition of the user-defined data type, not
on the setting of the allow_nulls_by_default option when the column is referenced. Any explicit setting of
NULL or NOT NULL in the column definition overrides the user-defined data type setting.
The CREATE DOMAIN statement allows you to specify DEFAULT values on user-defined data types. The
DEFAULT value specification is inherited by any column defined on the data type. Any DEFAULT value explicitly
specified on the column overrides that specified for the data type.
The CREATE DOMAIN statement lets you incorporate a rule, called a CHECK condition, into the definition of a
user-defined data type.
SAP IQ enforces CHECK constraints for base, global temporary, and local temporary tables, and for
user-defined data types.
To drop the data type from the database, use the DROP statement. You must be either the owner of the data
type or have the CREATE DATATYPE or CREATE ANY OBJECT system privilege in order to drop a user-defined
data type.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
Standards
Examples
The following example creates a data type named address, which holds a 35-character string, and which may
be NULL:
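The example described above might look like this sketch:

```sql
-- A 35-character string domain that allows NULL.
CREATE DOMAIN address CHAR( 35 ) NULL;
```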
Related Information
Defines an event and its associated handler for automating predefined actions. Also defines scheduled actions.
Syntax
<event-type> ::=
BackupEnd
| "Connect"
| ConnectFailed
| DatabaseStart
| DBDiskSpace
| "Disconnect"
| GlobalAutoincrement
| GrowDB
| GrowLog
| GrowTemp
| IQMainDBSpaceFree
| IQTempDBSpaceFree
| LogDiskSpace
| "RAISERROR"
| ServerIdle
| TempDiskSpace
<trigger-condition> ::=
event_condition( <condition-name> )
{ =
| <
| >
| !=
| <=
| >= } <value>
<schedule-spec> ::=
[ <schedule-name> ]
{ START TIME <start-time> | BETWEEN <start-time> AND <end-time> }
[ EVERY <period> { HOURS | MINUTES | SECONDS } ]
[ ON { ( <day-of-week>, … ) | ( <day-of-month>, … ) } ]
[ START DATE <start-date> ]
Parameters
event-name
TYPE event-type
One of a set of system-defined event types. The event types are case-insensitive. To specify the conditions
under which this <event-type> triggers the event, use the WHERE clause.
● DiskSpace – if the database contains an event handler for one of the DiskSpace types, the database
server checks the available space on each device associated with the relevant file every 30 seconds.
If the database has more than one dbspace on separate drives, DBDiskSpace checks each
drive and acts depending on the lowest available space.
● LogDiskSpace – checks the location of the transaction log and any mirrored transaction log, and
reports based on the least available space.
● GlobalAutoincrement – fires when the GLOBAL AUTOINCREMENT default value for a table is within
one percent of the end of its range. A typical action for the handler could be to request a new value for
the GLOBAL_DATABASE_ID clause.
You can use the EVENT_CONDITION function with RemainingValues as an argument for this event type.
● ServerIdle – if the database contains an event handler for the ServerIdle type, the server checks for
server activity every 30 seconds.
WHERE trigger-condition
The trigger condition determines the condition under which an event is fired. For example, to take an action
when the disk containing the transaction log becomes more than 80 percent full, use this triggering
condition:
...
WHERE event_condition( 'LogDiskSpacePercentFree' ) < 20
...
The argument to the EVENT_CONDITION function must be valid for the event type. You can use multiple
AND conditions to make up the WHERE clause, but you cannot use OR conditions or other conditions.
SCHEDULE
Specifies when scheduled actions are to take place. The sequence of times acts as a set of triggering
conditions for the associated actions defined in the event handler. You can create more than one schedule
for a given event and its associated handler, which permits complex schedules to be implemented. While a
schedule name is required when there is more than one schedule, it is optional if you provide only a
single schedule.
You can list schedule names by querying the system table SYSSCHEDULE. For example:
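A minimal sketch of such a query (selecting all columns is an assumption):

```sql
SELECT * FROM SYSSCHEDULE;
```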
Each event has a unique event ID. Use the event_id columns of SYSEVENT and SYSSCHEDULE to match
the event to the associated schedule.
When a nonrecurring scheduled event has passed, its schedule is deleted, but the event handler is not
deleted.
● START DATE – the date on which scheduled events are to start occurring. The default is the current
date.
● START TIME – the first scheduled time for each day on which the event is scheduled. If a START DATE
is specified, the START TIME refers to that date. If no START DATE is specified, the START TIME is on
the current day (unless the time has passed) and each subsequent day.
You can specify a variable name for <start-time>.
● BETWEEN … AND – a range of times during the day outside of which no scheduled times occur. If a
START DATE is specified, the scheduled times do not occur until that date.
You can specify a variable name for <start-time> and <end-time>.
● EVERY – an interval between successive scheduled events. Scheduled events occur only after the
START TIME for the day, or in the range specified by BETWEEN …AND.
You can specify a variable name for <period>.
● ON – a list of days on which the scheduled events occur. The default is every day. These can be
specified as days of the week or days of the month.
Days of the week are Monday, Tuesday, and so on. The abbreviated forms of the day, such as Mon, Tue,
and so on, may also be used. The database server recognizes both full-length and abbreviated day
names in any of the languages supported by SAP IQ.
Days of the month are integers from 0 to 31. A value of 0 represents the last day of any month.
Each time a scheduled event handler completes, the next scheduled time and date are calculated.
● If the EVERY clause is used, find whether the next scheduled time falls on the current day, and is before
the end of the BETWEEN …AND range. If so, that is the next scheduled time.
● If the next scheduled time does not fall on the current day, find the next date on which the event is to
be executed.
● Find the START TIME for that date, or the beginning of the BETWEEN … AND range.
ENABLE | DISABLE
By default, event handlers are enabled. When DISABLE is specified, the event handler does not execute
even when the scheduled time or triggering condition occurs. A TRIGGER EVENT statement does not
cause a disabled event handler to be executed.
AT { CONSOLIDATED | REMOTE | ALL }
To execute events at remote or consolidated databases in a SQL Remote setup, use this clause to restrict
the databases at which the event is handled. By default, all databases execute the event.
HANDLER
Each event has one handler. Like the body of a stored procedure, the handler is a compound statement.
There are some differences, though: you can use an EXCEPTION clause within the compound statement to
handle errors, but not the ON EXCEPTION RESUME clause provided within stored procedures.
An event definition includes two distinct pieces. The trigger condition can be an occurrence, such as a disk
filling up beyond a defined threshold. A schedule is a set of times, each of which acts as a trigger condition.
When a trigger condition is satisfied, the event handler executes. The event handler includes one or more
actions specified inside a compound statement (BEGIN... END).
If no trigger condition or schedule specification is supplied, only an explicit TRIGGER EVENT statement can
trigger the event. During development, you might want to develop and test event handlers using TRIGGER
EVENT and add the schedule or WHERE clause once testing is complete.
When event handlers are triggered, the server makes context information, such as the connection ID that
caused the event to be triggered, available to the event handler using the EVENT_PARAMETER function.
Note
Although statements that return result sets are disallowed in events, you can allow an event to call a stored
procedure and insert the procedure results into a temporary table.
For parameters that accept variable names, an error is returned if one of the following conditions is true:
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Event handlers execute on a separate connection, with the privileges of the event owner. To execute an event
with privileges other than MANAGE ANY EVENT system privilege, you can call a procedure from within the
event handler. The procedure executes with the permissions of its owner.
Side Effects
● Automatic commit.
● The actions of an event handler are committed if no error is detected during execution, and rolled back if
errors are detected.
Examples
● The following example instructs the database server to carry out an automatic incremental backup daily at
1 a.m.:
● The following example instructs the database server to call the system stored procedure
sp_iqspaceused every 10 minutes, then store in a table the returned current date and time, the current
number of connections to the database, and current information about the use of main and temporary IQ
store:
● The following example posts a message to the server log when free disk space on the device containing the
transaction log file falls below 30 percent, but execute the handler no more than once every 300 seconds:
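The first example above might be sketched like this (the event name and archive path are hypothetical; the schedule clauses follow the schedule-spec grammar shown earlier):

```sql
-- Hypothetical sketch: an incremental backup every day at 1 a.m.
CREATE EVENT NightlyIncrementalBackup
SCHEDULE nightly START TIME '1:00AM'
HANDLER
BEGIN
    BACKUP DATABASE INCREMENTAL TO '/backup/mydb';
END;
```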
Related Information
Creates a new proxy table that represents an existing table on a remote server.
Syntax
<location-string> ::=
<remote-server-name>.[<db-name>].[<owner>].<object-name>
| <remote-server-name>;[<db-name>];[<owner>];<object-name>
Parameters
AT <location-string>
Specifies the location of the remote object. The AT clause supports the semicolon (;) as a delimiter. If a
semicolon is present anywhere in the <location-string>, the semicolon is the field delimiter. If no
semicolon is present, a period is the field delimiter. This behavior allows file names and extensions to be
used in the database and owner fields. An ESCAPE CHARACTER clause allows applications to escape these
delimiters within a location string.
When you create a proxy table by using either the CREATE TABLE or the CREATE EXISTING statement, the
AT clause includes a location string that consists of the following parts:
Use a period or semicolon to delimit the fields of the location string. The location string can also contain
variable names that are expanded when the database server evaluates the location string. Variable names
within the location string are enclosed in braces. It is rare for a period, semicolon, or brace to be a
literal part of a remote server name, catalog name, owner name, schema name, or table name; however,
there may be situations where one or more of these delimiter characters must be interpreted literally
within a location string.
Note
The ESCAPE clause is only necessary if there is a need to escape delimiters within the location clause.
In general, the ESCAPE clause can be omitted when creating proxy tables. The escape character can be
any single byte character.
The string in the AT clause can contain local or global variable names enclosed in braces (for example,
{variable-name}). The SQL variable name must be of type CHAR, VARCHAR, or LONG VARCHAR. For
example, an AT clause that contains 'access;{@myfile};;a1' indicates that @myfile is a SQL variable
and that the current contents of the @myfile variable should be substituted when the proxy table is
created.
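The variable-substitution behavior described above can be sketched as follows. This is a hypothetical example: the server name (access_srv), file path, and remote table name (a1) are illustrative, not taken from a real configuration.

```sql
-- Declare a connection-scope variable holding the file name, then reference
-- it inside braces in the AT clause. Because the location string contains
-- semicolons, the semicolon is the field delimiter.
CREATE VARIABLE @myfile VARCHAR(64);
SET @myfile = 'd:\\pcdb\\quarter3.mdb';

CREATE EXISTING TABLE q3results
AT 'access_srv;{@myfile};;a1';
```

The current contents of @myfile are substituted once, when the proxy table is created.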
The CREATE EXISTING TABLE statement creates a new, local, proxy table that maps to a table at an external
location. CREATE EXISTING TABLE is a variant of the CREATE TABLE statement. The EXISTING keyword is
used with CREATE TABLE to specify that a table already exists remotely, and to import its metadata. This
syntax establishes the remote table as a visible entity to users. The software verifies that the table exists at the
external location before it creates the table.
Tables used as proxy tables cannot have names longer than 30 characters.
If the object does not exist (either as a host data file or remote server object), the statement is rejected with an
error message.
Index information from the host data file or remote server table is extracted and used to create rows for the
ISYSIDX system table. This information defines indexes and keys in server terms and enables the query
optimizer to consider any indexes that may exist on this table.
In a simplex environment, you cannot create a proxy table that refers to a remote table on the same node. In a
multiplex environment, you cannot create a proxy table that refers to the remote table defined within the
multiplex.
For example, in a simplex environment, if you try to create proxy table proxy_e, which refers to base table
Employees defined on the same node, the CREATE EXISTING TABLE statement is rejected with an error
message. In a multiplex environment, the CREATE EXISTING TABLE statement is rejected if you create proxy
table proxy_e from any node (coordinator or secondary) that refers to remote table Employees defined
within a multiplex.
If <column-definitions> are not specified, then the database server derives the column list from the
metadata it obtains from the remote table. If column-definitions are specified, then the database server verifies
the column-definitions. Column names, data types, lengths, the identity property, and null properties are
checked for the following conditions:
Privileges
To create a table to be owned by another user requires the CREATE ANY TABLE system privilege.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example creates a proxy table named nation for the nation table at the remote server
server_a:
● The following example creates a proxy table named blurbs for the blurbs table at the remote server
server_a. SAP IQ derives the column list from the metadata it obtains from the remote table:
● The following example creates a proxy table named rda_employee for the Employees table at the SAP IQ
remote server remote_iqdemo_srv:
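The example statements themselves do not appear above. Plausible forms, following the AT-clause conventions in this section, are sketched below; the column definitions, owner names (joe, DBA), and catalog names are assumptions.

```sql
-- nation: an explicit column list, verified against the remote metadata.
CREATE EXISTING TABLE nation (
    n_nationkey INT,
    n_name      CHAR(25),
    n_regionkey INT,
    n_comment   VARCHAR(152)
) AT 'server_a.db1.joe.nation';

-- blurbs: no column list, so the columns are derived from the remote metadata.
CREATE EXISTING TABLE blurbs
AT 'server_a.db1.joe.blurbs';

-- rda_employee: an SAP IQ remote server; the empty field between periods
-- skips the catalog name.
CREATE EXISTING TABLE rda_employee
AT 'remote_iqdemo_srv..DBA.Employees';
```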
Related Information
Assigns an alternate login name and password to be used when communicating with a remote server.
Syntax
Parameters
login-name
Specifies the local user login name. When using integrated logins, the <login-name> is the database user
to which the Windows user ID is mapped.
TO remote-server
Specifies the user account on <remote-server> for the local user <login-name>.
IDENTIFIED BY remote-password
(Optional) Specifies that <remote-password> is the password for <remote-user>. If you omit the
IDENTIFIED BY clause, the password is sent to the remote server as NULL. If you specify IDENTIFIED BY " "
(an empty string), the password sent is the empty string.
Remarks
Changes made by CREATE EXTERNLOGIN do not take effect until the next connection to the remote server.
By default, SAP IQ uses the names and passwords of its clients whenever it connects to a remote server on
behalf of those clients. CREATE EXTERNLOGIN assigns an alternate login name and password to be used when
communicating with a remote server. It stores the password internally in encrypted form.
The <remote-server> must be known to the local server by an entry in the ISYSSERVER system table. For
more information, see the CREATE SERVER Statement.
Creating a remote login with the CREATE EXTERNLOGIN statement and defining a remote server with a
CREATE SERVER statement sets up an external login and password for the INSERT...LOCATION such that any
user can use the login and password in any context. This avoids possible errors due to inaccessibility of the
login or password, and is the recommended way to connect to a remote server.
If you rely on the user ID and password of the current connection, and a user changes the password, you
must stop and restart the server before the new password takes effect on the remote server. Remote logins
created with CREATE EXTERNLOGIN are unaffected by changes to the password for the default user ID.
Sites with automatic password expiration should plan for periodic updates of passwords for external logins.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
Automatic commit
Standards
Examples
The following example maps the local user named DBA to the user sa with password 4TKNOX when connecting
to the server mydb1:
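A plausible form of the statement this example describes is sketched below; the REMOTE LOGIN clause naming the remote account follows the CREATE EXTERNLOGIN syntax.

```sql
-- Map local user DBA to remote account sa (password 4TKNOX) on server mydb1.
CREATE EXTERNLOGIN DBA TO mydb1 REMOTE LOGIN sa IDENTIFIED BY 4TKNOX;
```

The change takes effect on the next connection to mydb1, as noted in the Remarks above.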
Related Information
Creates a user-defined function in the database. A function can be created for another user by specifying an
owner name. Subject to permissions, a user-defined function can be used in exactly the same way as other
non-aggregate functions.
Syntax
Syntax 1
<parameter> ::=
IN <parameter-name> <data-type> [ DEFAULT <expression> ]
<tsql-compound-statement> ::=
<sql-statement>
<sql-statement> …
<native-call> ::=
'[ <system-configuration>:]<function-name>@<library-file-prefix>; …'
<system-configuration> ::=
{ <generic-operating-system> | <specific-operating-system> } [ (<processor-architecture>) ]
<specific-operating-system> ::=
{ AIX | HPUX | Linux | OSX | Solaris | WindowsNT }
<processor-architecture> ::=
{ 32 | 64 | ARM | IA64 | PPC | SPARC | X86 | X86_64 }
<java-call> ::=
'[ <package-name>.]<class-name>.<method-name> <method-signature>'
<method-signature> ::=
( [ <field-descriptor>, ...] ) <return-descriptor>
Syntax 2
<parameter> ::=
IN <parameter-name> <data-type> [ DEFAULT <expression> ]
<url-string> ::=
' { HTTP | HTTPS | HTTPS_FIPS }://[<user>:<password>@]<hostname>[:<port>][/<path>] '
Parameters
CREATE [ OR REPLACE ]
The CREATE clause creates a new function, while the OR REPLACE clause replaces an existing function
with the same name. When a function is replaced, the definition of the function is changed but the existing
permissions are preserved. You cannot use the OR REPLACE clause with temporary functions.
parameter-name
Parameter names must conform to the rules for database identifiers. They must have a valid SQL data type
and be prefixed by the keyword IN, signifying that the argument is an expression that provides a value to
the function.
TEMPORARY
The function is visible only to the connection that created it, and it is automatically dropped when the
connection is dropped. Temporary functions can also be explicitly dropped. You cannot perform ALTER,
GRANT, or REVOKE operations on them, and unlike other functions, temporary functions are not recorded in
the catalog or transaction log.
Temporary functions execute with the permissions of their creator (current user), and can only be owned
by their creator. Therefore, do not specify owner when creating a temporary function. They can be created
and dropped when connected to a read-only database.
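For instance, a connection-local helper might be declared as in the following sketch; the function name and body are hypothetical.

```sql
-- A temporary function: visible only to this connection, dropped when the
-- connection ends, and never recorded in the catalog or transaction log.
-- No owner is specified.
CREATE TEMPORARY FUNCTION double_it( IN x INT )
RETURNS INT
BEGIN
    RETURN 2 * x;
END;
```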
SQL SECURITY
Defines whether the function is executed as the INVOKER, the user who is calling the function, or as the
DEFINER, the user who owns the function. The default is DEFINER.
When INVOKER is specified, more memory is used because annotation must be done for each user that
calls the procedure. Also, name resolution is done as the invoker as well. Therefore, take care to qualify all
object names (tables, procedures, and so on) with their appropriate owner.
data-type
The data type of the parameter. Set the data type explicitly, or specify the %TYPE or %ROWTYPE attribute
to set the data type to the data type of another object in the database. Use %TYPE to set it to the data type
of a column in a table or view. Use %ROWTYPE to set the data type to a composite data type derived from
a row in a table or view. LONG BINARY and LONG VARCHAR are not permitted as return-value data types.
compound-statement
A set of SQL statements bracketed by BEGIN and END, and separated by semicolons. See BEGIN … END
Statement.
external-name
A wrapper around a call to a function in an external library. It can have no other clauses following the
RETURNS clause. The library name may include the file extension, which is typically .dll on Windows
and .so on UNIX. In the absence of the extension, the software appends the platform-specific default file
extension for libraries. The external-name clause is not supported for temporary functions.
LANGUAGE JAVA
A wrapper around a Java method. For information on calling Java procedures, see CREATE PROCEDURE
Statement.
[ NOT ] DETERMINISTIC
A function specified as NOT DETERMINISTIC is re-evaluated each time it is called in a query. The results of
functions not specified in this manner may be cached for better performance, and re-used each time the
function is called with the same parameters during query evaluation.
Functions that have side effects, such as modifying the underlying data, should be declared as NOT
DETERMINISTIC. For example, a function that generates primary key values and is used in an INSERT …
SELECT statement should be declared NOT DETERMINISTIC:
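The example this paragraph refers to is not shown; a sketch in that spirit follows. The counter table and column names are hypothetical.

```sql
-- A NOT DETERMINISTIC key generator: it updates a counter table as a side
-- effect, so its result must not be cached and reused between calls.
CREATE FUNCTION keygen( IN increment INT )
RETURNS INT
NOT DETERMINISTIC
BEGIN
    DECLARE keyval INT;
    UPDATE counter SET counter = counter + increment;
    SELECT counter INTO keyval FROM counter;
    RETURN keyval;
END;
```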
Functions may be declared as DETERMINISTIC if they always return the same value for given input
parameters. All user-defined functions are treated as deterministic unless they are declared NOT
DETERMINISTIC. Deterministic functions return a consistent result for the same parameters and are free
of side effects. That is, the database server assumes that two successive calls to the same function with
the same parameters will return the same result without unwanted side-effects on the semantics of the
query.
URL
Parameter values are passed as part of the request. The syntax used depends on the type of request. For
HTTP:GET, the parameters are passed as part of the URL; for HTTP:POST requests, the values are placed
in the body of the request. Parameters to SOAP requests are always bundled in the request body.
HEADER
When creating HTTP web service client functions, use this clause to add or modify HTTP request header
entries. Only printable ASCII characters can be specified for HTTP headers, and they are case-insensitive.
For use only when defining an HTTP or SOAP web services client function. Specifies the URL of the web
service. The optional user name and password parameters provide a means of supplying the credentials
needed for HTTP basic authentication. HTTP basic authentication base-64 encodes the user and password
information and passes it in the Authorization header of the HTTP request. For more information, see the
HEADER clause of the CREATE PROCEDURE statement.
SOAPHEADER
When declaring a SOAP Web service as a function, use this clause to specify one or more SOAP request
header entries. A SOAP header can be declared as a static constant, or can be dynamically set using the
parameter substitution mechanism (declaring IN, OUT, or INOUT parameters for hd1, hd2, and so on). A
web service function can define one or more IN mode substitution parameters, but cannot define an
INOUT or OUT substitution parameter.
TYPE
Specifies the format used when making the web service request. If SOAP is specified or no type clause is
included, the default type SOAP:RPC is used. HTTP implies HTTP:POST. Since SOAP requests are always
sent as XML documents, HTTP:POST is always used to send SOAP requests.
NAMESPACE
Applies to SOAP client functions only and identifies the method namespace usually required for both
SOAP:RPC and SOAP:DOC requests. The SOAP server handling the request uses this namespace to
interpret the names of the entities in the SOAP request message body. The namespace can be obtained
from the WSDL description of the SOAP service available from the web service server. The default value is
the procedure's URL, up to but not including the optional path component.
CERTIFICATE
To make a secure (HTTPS) request, a client must have access to the certificate used by the HTTPS server.
The necessary information is specified in a string of semicolon-separated key/value pairs. The certificate
can be placed in a file and the name of the file provided using the file key, or the whole certificate can be
placed in a string, but not both. These keys are available:
unit
Company unit specified in the certificate
Certificates are required only for requests that are either directed to an HTTPS server or can be redirected
from an insecure to a secure server.
CLIENTPORT
Identifies the port number on which the HTTP client procedure communicates using TCP/IP. It is provided
for and recommended only for connections across firewalls, as firewalls filter according to the TCP/UDP
port. You can specify a single port number, ranges of port numbers, or a combination of both; for example,
CLIENTPORT '85,90-97'.
PROXY
Specifies the URI of a proxy server. For use when the client must access the network through a proxy.
Indicates that the procedure is to connect to the proxy server and send the request to the web service
through it.
Remarks
To modify a user-defined function, or to hide the contents of a function by scrambling its definition, use the
ALTER FUNCTION statement.
When functions are executed, not all parameters need to be specified. If a default value is provided in the
CREATE FUNCTION statement, missing parameters are assigned the default values. If an argument is not
provided by the caller and no default is set, an error is given.
Required Parameters
For required parameters that accept variable names, an error is returned if one of the following conditions is
true:
Privileges
To create a function to be owned by self requires the CREATE PROCEDURE system privilege.
To create a function containing an external reference, regardless of ownership of the function, also requires
the CREATE EXTERNAL REFERENCE system privilege.
Side Effects
Automatic commit
Standards
Examples
For example, a fullname function that joins a first and last name returns these results:
fullname('joe', 'smith') returns joe smith.
Applied to every row of the Employees table, it returns the full name of each employee:
Fran Whitney
Matthew Cobb
Philip Chin
Julie Jordan
Robert Breault
...
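A definition consistent with the fullname results shown in this Examples section might look like the following sketch; the parameter sizes and the Employees column names (GivenName, Surname) are assumptions.

```sql
-- Sketch of a fullname UDF: concatenates two strings with a space between.
CREATE FUNCTION fullname( IN firstname CHAR(30), IN lastname CHAR(30) )
RETURNS CHAR(61)
BEGIN
    DECLARE name CHAR(61);
    SET name = firstname || ' ' || lastname;
    RETURN name;
END;
```

It could then be called directly, as in SELECT fullname( 'joe', 'smith' ), or applied per row, as in SELECT fullname( GivenName, Surname ) FROM Employees.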
In this section:
Related Information
Syntax
<parameter> ::=
IN <parameter-name> <data-type> [ DEFAULT <expression> ]
<tsql-compound-statement> ::=
<sql-statement>
<sql-statement> …
<java-call> ::=
'[ <package-name>.]<class-name>.<method-name> <method-signature>'
<method-signature> ::=
( [ <field-descriptor>, ...] ) <return-descriptor>
Parameters
CREATE [ OR REPLACE ]
The CREATE clause creates a new function, while the OR REPLACE clause replaces an existing function
with the same name. When a function is replaced, the definition of the function is changed but the existing
permissions are preserved. You cannot use the OR REPLACE clause with temporary functions.
parameter-name
Parameter names must conform to the rules for database identifiers. They must have a valid SQL data type
and be prefixed by the keyword IN, signifying that the argument is an expression that provides a value to
the function.
TEMPORARY
The function is visible only to the connection that created it, and it is automatically dropped when the
connection is dropped. Temporary functions can also be explicitly dropped. You cannot perform ALTER,
GRANT, or REVOKE operations on them, and unlike other functions, temporary functions are not recorded in
the catalog or transaction log.
SQL SECURITY
Defines whether the function is executed as the INVOKER, the user who is calling the function, or as the
DEFINER, the user who owns the function. The default is DEFINER.
When INVOKER is specified, more memory is used because annotation must be done for each user that
calls the procedure. Also, name resolution is done as the invoker as well. Therefore, take care to qualify all
object names (tables, procedures, and so on) with their appropriate owner.
data-type
LONG BINARY and LONG VARCHAR are not permitted as return-value data types.
compound-statement
A set of SQL statements bracketed by BEGIN and END, and separated by semicolons. See BEGIN … END
Statement.
[ NOT ] DETERMINISTIC
A function specified as NOT DETERMINISTIC is re-evaluated each time it is called in a query. The results of
functions not specified in this manner may be cached for better performance, and re-used each time the
function is called with the same parameters during query evaluation.
Functions that have side effects, such as modifying the underlying data, should be declared as NOT
DETERMINISTIC. For example, a function that generates primary key values and is used in an INSERT …
SELECT statement should be declared NOT DETERMINISTIC:
Functions may be declared as DETERMINISTIC if they always return the same value for given input
parameters. All user-defined functions are treated as deterministic unless they are declared NOT
DETERMINISTIC. Deterministic functions return a consistent result for the same parameters and are free
of side effects. That is, the database server assumes that two successive calls to the same function with
the same parameters will return the same result without unwanted side-effects on the semantics of the
query.
LANGUAGE JAVA
A wrapper around a Java method. For information on calling Java procedures, see CREATE PROCEDURE
Statement.
environment-name
The DISALLOW clause is the default. The ALLOW clause indicates that server-side connections are allowed.
Note
Do not specify the ALLOW clause unless necessary. ALLOW slows down certain types of SAP IQ table
joins. Do not use UDFs with both the ALLOW and DISALLOW SERVER SIDE REQUESTS clauses in the
same query.
Remarks
When functions are executed, not all parameters need to be specified. If a default value is provided in the
CREATE FUNCTION statement, missing parameters are assigned the default values. If an argument is not
provided by the caller and no default is set, an error is given.
Privileges
To create a function to be owned by self requires the CREATE PROCEDURE system privilege.
To create a function containing an external reference, regardless of ownership of the function, also requires
the CREATE EXTERNAL REFERENCE system privilege.
Standards
Examples
Syntax
<http-type-spec-string> :
HTTP[: { GET
| POST[:<MIME-type> ]
| PUT[:<MIME-type> ]
| DELETE
| HEAD
| OPTIONS } ]
<soap-type-spec-string> :
SOAP[: { RPC | DOC } ]
<parameter> :
[ IN ] <parameter-name> <datatype> [ DEFAULT <expression> ]
<url-string> :
{ HTTP | HTTPS | HTTPS_FIPS }://[<user>:<password>@]<hostname>[:<port>][/<path>]
<option-list> :
HTTP( <http-option> [ ;<http-option> ...] )
| SOAP( <soap-option> [ ;<soap-option> ...] )
| REDIR( <redir-option> [ ;<redir-option> ...] )
<http-option> :
CHUNK={ ON | OFF | AUTO }
| EXCEPTIONS={ ON | OFF | AUTO }
| VERSION={ 1.0 | 1.1 }
| KTIMEOUT=<number-of-seconds>
<soap-option> :
OPERATION=<soap-operation-name>
<redir-option> :
COUNT=<count>
| STATUS=<status-list>
OR REPLACE clause
Specifying CREATE OR REPLACE FUNCTION creates a new function, or replaces an existing function with
the same name. This clause changes the definition of the function, but preserves existing privileges. You
cannot use the OR REPLACE clause with temporary functions.
parameter-name
Parameter names must conform to the rules for database identifiers. They must have a valid SQL data
type.
If a parameter has a default value, it need not be specified. Parameters with no default value must be
specified.
Parameters can be prefixed by the keyword IN, signifying that the argument is an expression that provides
a value to the function. However, function parameters are IN by default.
data-type
The data type of the parameter. Set the data type explicitly, or specify the %TYPE or %ROWTYPE attribute
to set the data type to the data type of another object in the database. Use %TYPE to set it to the data type
of a column in a table or view. Use %ROWTYPE to set the data type to a composite data type derived from
a row in a table or view. However, defining the data type using a %ROWTYPE that is set to a table reference
variable (TABLE REF (<table-reference-variable>) %ROWTYPE) is not allowed.
Only SOAP requests support the transmission of typed data such as FLOAT, INT, and so on. HTTP requests
support the transmission of strings only, so you are limited to CHAR types.
RETURNS clause
Specify one of the following to define the return type for the SOAP or HTTP function:
● CHAR
● VARCHAR
● LONG VARCHAR
● TEXT
● NCHAR
● NVARCHAR
● LONG NVARCHAR
● NTEXT
● XML
● BINARY
● VARBINARY
● LONG BINARY
The value returned is the body of the HTTP response. No HTTP header information is included. If more
information is required, such as status information, use a procedure instead of a function.
The data type does not affect how the HTTP response is processed.
URL clause
For functions of type HTTP:GET, query parameters can be specified within the URL clause in addition to
being automatically generated from parameters passed to a function.
URL 'http://localhost/service?parm=1'
Specifying HTTPS_FIPS forces the system to use the FIPS-certified libraries. If HTTPS_FIPS is specified,
but no FIPS-certified libraries are present, libraries that are not FIPS-certified are used instead.
To use a certificate from the operating system certificate store, specify a URL beginning with https://.
TYPE clause
Specifies the format used when making the web service request. SOAP:RPC is used when SOAP is
specified or no TYPE clause is included. HTTP:POST is used when HTTP is specified.
The TYPE clause allows the specification of a MIME-type for HTTP:POST and HTTP:PUT types. When
HTTP:PUT is used, then a MIME-type must be specified. The <MIME-type> specification is used to set the
Content-Type request header and set the mode of operation to allow only a single call parameter to
populate the body of the request. Only zero or one parameter may remain when making a web service
function call after parameter substitutions have been processed. Calling a web service function with a
NULL value or no parameter (after substitutions) results in a request with no body and a content-length of
zero. When a MIME-type is specified then the single body parameter is sent in the request as is, so the
application must ensure that the content is formatted to match the MIME-type.
Examples of MIME types include:
● text/plain
● text/html
● text/xml
When no MIME-type is specified, parameter names and values (multiple parameters are permitted) are
URL encoded within the body of the HTTP request.
The keywords for the TYPE clause have the following meanings:
'HTTP:GET'
For example, the following request is produced when a client submits a request from the URL http://
localhost/WebServiceName?arg1=param1&arg2=param2:
'HTTP:POST'
'HTTP:PUT'
HTTP:PUT is similar to HTTP:POST, but the HTTP:PUT type does not have a default media type.
The following example demonstrates how to configure a general purpose client function that uploads
data to a database server running the %IQDIRSAMP%\SQLAnywhere\HTTP\put_data.sql sample:
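The example statement is not shown above; a sketch modeled on the put_data.sql sample follows. The URL, function name, and the !resource substitution parameter are assumptions.

```sql
-- A general-purpose PUT client. With an explicit MIME type, the single
-- remaining parameter (data) becomes the raw request body; !resource is
-- substituted into the URL path.
CREATE FUNCTION put_data( data LONG VARCHAR, resource LONG VARCHAR )
RETURNS LONG VARCHAR
URL 'http://localhost/put_data/!resource'
TYPE 'HTTP:PUT:text/plain';
```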
'HTTP:DELETE'
A web service client function can be configured to delete a resource located on a server. Specifying the
media type is optional.
The following example demonstrates how to configure a general purpose client function that deletes a
resource from a database server running the put_data.sql sample:
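The example statement is not shown above; a sketch in the same spirit follows, with the URL, function name, and substitution parameter as assumptions.

```sql
-- A general-purpose DELETE client; the media type is omitted, which is
-- permitted for HTTP:DELETE.
CREATE FUNCTION delete_data( resource LONG VARCHAR )
RETURNS LONG VARCHAR
URL 'http://localhost/delete_data/!resource'
TYPE 'HTTP:DELETE';
```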
'HTTP:HEAD'
The HEAD method is identical to a GET method but the server does not return a body. A media type
can be specified.
'HTTP:OPTIONS'
The OPTIONS method is identical to a GET method but the server does not return a body. A media
type can be specified. This method allows Cross-Origin Resource Sharing (CORS).
'SOAP:RPC'
This type sets the Content-Type header to 'text/xml'. SOAP operations and parameters are
encapsulated in SOAP envelope XML documents.
'SOAP:DOC'
This type sets the Content-Type header to 'text/xml'. It is similar to the SOAP:RPC type but allows you
to send richer data types. SOAP operations and parameters are encapsulated in SOAP envelope XML
documents.
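To illustrate the MIME-type behavior described for the TYPE clause, the following sketch uses a hypothetical endpoint and function name.

```sql
-- With an explicit MIME type, Content-Type is set to text/plain and the
-- single call parameter populates the request body as-is.
CREATE FUNCTION post_comment( comment LONG VARCHAR )
RETURNS LONG VARCHAR
URL 'http://localhost/comments'
TYPE 'HTTP:POST:text/plain';
```

Without the :text/plain suffix, the comment parameter would instead be URL encoded as a name/value pair within the request body.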
HEADER clause
When creating HTTP web service client functions, use this clause to add, modify, or delete HTTP request
header entries. The specification of headers closely resembles the format specified in RFC 2616 Hypertext
Transfer Protocol, HTTP/1.1, and RFC 822 Standard for ARPA Internet Text Messages, including the fact
that only printable ASCII characters can be specified for HTTP headers, and they are case-insensitive.
Headers can be defined as <header-name>:<value-name> pairs. Each header must be delimited from its
value with a colon ( : ) and therefore cannot contain a colon. You can define multiple headers by delimiting
each pair with \n, \x0d\n, <LF> (line feed), or <CR><LF> (carriage return followed by a line feed).
Multiple contiguous white spaces within the header are converted to a single white space.
CERTIFICATE clause
To make a secure (HTTPS) request, a client must have access to the certificate used to sign the HTTP
server's certificate (or any certificate higher in the signing chain). The necessary information is specified in
a string of semicolon-separated keyword=value pairs. The following keywords are available:
Note
Setting this option to ON is not recommended, because this setting prevents the database server from
fully authenticating the HTTP server.
Certificates are required only for requests that are either directed to an HTTPS server, or can be redirected
from a non-secure to a secure server. Only PEM formatted certificates are supported.
CLIENTPORT clause
Identifies the port number on which the HTTP client function communicates using TCP/IP. It is provided for
and recommended only for connections through firewalls that filter "outgoing" TCP/IP connections. You
can specify a single port number, ranges of port numbers, or a combination of both; for example,
CLIENTPORT '85,90-97'.
PROXY clause
Specifies the URI of a proxy server. For use when the client must access the network through a proxy. The
<proxy-string> is usually an HTTP or HTTPS url-string. This is site-specific information that you usually
need to obtain from your network administrator. This clause indicates that the function is to connect to the
proxy server and send the request to the web service through it. For example, the following PROXY
clause sets the proxy server to proxy.example.com:
PROXY 'http://proxy.example.com'
SET clause
Specifies protocol-specific behavior options for HTTP, SOAP, and REDIR (redirects). Only one SET clause is
permitted. The following list describes the supported SET options. CHUNK, EXCEPTIONS, VERSION, and
KTIMEOUT apply to the HTTP protocol, OPERATION applies to the SOAP protocol, and COUNT and
STATUS apply to the REDIR option. REDIR options can be included with either HTTP or SOAP protocol
options.
CHUNK={ ON | OFF | AUTO }
(short form CH) This HTTP option allows you to specify whether to use chunking. Chunking allows
HTTP messages to be broken up into several parts. Possible values are ON (always chunk), OFF (never
chunk), and AUTO (chunk only if the contents, excluding auto-generated markup, exceeds 8196 bytes).
For example, the following SET clause enables chunking:
SET 'HTTP(CHUNK=ON)'
If the CHUNK option is not specified, the default behavior is AUTO. If a chunked request fails in AUTO
mode with a status of 505 HTTP Version Not Supported, or with 501 Not Implemented, or with
411 Length Required, the client retries the request without chunked transfer-coding.
Since CHUNK mode is a transfer encoding supported starting in HTTP version 1.1, setting CHUNK to
ON requires that the version (VER) be set to 1.1, or not be set at all, in which case 1.1 is used as the
default version.
EXCEPTIONS={ ON | OFF | AUTO }
(short form EX) This HTTP option allows you to control status code handling. The default is ON.
When set to ON or AUTO, HTTP client functions return a response for HTTP success status codes
(1XX and 2XX), and all other status codes raise the exception SQLE_HTTP_REQUEST_FAILED.
SET 'HTTP(EXCEPTIONS=AUTO)'
When set to OFF, HTTP client functions will always return a response, independent of the HTTP status
code. The HTTP status code will not be available.
Exceptions that are not related to the HTTP status code (for example,
SQLE_UNABLE_TO_CONNECT_TO_HOST) will be raised when appropriate regardless of the
EXCEPTIONS setting.
VERSION={ 1.0 | 1.1 }
(short form VER) This HTTP option allows you to specify the version of the HTTP protocol that is used
for the format of the HTTP message. For example, the following SET clause sets the HTTP version to
1.1:
SET 'HTTP(VERSION=1.1)'
KTIMEOUT=number-of-seconds
(short form KTO) This HTTP option allows you to specify the keep-alive timeout criteria, permitting a
web client function to instantiate and cache a keep-alive HTTP/HTTPS connection for a period of time.
To cache an HTTP keep-alive connection, the HTTP version must be set to 1.1 and KTIMEOUT set to a
non-zero value. KTIMEOUT may be particularly useful for HTTPS connections, if you notice a
significant performance difference between HTTP and HTTPS connections. A database connection
can only cache a single keep-alive HTTP connection. Subsequent calls to a web client function using
the same URI reuse the keep-alive connection. Therefore, the executing web client call must have a URI
whose scheme, destination host and port match that of the cached URI, and the HEADER clause must
not specify Connection: close. When KTIMEOUT is not specified, or is set to zero, HTTP/HTTPS
connections are not cached.
OPERATION=soap-operation-name
(short form OP) This SOAP option allows you to specify the name of the SOAP operation, if it is
different from the name of the function you are creating. The value of OPERATION is analogous to the
If the OPERATION option is not specified, the name of the SOAP operation must match the name of
the function you are creating.
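As a sketch of the OPERATION option, the following uses a hypothetical endpoint and SOAP operation name.

```sql
-- The local function name (call_echo) differs from the SOAP operation, so
-- the OPERATION option supplies the operation name sent in the request.
CREATE FUNCTION call_echo( s LONG VARCHAR )
RETURNS LONG VARCHAR
URL 'http://localhost/soap_endpoint'
TYPE 'SOAP:RPC'
SET 'SOAP( OPERATION=EchoString )';
```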
COUNT=count
(short form CNT) This REDIR option allows you to control redirects. See STATUS below.
STATUS=status-list
(short form STAT) This REDIR option allows you to control redirects. HTTP response status codes such
as 302 Found and 303 See Other are used to redirect web applications to a new URI, particularly after
an HTTP POST has been performed. For example, a client request could be:
In response, the client would send another HTTP request to the new URI. The REDIR options allow you
to control the maximum number of redirections allowed and which HTTP response status codes to
automatically redirect.
The default redirection limit <count> is 5. By default, an HTTP client function will automatically
redirect in response to all HTTP redirection status codes (301, 302, 303, 307). To disallow all
redirection status codes, use SET 'REDIR(COUNT=0)'. In this mode, a redirection response does not
result in an error (SQLE_HTTP_REQUEST_FAILED). Instead, a result set is returned with the HTTP
status and response headers. This permits a caller to conditionally reissue the request based on the
URI contained in the Location header.
A web service function specifying a POST HTTP method which receives a 303 See Other status issues
a redirect request using the GET HTTP method.
The Location header can contain either an absolute path or a relative path. The HTTP client function
will handle either. The header can also include query parameters and these are forwarded to the
redirected location. For example, if the header contained parameters such as the following, the
subsequent GET or a POST will include these parameters.
Location: alternate_service?a=1&b=2
The following example shows the use of short forms with uppercase and lowercase letters.
SOAPHEADER clause
(SOAP format only) When declaring a SOAP web service as a function, use this clause to specify one or
more SOAP request header entries. A SOAP header can be declared as a static constant, or can be
dynamically set using the parameter substitution mechanism (declaring IN, OUT, or INOUT parameters for
hd1, hd2, and so on). A web service function can define one or more IN mode substitution parameters, but
cannot define an INOUT or OUT substitution parameter.
The following example illustrates how a client can specify the sending of several header entries using
parameter substitution and receiving the response SOAP header data:
CREATE FUNCTION soap_client(
    IN hd1 LONG VARCHAR,
    IN hd2 LONG VARCHAR,
    IN hd3 LONG VARCHAR )
RETURNS LONG BINARY
URL 'localhost/some_endpoint'
SOAPHEADER '!hd1!hd2!hd3';
NAMESPACE clause
(SOAP format only) This clause identifies the method namespace usually required for both SOAP:RPC and
SOAP:DOC requests. The SOAP server handling the request uses this namespace to interpret the names of
the entities in the SOAP request message body. The namespace can be obtained from the WSDL (Web
Services Description Language) of the SOAP service available from the web service server. The default
value is the function's URL, up to but not including the optional path component.
You can specify a variable name for <namespace-string>. If the variable is NULL, the namespace
property is ignored.
Remarks
The CREATE FUNCTION statement creates a web services function in the database. A function can be created
for another user by specifying an owner name.
When functions are executed, not all parameters need to be specified. If a DEFAULT value is provided in the
CREATE FUNCTION statement, missing parameters are assigned the default values. If an argument is not
provided by the caller and no default is set, an error is given.
Parameter values are passed as part of the request. The syntax used depends on the type of request. For
HTTP:GET, the parameters are passed as part of the URL; for HTTP:POST requests, the values are placed in the
body of the request. Parameters to SOAP requests are always bundled in the request body.
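The difference in how parameter values travel can be sketched with two otherwise identical functions. The service URL and parameter name here are hypothetical:

```sql
-- With HTTP:GET, the symbol value is appended to the URL
-- (for example, ?symbol=XYZ).
CREATE FUNCTION get_quote( IN symbol CHAR(8) )
RETURNS LONG VARCHAR
URL 'http://localhost/quote'
TYPE 'HTTP:GET';

-- With HTTP:POST, the symbol value is placed in the request body.
CREATE FUNCTION post_quote( IN symbol CHAR(8) )
RETURNS LONG VARCHAR
URL 'http://localhost/quote'
TYPE 'HTTP:POST';
```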
For required parameters that accept variable names, an error is returned if one of the following conditions is
true:
Privileges
You must have the CREATE PROCEDURE system privilege to create functions owned by you.
You must have the CREATE ANY PROCEDURE or CREATE ANY OBJECT system privilege to create functions
owned by others.
To replace an existing function, you must own the procedure or have one of the following:
Side effects
Automatic commit.
Standards
Example
1. The following statement creates a function named cli_test1 that returns images from the get_picture
service running on localhost:
3. The following statement uses a substitution parameter to allow the request URL to be passed as an
input parameter. The secure HTTPS request uses a certificate stored in the database. The SET clause
is used to turn off CHUNK mode transfer-encoding.
4. The following statement issues an HTTP request with the URL http://localhost/get_picture?
image=widget:
5. The following example creates a function using a variable in the NAMESPACE clause
1. The following statements create a variable for a NAMESPACE clause:
2. The following statement creates a function named FtoC that uses a variable in the NAMESPACE
clause:
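The statements for the numbered examples above might look like the following sketches. The service names, URLs, certificate identifier, and the CHUNK-disabling option string are placeholders or assumptions, not values taken from this reference:

```sql
-- Example 1: return images from the get_picture service on localhost.
CREATE FUNCTION cli_test1( IN image LONG VARCHAR )
RETURNS LONG BINARY
URL 'http://localhost/get_picture'
TYPE 'HTTP:GET';

-- Example 3: pass the request URL as an input parameter over HTTPS;
-- the certificate specification and SET option string are assumed forms.
CREATE FUNCTION cli_secure( IN request_url LONG VARCHAR )
RETURNS LONG VARCHAR
URL '!request_url'
CERTIFICATE 'cert_placeholder'
SET 'HTTP(CH=OFF)';

-- Example 4: issue http://localhost/get_picture?image=widget.
SELECT cli_test1( 'widget' );

-- Example 5: use a variable in the NAMESPACE clause.
CREATE VARIABLE ns LONG VARCHAR;
SET ns = 'http://localhost/namespace_placeholder';
CREATE FUNCTION FtoC( IN temperature FLOAT )
RETURNS LONG VARCHAR
URL 'http://localhost/soap_endpoint'
TYPE 'SOAP:DOC'
NAMESPACE ns;
```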
Related Information
Creates an index on a specified table, or pair of tables. Once an index is created, it is never referenced in a SQL
statement again except to delete it using the DROP INDEX statement.
Syntax
<index-type> ::=
{ CMP | HG | HNG | WD | DATE | TIME | DTTM }
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
index-type
For columns in SAP IQ tables, you can specify an <index-type> of the following:
● HG (default) – High_Group
● WD – Word
● DATE
● TIME
● DTTM – Datetime
To create an index on the relationship between two columns in an IQ main store table, you can specify an
<index-type> of CMP (Compare). Columns must be of identical data type, precision and scale. For a
CHAR, VARCHAR, BINARY or VARBINARY column, precision means that both columns have the same width.
For maximum query speed, the correct type of index for a column depends on:
You can specify multiple indexes on a column of an IQ main store table, but these must be of different index
types. CREATE INDEX does not let you add a duplicate index type. SAP IQ chooses the fastest index
available for the current query or portion of the query. However, each additional index type might
significantly add to the space requirements of that table.
column-name
Specifies the name of the column to be indexed. A column name is an identifier preceded by an optional
correlation name. (A correlation name is usually a table name. For more information on correlation names,
see FROM Clause.) If a column name has characters other than letters, digits, and underscore, enclose it in
quotation marks (“”).
Only the HG and CMP index types can be specified on a multi-column index.
Foreign keys require nonunique indexes and composite foreign keys require nonunique composite HG
indexes. CHAR, VARCHAR, BINARY, and VARBINARY data cannot be more than 5300 bytes in a single-
column HG index. A multi-column HG index (both unique and non-unique) can contain a single CHAR,
VARCHAR, or BINARY column of up to 5297 bytes.
UNIQUE
Permitted for index type HG only. Ensures that no two rows in the table have identical values in all the
columns in the index. Each index key must be unique or contain a NULL in at least one column.
SAP IQ allows the use of NULL in data values on a user created unique multicolumn HG index, if the column
definition allows for NULL values and a constraint (primary key or unique) is not being enforced.
IF NOT EXISTS
If the named object already exists, no changes are made and an error is not returned.
IN
Specifies index placement. If you omit the IN clause, the index is created in the dbspace where the table is
created. An index is always placed in the same type of dbspace (IQ main store or temporary store) as its
table. When you load the index, the data is spread across any database files of that type with room
available. SAP IQ ensures that any <dbspace-name> you specify is appropriate for the index. If you try to
specify IQ_SYSTEM_MAIN or other main dbspaces for indexes on temporary tables, or vice versa, you
receive an error. Dbspace names are always case-insensitive, regardless of the CREATE DATABASE...CASE
IGNORE or CASE RESPECT specification.
DELIMITED BY
Specifies separators to use in parsing a column string into the words to be stored in the WD index of that
column. If you omit this clause or specify the value as an empty string, SAP IQ uses the default set of
separators. The default set of separators is designed for the default collation order (ISO-BINENG). It
includes all 7-bit ASCII characters that are not 7-bit ASCII alphanumeric characters, except for the hyphen
and the apostrophe. If the database was created with the CASE IGNORE setting, the default separators
determine which words are parsed from a column string and stored in the WD index.
If you specify multiple DELIMITED BY and LIMIT clauses, no error is returned, but only the last clause of
each type is used.
separators-string
Must be a sequence of 0 or more characters in the collation order used when the database was created.
Each character in the separators string is treated as a separator. If there are no characters in the
separators string, the default set of separators is used. (Each separator must be a single character in the
collation sequence being used.) There cannot be more than 256 characters (separators) in the separators
string.
To specify tab as a delimiter, you can either type a TAB character within the separator string, or use the
hexadecimal ASCII code of the tab character, \x09. “\t” specifies two separators, \ and the letter t. To
specify newline as a delimiter, you can type a RETURN character or the hexadecimal ASCII code \x0a.
For example, the clause DELIMITED BY ' :;.\/t' specifies these seven separators:
space : ; . \ / t
LIMIT
Can be used for the creation of the WD index only. Specifies the maximum word length that is permitted in
the WD index. Longer words found during parsing cause an error. The default is 255 bytes. The minimum
permitted value is 1 and the maximum permitted value is 255. If the maximum word length specified in the
CREATE INDEX statement or determined by default exceeds the column width, the used maximum word
length is silently reduced to the column width. Using a lower maximum permitted word length allows
insertions, deletions, and updates to use less space and time. The empty word (two adjacent separators) is
silently ignored. After a WD index is created, any insertions into its column are parsed using the separators
and maximum word size determined at create time. These separators and maximum word size cannot be
changed after the index is created.
NOTIFY
Gives notification messages after n records are successfully added for the index. The messages are sent to
the standard output device. A message contains information about memory usage, database space, and
how many buffers are in use. The default is 100,000 records. To turn off NOTIFY, set it to 0.
(back to top)
● There is no way to specify the index owner in the CREATE INDEX statement. Indexes are automatically
owned by the owner of the table on which they are defined. The index name must be unique for each
owner.
● Indexes cannot be created for views. The name of each index must be unique for a given table.
● CREATE INDEX is prevented whenever the statement affects a table currently being modified by another
connection. However, queries are allowed on a table that is also adding an index.
● After a WD index is created, any insertions into its column are parsed using the separators and maximum
word size determined at create time; these cannot be changed after the index is created. For CHAR
columns, specify a space as at least one
of the separators or use the default separator set. SAP IQ automatically pads CHAR columns to the
maximum column width. If your column contains blanks in addition to the character data, queries on WD
indexed data might return misleading results. For example, column CompanyName contains two words
delimited by a separator, but the second word is blank padded:
The parser determines that the string contains the following, instead of 'Farms', and returns 0 instead of
1:
'Farms '
You can avoid this problem by using VARCHAR instead of CHAR columns.
● Data types:
○ You cannot use CREATE INDEX to create an index on a column with BIT data.
○ Only the default index, CMP index, or WD index can be created on CHAR and VARCHAR data with more
than 255 bytes.
○ Only the default and WD index types can be created on LONG VARCHAR data.
○ Only the default index, CMP index, and TEXT index types can be created on BINARY and VARBINARY
data with more than 255 bytes.
○ An HNG index or a CMP index cannot be created on a column with FLOAT, REAL, or DOUBLE data.
○ A TIME index can be created only on a column having the data type TIME.
○ A DATE index can be created only on a column having the data type DATE.
○ A DTTM index can be created only on a column having the data type DATETIME or TIMESTAMP.
● You can create a unique or nonunique HG index with more than one column. SAP IQ implicitly creates a
nonunique HG index on a set of columns that makes up a foreign key.
HG and CMP are the only types of indexes that can have multiple columns. You cannot create a DATE, TIME,
or DTTM index with more than one column.
The maximum width of a multicolumn concatenated key is 5 KB (5300 bytes). The number of columns
allowed depends on how many columns can fit into 5 KB. CHAR or VARCHAR data greater than 255 bytes is
not allowed as part of a composite key in HG, DATE, TIME, or DTTM indexes.
An INSERT on a multicolumn index must include all columns of the index.
To enhance query performance, use multicolumn HG indexes to run ORDER BY operations on more than
one column (that can also include ROWID) in the SELECT or ORDER BY clause with these conditions:
○ All projected columns, plus all ordering columns (except ROWID), exist within the index
○ The ordering keys match the leading columns, in order
If more than one multicolumn HG index satisfies these conditions, the index with the lowest distinct
counts is used.
If a query has an ORDER BY clause, and the ORDER BY column list is a prefix of a multicolumn index
where all columns referenced in the SELECT list are present in the multicolumn index, then the
multicolumn index performs vertical projection; for example:
If expressions exist on base columns in the SELECT list, and all the columns referenced in all the
expressions are present in the multicolumn index, then the query will use a multicolumn index; for
example:
In addition to the two previous examples, if the ROWID() function is in the SELECT list expressions,
multicolumn indexes will be used. For example:
In addition to the three previous examples, if ROWID() is present at the end of an ORDER BY list, and if the
columns of that list — except for ROWID() — use multicolumn indexes in the exact order, multicolumn
indexes will be used for the query. For example:
SAP IQ allows the use of NULL in data values on a user created unique multicolumn HG index, if the column
definition allows for NULL values and a constraint (primary key or unique) is not being enforced. The rules
for this feature are as follows:
○ A NULL is treated as an undefined value.
○ Multiple rows with NULL values in a unique index column or columns are allowed.
1. In a single column index, multiple rows with a NULL value in an index column are allowed.
2. In a multicolumn index, multiple rows with a NULL value in index column or columns are allowed,
as long as non-NULL values in the rest of the columns guarantee uniqueness in that index.
According to rule 1 above, you can insert a NULL value into an index column in multiple rows:
According to rule 2 above, you must guarantee uniqueness in the index. The following INSERT does not
succeed, since the multicolumn index c1c2_hg2 on row 1 and row 3 has the same value:
When a multicolumn HG index is governed by a unique constraint, a NULL value is not allowed in any
column participating in the index.
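Rules 1 and 2 above can be sketched as follows. The table, column, and index names are hypothetical:

```sql
-- Sketch of the NULL-handling rules for user-created HG indexes.
CREATE TABLE t1 ( c1 INT NULL, c2 INT NULL );
CREATE HG INDEX c1_hg ON t1 ( c1 );                -- single-column, nonunique
CREATE UNIQUE HG INDEX c1c2_hg2 ON t1 ( c1, c2 );  -- multicolumn unique

-- Rule 1: multiple rows with NULL in an index column are allowed.
INSERT INTO t1 VALUES ( NULL, 1 );
INSERT INTO t1 VALUES ( NULL, 2 );  -- c2 keeps the full key unique

-- Rule 2: the non-NULL values must still guarantee uniqueness.
-- Inserting ( NULL, 1 ) again would duplicate the c1c2_hg2 key and fail.
```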
● You can use the BEGIN PARALLEL IQ … END PARALLEL IQ statement to group CREATE INDEX
statements on multiple IQ main store tables, so that they execute as though they are a single DDL
statement. See BEGIN PARALLEL IQ … END PARALLEL IQ Statement for more information.
Caution
Using the CREATE INDEX command on a local temporary table containing uncommitted data fails and
generates the error message Local temporary table, <tablename>, must be committed in
order to create an index. Commit the data in the local temporary table before creating an index.
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Side Effects
(back to top)
Automatic commit
Standards
(back to top)
SAP ASE indexes can be either clustered or nonclustered. A clustered index almost always retrieves data faster
than a nonclustered index. Only one clustered index is permitted per table.
SAP IQ does not support clustered indexes. The CLUSTERED and NONCLUSTERED keywords are allowed by
SAP SQL Anywhere, but are ignored by SAP IQ. If no <index-type> is specified, SAP IQ creates an HG index
on the specified column(s).
Index names must be unique on a given table for both SAP IQ and SAP ASE.
Examples
(back to top)
● The following example creates a Compare index on the projected_earnings and current_earnings
columns. These columns are decimal columns with identical precision and scale:
● The following example creates a High_Group index on the ID column of the SalesOrderItems table. The
data pages for this index are allocated from dbspace Dsp5:
● The following example creates a High_Group index on the SalesOrderItems table for the ProductID
column:
● The following example creates a WD index on the earnings_report table. Specify that the delimiters of
strings are space, colon, semicolon, and period. Limit the length of the strings to 25:
● The following example creates a DTTM index on the SalesOrders table for the OrderDate column:
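The examples above might be written as the following sketches. The table and column names follow the descriptions, but the exact identifiers (and the WD-indexed column name) are assumptions:

```sql
-- Compare index on two decimal columns of identical precision and scale.
CREATE CMP INDEX earnings_cmp
ON EmployeeEarnings ( projected_earnings, current_earnings );

-- High_Group index placed in dbspace Dsp5.
CREATE HG INDEX id_hg
ON SalesOrderItems ( ID ) IN Dsp5;

-- High_Group index on the ProductID column.
CREATE HG INDEX ProductID_hg
ON SalesOrderItems ( ProductID );

-- Word index with explicit separators (space, colon, semicolon, period)
-- and a 25-byte maximum word length.
CREATE WD INDEX earnings_wd
ON earnings_report ( report_text )
DELIMITED BY ' :;.' LIMIT 25;

-- Datetime index on the OrderDate column.
CREATE DTTM INDEX OrderDate_dttm
ON SalesOrders ( OrderDate );
```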
Related Information
Creates a new LDAP server configuration object for LDAP user authentication. Parameters defined during the
creation of an LDAP server configuration object are stored in the ISYSLDAPSERVER (system view
SYSLDAPSERVER) system table.
Syntax
<ldapua-server-attribs> ::=
Go to:
● Privileges
● Standards
● Examples
Parameters
(back to top)
URL 'URL_string'
Identifies the host (by name or by IP address), port number, and the search to be performed for the DN
lookup for a given user ID. This value is validated for correct LDAP URL syntax before it is stored in the
ISYSLDAPSERVER system table. The maximum size for this string is 1024 bytes.
ACCESS ACCOUNT { 'DN_string' | NULL }
User created in the LDAP server for use by SAP IQ, not a user within SAP IQ. The distinguished name (DN)
for this user is used to connect to the LDAP server. This user has permissions within the LDAP server to
search for DNs by user ID in the locations specified by the SEARCH DN URL. The maximum size for this
string is 1024 bytes.
IDENTIFIED BY { 'password' | NULL }
Provides the password associated with the ACCESS ACCOUNT user. The password is stored using
symmetric encryption on disk. Use the value NULL to clear the password and set it to none. The maximum
size of a clear text password is 255 bytes.
IDENTIFIED BY ENCRYPTED { encrypted-password | NULL }
Configures the password associated with the ACCESS ACCOUNT distinguished name in an encrypted
format. The binary value is the encrypted password and is stored on disk as is. Use the value NULL to clear
the password and set it to none. The maximum size of the binary is 289 bytes. The encrypted key should
be a valid varbinary value. Do not enclose the encrypted key in quotation marks.
AUTHENTICATION URL { 'URL_string' | NULL }
Identifies the host (by name or IP address) and the port number of the LDAP server to use for
authentication of the user. This is the value defined for URL_string and is validated for correct LDAP URL
syntax before it is stored in ISYSLDAPSERVER system table. The DN of the user obtained from a prior DN
search and the user password bind a new connection to the authentication URL. A successful connection
to the LDAP server is considered proof of the identity of the connecting user. The maximum size for this
string is 1024 bytes.
CONNECTION RETRIES retry_value
Specifies the number of retries on connections from SAP IQ to the LDAP server for both DN searches and
authentication. The valid range of values is 1–60, with a default value of 3.
TLS { ON | OFF }
Defines whether the TLS or Secure LDAP protocol is used for connections to the LDAP server for both DN
searches and authentication. When set to ON, the TLS protocol is used and the URL begins with
"ldap://". When set to OFF (or not specified), the Secure LDAP protocol is used and the URL begins with
"ldaps://". When using the TLS protocol, specify the database security option
TRUSTED_CERTIFICATES_FILE with a file name containing the certificate of the Certificate Authority (CA)
that signed the certificate used by the LDAP server.
WITH ACTIVATE
Activates the LDAP server configuration object for immediate use upon creation. This permits the
definition and activation of LDAP User Authentication in one statement. The LDAP server configuration
object state changes to READY when WITH ACTIVATE is used.
Privileges
(back to top)
Requires the MANAGE ANY LDAP SERVER system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● The following example sets the search parameters and the authentication URL, sets a three-second
timeout, and activates the server so it can begin authenticating users. It connects to the LDAP server
without the TLS or Secure LDAP protocols:
● The following example uses the same search parameters as example 1, but specifies “ldaps” so that a
Secure LDAP connection is established with the LDAP server on host my_LDAPserver, port 636. Only LDAP
clients using the Secure LDAP protocol may now connect on this port. The database security option
TRUSTED_CERTIFICATE_FILE must be set with a file name containing the certificate of the certificate
authority (CA) that signed the certificate used by the LDAP server at "ldaps://my_LDAPserver:636". During
the handshake with the LDAP server, the certificate presented by the LDAP server is checked by the SAP IQ
server (the LDAP client) to ensure that it is signed by one of the certificates listed in the file. This
establishes trust by the client that the server is who it says it is. The ACCESS ACCOUNT and IDENTIFIED
BY parameters establish trust by the LDAP server that the client is who it says it is.
Note
The TLS parameter must be OFF when Secure LDAP is used instead of TLS protocol.
● The following example establishes the TLS protocol on port 389. It also requires database security option
TRUSTED_CERTIFICATE_FILE to be set with a file name and provides the same type of security as example
2. In this example, the TLS protocol is ON to facilitate wider support by LDAP server vendors:
Note
Check the requirements of all your LDAP servers when deciding how to configure Secure LDAP or TLS
for an SAP IQ server.
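The first example above might look like the following sketch. The host name, distinguished names, and password are placeholders, and the assumption here is that CONNECTION TIMEOUT is given in milliseconds (3000 = three seconds):

```sql
-- Sketch of example 1: plain LDAP (no TLS, no Secure LDAP), activated
-- immediately so it can begin authenticating users.
CREATE LDAP SERVER apps_primary
SEARCH DN
    URL 'ldap://my_LDAPserver:389/dc=MyCompany,dc=com??sub?cn=*'
    ACCESS ACCOUNT 'cn=aseadmin,cn=Users,dc=mycompany,dc=com'
    IDENTIFIED BY 'Secret99Password'
AUTHENTICATION URL 'ldap://my_LDAPserver:389/'
CONNECTION TIMEOUT 3000
WITH ACTIVATE
```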
Related Information
Creates a user-defined logical server. This statement enforces consistent shared system temporary store
settings across physical nodes shared by logical servers.
Syntax
<ls-create-clause> ::=
{ MEMBERSHIP ( { <ls-member>, ...} ) | POLICY <ls-policy-name> }
<ls-member> ::=
FOR LOGICAL COORDINATOR | <mpx-server-name>
Parameters
logical-server-name
● ALL
● AUTO
● COORDINATOR
● DEFAULT
● NONE
● OPEN
● SERVER
MEMBERSHIP
To define a logical membership to the coordinator, include FOR LOGICAL COORDINATOR in the
MEMBERSHIP clause.
When no members are specified during the creation of a logical server, the logical server is created empty.
Note
Implicit logical server membership definitions, such as those for OPEN and SERVER logical servers, are
not stored at all.
The SYS.ISYSLOGICALMEMBER system table stores definitions for the logical server memberships.
Changing the ALLOW_COORDINATOR_AS_MEMBER option of the root logical server policy from ON to
OFF does not affect the membership information stored in the catalog. Instead, it affects only the effective
configuration of the logical server.
You can define a logical server membership to the current coordinator either by specifying the multiplex
server name or by using the FOR LOGICAL COORDINATOR clause, even when
The catalog stores the logical server and its membership definitions.
POLICY
Associates a logical server with a user-defined logical server policy. If no POLICY clause is specified, the
logical server is associated with the root policy. The SYS.ISYSIQLOGICALSERVER system table stores
information about the logical server policy for a corresponding logical server.
ls-policy-name
The name of the logical server policy to associate with the logical server.
WITH STOP SERVER
Automatically shuts down all servers in the logical server when the TEMP_DATA_IN_SHARED_TEMP option
is changed directly or indirectly.
Remarks
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
● The following example creates a user-defined logical server ls1 with three multiplex nodes as its
members:
● The following example creates a user-defined logical server ls1 with three member nodes, and defines the
logical server policy name <lsp1>:
CREATE LOGICAL SERVER ls1 MEMBERSHIP ( w1_svr, w2_svr, r2_svr ) POLICY lsp1
● The following example creates servers as in Example 2, except that WITH STOP SERVER automatically
shuts down all servers in the logical server when the TEMP_DATA_IN_SHARED_TEMP option is changed
directly or indirectly:
CREATE LOGICAL SERVER ls1 MEMBERSHIP ( w1_svr, w2_svr, r2_svr ) POLICY lsp1
WITH STOP SERVER
● The following example, where n1 is the current coordinator, creates a logical server ls2 with the named
membership of multiplex nodes n1 and n3 and the logical membership of the coordinator. It also sets the
logical server policy of ls2 to lspolicy2:
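The first and last examples above might be written as the following sketches; the member server names are placeholders:

```sql
-- Sketch of example 1: three multiplex nodes as members.
CREATE LOGICAL SERVER ls1 MEMBERSHIP ( n1, n2, n3 )

-- Sketch of example 4: named membership of n1 and n3 plus the
-- logical membership of the coordinator, with policy lspolicy2.
CREATE LOGICAL SERVER ls2
MEMBERSHIP ( FOR LOGICAL COORDINATOR, n1, n3 )
POLICY lspolicy2
```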
Related Information
Syntax
<policy-option> ::=
= <policy-option-value>
<policy-option-name> ::=
AUTO_UNLOCK_TIME
| CHANGE_PASSWORD_DUAL_CONTROL
| DEFAULT_LOGICAL_SERVER
| LOCKED
| MAX_CONNECTIONS
| MAX_DAYS_SINCE_LOGIN
| MAX_FAILED_LOGIN_ATTEMPTS
| MAX_NON_DBA_CONNECTIONS
| PAM_FAILOVER_TO_STD
| PAM_SERVICENAME
| PASSWORD_EXPIRY_ON_NEXT_LOGIN
| PASSWORD_GRACE_TIME
| PASSWORD_LIFE_TIME
| ROOT_AUTO_UNLOCK_TIME
| LDAP_PRIMARY_SERVER
| LDAP_SECONDARY_SERVER
| LDAP_AUTO_FAILBACK_PERIOD
| LDAP_FAILOVER_TO_STD
| LDAP_REFRESH_DN
<policy-option-value> ::=
{ UNLIMITED | DEFAULT | <value> }
policy-name
The name of the login policy. Specify root to modify the root login policy.
policy-option-name
The name of the policy option. See Login Policy Options and LDAP Login Policy Options for details about
each option.
policy-option-value
The value assigned to the login policy option. If you specify UNLIMITED, no limits are used. If you specify
DEFAULT, the default limits are used. See Login Policy Options and LDAP Login Policy Options for supported
values for each option.
Remarks
If you do not specify a policy option, values for this login policy come from the root login policy. New policies do
not inherit the MAX_NON_DBA_CONNECTIONS and ROOT_AUTO_UNLOCK_TIME policy options.
Privileges
Requires the MANAGE ANY LOGIN POLICY system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
The following system privileges can override the noted login policy options:
MAX_DAYS_SINCE_LOGIN
Examples
In this section:
Related Information
AUTO_UNLOCK_TIME
The time period after which locked accounts that are not granted the MANAGE ANY USER system privilege
are automatically unlocked. You can define this option in any login policy, including the root login policy.
● Values – 0 – UNLIMITED
● Default – UNLIMITED
● Applies to all users who are not granted the MANAGE ANY USER system privilege
CHANGE_PASSWORD_DUAL_CONTROL
Requires input from two users, each of whom is granted the CHANGE PASSWORD system privilege, to
change the password of another user.
● Values – ON; OFF
● Default – OFF
● Applies to all users
LOCKED
If set ON, users cannot establish new connections. This setting temporarily denies access to login policy
users. Logical server overrides for this option are not allowed.
● Values – ON; OFF
● Default – OFF
● Applies to all users except those with the MANAGE ANY USER system privilege
MAX_DAYS_SINCE_LOGIN
The maximum number of days that can elapse between two successive logins by the same user.
● Values – 0–2147483647
● Default – UNLIMITED
● Applies to all users except those with the MANAGE ANY USER system privilege
MAX_FAILED_LOGIN_ATTEMPTS
The maximum number of failed attempts, since the last successful attempt, to log in to the user account
before the account is locked.
● Values – 0–2147483647
● Default – UNLIMITED
● Applies to all users
PASSWORD_EXPIRY_ON_NEXT_LOGIN
If set ON, the user's password expires at the next login.
● Values – ON; OFF
● Default – OFF
● Applies to all users
Note
This functionality is not currently implemented when logging in to SAP IQ Cockpit. However, when
logging in to SAP IQ outside of SAP IQ Cockpit (for example, using Interactive SQL), users are then
prompted to enter a new password.
ROOT_AUTO_UNLOCK_TIME
The time period after which locked accounts that are granted the MANAGE ANY USER system privilege are
automatically unlocked. You can define this option only in the root login policy.
● Values – 0 – UNLIMITED
● Default – 15
● Applies to all users who are granted the MANAGE ANY USER system privilege
LDAP_PRIMARY_SERVER
Specifies the name of the primary LDAP server.
● Values – N/A
● Default – none
● Applies to all users
LDAP_SECONDARY_SERVER
Specifies the name of the secondary LDAP server.
● Values – N/A
● Default – none
● Applies to all users
LDAP_AUTO_FAILBACK_PERIOD
Specifies the time period, in minutes, after which automatic failback to the primary server is attempted.
● Values – 0–2147483647
● Default – 15 minutes
● Applies to all users
LDAP_REFRESH_DN
Each time a user authenticates with LDAP, if the value of ldap_refresh_dn in
ISYSLOGINPOLICYOPTION is more recent than the value of user_dn in ISYSUSER, a search for a
new user DN occurs. The user_dn value is then updated with the new user DN and the
user_dn_changed_at value is again updated to the current time.
● Applies to all users
This example overrides the login policy settings on a logical server, increasing the maximum number of
connections on logical server ls1:
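The override above might be written as the following sketch. The policy name, connection limit, and the exact shape of the logical-server override clause are assumptions:

```sql
-- Sketch: raise MAX_CONNECTIONS only for connections through
-- logical server ls1; other logical servers keep the policy default.
CREATE LOGIN POLICY lp1 MAX_CONNECTIONS=20 LOGICAL SERVER ls1
```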
Any login management commands you execute on any multiplex server automatically propagate to all servers
in the multiplex. For best performance, execute these commands, or any DDL, on the coordinator.
An override at the logical server level means that a particular login policy option has different settings
for different logical servers. SYS.ISYSIQLSLOGINPOLICYOPTION stores login policy option values for logical-
server override. For each logical-server override of a login policy option, a corresponding row exists in
ISYSIQLSLOGINPOLICYOPTION.
Creates a user-defined logical server policy. This statement enforces consistent shared system temporary
store settings across physical nodes shared by logical servers.
Syntax
<ls-option-value-list> ::=
{ <ls-option-name> = <ls-policy-option-value> } ...
<ls-option-name> ::=
ALLOW_COORDINATOR_AS_MEMBER
| DQP_ENABLED
| ENABLE_AUTOMATIC_FAILOVER
| LOGIN_REDIRECTION
| REDIRECTION_WAITERS_THRESHOLD
| TEMP_DATA_IN_SHARED_TEMP
Parameters
ls-policy-name
The name of the logical server policy. You can specify any identifier except root for the policy name.
ls-option-value-list
Any unspecified option inherits its value from the root logical server policy. See Remarks.
WITH STOP SERVER
Automatically shuts down all servers in the logical server when the TEMP_DATA_IN_SHARED_TEMP option
is changed directly or indirectly.
If you want a smaller IQ_SYSTEM_TEMP dbspace, set TEMP_DATA_IN_SHARED_TEMP to ON, which writes
temporary data to IQ_SHARED_TEMP instead of IQ_SYSTEM_TEMP. In a distributed query processing
environment, however, setting both DQP_ENABLED and TEMP_DATA_IN_SHARED_TEMP to ON may saturate
your SAN with additional data in IQ_SHARED_TEMP, where additional I/O operations against IQ_SHARED_TEMP
may adversely affect DQP performance.
ALLOW_COORDINATOR_AS_MEMBER
Can only be set for the ROOT logical server policy. When ON (the default), the coordinator can be a
member of any user-defined logical server. OFF prevents the coordinator from being used as a member of
any user-defined logical servers.
● Values – ON, OFF
● Default – ON
DQP_ENABLED
When set to 0, query processing is not distributed. When set to 1 (the default), query processing is
distributed as long as a writable shared temporary file exists. When set to 2, query processing is
distributed over the network, and the shared temporary store is not used.
● Values – 0, 1, 2
● Default – 1
ENABLE_AUTOMATIC_FAILOVER
Can only be set for the ROOT logical server policy. When ON, enables automatic failover for logical servers
governed by the specified policy. When OFF (the default), disables automatic failover at the logical server
level, allowing manual failover. Specify DEFAULT to set back to the default value.
● Values – ON, OFF, DEFAULT
● Default – OFF
LOGIN_REDIRECTION
When ON, enables login redirection for logical servers governed by the specified policy. When OFF (the
default), disables login redirection at the logical server level, allowing external connection management.
● Values – ON, OFF
● Default – OFF
REDIRECTION_WAITERS_THRESHOLD
Specifies how many connections can queue before SAP IQ redirects a connection to this logical server to
another server. Can be any integer value; the default is 5.
● Values – Integer
● Default – 5
TEMP_DATA_IN_SHARED_TEMP
When ON, all temporary table data and eligible scratch data writes to the shared temporary store,
provided that the shared temporary store has at least one read-write file added. You must restart all
multiplex nodes after setting this option or after adding a read-write file to the shared temporary store. (If
the shared temporary store contains no read-write file, or if you do not restart nodes, data is written to
IQ_SYSTEM_TEMP instead.)
● Values – ON, OFF
● Default – OFF
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
The following example creates a user-defined logical server policy named lspolicy1:
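The example statement itself is not reproduced here; a minimal sketch, assuming the CREATE LS POLICY spelling and illustrative option settings, might look like this:

```sql
CREATE LS POLICY lspolicy1
   DQP_ENABLED=1
   LOGIN_REDIRECTION=ON
```

Any option not listed (for example, TEMP_DATA_IN_SHARED_TEMP) would inherit its value from the root logical server policy, as described under ls-option-value-list.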
Related Information
Adds a user-defined message to the SYSUSERMESSAGES system table for use by PRINT and RAISERROR
statements.
Syntax
Parameters
message-number
The message number of the message to add. The message number for a user-defined message must be
20000 or greater.
message-text
The text of the message to add. The maximum length is 255 bytes. PRINT and RAISERROR recognize
placeholders in the message text. A single message can contain up to 20 unique placeholders
in any order. These placeholders are replaced with the formatted contents of any arguments that follow the
message when the text of the message is sent to the client.
Placeholders are numbered to allow reordering of the arguments when translating a message to a
language with a different grammatical structure. A placeholder for an argument appears as “%nn!” — a
percent sign (%), followed by an integer from 1 to 20, followed by an exclamation mark (!) — where the
integer represents the position of the argument in the argument list, “%1!” is the first argument, “%2!” is
the second argument, and so on.
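As an illustration of the placeholder syntax, a message with two positional placeholders might be defined as follows (the message number and text are illustrative, not taken from this reference):

```sql
CREATE MESSAGE 20001 AS 'Table %1! has %2! rows.'
```

When PRINT or RAISERROR sends this message, the first argument replaces %1! and the second replaces %2!.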
Remarks
CREATE MESSAGE associates a message number with a message string. The message number can be used in
PRINT and RAISERROR statements.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
Standards
Syntax
<host-port-list> ::=
{[ PRIVATE ] HOST '<host_name>' PORT <port_number> }
Go to
● Remarks
● Privileges
● Examples
Parameters
(back to top)
server-name
The name of the multiplex secondary server based on the rules for server startup option -n. The name
must be unique across the local area network.
path
The path to the database file on the secondary node, entered as an absolute value. Store the database files
on the local disk of the coordinator or secondary node, not on a remote location. The path must exist
before executing the CREATE MULTIPLEX SERVER statement.
PRIVATE
Specifies that the particular HOST PORT pair is for private interconnection. A separate private
interconnection for multiplex interprocess communication (MIPC) enables highly available and high-
performance network configurations. SAP IQ automatically opens private ports; you need not list them.
ROLE { READER | WRITER }
The writer role can run read-only and read-write operations against shared IQ objects. The reader role can
run only read-only operations. Both can manipulate local data in temporary and SA base tables. The
default, if not specified, is READER.
HOST host_name
Allows the coordinator to use an in-memory store for high-performance row-level updates. Default if not
specified is DISABLED.
STATUS { INCLUDED | EXCLUDED }
Adds or removes a secondary node as part of a multiplex. The default, if not specified, is INCLUDED. If a
multiplex secondary server will be shut down for an extended period of time, exclude that server from the
multiplex first. After including a server, the server must be synchronized and then started. See
Synchronizing Servers.
Remarks
(back to top)
If you plan to use UNIX soft (symbolic) links for server paths, create the soft link before you run CREATE
MULTIPLEX SERVER. When you start the new server, the database file path must match the database file path
specified when creating that server.
When creating the initial multiplex server, both coordinator node and secondary node rows are added to
SYS.ISYSIQMPXSERVER. The transaction log records this operation as two separate CREATE MULTIPLEX
SERVER commands, one for the coordinator node and one for the secondary node.
After creating the first secondary node, the coordinator shuts down automatically.
The SYS.ISYSIQMPXSERVER system table stores the HOST '<hostname>' PORT <port number> pairs in
its connection_info string as host:port[;host:port…].
Note
Use multiple host:port pairs if the computer the multiplex server is running on has multiple redundant
network cards mapped to different network addresses.
You may specify the clauses DATABASE, host-port list, ROLE and STATUS in any order.
When you add a server, the coordinator must be running, but you can run the CREATE MULTIPLEX SERVER
command from any server in the multiplex.
(back to top)
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
(back to top)
In the following statement, the role of server host_c is converted to coordinator, and the secondary node
mpxnode_w1, running on port 2957, is created with a writer role. The statement also defines the path
(mympx_c1) where the secondary node will run, and where the synchronized copy of the database
(mpxtest.db) will reside on the secondary node.
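The statement itself is not reproduced here; a hedged sketch consistent with the prose above might read as follows (the secondary node's host name, host_w1, is an assumption, since the prose gives only the port):

```sql
CREATE MULTIPLEX SERVER mpxnode_w1
DATABASE 'mympx_c1/mpxtest.db'
HOST 'host_w1' PORT 2957
ROLE WRITER
```

The DATABASE, host-port list, ROLE, and STATUS clauses may appear in any order, as noted in the Remarks.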
Related Information
Creates or replaces a mutex (lock) that can be used to lock a resource such as a file or a procedure.
Syntax
Parameters
owner
The owner of the mutex. <owner> can also be specified using an indirect identifier (for example,
[@<variable-name>]).
mutex-name
The name of the mutex.
OR REPLACE clause
Use this clause to overwrite (update) the definition of a permanent mutex of the same name, if one exists.
If the OR REPLACE clause is specified, and a mutex with this name is in use at the time, then the statement
returns an error.
You cannot use this clause with the TEMPORARY or IF NOT EXISTS clauses.
TEMPORARY clause
Use this clause to create a temporary mutex instead of a permanent one (see Remarks).
IF NOT EXISTS clause
Use this clause to create a mutex only if it doesn't already exist. If a mutex exists with the same name, then
nothing happens and no error is returned.
SCOPE clause
Use this clause to specify whether the mutex applies to a transaction (TRANSACTION) or the connection
(CONNECTION). If the SCOPE clause is not specified, then the default behavior is CONNECTION.
Remarks
Permanent and temporary mutexes and semaphores share the same namespace; therefore, you cannot create
two of these objects with the same name and owner. Use of the OR REPLACE and IF NOT EXISTS clauses can
inadvertently cause an error related to naming. For example, if you have a permanent mutex, and you try to
create a temporary semaphore with the same name, an error is returned even if you specify IF NOT EXISTS.
Similarly, if you have a temporary semaphore, and you try to replace it with a permanent semaphore with the
same name by specifying OR REPLACE, an error is returned because this is equivalent to attempting to create
a second object with the same name.
Permanent mutex definitions persist across database restarts. However, their state information (locked or
released) does not.
A temporary mutex persists until the connection that created it is terminated, or until the mutex is dropped
using a DROP MUTEX statement. If another connection is waiting for a temporary mutex and the connection
that created the temporary mutex is terminated, then an error is returned to the waiting connection indicating
that the mutex has been deleted.
CONNECTION scope mutexes are not automatically released other than when the connection is terminated.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Example
The following statement creates a connection scope mutex called protect_my_cr_section to protect a
critical section of a stored procedure.
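The statement itself is not reproduced here; a minimal sketch, using the mutex name given above and making the default connection scope explicit, might be:

```sql
CREATE MUTEX protect_my_cr_section SCOPE CONNECTION
```

A procedure would then lock this mutex on entry to the critical section and release it on exit.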
Related Information
To create external procedure interfaces, see CREATE PROCEDURE Statement (External Procedures).
Syntax
<parameter> ::=
<parameter_mode> <parameter-name> <data-type> [ DEFAULT <expression> ]
| SQLCODE
Parameters
parameter-name
Parameter names must conform to the rules for other database identifiers, such as column names. Each
parameter must have a valid SQL data type.
Parameters can be prefixed by one of the keywords IN, OUT, or INOUT. If no keyword is specified,
parameters are INOUT by default. The keywords have the following meanings:
Set the data type explicitly, or specify the %TYPE or %ROWTYPE attribute to set the data type to the data
type of another object in the database. Use %TYPE to set it to the data type of a column in a table or view.
Use %ROWTYPE to set the data type to a composite data type derived from a row in a table or view.
However, defining the data type using a %ROWTYPE that is set to a table reference variable (TABLE REF
(<table-reference-variable>)%ROWTYPE) is not allowed.
SQLSTATE and SQLCODE
Special parameters that output the SQLSTATE or SQLCODE value when the procedure ends (they are OUT
parameters). Whether or not a SQLSTATE and SQLCODE parameter is specified, the SQLSTATE and
SQLCODE special values can always be checked immediately after a procedure call to test the return
status of the procedure.
The SQLSTATE and SQLCODE special values are modified by the next SQL statement. Providing SQLSTATE
or SQLCODE as procedure arguments allows the return code to be stored in a variable.
OR REPLACE
Replaces an existing procedure with the same name. This clause changes the definition of the procedure,
but preserves existing permissions.
You cannot use the OR REPLACE clause with temporary procedures. Also, an error is returned if the
procedure being replaced is already in use.
TEMPORARY
The stored procedure is visible only to the connection that created it, and is automatically dropped
when the connection is dropped. You can also explicitly drop temporary stored procedures. You cannot
perform ALTER, GRANT, or REVOKE on them, and, unlike other stored procedures, temporary stored
procedures are not recorded in the catalog or transaction log.
To drop the owner of a temporary procedure, drop the temporary procedure first.
You can create and drop temporary stored procedures when you are connected to a read-only database;
they cannot be external procedures.
For example, the following temporary procedure drops the table called CustRank, if it exists. For this
example, the procedure assumes that the table name is unique and can be referenced by the procedure
creator without specifying the table owner:
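The procedure body is not reproduced here; a sketch of such a temporary procedure, assuming the SYS.SYSTAB catalog view for the existence check, might look like this:

```sql
CREATE TEMPORARY PROCEDURE drop_custrank()
BEGIN
    -- drop the table only if it exists in the catalog
    IF EXISTS ( SELECT 1 FROM SYS.SYSTAB WHERE table_name = 'CustRank' ) THEN
        DROP TABLE CustRank;
    END IF;
END
```

Because the procedure is temporary, it is visible only to the creating connection and disappears when that connection is dropped.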
RESULT
Declares the number and type of columns in the result set. The parenthesized list following the RESULT
keyword defines the result column names and types. This information is returned by the Embedded SQL
DESCRIBE or by ODBC SQLDescribeCol when a CALL statement is being described. Allowed data types are
listed in SQL Data Types.
Some procedures can produce more than one result set, depending on how they are executed. For
example, this procedure returns two columns under some circumstances, and one in others:
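The procedure itself is not reproduced here; a sketch of a variable-result-set procedure (the parameter, table, and column names are illustrative) might be:

```sql
CREATE PROCEDURE names( IN formal CHAR(1) )
BEGIN
    IF formal = 'n' THEN
        -- one-column result set
        SELECT GivenName FROM Employees;
    ELSE
        -- two-column result set
        SELECT Surname, GivenName FROM Employees;
    END IF;
END
```

Because the shape of the result depends on the argument, no RESULT clause can describe it, which leads to the limitations listed below.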
Procedures with variable result sets must be written without a RESULT clause, or in Transact-SQL. Their
use is subject to these limitations:
● Embedded SQL – you must DESCRIBE the procedure call after the cursor for the result set is opened,
but before any rows are returned, in order to get the proper shape of result set. The CURSOR
<cursor-name> clause on the DESCRIBE statement is required.
● ODBC, OLE DB, ADO.NET – variable result-set procedures can be used by ODBC applications. The
proper description of the result sets is carried out by the driver or provider.
● Open Client applications – variable result-set procedures can be used by Open Client applications.
If your procedure returns only one result set, use a RESULT clause. The presence of this clause prevents
ODBC and Open Client applications from describing the result set again after a cursor is open.
NO RESULT SET
Declares that this procedure returns no result set. This is useful when an external environment needs to
know that a procedure does not return a result set.
SQL SECURITY
Defines whether the procedure is executed as the INVOKER (the user who is calling the procedure), or as
the DEFINER (the user who owns the procedure). The default is DEFINER.
Extra memory is used when you specify SQL SECURITY INVOKER, because annotation must be done for
each user that calls the procedure. Also, name resolution is performed as the invoker as well. Therefore,
qualify all object names (tables, procedures, and so on) with their appropriate owner. For example,
suppose user1 creates this procedure:
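A sketch of what user1's procedure might look like (the unqualified table1 reference is the point of the example):

```sql
CREATE PROCEDURE user1.myProcedure()
SQL SECURITY INVOKER
BEGIN
    -- unqualified reference: resolved against the *invoker's* objects
    SELECT * FROM table1;
END
```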
If user2 attempts to run this procedure and a table user2.table1 does not exist, a table lookup error
results. Additionally, if a user2.table1 does exist, that table is used instead of the intended
user1.table1. To prevent this situation, qualify the table reference in the statement (user1.table1,
instead of just table1).
ON EXCEPTION RESUME
The procedure takes an action that depends on the setting of the ON_TSQL_ERROR option. If
ON_TSQL_ERROR option is set to CONDITIONAL (which is the default) the execution continues if the next
statement handles the error; otherwise, it exits. An error-handling statement is one of the following:
● IF
● SELECT @variable
● CASE
● LOOP
● LEAVE
● CONTINUE
● CALL
● EXECUTE
● SIGNAL
● RESIGNAL
● DECLARE
● SET VARIABLE
AT <location-string>
Creates a proxy stored procedure on the current database for a remote procedure specified by
<location-string>. The AT clause supports the semicolon (;) as a field delimiter in
<location-string>. If no semicolon is present, a period is the field delimiter. This allows file names and
extensions to be used in the database and owner fields.
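For instance, a proxy procedure definition might look like the following hedged sketch, with hypothetical server, database, owner, and procedure names; semicolons delimit the four fields:

```sql
CREATE PROCEDURE remote_sp()
AT 'remsrv;remdb;dba;sp_remote'
```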
Remarks
CREATE PROCEDURE creates a procedure in the database. A procedure is invoked with a CALL statement. You
can create permanent or temporary (TEMPORARY) stored procedures. You can use PROC as a synonym for
PROCEDURE.
Note
There are two ways to create stored procedures: ISO/ANSI SQL and T-SQL. BEGIN TRANSACTION, for
example, is T-SQL-specific when using CREATE PROCEDURE syntax. Do not mix syntax when creating
stored procedures. See CREATE PROCEDURE Statement [T-SQL].
When procedures are executed using CALL, not all parameters need to be specified. If a default value is
provided in the CREATE PROCEDURE statement, missing parameters are assigned the default values. If an
argument is not provided in the CALL statement, and no default is set, an error is given.
If a remote procedure can return a result set, even if it does not return one in all cases, then the local procedure
definition must contain a RESULT clause.
Privileges
The privilege required depends on the procedure type and ownership of the procedure. See GRANT System
Privilege Statement [page 1511] for assistance with granting privileges.
Watcom SQL or Transact-SQL procedure, owned by self – Requires the CREATE PROCEDURE system privilege.
Side Effects
Automatic commit
Standards
Examples
● The following example uses a case statement to classify the results of a query:
● The following example uses a cursor and loop over the rows of the cursor to return a single value:
In this section:
Sharing a temporary table between procedures can cause problems if the table definitions are inconsistent.
For example, you have two procedures procA and procB, both of which define a temporary table,
temp_table, and call another procedure called sharedProc. Neither procA nor procB have been called yet,
so the temporary table does not yet exist.
Now, if both procA and procB used the same column names and types but their definitions for temp_table
differed slightly, the column order would differ.
When you call procA, it returns the expected result. However, when you call procB, it returns a different result.
This is because when procA was called, it created temp_table, and then called sharedProc. When
sharedProc was called, the SELECT statement inside of it was parsed and validated, and then a parsed
representation of the statement is cached so that it can be used again when another SELECT statement is
executed. The cached version reflects the column ordering from the table definition in procA.
Calling procB causes temp_table to be re-created, but with different column ordering. When procB calls
sharedProc, the database server uses the cached representation of the SELECT statement. So, the results
differ.
To avoid this problem:
● Ensure that temporary tables used in this way are defined consistently
● Consider using a global temporary table instead
Creates a new procedure that is compatible with SAP Adaptive Server Enterprise.
This subset of the Transact-SQL CREATE PROCEDURE statement is supported in SAP IQ.
Syntax
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
Parameters
(back to top)
OR REPLACE
Replaces an existing procedure with the same name. This clause changes the definition of the procedure,
but preserves existing permissions.
Remarks
(back to top)
● Variable names prefixed by @ – the “@” sign denotes a Transact-SQL variable name; SAP IQ variables can
be any valid identifier and the @ prefix is optional.
● Input and output parameters – SAP IQ procedure parameters are specified as IN, OUT, or INOUT; Transact-
SQL procedure parameters are INPUT parameters by default or can be specified as OUTPUT. Those
parameters declared as INOUT or as OUT in SAP IQ should be declared with OUTPUT in Transact-SQL.
● Parameter default values – SAP IQ procedure parameters are given a default value using the keyword
DEFAULT; Transact-SQL uses an equality sign (=) to provide the default value.
● Procedure body – the body of a Transact-SQL procedure is a list of Transact-SQL statements prefixed by
the AS keyword. The body of an SAP IQ procedure is a compound statement, bracketed by BEGIN and END
keywords.
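The differences listed above can be sketched side by side (the showdept procedure and DepartmentName column are illustrative, not taken from this reference):

```sql
-- Transact-SQL style: AS body, @-prefixed parameter, '=' for the default
CREATE PROCEDURE showdept @deptname VARCHAR(30) = 'Sales'
AS
    SELECT * FROM Employees WHERE DepartmentName = @deptname

-- Equivalent SAP IQ (Watcom-SQL) style: BEGIN/END body, DEFAULT keyword
CREATE PROCEDURE showdept( IN deptname VARCHAR(30) DEFAULT 'Sales' )
BEGIN
    SELECT * FROM Employees WHERE DepartmentName = deptname;
END
```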
Note
There are two ways to create stored procedures: T-SQL and SQL/92. BEGIN TRANSACTION, for
example, is T-SQL specific when using CREATE PROCEDURE syntax. Do not mix syntax when creating
stored procedures.
If the Transact-SQL WITH RECOMPILE optional clause is supplied, it is ignored. SAP SQL Anywhere always
recompiles procedures the first time they are executed after a database is started, and stores the compiled
procedure until the database is stopped.
Privileges
(back to top)
Watcom SQL or Transact SQL procedure to be owned by self – requires CREATE PROCEDURE system privilege.
Watcom SQL or Transact SQL procedure to be owned by any user – requires one of:
Side Effects
(back to top)
Automatic commit
Standards
(back to top)
Related Information
For CREATE PROCEDURE reference information for Java UDFs, see CREATE PROCEDURE Statement [Java
UDF]. For CREATE PROCEDURE reference information for table UDFs, see CREATE PROCEDURE Statement
[Table UDF]
Syntax
<parameter> ::=
<parameter_mode> <parameter-name> <data-type> [ DEFAULT <expression> ] |
SQLCODE | SQLSTATE
<native-call> ::=
[<system-configuration>:]<function-name>@<library>
<system-configuration> ::=
{ <generic-operating-system> | <specific-operating-system> } [ ( <processor-architecture> ) ]
<specific-operating-system> ::=
{ AIX | HPUX | Linux | OSX | Solaris | WindowsNT }
<processor-architecture> ::=
{ 32 | 64 | ARM | IA64 | PPC | SPARC | X86 | X86_64 }
<c-call> ::=
[<system-configuration>:]<function-name>@<library>; ...
<perl-call> ::=
<file=<perl-file>> $sa_perl_return = <perl-subroutine>( $sa_perl_arg0[, ... ] )
<php-call> ::=
<file=<php-file>> print <php-func>( $argv[1][, ... ] )
<java-call> ::=
[<package-name>.]<class-name>.<method-name> <method-signature>
<method-signature> ::=
( [ <field-descriptor>, ... ] ) <return-descriptor>
Parameters
OR REPLACE
Replaces an existing procedure with the same name. This clause changes the definition of the procedure,
but preserves existing permissions.
parameter
Parameter names must conform to the rules for other database identifiers, such as column names. Each
parameter must have a valid SQL data type.
Parameters can be prefixed by one of the keywords IN, OUT, or INOUT. If no keyword is specified,
parameters are INOUT by default. The keywords have the following meanings:
Note
TABLE parameters cannot be declared as INOUT or OUT. See CREATE PROCEDURE Statement (Table
UDF).
When procedures are executed using CALL, not all parameters need to be specified. If a default value is
provided in the CREATE PROCEDURE statement, missing parameters are assigned the default values. If an
argument is not provided in the CALL statement, and no default is set, an error is given.
Note
You cannot CALL a table UDF. Use the CREATE PROCEDURE statement.
RESULT
Declares the number and type of columns in the result set. The parenthesized list following the RESULT
keyword defines the result column names and types. This information is returned by the Embedded SQL
DESCRIBE or by ODBC SQLDescribeCol when a CALL statement is being described. Allowed data types are
listed in SQL Data Types.
Perl or PHP (LANGUAGE PERL, LANGUAGE PHP) external procedures cannot return result sets.
Procedures that call native functions loaded by the database server cannot return result sets.
CLR or Java (LANGUAGE CLR, LANGUAGE JAVA) external procedures can return 0, 1, or more result sets.
NO RESULT SET
Declares that this procedure returns no result set. This is useful when an external environment needs to
know that a procedure does not return a result set.
DYNAMIC RESULT SETS
Use this clause with LANGUAGE CLR and LANGUAGE JAVA calls. This clause is useful only if you specify
LANGUAGE. If you specify a RESULT clause, DYNAMIC RESULT SETS defaults to 1. If you do not specify a
RESULT clause, DYNAMIC RESULT SETS defaults to 0. Note that procedures that call into Perl or PHP
(LANGUAGE PERL, LANGUAGE PHP) external functions cannot return result sets. Procedures that call
native functions loaded by the database server cannot return result sets.
SQL SECURITY
Defines whether the procedure is executed as the INVOKER (the user who is calling the procedure), or as
the DEFINER (the user who owns the procedure). The default is DEFINER. For external calls, this clause
establishes the ownership context for unqualified object references in the external environment.
Extra memory is used when you specify SQL SECURITY INVOKER, because annotation must be done for
each user that calls the procedure. Also, name resolution is performed as the invoker as well. Therefore,
qualify all object names (tables, procedures, and so on) with their appropriate owner. For example,
suppose user1 creates this procedure:
If user2 attempts to run this procedure and a table user2.table1 does not exist, a table lookup error
results. Additionally, if a user2.table1 does exist, that table is used instead of the intended
user1.table1. To prevent this situation, qualify the table reference in the statement (user1.table1,
instead of just table1).
EXTERNAL NAME
A procedure using the EXTERNAL NAME clause with no LANGUAGE attribute defines an interface to a
native function written in a programming language such as C. The native function is loaded by the
database server into its address space.
The library name can include the file extension, which is typically .dll on Windows and .so on UNIX. In
the absence of the extension, the software appends the platform-specific default file extension for libraries.
This is a formal example:
A simpler way to write the preceding EXTERNAL NAME clause, using platform-specific defaults:
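Both forms might look as follows; the function and library names are hypothetical, and the system-configuration prefixes follow the grammar shown earlier:

```sql
-- Formal form: per-platform library names, including file extensions
CREATE PROCEDURE mystring( IN instr LONG VARCHAR )
EXTERNAL NAME 'WindowsNT:mystring_proc@mylib.dll;Linux:mystring_proc@libmylib.so'

-- Simpler form: one library name; the platform-specific extension is appended
CREATE PROCEDURE mystring( IN instr LONG VARCHAR )
EXTERNAL NAME 'mystring_proc@mylib'
```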
When called, the library containing the function is loaded into the address space of the database server.
The native function executes as part of the server. In this case, if the function causes a fault, then the
database server terminates. Because of this, loading and executing functions in an external environment
using the LANGUAGE attribute is recommended. If a function causes a fault in an external environment,
the database server continues to run.
EXTERNAL NAME <c-call> LANGUAGE { C_ESQL32 | C_ESQL64 | C_ODBC32 | C_ODBC64 }
When the LANGUAGE attribute is specified, then the library containing the function is loaded by an
external process and the external function will execute as part of that external process. In this case, if the
function causes a fault, then the database server will continue to run.
To call a Perl function in an external environment, the procedure interface is defined with an EXTERNAL
NAME clause followed by the LANGUAGE PERL attribute.
A Perl stored procedure or function behaves the same as a SQL stored procedure or function with the
exception that the code for the procedure or function is written in Perl and the execution of the procedure
or function takes place outside the database server (that is, within a Perl executable instance).
To call a PHP function in an external environment, the procedure interface is defined with an EXTERNAL
NAME clause followed by the LANGUAGE PHP attribute.
A PHP stored procedure or function behaves the same as a SQL stored procedure or function with the
exception that the code for the procedure or function is written in PHP and the execution of the procedure
or function takes place outside the database server (that is, within a PHP executable instance).
A Java method signature is a compact character representation of the types of the parameters and the
type of the return value.
To call a Java method in an external environment, the procedure interface is defined with an EXTERNAL
NAME clause followed by the LANGUAGE JAVA attribute.
The descriptors for arguments and return values from Java methods have the following meanings:
● B – byte
● C – char
● D – double
● F – float
● I – int
● J – long
● L<class-name>; – an instance of the <class-name> class. The class name must be fully qualified,
and any dot in the name must be replaced by a slash. For example, java/lang/String
● S – short
● V – void
● Z – Boolean
● [ – use one for each dimension of an array
For example, consider a Java method with the following declaration:
double some_method(
    boolean a,
    int b,
    java.math.BigDecimal c,
    byte [][] d,
    java.sql.ResultSet[] rs ) {
}
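Applying the descriptor rules above to that method (boolean, int, java.math.BigDecimal, a two-dimensional byte array, a java.sql.ResultSet array, and a double return value), the method signature would be:

```
(ZILjava/math/BigDecimal;[[B[Ljava/sql/ResultSet;)D
```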
Remarks
The body of a procedure consists of a compound statement. For information on compound statements, see
BEGIN … END Statement.
Note
There are two ways to create stored procedures: ISO/ANSI SQL and T-SQL. BEGIN TRANSACTION, for
example, is T-SQL specific when using CREATE PROCEDURE syntax. Do not mix syntax when creating
stored procedures. See CREATE PROCEDURE Statement [T-SQL].
If a stored procedure returns a result set, it cannot also set output parameters or return a return value.
When referencing a temporary table from multiple procedures, a potential issue can arise if the temporary
table definitions are inconsistent and statements referencing the table are cached.
You can create permanent stored procedures that call external or native procedures written in a variety of
programming languages. You can use PROC as a synonym for PROCEDURE.
Privileges
External procedure to be owned by any user – Requires CREATE EXTERNAL REFERENCE system privilege.
Also requires one of:
Side Effects
Automatic commit
Standards
Related Information
Syntax
<parameter> ::=
[ IN ] <parameter-name> <data-type> [ DEFAULT <expression> ]
<result-column> ::=
<column-name> <data-type>
<java-call> ::=
'[<package-name>.]<class-name>.<method-name> <method-signature>'
<java> ::=
[ { ALLOW | DISALLOW } SERVER SIDE REQUESTS ]
Go to:
● Remarks
● Privileges
● Standards
Parameters
(back to top)
java
DISALLOW is the default. ALLOW indicates that server-side connections are allowed.
Do not specify ALLOW unless necessary. A setting of ALLOW slows down certain types of SAP IQ table
joins. If you change a procedure definition from ALLOW to DISALLOW, or vice-versa, the change will not
be recognized until you make a new connection.
Do not use UDFs with both ALLOW SERVER SIDE REQUESTS and DISALLOW SERVER SIDE REQUESTS
in the same query.
Remarks
(back to top)
If your query references SAP IQ tables, note that different syntax and parameters apply compared to a query
that references only catalog store tables.
For Java table functions, if the Java table function is joined with an SAP IQ table, or if a column from an
SAP IQ table is an argument to the Java table function, then only one result set is supported. If the Java
table function is the only item in the FROM clause, then any number of result sets is allowed.
For CREATE PROCEDURE reference information for external procedures, see CREATE PROCEDURE Statement
[External Procedures]. For CREATE PROCEDURE reference information for table UDFs, see CREATE
PROCEDURE Statement [Table UDF].
Privileges
(back to top)
Unless creating a temporary procedure, a user must have the CREATE PROCEDURE system privilege to create
a procedure for themselves. To create a UDF procedure for others, a user must specify an owner and have
either the CREATE ANY PROCEDURE or CREATE ANY OBJECT system privilege. If a procedure has an
external reference, a user must also have the CREATE EXTERNAL REFERENCE system privilege, in addition
to the previously mentioned system privileges, regardless of whether or not they are the owner of the
procedure.
Standards
(back to top)
For CREATE PROCEDURE reference information for external procedures, see CREATE PROCEDURE Statement
[External Procedures]. For CREATE PROCEDURE reference information for Java UDFs, see CREATE PROCEDURE
Statement [Java UDF].
Syntax
<parameter> ::=
[ IN ] <parameter-name> <data-type> [ DEFAULT <expression> ]
| [ IN ] <parameter-name> <table-type>
<table-type> ::=
TABLE | TABLE ( <column-name> <data-type> [, …] )
<result-type> ::=
<table-name> TABLE | <result-col-type> [, …]
<result-col-type> ::=
<column-name> <data type>
<external-call> ::=
[<column-name>:]<function-name@library>; …
Go to:
● Remarks
● Privileges
● Standards
Parameters
(back to top)
IN
The parameter is an object that provides a value for a scalar parameter or a set of values for a TABLE
parameter to the UDF.
Note
TABLE parameters cannot be declared as INOUT or OUT. You can only have one TABLE parameter (the
position of which is not important).
RESULT
Declares the column names and their data types for the result set of the external UDF. If the UDF is not
polymorphic, the data types of the columns must be valid SQL data types. If the result table in a UDF is
polymorphic, it is declared as RESULT ( TABLE ); the set of datums in the result implies the TABLE.
External UDFs can only have one result set, of type TABLE.
Note
A table UDF cannot have LONG VARBINARY or LONG VARCHAR data types in its result set, but a table
parameterized function (TPF) can have large object (LOB) data in its result set.
A TPF cannot produce LOB data, but can have columns in the result set as LOB data types. However,
the only way to get LOB data in the output is to pass a column from an input table to the output table.
The describe attribute EXTFNAPIV4_DESCRIBE_COL_VALUES_SUBSET_OF_INPUT allows this, as
illustrated in the sample file tpf_blob.cxx.
SQL SECURITY
Defines whether the procedure is executed as the INVOKER (the user who is calling the UDF), or as the
DEFINER (the user who owns the UDF). The default is DEFINER.
When SQL SECURITY INVOKER is specified, more memory is used because annotation must be done for
each user that calls the procedure. Also, when SQL SECURITY INVOKER is specified, name resolution is
done as the invoker as well. Therefore, care should be taken to qualify all object names (tables, procedures,
and so on) with their appropriate owner. For example, suppose user1 creates this procedure:
If user2 attempts to run this procedure and a table user2.table1 does not exist, a table lookup error results.
Additionally, if a user2.table1 does exist that table is used instead of the intended user1.table1. To prevent
this situation, qualify the table reference in the statement (user1.table1, instead of just table1).
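The scenario described above can be sketched as follows. This is an illustrative example only; the procedure name p1 is a placeholder, while user1 and table1 are the names used in the discussion above:

```sql
-- Created by user1. With SQL SECURITY INVOKER, the unqualified
-- reference to table1 resolves in the invoker's name space, so
-- user2 calling this procedure would look up user2.table1.
CREATE PROCEDURE user1.p1()
SQL SECURITY INVOKER
BEGIN
    SELECT * FROM table1;   -- safer: SELECT * FROM user1.table1
END
```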
EXTERNAL NAME
An external UDF must have an EXTERNAL NAME clause, which defines an interface to a function written in a
programming language such as C. The function is loaded by the database server into its address space.
The library name can include the file extension, which is typically .dll on Windows and .so on UNIX. In
the absence of the extension, the software appends the platform-specific default file extension for libraries.
This is a formal example:
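The following sketch shows the shape of an EXTERNAL NAME clause; my_udf, my_func, and my_library are placeholder names for the procedure, the C entry point, and its shared library:

```sql
-- The library extension (.dll/.so) may be omitted; the server
-- appends the platform default.
CREATE PROCEDURE my_udf( IN max_rows INT )
RESULT ( c1 INT, c2 CHAR(20) )
EXTERNAL NAME 'my_func@my_library'
```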
Remarks
You define table UDFs using the a_v4_extfn API. CREATE PROCEDURE statement reference information for
external procedures that do not use the a_v3_extfn or a_v4_extfn APIs is located in a separate topic.
CREATE PROCEDURE statement reference information for Java UDFs is located in a separate topic.
The CREATE PROCEDURE statement creates a procedure in the database. To create a procedure for
themselves, a user must have the CREATE PROCEDURE system privilege. To create a procedure for others, a
user must specify the owner of the procedure and must have either the CREATE ANY PROCEDURE or CREATE
ANY OBJECT system privilege. If the procedure contains an external reference, the user must have the CREATE
EXTERNAL REFERENCE system privilege in addition to previously mentioned system privileges, regardless of
who owns the procedure.
If a stored procedure returns a result set, it cannot also set output parameters or return a return value.
When referencing a temporary table from multiple procedures, a potential issue can arise if the temporary
table definitions are inconsistent and statements referencing the table are cached. Use caution when
referencing temporary tables within procedures.
You can use the CREATE PROCEDURE statement to create external table UDFs implemented in a different
programming language than SQL. However, be aware of the table UDF restrictions before creating external
UDFs.
The data type for a scalar parameter, a result column, and a column of a TABLE parameter must be a valid SQL
data type.
Parameter names must conform to the rules for other database identifiers such as column names. They must
have a valid SQL data type.
TPFs support a mix of scalar parameters and one or more TABLE parameters. Unless the UDF is polymorphic, a
TABLE parameter must define a schema for an input set of rows to be processed by the UDF. The definition of a
TABLE parameter includes column names and column data types.
The following example defines a schema with the two columns c1 and c2 of types INT and CHAR(20). Each row
processed by the UDF must be a tuple with two values. TABLE parameters, unlike scalar parameters, cannot
be assigned a default value:
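A declaration of this shape might look as follows; the procedure, parameter, and library names are placeholders:

```sql
-- The TABLE parameter defines the schema (c1 INT, c2 CHAR(20))
-- of the input rows. Note that only the scalar parameter can
-- take a DEFAULT; the TABLE parameter cannot.
CREATE PROCEDURE my_tpf(
    IN threshold INT DEFAULT 10,
    IN input_rows TABLE ( c1 INT, c2 CHAR(20) ) )
RESULT ( c1 INT, c2 CHAR(20) )
EXTERNAL NAME 'my_tpf_func@my_library'
```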
Privileges
Unless creating a temporary procedure, a user must have the CREATE PROCEDURE system privilege to create
a UDF for themselves. To create a UDF for others, they must specify the owner of the procedure and must have
either the CREATE ANY PROCEDURE or CREATE ANY OBJECT system privilege. If the procedure contains an
external reference, a user must also have the CREATE EXTERNAL REFERENCE system privilege, in addition to
the previously mentioned system privileges.
Standards
Creates a user-defined web client procedure that makes HTTP or SOAP requests to an HTTP server.
Syntax
<http-type-spec-string> :
HTTP[: { GET
| POST[:<MIME-type> ]
| PUT[:<MIME-type> ]
| DELETE
| HEAD
| OPTIONS } ]
<soap-type-spec-string> :
SOAP[: { RPC | DOC } ]
<parameter> :
<parameter-mode> :
IN
| OUT
| INOUT
<url-string> :
{ HTTP | HTTPS | HTTPS_FIPS }://[<user>:<password>@]<hostname>[:<port>][/<path>]
<option-list> :
HTTP( <http-option> [ ;<http-option> ...] )
| SOAP( <soap-option> [ ;<soap-option> ...] )
| REDIR( <redir-option> [ ;<redir-option> ...] )
<http-option> :
CHUNK={ ON | OFF | AUTO }
| EXCEPTIONS={ ON | OFF | AUTO }
| VERSION={ 1.0 | 1.1 }
| KTIMEOUT=<number-of-seconds>
<soap-option> :
OPERATION=<soap-operation-name>
<redir-option> :
COUNT=<count>
| STATUS=<status-list>
Parameters
OR REPLACE clause
Specifying CREATE OR REPLACE PROCEDURE creates a new procedure, or replaces an existing procedure
with the same name. This clause changes the definition of the procedure, but preserves existing privileges.
An error is returned if you attempt to replace a procedure that is already in use.
procedure-name
Parameter names must conform to the rules for other database identifiers such as column names. They
must have a valid SQL data type.
If a parameter has a default value, it need not be specified. Parameters with no default value must be
specified.
Parameters can be prefixed with one of the keywords IN, OUT, or INOUT. OUT and INOUT parameters are
only supported for SOAP procedures. If you do not specify one of these values, parameters are INOUT by
default. The keywords have the following meanings:
IN
The parameter is an expression that provides a value to the procedure.
datatype
The data type of the parameter. Set the data type explicitly, or specify the %TYPE or %ROWTYPE attribute
to set the data type to the data type of another object in the database. Use %TYPE to set it to the data type
of a column in a table or view. Use %ROWTYPE to set the data type to a composite data type derived from
a row in a table or view. However, defining the data type using a %ROWTYPE that is set to a table reference
variable (TABLE REF (<table-reference-variable>)%ROWTYPE) is not allowed.
Only SOAP requests support the transmission of typed data such as FLOAT, INT, and so on. HTTP requests
support the transmission of strings only, so you are limited to CHAR types.
RESULT clause
The RESULT clause is required to use the procedure in a SELECT statement. The RESULT clause must
return two columns. The first column contains HTTP response header, status, and response body
attributes, while the second column contains the values for these attributes. The RESULT clause must
specify two character data types (for example, VARCHAR or LONG VARCHAR). If the RESULT clause is not
specified, the default column names are Attribute and Value and their data types are LONG VARCHAR.
URL clause
Specifies the URI of the web service. The optional user name and password parameters provide a means of
supplying the credentials needed for HTTP basic authentication. HTTP basic authentication base-64
encodes the user and password information and passes it in the Authentication header of the HTTP
request. When specified in this way, the user name and password are passed unencrypted, as part of the
URL.
For procedures of type HTTP:GET, query parameters can be specified within the URL clause in addition to
being automatically generated from parameters passed to a procedure.
URL 'http://localhost/service?parm=1'
Specifying HTTPS_FIPS forces the system to use the FIPS-certified libraries. If HTTPS_FIPS is specified,
but no FIPS-certified libraries are present, libraries that are not FIPS-certified are used instead.
To use a certificate from the operating system certificate store, specify a URL beginning with https://.
TYPE clause
Specifies the format used when making the web service request. SOAP:RPC is used when SOAP is
specified or no TYPE clause is included. HTTP:POST is used when HTTP is specified.
The TYPE clause allows the specification of a MIME-type for HTTP:POST and HTTP:PUT types. When
HTTP:PUT is used, then a MIME-type must be specified. The <MIME-type> specification is used to set the
Content-Type request header and set the mode of operation to allow only a single call parameter to
populate the body of the request. Only zero or one parameter may remain when making a web service
stored procedure call after parameter substitutions have been processed. Calling a web service procedure
with a NULL value or no parameter (after substitutions) results in a request with no body and a content-
length of zero. Supported MIME types include:
● text/plain
● text/html
● text/xml
When no MIME-type is specified, parameter names and values (multiple parameters are permitted) are
URL encoded within the body of the HTTP request.
The keywords for the TYPE clause have the following meanings:
HTTP:GET
For example, the following request is produced when a client submits a request from the URL http://
localhost/WebServiceName?arg1=param1&arg2=param2:
HTTP:POST
For example, the following request is produced when a client submits a request from the URL http://
localhost/WebServiceName?arg1=param1&arg2=param2:
HTTP:PUT
HTTP:PUT is similar to HTTP:POST, but the HTTP:PUT type does not have a default media type.
The following example demonstrates how to configure a general purpose client procedure that uploads
data to a database server running the %IQDIRSAMP%\SQLAnywhere\HTTP\put_data.sql sample:
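A minimal sketch of such a PUT client procedure follows; the procedure name, parameter name, and URL are placeholders, and the text/plain media type is an assumption (HTTP:PUT requires some MIME-type):

```sql
-- The single remaining parameter populates the request body;
-- the MIME-type sets the Content-Type request header.
CREATE PROCEDURE PutData( data LONG VARCHAR )
URL 'http://localhost/put_data'
TYPE 'HTTP:PUT:text/plain'
```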
HTTP:DELETE
A web service client procedure can be configured to delete a resource located on a server. Specifying
the media type is optional.
HTTP:HEAD
The HEAD method is identical to a GET method but the server does not return a body. A media type
can be specified.
HTTP:OPTIONS
The OPTIONS method requests information about the communication options available for the
resource. A media type can be specified. This method allows Cross-Origin Resource Sharing (CORS).
SOAP:RPC
This type sets the Content-Type header to 'text/xml'. SOAP operations and parameters are
encapsulated in SOAP envelope XML documents.
SOAP:DOC
This type sets the Content-Type header to 'text/xml'. It is similar to the SOAP:RPC type but allows you
to send richer data types. SOAP operations and parameters are encapsulated in SOAP envelope XML
documents.
Specifying a MIME-type for the TYPE clause automatically sets the Content-Type header to that MIME-
type.
HEADER clause
When creating HTTP web service client procedures, use this clause to add, modify, or delete HTTP request
header entries. The specification of headers closely resembles the format specified in RFC2616 Hypertext
Transfer Protocol, HTTP/1.1, and RFC822 Standard for ARPA Internet Text Messages, including the fact
that only printable ASCII characters can be specified for HTTP headers, and they are case-insensitive.
Headers can be defined as <header-name>:<value-name> pairs. Each header must be delimited from its
value with a colon ( : ) and therefore cannot contain a colon. You can define multiple headers by delimiting
each pair with \n, \x0d\n, <LF> (line feed), or <CR><LF> (carriage return followed by a line feed).
Multiple contiguous white spaces within the header are converted to a single white space.
CERTIFICATE clause
Certificates are required only for requests that are either directed to an HTTPS server, or can be redirected
from a non-secure to a secure server. Only PEM formatted certificates are supported.
CLIENTPORT clause
Identifies the port number on which the HTTP client procedure communicates using TCP/IP. It is provided
for and recommended only for connections through firewalls that filter "outgoing" TCP/IP connections. You
can specify a single port number, ranges of port numbers, or a combination of both; for example,
CLIENTPORT '85,90-97'.
PROXY clause
Specifies the URI of a proxy server, for use when the client must access the network through a proxy. The
<proxy-string> is usually an HTTP or HTTPS url-string. This is site-specific information that you usually
need to obtain from your network administrator. This clause indicates that the procedure is to connect to
the proxy server and send the request through it to the web server; for example:
PROXY 'http://proxy.example.com'
SET clause
Specifies protocol-specific behavior options for HTTP, SOAP, and REDIR (redirects). Only one SET clause is
permitted. The following list describes the supported SET options. CHUNK, EXCEPTIONS, VERSION, and
KTIMEOUT apply to the HTTP protocol, OPERATION applies to the SOAP protocol, and COUNT and
STATUS apply to the REDIR option. REDIR options can be included with either HTTP or SOAP protocol
options.
CHUNK={ ON | OFF | AUTO }
(short form CH) This HTTP option allows you to specify whether to use chunking. Chunking allows
HTTP messages to be broken up into several parts. Possible values are ON (always chunk), OFF (never
chunk), and AUTO (chunk only if the contents, excluding auto-generated markup, exceeds 8196 bytes).
For example, the following SET clause enables chunking:
SET 'HTTP(CHUNK=ON)'
If the CHUNK option is not specified, the default behavior is AUTO. If a chunked request fails in AUTO
mode with a status of 505 HTTP Version Not Supported, or with 501 Not Implemented, or with
411 Length Required, the client retries the request without chunked transfer-coding.
Set the CHUNK option to OFF (never chunk) if the HTTP server does not support chunked transfer-
coded requests.
Since CHUNK mode is a transfer encoding supported starting in HTTP version 1.1, setting CHUNK to
ON requires that the version (VER) be set to 1.1, or not be set at all, in which case 1.1 is used as the
default version.
EXCEPTIONS={ ON | OFF | AUTO }
(short form EX) This HTTP option allows you to control status code handling. The default is ON.
When set to ON or AUTO, HTTP client procedures will return a result set for HTTP success status
codes (1XX and 2XX), and all other codes will raise the exception SQLE_HTTP_REQUEST_FAILED.
SET 'HTTP(EXCEPTIONS=AUTO)'
When set to OFF, HTTP client procedures will always return a result set, independent of the HTTP
status code. The result row with the word Status in the attribute column contains the HTTP status
code in the value column.
Exceptions that are not related to the HTTP status code (for example,
SQLE_UNABLE_TO_CONNECT_TO_HOST) will be raised when appropriate regardless of the
EXCEPTIONS setting.
VERSION={ 1.0 | 1.1 }
(short form VER) This HTTP option allows you to specify the version of the HTTP protocol that is used
for the format of the HTTP message. For example, the following SET clause sets the HTTP version to
1.1:
SET 'HTTP(VERSION=1.1)'
KTIMEOUT=<number-of-seconds>
(short form KTO) This HTTP option allows you to specify the keep-alive timeout criteria, permitting a
web client procedure to instantiate and cache a keep-alive HTTP/HTTPS connection for a period of
time. To cache an HTTP keep-alive connection, the HTTP version must be set to 1.1 and KTIMEOUT set
to a non-zero value. KTIMEOUT may be useful for HTTPS connections particularly, if you notice a
significant performance difference between HTTP and HTTPS connections. A database connection
can only cache a single keep-alive HTTP connection. Subsequent calls to a web client procedure using
the same URI reuse the keep-alive connection. Therefore, the executing web client call must have a URI
whose scheme, destination host and port match that of the cached URI, and the HEADER clause must
not specify Connection: close. When KTIMEOUT is not specified, or is set to zero, HTTP/HTTPS
connections are not cached.
OPERATION=soap-operation-name
(short form OP) This SOAP option allows you to specify the name of the SOAP operation, if it is
different from the name of the procedure you are creating. The value of OPERATION is analogous to
the name of a remote procedure call. For example, if you wanted to create a procedure called
accounts_login that calls a SOAP operation called login, you would specify something like the
following:
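A sketch of such a declaration is shown below; the URL and parameter names are placeholders, while accounts_login and login are the names used in the text above:

```sql
-- The procedure is named accounts_login, but the OPERATION
-- option makes it invoke the SOAP operation "login".
CREATE PROCEDURE accounts_login( id CHAR(128), password CHAR(32) )
URL 'http://localhost/accounts_service'
TYPE 'SOAP:RPC'
SET 'SOAP(OPERATION=login)'
```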
If the OPERATION option is not specified, the name of the SOAP operation must match the name of
the procedure you are creating.
COUNT=count
(short form CNT) This REDIR option allows you to control redirects. See STATUS below.
STATUS=status-list
(short form STAT) This REDIR option allows you to control redirects. HTTP response status codes such
as 302 Found and 303 See Other are used to redirect web applications to a new URI, particularly after
an HTTP POST has been performed. For example, a client request could be:
In response, the client would send another HTTP request to the new URI. The REDIR options allow you
to control the maximum number of redirections allowed and which HTTP response status codes to
automatically redirect.
The default redirection limit <count> is 5. By default, an HTTP client procedure will automatically
redirect in response to all HTTP redirection status codes (301, 302, 303, 307). To disallow all
redirection status codes, use SET 'REDIR(COUNT=0)'. In this mode, a redirection response does not
result in an error (SQLE_HTTP_REQUEST_FAILED). Instead, a result set is returned with the HTTP
status and response headers. This permits a caller to conditionally reissue the request based on the
URI contained in the Location header.
A web service procedure specifying a POST HTTP method which receives a 303 See Other status
issues a redirect request using the GET HTTP method.
The Location header can contain either an absolute path or a relative path. The HTTP client procedure
will handle either. The header can also include query parameters and these are forwarded to the
redirected location. For example, if the header contained parameters such as the following, the
subsequent GET or POST will include these parameters.
Location: alternate_service?a=1&b=2
The following example shows how several option settings are combined in the same SET clause:
The following example shows the use of short forms with uppercase and lowercase letters.
SOAPHEADER clause
(SOAP format only) When declaring a SOAP web service as a procedure, use this clause to specify one or
more SOAP request header entries. A SOAP header can be declared as a static constant, or can be
dynamically set using the parameter substitution mechanism (declaring IN, OUT, or INOUT parameters for
hd1, hd2, and so on). A web service procedure can define one or more IN mode substitution parameters,
and a single INOUT or OUT substitution parameter.
The following example illustrates how a client can specify the sending of several header entries using
parameter substitution and receiving the response SOAP header data:
NAMESPACE clause
(SOAP format only) This clause identifies the method namespace usually required for both SOAP:RPC and
SOAP:DOC requests. The SOAP server handling the request uses this namespace to interpret the names of
the entities in the SOAP request message body. The namespace can be obtained from the WSDL (Web
You can specify a variable name for <namespace-string>. If the variable is NULL, the namespace
property is ignored.
Remarks
Parameter values are passed as part of the request. The syntax used depends on the type of request. For
HTTP:GET, the parameters are passed as part of the URL; for HTTP:POST requests, the values are placed in the
body of the request. Parameters to SOAP requests are always bundled in the request body.
You can create or replace a web services client procedure. You can use PROC as a synonym for PROCEDURE.
For SOAP requests, the procedure name is used as the SOAP operation name by default. For more information,
see the SET clause.
For required parameters that accept variable names, an error is returned if one of the following conditions is
true:
Privileges
You must have the CREATE PROCEDURE system privilege to create procedures owned by you.
You must have the CREATE ANY PROCEDURE or CREATE ANY OBJECT system privilege to create procedures
owned by others.
To replace an existing procedure, you must own the procedure or have one of the following:
Side effects
Automatic commit.
Example
1. The following example creates a web service client procedure named FtoC.
2. The following example creates a secure web service client procedure named
SecureSendWithMimeType that uses a certificate stored in the database.
3. The following example creates a procedure named SecureSendWithMimeType that uses a certificate
from the operating system certificate store:
4. The following example creates a procedure named SecureSendWithMimeType that verifies that the
certificate myrootcert.crt is at the root of the database server's certificate's signing chain, but does no
other checking:
5. The following statement creates a procedure named FtoC that uses a variable in the NAMESPACE
clause:
6. The following statement causes a POST request to the URL 'http://localhost/post_data' with the body
of the request equal to the JSON array '[0,1,2]' and the Content-Type of the request set to
'application/json'.
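A plausible reconstruction of example 6 is sketched below; the procedure and parameter names are placeholders, while the URL and media type come from the description above:

```sql
-- The single remaining parameter populates the request body;
-- the MIME-type in the TYPE clause sets Content-Type.
CREATE PROCEDURE PostJSON( body LONG VARCHAR )
URL 'http://localhost/post_data'
TYPE 'HTTP:POST:application/json'
```

A call such as CALL PostJSON('[0,1,2]') would then send the array as the request body.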
Creates a new role, extends an existing user to act as a role, or manages role administrators on a role.
Syntax
Parameters
role_name
Unless you are using the OR REPLACE clause, <role_name> cannot already exist in the database.
OR REPLACE clause
If <role_name> does not already exist, a new user-defined role is created. If it does exist, all current
administrators are replaced by those specified in the <admin_name
[..]> clause as follows:
● All existing role administrators granted the WITH ADMIN OPTION not included on the new role
administrators list become members of the role with no administrative rights on the role.
● All existing role administrators granted the WITH ADMIN ONLY OPTION not included on the new role
administrators list are removed as members of the role.
When using the OR REPLACE clause, if an existing role administrator is included on the new role
administrators list, they retain their original administrative rights if those are higher than the
replacement rights. For example, User A is an existing role administrator originally granted WITH ADMIN
rights on the role. New role administrators are granted WITH ADMIN ONLY rights. If User A is included on
this list, User A retains the higher WITH ADMIN rights.
FOR USER user_id
When using the FOR USER clause without the OR REPLACE, <user_id> must be the name of an existing
user that currently does not have the ability to act as a role.
admin_name
WITH ADMIN
Each <admin_name> specified is granted administrative privileges over the role in addition to all
underlying system privileges. The WITH ADMIN clause is not valid when SYS_MANAGE_ROLES_ROLE is
included on the list.
WITH ADMIN ONLY
Each <admin_name> specified is granted administrative privileges only over the role, not the underlying
system privileges.
SYS_MANAGE_ROLES_ROLE
Allows global role administrators to administer the role. Can be specified in conjunction with the WITH
ADMIN ONLY clause.
Remarks
If you specify role administrators (<admin_name>), but do not include the global role administrator
(SYS_MANAGE_ROLES_ROLE), global role administrators will be unable to manage the new role. For this
reason, do not specify role administrators during the creation process, but instead use the OR REPLACE clause
to add them afterwards.
If you do not specify an ADMIN clause, the default WITH ADMIN ONLY clause is used and the default
administrator is the global roles administrator (SYS_MANAGE_ROLES_ROLE).
When replacing role administrators, if the role has a global role administrator, it must be included on the new
role administrators list or it is removed from the role.
Privileges
To use the OR REPLACE clause requires the MANAGE ROLES system privilege along with administrative rights
over the role being replaced.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example creates the role Sales. Only global role administrators can administer the role.
● The following example extends the existing user Jane to act as a role.
● The following example creates the role Finance with Mary and Jeff as role administrators with
administrative rights to the role. Global role administrators cannot administer this role.
● The following example creates the role Marketing with Mary and Jeff as role administrators. Global role
administrators can also manage this role.
● In the following example, Finance is an existing role with Harry and Susan as role administrators with
administrative rights. You want to keep Susan as an administrator, replace Harry, and add the global role
administrator. The new role administrators will have administrative rights only. This statement keeps
Related Information
Creates a schema, which is a collection of tables, views, and their associated permissions, for
a database user.
Syntax
Remarks
The <userid> must be the user ID of the current connection. You cannot create a schema for another user.
The user ID is not case-sensitive.
If any of the statements in the CREATE SCHEMA statement fail, the entire CREATE SCHEMA statement is rolled
back.
The CREATE SCHEMA statement is simply a way to collect individual CREATE and GRANT statements into one
operation. There is no SCHEMA database object created in the database, and to drop the objects you must use
individual DROP TABLE or DROP VIEW statements. To revoke permissions, use a REVOKE statement for each
permission granted.
Note
Individual CREATE or GRANT statements are not separated by statement delimiters. The statement delimiter
marks the end of the CREATE SCHEMA statement itself.
Creating more than one schema for a user is not recommended and might not be supported in future releases.
Privileges
Requires the CREATE ANY OBJECT system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
Automatic commit
Standards
Related Information
Creates or replaces a semaphore and establishes the initial value for its counter. A semaphore is a locking
mechanism that uses a counter to communicate and control the availability of a resource such as an external
library or procedure.
Syntax
owner
The owner of the semaphore. <owner> can also be specified using an indirect identifier (for example,
[@<variable-name>]).
semaphore-name
The name of the semaphore. Specify a valid identifier in the CHAR database collation. <semaphore-
name> can also be specified using an indirect identifier (for example, [@<variable-name>]).
OR REPLACE clause
Use this clause to overwrite (update) the definition of a permanent semaphore of the same name, if one
exists.
If the OR REPLACE clause is specified, and a semaphore with this name is in use at the time, then the
statement returns an error.
You cannot use this clause with the TEMPORARY or IF NOT EXISTS clauses.
TEMPORARY clause
Use this clause to create a temporary semaphore, which persists only until the connection that created
it is terminated or until it is explicitly dropped.
IF NOT EXISTS clause
Use this clause to create a semaphore only if it doesn't already exist. If a semaphore exists with the same
name and same lifespan (permanent or temporary), then nothing happens and no error is returned.
START WITH clause
Use this clause to specify the initial value for the semaphore counter. If this clause is not specified, then
<initial-count> is set to 0.
<initial-count> can be specified using a variable (for example, START WITH @initial-count).
If you set <initial-count> to NULL, or if it is set to a variable and the variable value is NULL, the
behavior is equivalent to not specifying the clause.
Remarks
The CREATE SEMAPHORE statement creates a semaphore and establishes a counter for it. Each time a
NOTIFY SEMAPHORE statement is executed, the counter for the associated semaphore is incremented. Each
time a WAITFOR SEMAPHORE statement is executed, and assuming the current count is a positive integer, the
counter for the associated semaphore is decremented.
Permanent and temporary mutexes and semaphores share the same namespace, therefore you cannot create
two of these objects with the same name. Use of the OR REPLACE and IF NOT EXISTS clauses can therefore
inadvertently affect an existing mutex or semaphore of the same name.
Permanent semaphore definitions persist across database restarts. However, their count returns to
<initial-count> after a restart.
A temporary semaphore persists until the connection that created it is terminated, or until an explicit DROP
operation is performed. If another connection is waiting for a temporary semaphore and the connection that
created the temporary semaphore is terminated, then an error is returned to the waiting connection.
When replacing (OR REPLACE clause) a permanent semaphore, the old semaphore is deleted, and all
connections waiting for the semaphore are notified.
If the OR REPLACE clause is specified, and a permanent semaphore with that name exists and connections are
blocked waiting for the semaphore, the semaphore is still replaced. In this case, the waiting connections are
unblocked and an error is returned to them indicating that the semaphore has been dropped. There is one
exception however. If the replacement semaphore definition has identical settings, there is no impact to waiting
connections.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
Standards
Example
The following statement creates a semaphore called license_counter and sets its counter to 3:
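Based on the START WITH clause described above, the statement would look like this (a sketch, since the original example block is missing here):

```sql
-- Counter starts at 3; each WAITFOR SEMAPHORE decrements it,
-- each NOTIFY SEMAPHORE increments it.
CREATE SEMAPHORE license_counter START WITH 3
```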
Creates a sequence that can be used to generate primary key values that are unique across multiple tables,
and for generating default values for a table. This statement applies to SAP IQ catalog store tables only.
Syntax
Parameters
OR REPLACE clause
Specifying OR REPLACE creates a new sequence, or replaces an existing sequence with the same name. If
you do not use the OR REPLACE clause, an error is returned if you specify the name of a sequence that
already exists for the current user.
INCREMENT BY clause
Defines the amount the next sequence value is incremented from the last value assigned. The default is 1.
Specify a negative value to generate a descending sequence. An error is returned if the INCREMENT BY
value is 0.
START WITH clause
Defines the starting sequence value. If you do not specify a value for the START WITH clause, MINVALUE is
used for ascending sequences and MAXVALUE is used for descending sequences. An error is returned if
the START WITH value is beyond the range specified by MINVALUE or MAXVALUE.
MAXVALUE clause
Defines the largest value generated by the sequence. The default is 2^63-1. An error is returned if
MAXVALUE is greater than 2^63-1 or less than -(2^63-1).
CACHE clause
Specifies the number of preallocated sequence values that are kept in memory for faster access. When the
cache is exhausted, the sequence cache is repopulated and a corresponding entry is written to the
transaction log. At checkpoint time, the current value of the cache is forwarded to the ISYSSEQUENCE
system table. The default is 100.
CYCLE clause
Specifies whether values should continue to be generated after the maximum or minimum value is
reached.
The default is NO CYCLE, which returns an error once the maximum or minimum value is reached.
Remarks
A sequence is a database object that allows the automatic generation of numeric values. A sequence is not
bound to a specific or unique table column.
You control the behavior when the sequence runs out of values using the CYCLE clause.
If a sequence is increasing and it exceeds the MAXVALUE, MINVALUE is used as the next sequence value if
CYCLE is specified. If a sequence is decreasing and it falls below MINVALUE, MAXVALUE is used as the next
sequence value if CYCLE is specified. If CYCLE is not specified, an error is returned.
Privileges
You must have the CREATE ANY SEQUENCE or CREATE ANY OBJECT system privilege to create sequences.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
None
Standards
Sequences comprise SQL Language Feature T176. The software does not allow optional specification of the
sequence data type; this behavior can be achieved with a CAST when using the sequence. The following are
vendor extensions:
● CACHE clause
● OR REPLACE syntax
● CURRVAL expression
● Use of sequences in DEFAULT expressions
Example
The following example creates a sequence named Test that starts at 4, increments by 2, does not cycle, and
caches 15 values at a time:
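A statement matching that description might look like this (a sketch; clause order follows the syntax above):

```sql
CREATE SEQUENCE Test
    START WITH 4
    INCREMENT BY 2
    NO CYCLE
    CACHE 15;
```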
Related Information
Syntax
<server-class> ::=
{ SAODBC
| ASEODBC
| DB2ODBC
| MSSODBC
| ORAODBC
| ODBC }
<connection-info> ::=
{ <machine-name>:<port-number> [ /<dbname> ] | <data-source-name> }
Parameters
USING
If an ODBC-based server class is used, the USING clause is the <data-source-name>, which is the ODBC
Data Source Name.
READ ONLY
Specifies that the remote server is a read-only data source. Any update request is rejected by SAP IQ.
Remarks
Privileges
Requires the SERVER OPERATOR system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side effects: automatic commit.
Standards
Examples
The following example creates a remote server for the Oracle server named oracle723. Its ODBC Data Source
Name is “oracle723”:
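A sketch of such a statement, using the ORAODBC server class described above:

```sql
CREATE SERVER oracle723
    CLASS 'ORAODBC'
    USING 'oracle723';
```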
Related Information
Syntax
<service-type-string> ::=
{ 'RAW'
| 'HTML'
| 'XML'
| 'SOAP'
| 'DISH' }
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
service-name-string
Web service names may be any sequence of alphanumeric characters or "/", "-", "_", ".", "!", "~", "*", "'", "(",
or ")", except that the name cannot begin with a slash (/) and cannot contain two or more consecutive
slash characters.
AUTHORIZATION
Determines whether users must specify a user name and password when connecting to the service. The
default value is ON.
● If authorization is OFF, the AS clause is required and a single user must be identified by the USER
clause. All requests are run using that user’s account and permissions.
● If authorization is ON, all users must provide a user name and password. Optionally, you can limit the
users that are permitted to use the service by providing a user or role name using the USER clause. If
the user name is NULL, all known users can access the service.
Run production systems with authorization turned on. Grant permission to use the service by adding users
to a role.
SECURE
Indicates whether unsecure connections are accepted. ON indicates that only HTTPS connections are to
be accepted. Service requests received on the HTTP port are automatically redirected to the HTTPS port. If
set to OFF, both HTTP and HTTPS connections are accepted. The default value is OFF.
USER
If authorization is disabled, this parameter becomes mandatory and specifies the user ID used to execute
all service requests. If authorization is enabled (the default), this optional clause identifies the user or role
permitted access to the service. The default value is NULL, which grants access to all users.
URL
Determines whether URI paths are accepted and, if so, how they are processed. OFF indicates that nothing
must follow the service name in a URI request. ON indicates that the remainder of the URI is interpreted as
the value of a variable named <url>. ELEMENTS indicates that the remainder of the URI path is to be split
at the slash characters into a list of up to 10 elements. The values are assigned to variables named url
with a numeric suffix (url1, url2, and so on).
USING
Applies only to DISH services. The parameter specifies a name prefix. Only SOAP services whose names
begin with this prefix are handled.
service-type-string
Identifies the type of the service. The type must be one of the listed service types. There is no default value.
● RAW – Sends the result set to the client without any further formatting. You can produce formatted
documents by generating the required tags explicitly within your procedure.
● HTML – Formats the result set of a statement or procedure into an HTML document that contains a
table.
● XML – Assumes the result set is already in XML format. If it is not, it is automatically converted to
XML RAW format.
● SOAP – Formats the result set as a Simple Object Access Protocol (SOAP) response. The request must
be a valid SOAP request. For more information about the SOAP standards, see www.w3.org/TR/SOAP
.
● DISH – A Determine SOAP Handler (DISH) service acts as a proxy for one or more SOAP services. In
use, it acts as a container that holds and provides access to a number of SOAP services. A Web
Services Description Language (WSDL) file is automatically generated for each of the included SOAP
services. The included SOAP services are identified by a common prefix, which must be specified in
the USING clause.
statement
If the statement is NULL, the URI must specify the statement to be executed. Otherwise, the specified
SQL statement is the only one that can be executed through the service. The statement is mandatory for
SOAP services, and ignored for DISH services. The default value is NULL.
All services that are run in production systems must define a statement. The statement can be NULL only
if authorization is enabled.
Remarks
(back to top)
The CREATE SERVICE statement causes the database server to act as a web server. A new entry is created in
the SYSWEBSERVICE system table.
In a multiplex, execute CREATE SERVICE on both the coordinator and each secondary node that will act as a
web server.
Privileges
(back to top)
Standards
(back to top)
Examples
(back to top)
To set up a Web server quickly, start a database server with the -xs switch, then execute
this statement:
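A sketch of such a service definition (the service name tables matches the URL used below; the DBA user and the SYS.SYSTAB catalog view are assumptions):

```sql
CREATE SERVICE tables
    TYPE 'HTML'
    AUTHORIZATION OFF
    USER DBA
    AS SELECT * FROM SYS.SYSTAB;
```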
After executing this statement, use any Web browser to open the URL http://localhost/tables.
Related Information
Syntax
<srs-attribute> ::=
SRID <srs-id>
| DEFINITION { <definition-string> | NULL }
| ORGANIZATION { <organization-name> IDENTIFIED BY <organization-srs-id> | NULL }
<grid-size> ::=
DOUBLE : usually between 0 and 1
<axis-order> ::=
{ 'x/y/z/m' | 'long/lat/z/m' | 'lat/long/z/m' }
<polygon-format> ::=
{ 'CounterClockWise' | 'Clockwise' | 'EvenOdd' }
<storage-format> ::=
{ 'Internal' | 'Original' | 'Mixed' }
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
OR REPLACE
Specifying OR REPLACE creates the spatial reference system if it does not already exist in the database,
and replaces it if it does exist. An error is returned if you attempt to replace a spatial reference system
while it is in use. An error is also returned if you attempt to replace a spatial reference system that already
exists in the database without specifying the OR REPLACE clause.
IF NOT EXISTS
Specifying CREATE SPATIAL REFERENCE IF NOT EXISTS checks to see if a spatial reference system by
that name already exists. If it does not exist, the database server creates the spatial reference system. If it
does exist, no further action is performed and no error is returned.
IDENTIFIED BY
If the IDENTIFIED BY clause is not specified, then the SRID defaults to the <organization-srs-id>
defined by either the ORGANIZATION clause or the DEFINITION clause. If neither clause defines an
<organization-srs-id> that could be used as a default SRID, an error is returned.
When the spatial reference system is based on a well known coordinate system, but has a different
geodesic interpretation, set the srs-id value to be 1000000000 (one billion) plus the well known value. For
example, the SRID for a planar interpretation of the geodetic spatial reference system WGS 84 (ID 4326)
would be 1000004326.
With the exception of SRID 0, spatial reference systems provided by SAP IQ that are not based on well
known systems are given a SRID of 2000000000 (two billion) and above. The range of SRID values from
2000000000 to 2147483647 is reserved by SAP IQ and you should not create SRIDs in this range.
To reduce the possibility of choosing a SRID that is reserved by a defining authority such as OGC or by
other vendors, you should not choose a SRID in the range 0 - 32767 (reserved by EPSG), or in the range
2147483547 - 2147483647.
Also, since the SRID is stored as a signed 32-bit integer, the number cannot exceed 2^31-1, or 2147483647.
DEFINITION
Set, or override, default coordinate system settings. If any attribute is set in a clause other than the
DEFINITION clause, it takes the value specified in the other clause regardless of what is specified in the
DEFINITION clause.
<definition-string> is a string in the Spatial Reference System Well Known Text syntax as defined by
SQL/MM and OGC. For example, the following query returns the definition for WGS 84:
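A query along these lines returns the stored definition (assuming the SQL/MM ST_SPATIAL_REFERENCE_SYSTEMS view; WGS 84 is SRID 4326):

```sql
SELECT definition
FROM ST_SPATIAL_REFERENCE_SYSTEMS
WHERE srs_id = 4326;
```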
In Interactive SQL, if you double-click the value returned, an easier to read version of the value appears.
When the DEFINITION clause is specified, definition-string is parsed and used to choose default values for
attributes. For example, definition-string may contain an AUTHORITY element that defines the
organization-name and <organization-srs-id>.
Parameter values in definition-string are overridden by values explicitly set using the SQL statement
clauses. For example, if the ORGANIZATION clause is specified, it overrides the value for ORGANIZATION in
<definition-string>.
ORGANIZATION
Information about the organization that created the spatial reference system that the spatial reference
system is based on.
TRANSFORM DEFINITION
A description of the transform to use for the spatial reference system. Currently, only the PROJ.4 transform
is supported. The transform definition is used by the ST_Transform method when transforming data
between spatial reference systems. Some transforms may still be possible even if there is no transform-
definition-string defined.
ANGULAR UNIT OF MEASURE
The angular unit of measure for the spatial reference system. The value you specify must match an angular
unit of measure defined in the ST_UNITS_OF_MEASURE system table.
If this clause is not specified, and is not defined in the DEFINITION clause, the default is DEGREE for
geographic spatial reference systems and NULL for non-geographic spatial reference systems.
The angular unit of measure must be non-NULL for geographic spatial reference systems and it must be
NULL for non-geographic spatial reference systems.
To add predefined units of measure to the database, use the sa_install_feature system procedure.
To add custom units of measure to the database, use the CREATE SPATIAL UNIT OF MEASURE statement.
TYPE
Control how the SRS interprets lines between points. For geographic spatial reference systems, the TYPE
clause can specify either ROUND EARTH (the default) or PLANAR. The ROUND EARTH model interprets
lines between points as great elliptic arcs. Given two points on the surface of the Earth, a plane is selected
that intersects the two points and the center of the Earth. This plane intersects the Earth, and the line
between the two points is the shortest distance along this intersection.
For two points that lie directly opposite each other, there is not a single unique plane that intersects the two
points and the center of the Earth. Line segments connecting these anti-podal points are not valid and give
an error in the ROUND EARTH model.
The ROUND EARTH model treats the Earth as a spheroid and selects lines that follow the curvature of the
Earth. In some cases, it may be necessary to use a planar model where a line between two points is
interpreted as a straight line in the equirectangular projection where x=long, y=lat.
In the following example, the blue line shows the line interpretation used in the ROUND EARTH model and
the red line shows the corresponding PLANAR model.
For non-geographic SRSs, the type must be PLANAR (and that is the default if the TYPE clause is not
specified and either the DEFINITION clause is not specified or it uses a non-geographic definition).
COORDINATE
The bounds on the spatial reference system's dimensions. coordinate-name is the name of the coordinate
system used by the spatial reference system. For non-geographic coordinate systems, coordinate-name
can be x, y, or m. For geographic coordinate systems, coordinate-name can be LATITUDE, LONGITUDE, z,
or m.
Specify UNBOUNDED to place no bounds on the dimensions. Use the BETWEEN clause to set low and high
bounds.
The X and Y coordinates must have associated bounds. For geographic spatial reference systems, the
longitude coordinate is bounded between -180 and 180 degrees and the latitude coordinate is bounded
between -90 and 90 degrees by default, unless the COORDINATE clause overrides these settings. For non-
geographic spatial reference systems, the CREATE statement must specify bounds for both X and Y
coordinates.
LATITUDE and LONGITUDE are used for geographic coordinate systems. The bounds for LATITUDE and
LONGITUDE default to the entire Earth, if not specified.
ELLIPSOID
The values to use for representing the Earth as an ellipsoid for spatial reference systems of type ROUND
EARTH. If the DEFINITION clause is present, it can specify an ellipsoid definition. If the ELLIPSOID clause is
specified, it overrides this default ellipsoid.
The Earth is not a perfect sphere because the rotation of the Earth causes a flattening so that the distance
from the center of the Earth to the North or South pole is less than the distance from the center to the
equator. For this reason, the Earth is modeled as an ellipsoid with different values for the semi-major axis
(distance from center to equator) and semi-minor axis (distance from center to the pole). It is most
common to define an ellipsoid using the semi-major axis and the inverse flattening, but it can instead be
defined using the semi-major and semi-minor axes.
SAP IQ uses the ellipsoid definition when computing distance in geographic spatial reference systems.
SNAP TO GRID
For flat-Earth (planar) spatial reference systems, use the SNAP TO GRID clause to define the size of the grid
SAP IQ uses when performing calculations. By default, SAP IQ selects a grid size so that 12 significant
digits can be stored at all points in the space bounds for X and Y. For example, if a spatial reference system
bounds X between -180 and 180 and Y between -90 and 90, then a grid size of 0.000000001 (1E-9) is
selected.
TOLERANCE
For flat-Earth (planar) spatial reference systems, use the TOLERANCE clause to specify the precision to use
when comparing points. If the distance between two points is less than tolerance-distance, the two points
are considered equal. Setting tolerance-distance allows you to control the tolerance for imprecision in the
input data or limited internal precision. By default, tolerance-distance is set to be equal to grid-size.
POLYGON FORMAT
Internally, SAP IQ interprets polygons by looking at the orientation of the constituent rings. As one travels a
ring in the order of the defined points, the inside of the polygon is on the left side of the ring. The same
rules are applied in PLANAR and ROUND EARTH spatial reference systems.
The interpretation used by SAP IQ is a common but not universal interpretation. Some products use the
exact opposite orientation, and some products do not rely on ring orientation to interpret polygons. The
POLYGON FORMAT clause can be used to select a polygon interpretation that matches the input data, as
needed. The following values are supported:
● CounterClockwise – input follows SAP IQ's internal interpretation: the inside of the polygon is on the
left side while following ring orientation.
● Clockwise – input follows the opposite of SAP IQ's approach: the inside of the polygon is on the right
side while following ring orientation.
● EvenOdd – (default) the orientation of rings is ignored and the inside of the polygon is instead
determined by looking at the nesting of the rings, with the exterior ring being the largest ring and
interior rings being smaller rings inside this ring. A ray is traced from a point within the rings, radiating
outward and crossing all rings. If the number of the ring being crossed is even, it is an outer ring; if it is
odd, it is an inner ring.
STORAGE FORMAT
Control what is stored when spatial data is loaded into the database. Possible values are:
● Internal – SAP IQ stores only the normalized representation. Specify this when the original input
characteristics do not need to be reproduced. This is the default for planar spatial reference systems
(TYPE PLANAR).
● Original – SAP IQ stores only the original representation. The original input characteristics can be
reproduced, but all operations on the stored values must repeat normalization steps, possibly slowing
down operations on the data.
● Mixed – SAP IQ stores the normalized representation and, where the original representation differs,
the original as well. This preserves the original input characteristics while avoiding repeated
normalization for most operations.
Remarks
(back to top)
For a geographic spatial reference system, you can specify both a LINEAR and an ANGULAR unit of measure;
for a non-geographic system, you specify only a LINEAR unit of measure. The LINEAR unit of measure is used
for computing distance between points and areas. The ANGULAR unit of measure tells how the angular
latitude/longitude are interpreted and is NULL for projected coordinate systems, non-NULL for geographic
coordinate systems.
When working with data that is being synchronized with a non-SQL Anywhere database, STORAGE FORMAT
should be set to either 'Original' or 'Mixed' so that the original characteristics of the data can be preserved.
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
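As a sketch only, a planar interpretation of WGS 84 could be created using the SRID convention described above (one billion plus the well-known ID); a real definition would typically also supply a DEFINITION string:

```sql
CREATE SPATIAL REFERENCE SYSTEM "WGS 84 (planar)"
    IDENTIFIED BY 1000004326
    TYPE PLANAR;
```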
Related Information
Syntax
Parameters
OR REPLACE
Specifying OR REPLACE replaces the definition of an existing unit of measure with the same name;
otherwise, the unit of measure is created.
TYPE
Defines whether the unit of measure is used for angles (ANGULAR) or distances (LINEAR).
CONVERT USING
The conversion factor for the spatial unit relative to the base unit. For linear units, the base unit is METRE.
For angular units, the base unit is RADIAN.
Remarks
The CONVERT USING clause is used to define how to convert a measurement in the defined unit of measure to
the base unit of measure (radians or meters). The measurement is multiplied by the supplied conversion factor
to get a value in the base unit of measure. For example, a measurement of 512 millimeters would be multiplied
by a conversion factor of 0.001 to get a measurement of 0.512 meters.
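For example, a hypothetical millimetre unit matching the conversion described above might be defined as:

```sql
CREATE SPATIAL UNIT OF MEASURE "millimetre"
    TYPE LINEAR
    CONVERT USING 0.001;
```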
Spatial reference systems always include a linear unit of measure to be used when calculating distances
(ST_Distance or ST_Length), or area. For example, if the linear unit of measure for a spatial reference system is
miles, then the area unit used is square miles. In some cases, spatial methods accept an optional parameter
that specifies the linear unit of measure to use. For example, if the linear unit of measure for a spatial reference
system is in miles, you could retrieve the distance between two geometries in meters by using the optional
parameter 'metre'.
For projected coordinate systems, the X and Y coordinates are specified in the linear unit of the spatial
reference system. For geographic coordinate systems, the latitude and longitude are specified in the angular
units of measure associated with the spatial reference system. In many cases, this angular unit of measure is
degrees but any valid angular unit of measure can be used.
You can use the sa_install_feature system procedure to add predefined units of measure to your database.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Related Information
Syntax
…[ IN <dbspace-name> ]
…[ ON COMMIT { DELETE | PRESERVE } ROWS ]
[ AT <location-string> ]
[PARTITION BY
<range-partitioning-scheme>
| <hash-partitioning-scheme>
| <composite-partitioning-scheme> ]
<column-definition> ::=
<column-name> <data-type>
[ [ NOT ] NULL ]
[ DEFAULT <default-value> | IDENTITY ]
[ PARTITION | SUBPARTITION ( <partition-name> IN <dbspace-name>
[ , ... ] ) ]
<default-value> ::=
<special-value>
| <string>
| <global variable>
| [ - ] <number>
| ( <constant-expression> )
| <built-in-function>( <constant-expression> )
| AUTOINCREMENT
| CURRENT DATABASE
| CURRENT REMOTE USER
| NULL
| TIMESTAMP
| LAST USER
<column-constraint> ::=
[ CONSTRAINT <constraint-name> ] {
{ UNIQUE
| PRIMARY KEY
| REFERENCES <table-name> [ ( <column-name> ) ] [ <action> ]
}
[ IN <dbspace-name> ]
| CHECK ( <condition> )
| IQ UNIQUE ( <integer> )
}
<table-constraint> ::=
[ CONSTRAINT <constraint-name> ]
{ { UNIQUE ( <column-name> [ , <column-name> ] … )
| PRIMARY KEY ( <column-name> [ , <column-name> ] … )
}
[ IN <dbspace-name> ]
| <foreign-key-constraint>
| CHECK ( <condition> )
| IQ UNIQUE ( <integer> ) }
<foreign-key-constraint> ::=
FOREIGN KEY [ <role-name> ] [ ( <column-name> [ , <column-name> ] … ) ]
…REFERENCES <table-name> [ ( <column-name> [ , <column-name> ] … ) ]
…[ <actions> ] [ IN <dbspace-name> ]
<actions> ::=
[ ON { UPDATE | DELETE } RESTRICT ]
<location-string> ::=
{ <remote-server-name>. [ <db-name> ].[ <owner> ].<object-name>
| <remote-server-name>; [ <db-name> ]; [ <owner> ];<object-name> }
<range-partitioning-scheme> ::=
RANGE ( <partition-key> ) ( <range-partition-decl> [,<range-partition-
decl> ... ] )
<range-partition-decl> ::=
VALUES <= ( {<constant-expr>
| MAX } [ , { <constant-expr>
| MAX }]... )
[ IN <dbspace-name> ]
<hash-partitioning-scheme> ::=
HASH ( <partition-key> [ , <partition-key>, … ] )
<composite-partitioning-scheme> ::=
<hash-partitioning-scheme> SUBPARTITION <range-partitioning-scheme>
Parameters
ENABLE RLV STORE
Registers this table with the RLV store for real-time in-memory updates. Not supported for IQ temporary
tables. This value overrides the value of the database option BASE_TABLES_IN_RLV. In a multiplex, the
RLV store can only be enabled on the coordinator.
IN
Specify SYSTEM with this clause to put either a permanent or temporary table in the catalog store. Specify
IQ_SYSTEM_TEMP to store temporary user objects (tables, partitions, or table indexes) in
IQ_SYSTEM_TEMP or, if the TEMP_DATA_IN_SHARED_TEMP option is set to 'ON' and the IQ_SHARED_TEMP
dbspace contains RW files, in IQ_SHARED_TEMP (you cannot specify the IN clause with
IQ_SHARED_TEMP). All other use of the IN clause is ignored. By default, all permanent tables are placed in
the main IQ store, and all temporary tables are placed in the temporary IQ store. Global temporary and
local temporary tables can never be in the IQ store.
A BIT data type column cannot be explicitly placed in a dbspace; dbspace placement clauses are not
supported for BIT data types.
ON COMMIT
Allowed for temporary tables only. By default, the rows of a temporary table are deleted on COMMIT.
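A sketch of a temporary table that keeps its rows across commits (the table and column names are illustrative):

```sql
CREATE GLOBAL TEMPORARY TABLE session_scratch (
    k INT,
    v VARCHAR(100)
) ON COMMIT PRESERVE ROWS;
```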
AT
Creates a proxy table that maps to a remote location specified by the location-string clause. Proxy table
names must be 30 characters or less. The AT clause supports semicolon (;) delimiters. If a semicolon is
present anywhere in the location-string clause, the semicolon is the field delimiter. If no semicolon is
present, a period is the field delimiter. This allows file names and extensions to be used in the database and
owner fields.
Semicolon field delimiters are used primarily with server classes not currently supported; however, you can
also use them in situations where a period would also work as a field delimiter. For example, this statement
maps the table proxy_a to the SAP SQL Anywhere database mydb on the remote server myasa:
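A sketch of such a statement, using semicolon delimiters in the location string (the column list, the empty owner field, and the remote object name a are assumptions):

```sql
CREATE TABLE proxy_a ( a INT )
AT 'myasa;mydb;;a';
```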
Foreign-key definitions are ignored on remote tables. Foreign-key definitions on local tables that refer to
remote tables are also ignored. Primary key definitions are sent to the remote server if the server supports
primary keys.
In a simplex environment, you cannot create a proxy table that refers to a remote table on the same node.
In a multiplex environment, you cannot create a proxy table that refers to the remote table defined within
the multiplex.
If the named object already exists, no changes are made and an error is not returned.
column-definition
Defines a table column. Allowable data types are described in Data Types. Two columns in the same table
cannot have the same name. You can create up to 45,000 columns; however, there might be performance
penalties in tables with more than 10,000 columns:
● [ NOT ] NULL] – Includes or excludes NULL values. If NOT NULL is specified, or if the column is in a
UNIQUE or PRIMARY KEY constraint, the column cannot contain any NULL values. The limit on the
number of columns per table that allow NULLs is approximately 8*(database-page-size - 30).
● DEFAULT default-value – Specify a default column value with the DEFAULT keyword in the CREATE
TABLE (and ALTER TABLE) statement. A DEFAULT value is used as the value of the column in any
INSERT (or LOAD) statement that does not specify a column value.
● DEFAULT AUTOINCREMENT – The value of the DEFAULT AUTOINCREMENT column uniquely
identifies every row in a table. Columns of this type are also known as IDENTITY columns, for
compatibility with SAP Adaptive Server Enterprise.
The IDENTITY/DEFAULT AUTOINCREMENT column stores sequential numbers that are automatically
generated during inserts and updates. When using IDENTITY or DEFAULT AUTOINCREMENT, the
column must be one of the integer data types, or an exact numeric type, with scale 0. The column
value might also be NULL. You must qualify the specified table name with the owner name.
On inserts into the table, if a value is not specified for the IDENTITY/DEFAULT AUTOINCREMENT
column, a unique value larger than any other value in the column is generated. If an INSERT specifies a
value for the column, it is used; if the specified value is larger than the current maximum value for
the column, that value is used as a starting point for subsequent inserts.
Deleting rows does not decrement the IDENTITY/AUTOINCREMENT counter. Gaps created by deleting
rows can only be filled by explicit assignment when using an insert.
Note
For example, this creates a table with an IDENTITY column and explicitly adds some data to it:
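A sketch of such a table and inserts (names and values are illustrative):

```sql
CREATE TABLE mytest ( id INT IDENTITY, v CHAR(1) );
-- explicit value: becomes the current maximum for the column
INSERT INTO mytest VALUES ( 5, 'a' );
-- no value given: a value one greater than the maximum is generated
INSERT INTO mytest ( v ) VALUES ( 'b' );
```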
After an explicit insert of a row number less than the maximum, subsequent rows without explicit
assignment are still automatically incremented with a value of one greater than the previous
maximum.
You can find the most recently inserted value of the column by inspecting the @@identity global
variable.
● IDENTITY – A Transact-SQL-compatible alternative to using the AUTOINCREMENT default. In SAP IQ,
the identity column may be created using either the IDENTITY or the DEFAULT AUTOINCREMENT
clause
table-constraint
Helps ensure the integrity of data in the database. There are four types of integrity constraints:
Column identifiers appearing in table check constraints that start with the symbol '@' are not
placeholders.
If a statement would cause changes to the database that violate an integrity constraint, the statement is
effectively not executed and an error is reported. This means that any changes made by the statement
before the error was detected are undone.
SAP IQ enforces single-column UNIQUE constraints by creating an HG index for that column.
Note
You cannot define a column with a BIT data type as a UNIQUE or PRIMARY KEY constraint. Also, the
default for columns of BIT data type is to not allow NULL values; you can change this by explicitly
defining the column as allowing NULL values.
column-constraint
Restricts the values the column can hold. Column and table constraints help ensure the integrity of data in
the database. If a statement would cause a violation of a constraint, execution of the statement does not
complete, any changes made by the statement before error detection are undone, and an error is reported.
Column constraints are abbreviations for the corresponding table constraints. For example, these are
equivalent:
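For instance, the following two definitions (illustrative names) express the same UNIQUE constraint, first as a column constraint and then as a table constraint:

```sql
CREATE TABLE t1 ( c1 INT UNIQUE );

CREATE TABLE t1 ( c1 INT, UNIQUE ( c1 ) );
```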
IQ UNIQUE defines the expected cardinality of a column and determines whether the column loads as Flat
FP or NBit FP. An IQ UNIQUE(n) value explicitly set to 0 loads the column as Flat FP. Columns without an IQ
UNIQUE constraint implicitly load as NBit up to the limits defined by the FP_NBIT_AUTOSIZE_LIMIT,
FP_NBIT_LOOKUP_MB, and FP_NBIT_ROLLOVER_MAX_MB options:
Using IQ UNIQUE with an n value less than the FP_NBIT_AUTOSIZE_LIMIT is not necessary. Auto-size
functionality automatically sizes all low or medium cardinality columns as NBit. Use IQ UNIQUE in cases
where you want to load the column as Flat FP or when you want to load a column as NBit when the number
of distinct values exceeds the FP_NBIT_AUTOSIZE_LIMIT.
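As a sketch, forcing a column to load as Flat FP regardless of its cardinality (illustrative names):

```sql
CREATE TABLE t2 ( c1 VARCHAR(200) IQ UNIQUE(0) );
```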
Note
● Consider memory usage when specifying high IQ UNIQUE values. If machine resources are limited,
avoid loads with FP_NBIT_ENFORCE_LIMITS='OFF' (default).
Prior to SAP IQ 16.1, an IQ UNIQUE <n> value greater than 16777216 would roll over to Flat FP. In 16.1,
larger IQ UNIQUE values are supported for tokenization, but may have significant memory
requirements depending on cardinality and column width.
● BIT, BLOB, and CLOB data types do not support NBit dictionary compression. If
FP_NBIT_IQ15_COMPATIBILITY=’OFF’, a non-zero IQ UNIQUE column specification in a CREATE
TABLE or ALTER TABLE statement that includes these data types returns an error.
Column and table constraints help ensure the integrity of data in the database:
Do not define referential integrity foreign key-primary key relationships in SAP IQ unless
you are certain there are no orphan foreign keys.
PARTITION BY
Divides large tables into smaller, more manageable storage objects. Partitions share the same logical
attributes of the parent table, but can be placed in separate dbspaces and managed individually. SAP IQ
supports several table partitioning schemes:
● Hash-partitions
● Range-partitions
● Composite-partitions
A partition-key is the column or columns that contain the table partitioning keys. Partition keys can contain
NULL and DEFAULT values, but cannot contain:
Partitions rows by a range of values in the partitioning column. Range partitioning is restricted to a single
partition key column and a maximum of 1024 partitions. In a range-partitioning-scheme, the partition-key
is the column that contains the table partitioning keys:
range-partition-decl:
<partition-name> VALUES <= ( {<constant-expr> | MAX } [ , { <constant-
expr> | MAX }]... )
[ IN <dbspace-name> ]
The partition-name is the name of a new partition on which table rows are stored. Partition names must be
unique within the set of partitions on a table. The partition-name is required.
● VALUES – Specifies the inclusive upper bound for each partition (in ascending order). The user must
specify the partitioning criteria for each range partition to guarantee that each row is distributed to
only one partition. NULLs are allowed for the partition column and rows with NULL as partition key
value belong to the first table partition. However, NULL cannot be the bound value.
There is no lower bound (MIN value) for the first partition. Rows of NULL cells in the first column of the
partition key will go to the first partition. For the last partition, you can either specify an inclusive upper
bound or MAX. If the upper bound value for the last partition is not MAX, loading or inserting any row
with partition key value larger than the upper bound value of the last partition generates an error.
● MAX – Denotes the infinite upper bound and can only be specified for the last partition.
● IN – specifies the dbspace in the partition-decl on which rows of the partition should reside.
These restrictions affect partitions keys and bound values for range partitioned tables:
○ Implicit conversions that result in data loss are not allowed. In this example, the partition bounds
are not compatible with the partition key type. Rounding assumptions may lead to data loss and an
error is generated:
CREATE TABLE emp_id (id INT) PARTITION BY RANGE(id) (p1 VALUES <=
(10.5), p2 VALUES <= (100.5))
● In this example, the partition bounds and the partition key data type are compatible. The bound values
are directly converted to float values. No rounding is required, and conversion is supported:
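A sketch of this compatible case (a FLOAT partition key is an assumption consistent with the description):

```sql
CREATE TABLE emp_id2 ( id FLOAT )
PARTITION BY RANGE ( id )
    ( p1 VALUES <= ( 10.5 ), p2 VALUES <= ( 100.5 ) );
```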
● Conversions from non-binary data types to binary data types are not allowed. For example, this
conversion is not allowed and returns an error:
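A sketch of the disallowed case (a VARBINARY key with character-string bounds; names are illustrative):

```sql
CREATE TABLE emp_bin ( id VARBINARY(8) )
PARTITION BY RANGE ( id )
    ( p1 VALUES <= ( 'aaa' ), p2 VALUES <= ( 'zzz' ) );
```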
Maps data to partitions based on partition-key values processed by an internal hashing function. Hash
partition keys are restricted to a maximum of eight columns with a combined declared column width of
5300 bytes or less. For hash partitions, the table creator determines only the partition key columns; the
number and location of the partitions are determined internally.
hash-partitioning-scheme:
HASH ( <partition-key> [ , <partition-key>, … ] )
Restrictions:
● You can only hash partition a base table. Attempting to partition a global temporary table or a local
temporary table raises an error.
● You cannot add, drop, merge, or split a hash partition.
● You cannot add or drop a column from a hash partition key.
hash-range-partitioning-scheme:
PARTITION BY HASH ( <partition-key> [ , <partition-key>, … ] )
[ SUBPARTITION BY RANGE ( <range-partition-decl> [ , <range-partition-decl> … ] ) ]
The hash partition specifies how the data is logically distributed and colocated; the range subpartition
specifies how the data is physically placed. The new range subpartition is logically partitioned by hash with
the same hash partition keys as the existing hash-range partitioned table. The range subpartition key is
restricted to one column.
Restrictions:
● You can only hash partition a base table. Attempting to partition a global temporary table or a local
temporary table raises an error.
● You cannot add, drop, merge, or split a hash partition.
● You cannot add or drop a column from a hash partition key.
Note
Range-partitions and composite partitioning schemes, like hash-range partitions, require the
separately licensed VLDB Management option.
Remarks
If the table is in a SAN dbspace but its columns or range partitions are in a DAS dbspace, the CREATE TABLE
statement results in an error. Table subcomponents cannot be created on DAS dbspaces if the parent table is
not a DAS dbspace table.
You can create a table for another user by specifying an owner name. If GLOBAL TEMPORARY or LOCAL
TEMPORARY is not specified, the table is referred to as a base table. Otherwise, the table is a temporary table.
A created global temporary table exists in the database like a base table and remains in the database until it is
explicitly removed by a DROP TABLE statement. The rows in a temporary table are visible only to the
connection that inserted the rows. Multiple connections from the same or different applications can use the
same temporary table at the same time and each connection sees only its own rows. A given connection
inherits the schema of a global temporary table as it exists when the connection first refers to the table. The
rows of a temporary table are deleted when the connection ends.
When you create a local temporary table, omit the owner specification. If you specify an owner when creating a
temporary table, for example, CREATE TABLE dbo.#temp(col1 int), a base table is incorrectly created.
An attempt to create a base table or a global temporary table fails if a local temporary table of the same
name exists on that connection, as the new table cannot be uniquely identified by owner.table.
You can, however, create a local temporary table with the same name as an existing base table or global
temporary table. References to the table name access the local temporary table, as local temporary tables are
resolved first.
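A minimal sketch of this behavior (table name and values are illustrative; the values are chosen so the final SELECT returns 8):

```sql
-- A base table and a local temporary table share the name t1
CREATE TABLE t1 ( c1 INT );
INSERT INTO t1 VALUES ( 9 );

CREATE LOCAL TEMPORARY TABLE t1 ( c1 INT );
INSERT INTO t1 VALUES ( 8 );

SELECT c1 FROM t1;   -- resolves to the local temporary table
```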
The result returned is 8. Any reference to t1 refers to the local temporary table t1 until the local temporary
table is dropped by the connection.
In a procedure, use the CREATE LOCAL TEMPORARY TABLE statement, instead of the DECLARE LOCAL
TEMPORARY TABLE statement, when you want to create a table that persists after the procedure completes.
Local temporary tables created using the CREATE LOCAL TEMPORARY TABLE statement remain until they are
either explicitly dropped, or until the connection closes.
Local temporary tables created in IF statements using CREATE LOCAL TEMPORARY TABLE also persist after
the IF statement completes.
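A minimal sketch of this behavior (the table name is hypothetical):

```sql
IF 1 = 1 THEN
    CREATE LOCAL TEMPORARY TABLE TempTab ( c1 INT );
END IF;
-- TempTab is still available here, after the IF statement completes
INSERT INTO TempTab VALUES ( 1 );
```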
SAP IQ does not support the CREATE TABLE ENCRYPTED clause for table-level encryption of SAP IQ tables.
However, the CREATE TABLE ENCRYPTED clause is supported for SAP SQL Anywhere tables in an SAP IQ
database.
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Base table in the IQ main store:
● Table owned by self requires CREATE object-level privilege on the dbspace where the table is
created, along with one of:
● Table owned by another user requires CREATE object-level privilege on the dbspace where the
table is created, along with one of:
To enable RLV store during creation requires the CREATE TABLE system privilege and CREATE
object-level permissions on the RLV store dbspace.
Side Effects
Automatic commit
Standards
Examples
● This example creates a table named SalesOrders2 with five columns. Data pages for columns
FinancialCode, OrderDate, and ID are in dbspace Dsp3. Data pages for integer column CustomerID
● This example creates a table fin_code2 with four columns. Data pages for columns code, type, and id
are in the default dbspace, which is determined by the value of the database option DEFAULT_DBSPACE.
Data pages for CLOB column description are in dbspace Dsp2. Data pages for foreign key fk1's HG
index on c1 are in dbspace Dsp4:
● This example creates a table t1 where partition p1 is adjacent to p2 and partition p2 is adjacent to p3:
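A sketch of such a table, using the range-partitioning syntax shown earlier (the column name and bound values are assumptions):

```sql
CREATE TABLE t1 ( c1 INT )
PARTITION BY RANGE ( c1 )
    ( p1 VALUES <= (100),
      p2 VALUES <= (200),
      p3 VALUES <= (MAX) );
```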
● This example creates a RANGE partitioned table bar with six columns and three partitions, mapping data
to partitions based on dates:
Partition   Dbspace   Column
P1          Dsp11     c3
P1          Dsp21     c6
P2          Dsp12     c3
P2          Dsp22     c6
P3          Dsp13     c3
● This example creates a HASH partitioned table (tbl42) that includes a PRIMARY KEY (column c1) and a
HASH PARTITION KEY (columns c4 and c3):
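A sketch of such a statement (column data types are assumptions; the primary key and hash partition key columns follow the description above):

```sql
CREATE TABLE tbl42 (
    c1 BIGINT NOT NULL,
    c2 CHAR(2),
    c3 DATE,
    c4 VARCHAR(200),
    PRIMARY KEY ( c1 )
)
PARTITION BY HASH ( c4, c3 );
```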
● This example creates a hash-range partitioned table with a PRIMARY KEY (column c1), a hash partition
key (columns c4 and c2), and a range subpartition key (column c3):
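A sketch matching that description (the table name tbl43, column types, and range bounds are assumptions; the range subpartition key is restricted to one column, as noted earlier):

```sql
CREATE TABLE tbl43 (
    c1 BIGINT NOT NULL,
    c2 INT,
    c3 DATE,
    c4 VARCHAR(200),
    PRIMARY KEY ( c1 )
)
PARTITION BY HASH ( c4, c2 )
SUBPARTITION BY RANGE ( c3 )
    ( p1 VALUES <= ('2018-06-30'),
      p2 VALUES <= (MAX) );
```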
● This example creates a table for a library database to hold information on borrowed books:
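A sketch of what such a table might look like (the table name, column names, and the referenced library_books table are assumptions):

```sql
CREATE TABLE borrowed_book (
    date_borrowed DATE NOT NULL,
    date_returned DATE,
    book          CHAR(20) REFERENCES library_books ( isbn ),
    CHECK ( date_returned >= date_borrowed )
);
```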
● This example creates table t1 at the remote server SERVER_A and creates a proxy table named t1 that is
mapped to the remote table:
CREATE TABLE t1
( a INT,
b CHAR(10))
AT 'SERVER_A.db1.joe.t1'
● This example creates a local temporary table tab1 that contains a column c1:
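A minimal sketch of such a statement:

```sql
CREATE LOCAL TEMPORARY TABLE tab1 ( c1 INT );
```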
● The example creates tab1 in the IQ_SYSTEM_TEMP dbspace in the following cases:
○ DQP_ENABLED logical server policy option is set ON but there are no read-write files in
IQ_SHARED_TEMP
○ DQP_ENABLED option is OFF, TEMP_DATA_IN_SHARED_TEMP logical server policy option is ON, but
there are no read-write files in IQ_SHARED_TEMP
○ Both the DQP_ENABLED option and the TEMP_DATA_IN_SHARED_TEMP option are set OFF
● The example creates the same table tab1 in the IQ_SHARED_TEMP dbspace in the following cases:
○ DQP_ENABLED is ON and there are read-write files in IQ_SHARED_TEMP
○ DQP_ENABLED is OFF, TEMP_DATA_IN_SHARED_TEMP is ON, and there are read-write files in
IQ_SHARED_TEMP
● This example creates a table tab1 that is enabled to use row-level versioning, and real-time storage in the
in-memory RLV store:
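A sketch of such a statement, assuming the ENABLE RLV STORE clause is what registers the table with the in-memory RLV store:

```sql
CREATE TABLE tab1 ( c1 INT )
ENABLE RLV STORE;
```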
Related Information
Note
Syntax
Parameters
FROM
Specifies the name of a text configuration object to use as the template for creating the new text
configuration object. The names of the default text configuration objects are DEFAULT_CHAR and
DEFAULT_NCHAR. DEFAULT_CHAR is supported for SAP IQ tables only; DEFAULT_NCHAR is supported on
SAP SQL Anywhere tables only.
Remarks
Create a text configuration object using another text configuration object as a template, then alter the options
as needed using the ALTER TEXT CONFIGURATION statement.
To view the list of all text configuration objects and their settings in the database, query the SYSTEXTCONFIG
system view.
Privileges
All text configuration objects have PUBLIC access. Any user with privilege to create a TEXT index can use any
text configuration object.
Side Effects
Automatic commit
Examples
The following example creates a text configuration object, max_term_sixteen, using the default_char text
configuration object, then uses ALTER TEXT CONFIGURATION to change the maximum term length for
max_term_sixteen to 16:
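A sketch of the two statements this describes (a minimal sketch; option spelling follows the ALTER TEXT CONFIGURATION statement):

```sql
CREATE TEXT CONFIGURATION max_term_sixteen FROM default_char;

ALTER TEXT CONFIGURATION max_term_sixteen
    MAXIMUM TERM LENGTH 16;
```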
Related Information
Creates a TEXT index and specifies the text configuration object to use.
Syntax
Parameters
ON
Specifies the table and column on which to build the TEXT index.
IN
Specifies the text configuration object to use when creating the TEXT index. If this clause is not specified,
the default_char text configuration object is used.
IMMEDIATE REFRESH
(Default) Refreshes the TEXT index each time changes in the underlying table impact data in the TEXT
index. This is the only permitted value for tables in the SAP IQ main store. Once created, the IMMEDIATE
REFRESH clause cannot be changed.
Remarks
Note
You cannot create a TEXT index on views or temporary tables, or on an IN SYSTEM materialized view. The
BEGIN PARALLEL IQ…END PARALLEL IQ statement does not support CREATE TEXT INDEX.
Privileges
● CREATE ANY INDEX system privilege along with CREATE object-level privilege on the dbspace where the
index is being created.
● CREATE ANY OBJECT system privilege
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Side Effects
Automatic commit
Examples
The following example creates a TEXT index, myTxtIdx, on the CompanyName column of the Customers table
in the iqdemo database, using the max_term_sixteen text configuration object:
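A sketch of such a statement, assuming the IN clause names the text configuration object as described above:

```sql
CREATE TEXT INDEX myTxtIdx
ON Customers ( CompanyName )
IN max_term_sixteen;
```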
Related Information
Creates a trigger on a table. This statement applies to SAP IQ catalog store tables only.
Syntax
<trigger-type> :
BEFORE
| AFTER
| INSTEAD OF
| RESOLVE
<trigger-event> :
DELETE
| INSERT
| UPDATE
Parameters
OR REPLACE clause
Specifying OR REPLACE creates a new trigger, or replaces an existing trigger with the same name.
Row-level triggers can be defined to execute BEFORE, AFTER, or INSTEAD OF an insert, update, or delete
operation. Statement-level triggers can be defined to execute INSTEAD OF or AFTER the statement.
BEFORE UPDATE triggers fire any time an UPDATE occurs on a row, whether or not the new value differs
from the old value. That is, if a <column-list> is specified for a BEFORE UPDATE trigger, then the trigger fires
if any of the columns in <column-list> appear in the SET clause of the UPDATE statement. If a
<column-list> is specified for an AFTER UPDATE trigger, then the trigger is fired only if the value of any
of the columns in <column-list> is changed by the UPDATE statement.
INSTEAD OF triggers are the only form of trigger that you can define on a regular view. INSTEAD OF
triggers replace the triggering action with another action. When an INSTEAD OF trigger fires, the triggering
action is skipped and the specified action is performed. INSTEAD OF triggers can be defined as a row-level
or a statement-level trigger. A statement-level INSTEAD OF trigger replaces the entire statement, including
all row-level operations. If a statement-level INSTEAD OF trigger fires, then no row-level triggers fire as a
result of that statement. However, the body of the statement-level trigger could perform other operations
that, in turn, cause other row-level triggers to fire.
If you are defining an INSTEAD OF trigger, then you cannot use the UPDATE OF <column-list> clause,
the ORDER clause, or the WHEN clause.
trigger-event
When defining a trigger, you can combine DELETE, INSERT, and UPDATE events in the same definition, but
triggers for UPDATE OF events must be defined separately. You can define any number of DELETE, INSERT,
and UPDATE triggers on a table. You can define any number of triggers for UPDATE OF events on a table,
but only one per column.
DELETE event
The trigger is invoked whenever one or more rows of the table are deleted.
INSERT event
The trigger is invoked whenever one or more rows are inserted into the table.
UPDATE event
The trigger is invoked whenever one or more rows of the table are updated.
The keyword UPDATING is also supported for this clause for compatibility with other SQL dialects. The
argument for UPDATING is a quoted string (for example, UPDATING( 'mycolumn' )), whereas the
argument for UPDATE is an identifier (for example, UPDATE( mycolumn )).
UPDATE OF column-list event
The trigger is invoked whenever a row of the associated table is updated and a column in the
<column-list> is modified. This type of trigger event cannot be used in a <trigger-event-list>;
it must be the only trigger event defined for the trigger. This clause cannot be used in an INSTEAD OF
trigger.
You can write separate triggers for each event that you need to handle or, if you have some shared
actions and some actions that depend on the event, you can create a trigger for all events and use an
IF statement to distinguish the action taking place.
ORDER clause
When defining additional triggers of the same type (insert, update, or delete) to fire at the same time
(before, after, or resolve), you must specify an ORDER clause to tell the database server the order in which
to fire the triggers. Order numbers must be unique among same-type triggers configured to fire at the
same time. If you specify an order number that is not unique, then an error is returned. Order numbers do
not need to be in consecutive order (for example, you could specify 1, 12, 30). The database server fires the
triggers starting with the lowest number.
Typically, if you omit the ORDER clause, or specify 0, then the database server assigns the order of 1.
However, if another same-type trigger is already set to 1, then an error is returned.
When you create additional triggers that contain multiple event types, if you omit the ORDER clause, and
one or more of the event types is the same as in other triggers (for example, the trigger-event-list for one
trigger is UPDATE, INSERT, and the trigger-event-list for another trigger is UPDATE), the database server
does not return an error. In this case, the database server processes the triggers in an implementation-
specific order that may not be expected and is subject to change. Therefore, it is strongly recommended
that you always specify an ORDER clause when defining more than one trigger on a table.
When adding additional triggers, you may need to modify the existing same-type triggers for the event,
depending on whether the actions of the triggers interact. If they do not interact, then the new trigger must
have an ORDER value unique from other existing triggers. If they do interact, you need to consider what the
other triggers do, and you may need to change the order in which they fire.
The ORDER clause is not supported for INSTEAD OF triggers since there can only be one INSTEAD OF
trigger of each type (insert, update, or delete) defined on a table or view.
REFERENCING clause
The REFERENCING OLD and REFERENCING NEW clauses allow you to refer to the inserted, deleted, or
updated rows. With this clause an UPDATE is treated as a delete followed by an insert.
An INSERT takes the REFERENCING NEW clause, which represents the inserted row. There is no
REFERENCING OLD clause.
A DELETE takes the REFERENCING OLD clause, which represents the deleted row. There is no
REFERENCING NEW clause.
An UPDATE takes the REFERENCING OLD clause, which represents the row before the update, and it takes
the REFERENCING NEW clause, which represents the row after the update.
The meanings of REFERENCING OLD and REFERENCING NEW differ, depending on whether the trigger is a
row-level or a statement-level trigger. For row-level triggers, the REFERENCING OLD clause allows you to
refer to the values in a row before an update or delete, and the REFERENCING NEW clause allows you to
refer to the inserted or updated values. The OLD and NEW rows can be referenced in BEFORE and AFTER
triggers. The REFERENCING NEW clause allows you to modify the new row in a BEFORE trigger before the
insert or update operation takes place.
For statement-level triggers, the REFERENCING OLD and REFERENCING NEW clauses refer to declared
temporary tables holding the old and new values of the rows.
FOR EACH clause
To declare a trigger as a row-level trigger, use the FOR EACH ROW clause. To declare a trigger as a
statement-level trigger, you can either use a FOR EACH STATEMENT clause or omit the FOR EACH clause.
WHEN clause
The trigger fires only for rows where the search-condition evaluates to true. The WHEN clause can be used
only with row-level triggers. This clause cannot be used in an INSTEAD OF trigger.
trigger-body
The trigger body contains the actions to take when the triggering action occurs, and consists of a
BEGIN…END compound statement.
You can include trigger operation conditions in the BEGIN statement. Trigger operation conditions perform
actions depending on the trigger event that caused the trigger to fire. For example, if the trigger is defined
to fire for both updates and deletes, you can specify different actions for the two conditions.
You can also use Boolean conditions { INSERTING | DELETING | UPDATING [ ( '<col-name>' ) ] }
anywhere a condition can be used in the body of the trigger. This special syntax enables you to specify an
additional action to take when performing some <trigger-event>. For example, IF INSERTING THEN
SET msg = msg || 'insert'.
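A sketch of a trigger using these Boolean conditions (the table, variable, and message handling are assumptions):

```sql
CREATE TRIGGER mytrig AFTER INSERT, UPDATE ON t0
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
    DECLARE msg VARCHAR(30);
    SET msg = 'event:';
    -- Trigger operation conditions distinguish which event fired the trigger
    IF INSERTING THEN SET msg = msg || ' insert'; END IF;
    IF UPDATING  THEN SET msg = msg || ' update'; END IF;
    MESSAGE msg;
END
```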
Remarks
The CREATE TRIGGER statement creates a trigger associated with a table in the database, and stores the
trigger in the database.
You cannot define a trigger on a materialized view. If you do, a SQLE_INVALID_TRIGGER_MATVIEW error is
returned.
A trigger is declared as either a row-level trigger, in which case it executes before or after each row is modified,
or a statement-level trigger, in which case it executes after the entire triggering statement is completed.
CREATE TRIGGER puts a table lock on the table and requires exclusive use of the table.
Privileges
You must have the CREATE ANY TRIGGER or CREATE ANY OBJECT system privilege. Additionally, you must be
the owner of the table the trigger is built on or have one of the following privileges:
To create a trigger on a view owned by someone else, you must have either the CREATE ANY TRIGGER or
CREATE ANY OBJECT system privilege, and you must have either the ALTER ANY VIEW or ALTER ANY OBJECT
system privilege.
To replace an existing trigger, you must be the owner of the table the trigger is built on, or have one of the
following:
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
Automatic commit.
Standards
CREATE TRIGGER is part of optional ANSI/ISO SQL Language Feature T211 "Basic trigger capability". Row
triggers are optional ANSI/ISO SQL Language Feature T212, while INSTEAD OF triggers are optional
ANSI/ISO SQL Language Feature T213.
Some trigger features in the software are not in the standard. These include:
● The optional OR REPLACE syntax. If an existing trigger is replaced, authorization of the creation of the
new trigger instance is bypassed.
● The ORDER clause. In the ANSI/ISO SQL Standard, triggers are fired in the order they were created.
● RESOLVE triggers.
Transact-SQL
ROW and RESOLVE triggers are not supported by Adaptive Server Enterprise. The SAP IQ Transact-SQL
dialect does not support Transact-SQL INSTEAD OF triggers, though these are supported by Adaptive
Server Enterprise. Transact-SQL triggers are defined using different syntax.
Example
This example creates a statement-level trigger. First, create a table as shown in this CREATE TABLE
statement (requires the CREATE TABLE system privilege):
CREATE TABLE t0
( id INTEGER NOT NULL,
times TIMESTAMP NULL DEFAULT CURRENT TIMESTAMP,
remarks TEXT NULL,
PRIMARY KEY ( id )
);
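A sketch of a statement-level trigger on t0 (the trigger body is an assumption):

```sql
CREATE TRIGGER myTrig AFTER INSERT ON t0
REFERENCING NEW AS new_rows
FOR EACH STATEMENT
BEGIN
    DECLARE cnt INT;
    -- For statement-level triggers, new_rows is a declared temporary
    -- table holding all rows affected by the statement
    SELECT COUNT(*) INTO cnt FROM new_rows;
    MESSAGE 'Inserted ' || cnt || ' row(s) into t0';
END
```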
The following example replaces the myTrig trigger created in the previous example.
The next example shows how you can use REFERENCING NEW in a BEFORE UPDATE trigger. This example
ensures that postal codes in the new Employees table are in uppercase. You must have the SELECT, ALTER,
and UPDATE object-level privileges on GROUPO.Employees to execute this statement:
The next example shows how you can use REFERENCING OLD in a BEFORE DELETE trigger. This example
prevents deleting an employee from the Employees table who has not been terminated.
The next example shows how you can use REFERENCING NEW in a BEFORE INSERT and UPDATE trigger.
The following example creates a trigger that fires before a row in the SalesOrderItems table is inserted or
updated.
The following trigger displays a message on the History tab of the Interactive SQL Results pane showing
which action caused the trigger to fire.
Related Information
Creates a user.
Syntax
Parameters
user-name
You do not have to specify a password for the user. A user without a password cannot connect to the
database. This is useful if you are creating a role and do not want anyone to connect to the database using
the role user ID. A user ID must be a valid identifier. User IDs and passwords cannot:
A password can be either a valid identifier, or a string (maximum 255 characters) placed in single quotes.
Passwords are case-sensitive. The password should be composed of 7-bit ASCII characters, as other
characters may not work correctly if the database server cannot convert them from the client's character
set to UTF-8.
You can use the VERIFY_PASSWORD_FUNCTION option to specify a function to implement password rules
(for example, passwords must include at least one digit). If you do use a password verification function, you
cannot specify more than one user ID and password in the GRANT CONNECT statement.
The encryption algorithm used for hashing user passwords provides FIPS-certified encryption support:
LOGIN POLICY
Name of the login policy to assign to the user. No change is made if you do not specify a login policy.
FORCE PASSWORD CHANGE
Controls whether the user must specify a new password upon logging in. This setting overrides the
PASSWORD_EXPIRY_ON_NEXT_LOGIN option setting in the user's login policy.
This functionality is not currently implemented when logging in to SAP IQ Cockpit. However, when
logging in to SAP IQ outside of SAP IQ Cockpit (for example, using Interactive SQL), users are then
prompted to enter a new password.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
The following example creates a user named SQLTester with the password welcome. The SQLTester user is
assigned to the Test1 login policy and the password expires on the next login:
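A sketch of such a statement, matching the description:

```sql
CREATE USER SQLTester IDENTIFIED BY welcome
LOGIN POLICY Test1
FORCE PASSWORD CHANGE ON;
```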
Related Information
Syntax
<initial-value> ::=
<expression>
<initial-value> ::=
<special-value>
| <string> | [ - ] <number>
| ( <constant-expression> )
| <built-in-function> ( <constant-expression> )
| NULL
<special-value> ::=
CURRENT
{ DATABASE
| DATE
| PUBLISHER
| TIME
| TIMESTAMP
| USER
| UTC TIMESTAMP }
| USER
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
OR REPLACE
Specifying the OR REPLACE clause drops the named variable if it already exists and re-creates it with the
new definition. OR REPLACE only replaces the value of the variable if the data type of the current and new
value are the same.
owner
This parameter applies only to database-scope variables. Specify a valid user ID or role, or PUBLIC, to set
ownership of the variable. If set to a user, only that user can use the database variable. If set to a role, users
who have that role are able to use the database variable. If set to PUBLIC, all users are able to use the
variable.
If <owner> is not specified, it is set to the user executing the CREATE VARIABLE statement.
data-type
The data type for the variable. Set the data type explicitly, or use the %TYPE or %ROWTYPE attribute to
set the data type to the data type of another object in the database. Use %TYPE to set it to the data type of
a variable or a column in a table or view. Use %ROWTYPE to set the data type to a composite data type
derived from a row in a cursor, table, or view.
%ROWTYPE and TABLE REF are not supported as data types for database-scope variables.
IF NOT EXISTS
Specify this clause to allow the statement to complete without returning an error if a database-scope
variable with the same name already exists. This parameter is only for use when creating owned database-
scope variables.
initial-value
The default value for the variable. For database-scope variables, this is also the initial value after the
database is restarted.
<initial-value> must match the data type defined by <data-type>. If you do not specify an
<initial-value>, then the variable contains the NULL value until a different value is assigned, for
example by using a SET statement, a SELECT ... INTO statement, or in an UPDATE statement. If
<initial-value> is set by using an expression, then the expression is evaluated at creation time and the
resulting constant is stored (not the expression).
Remarks
(back to top)
A variable can be used in a SQL expression anywhere a column name is allowed. If a column name exists with
the same name as the variable, the variable value is used. Name resolution is performed as follows:
Variables belong to the current connection, and disappear when you disconnect from the database, or when
you use the DROP VARIABLE statement. Variables are not visible to other connections. COMMIT or ROLLBACK
statements do not affect variables.
Variables are useful for creating large text or binary objects for INSERT or UPDATE statements from Embedded
SQL programs.
Use the CREATE VARIABLE syntax to create a connection-scope variable that is available in the context of the
connection.
Use the CREATE DATABASE VARIABLE syntax to create a database-scope variable that can be used by other
users and other connections.
Use the OR REPLACE clause as an alternative to the VAREXISTS function in SQL scripts.
If you specify a variable name for <initial-value>, then the variable must already be initialized in the
database.
Local variables in procedures and triggers are declared within a compound statement.
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Database-scope variables To create or replace a self-owned database-scope variable requires one of:
Side Effects
(back to top)
● Connection-scope variables – there are no side effects associated with creating a connection-scope
variable.
● Database-scope variables – creating and replacing a database-scope variable causes an automatic
commit.
(back to top)
Examples
(back to top)
● The following example creates (or updates) a database-scope variable called site_name of type
VARCHAR(50).
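A sketch of such a statement (the initial value is an assumption):

```sql
CREATE OR REPLACE DATABASE VARIABLE site_name VARCHAR(50) = 'Head Office';
```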
● The following example creates (or updates) a database-scope variable owned by PUBLIC called
database_name of type CHAR(66) and sets it to the special value CURRENT DATABASE.
● The following example creates a connection-scope variable named first_name, of data type VARCHAR(50).
● The following example creates a connection-scope variable named birthday, of data type DATE.
● The following example creates a connection-scope variable named v1 as an INT with the initial setting of 5.
● The following example creates a connection-scope variable named v1 and sets its value to 10, regardless of
whether the v1 variable already exists.
● The following example creates a connection-scope variable, ProductID, and uses the %TYPE attribute to
set its data type to the data type of the ID column in the Products table:
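A sketch of such a statement using the %TYPE attribute:

```sql
CREATE VARIABLE ProductID Products.ID%TYPE;
```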
● The following example creates a connection-scope variable, ItemsForSale, and uses the %ROWTYPE
attribute to set its data type to a composite data type comprised of the columns defined for the Products
table. It then creates another variable, ItemID, and declares its type to be the data type of the ID column in
the ItemsForSale variable:
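A sketch of the two statements this describes:

```sql
-- Composite type derived from the columns of the Products table
CREATE VARIABLE ItemsForSale Products%ROWTYPE;

-- Scalar type taken from the ID field of the ItemsForSale variable
CREATE VARIABLE ItemID ItemsForSale.ID%TYPE;
```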
● The following example is a code fragment that inserts a large text value into the database:
Related Information
Creates a view on the database. Views are used to give a different perspective on the data even though it is not
stored that way.
Syntax
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
Replaces an existing view with the same name. Existing permissions are preserved, but INSTEAD OF
triggers on the view are dropped.
view-name
The default owner of a view is the current user ID. A view name can be used in place of a table name in
SELECT, DELETE, UPDATE, and INSERT statements. Views, however, do not physically exist in the database
as tables. They are derived each time they are used. The view is derived as the result of the SELECT
statement specified in the CREATE VIEW statement. Table names used in a view should be qualified by the
user ID of the table owner. Otherwise, a different user ID might not be able to find the table or might get the
wrong table.
AS
The SELECT statement on which the view is based must not contain an ORDER BY clause, a subquery in
the SELECT list, or a TOP or FIRST qualification. It may have a GROUP BY clause and may be a UNION.
WITH CHECK OPTION
Rejects any updates and inserts to the view that do not meet the criteria of the views as defined by its
SELECT statement. However, SAP IQ currently ignores this option (it supports the syntax for compatibility
reasons).
Remarks
(back to top)
Views can be updated unless the SELECT statement defining the view contains a GROUP BY clause, an
aggregate function, or involves a UNION operation. An update to the view causes the underlying tables to be
updated.
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
View owned by self requires the CREATE VIEW system privilege along with one of:
Also requires:
Materialized view owned by self requires the CREATE MATERIALIZED VIEW system privilege. Also
requires one of:
Side Effects
(back to top)
Automatic commit
Standards
(back to top)
Examples
(back to top)
● The following example creates a view showing all information for male employees only. This view has the
same column names as the base table:
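A sketch of such a view (the Sex column name is an assumption about the sample schema; SELECT * keeps the base table's column names):

```sql
CREATE VIEW MaleEmployees AS
    SELECT *
    FROM GROUPO.Employees
    WHERE Sex = 'M';
```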
● The following example creates a view showing employees and the departments to which they belong:
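A sketch of such a view (column and table names are assumptions about the sample schema):

```sql
CREATE VIEW EmployeesAndDepartments AS
    SELECT Surname, GivenName, DepartmentName
    FROM GROUPO.Employees
    JOIN GROUPO.Departments
        ON Employees.DepartmentID = Departments.DepartmentID;
```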
Related Information
Syntax
Remarks
Frees all memory associated with a descriptor area, including the data items, indicator variables, and the
structure itself.
Privileges
None
Standards
Examples
Related Information
Declares host variables in an Embedded SQL program. Host variables are used to exchange data with the
database.
Syntax
Remarks
A declaration section is a section of C variable declarations surrounded by the BEGIN DECLARE SECTION and
END DECLARE SECTION statements. A declaration section makes the SQL preprocessor aware of C variables
that are used as host variables. Not all C declarations are valid inside a declaration section.
Privileges
None
Standards
Examples
Related Information
Syntax
DECLARE <variable_name> [ , … ] <data-type>
[ { = | DEFAULT } <initial-value> ]
<initial-value> ::=
<special-value>
| <string>
| [ - ] <number>
| ( <constant-expression> )
| <built-in-function> ( <constant-expression> )
| NULL
<special-value> ::=
CURRENT
Parameters
initial-value
The variable is set to that value. The data type must match the type defined by <data-type>. If you do not
specify an initial-value, the variable contains the NULL value until a SET statement assigns a different
value.
data-type
Set the data type explicitly, or you can set it by using the %TYPE or %ROWTYPE attribute. Use %TYPE to
set it to the data type of a variable or a column in a table or view. Use %ROWTYPE to set the data type to a
composite data type derived from a row in a cursor, table, or view.
Remarks
Use the DECLARE statement to declare variables used in the body of a procedure. The variable persists for the
duration of the compound statement in which it is declared and must be unique within the compound
statement.
The body of a procedure is a compound statement, and variables must be declared immediately following
BEGIN. In a Transact-SQL procedure or trigger, there is no such restriction.
Privileges
None
Standards
Examples
The following example illustrates the use of the DECLARE statement and prints a message on the server
window:
BEGIN
DECLARE varname CHAR(61);
SET varname = 'Test name';
MESSAGE varname;
END
Related Information
Declares a cursor. Cursors are the primary means for manipulating the results of queries.
Syntax
DECLARE <cursor-name>
[ SCROLL
| NO SCROLL
| DYNAMIC SCROLL
]
CURSOR FOR
{ <select-statement> FOR <for-clause>
| <statement-name>
| USING <variable-name> }
<for-clause> ::=
READ ONLY | UPDATE
Go to:
● Remarks
● Privileges
● Standards
● Examples
(back to top)
Parameters
statement-name
Identifier or host-variable. Statements are named using the PREPARE statement. Cursors can be declared
only for a prepared SELECT or CALL.
SCROLL
A cursor declared as SCROLL supports the NEXT, PRIOR, FIRST, LAST, ABSOLUTE, and RELATIVE options
of the FETCH statement. A SCROLL cursor lets you fetch an arbitrary row in the result set while the cursor
is open.
NO SCROLL
A cursor declared as NO SCROLL is restricted to moving forward through the result set using only the
FETCH NEXT and FETCH ABSOLUTE (0) seek operations.
DYNAMIC SCROLL
A cursor declared as DYNAMIC SCROLL supports the NEXT, PRIOR, FIRST, LAST, ABSOLUTE, and
RELATIVE clauses of the FETCH statement. A DYNAMIC SCROLL cursor lets you fetch an arbitrary row in
the result set while the cursor is open.
Since rows cannot be revisited once the cursor moves past them, there are no sensitivity restrictions on the
cursor. Consequently, when a NO SCROLL cursor is requested, SAP IQ supplies the most efficient kind of
cursor, which is an asensitive cursor.
READ ONLY
(Default) A cursor declared FOR READ ONLY may not be used in a positioned UPDATE or a positioned
DELETE operation.
A cursor declared FOR READ ONLY sees the version of table(s) on which the cursor is declared when the
cursor is opened, not the version of table(s) at the time of the first FETCH.
For example, when the cursor is fetched, only one row can be fetched from the table.
UPDATE
You can update the cursor result set of a cursor declared FOR UPDATE. Only asensitive behavior is
supported for updatable cursors; any other sensitivity is ignored.
When the cursor is opened, exclusive table locks are taken on all tables that are opened for update.
Standalone LOAD TABLE, UPDATE, INSERT, DELETE, and TRUNCATE statements are not allowed on tables
that are opened for update in the same transaction, since SAP IQ permits only one statement to modify a
table at a time. You can open only one updatable cursor on a specific table at a time.
You can declare a cursor on a variable in stored procedures and user-defined functions. The variable is a
string containing a SELECT statement for the cursor. The variable must be available when the DECLARE is
processed, and so must be one of the following:
● Nested inside another BEGIN…END after the variable has been assigned a value. For example:
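A sketch of this pattern, with illustrative variable and cursor names:

BEGIN
DECLARE qry VARCHAR(255);
SET qry = 'SELECT Surname FROM Employees';
BEGIN
DECLARE cur_var CURSOR USING qry;
OPEN cur_var;
...
CLOSE cur_var;
END
END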
Remarks
(back to top)
The DECLARE CURSOR statement declares a cursor with the specified name for a SELECT statement or a CALL
statement.
Embedded SQL statements are named using the PREPARE statement. Cursors can be declared only for a
prepared SELECT or CALL.
SAP IQ supports one type of cursor sensitivity, which is defined in terms of which changes to underlying data
are visible. All SAP IQ cursors are asensitive, which means that changes might be reflected in the membership,
order, or values of the result set seen through the cursor, or might not be reflected at all.
With an asensitive cursor, changes effected by positioned UPDATE and positioned DELETE statements are
visible in the cursor result set, except where client-side caching prevents seeing these changes. Inserted rows
are not visible.
When using cursors, there is always a trade-off between efficiency and consistency. Asensitive cursors provide
efficient performance at the expense of consistency.
LONG VARCHAR and LONG BINARY data types are not supported in updatable cursors.
Scalar user-defined functions and user-defined aggregate functions are not supported in updatable cursors.
● Expressions in the select list against columns that are not functionally dependent on columns being
updated
● Arbitrary subqueries with asensitive behavior, that is, changes to data referenced by subqueries are not
visible in the cursor result set
● ORDER BY clause; the ORDER BY columns may be updated, but the result set does not reorder
● Columns that meet these requirements:
○ No CAST on a column
○ Base columns of a base table in the SELECT clause
○ There are no expressions or functions on that column in the SELECT clause and it is not duplicated in
the select list (for example, SELECT c1, c1).
○ Base columns of a base table restricted to those listed in the FOR UPDATE OF <column-name-
list> clause, if the clause is specified.
SAP IQ does not permit updatable cursors on queries that contain any operator that precludes a one-to-one
mapping of result set rows to rows in a base table; specifically:
● SELECT DISTINCT
● Operator that has a UNION
● Operator that has a GROUP BY
● Operator that has a SET function
● Operator that has an OLAP function, with the exception of RANK()
See the description of the UPDATE (positioned) Statement [ESQL] [SP] for information on the columns and
expressions allowed in the SET clause for the update of a row in the result set of a cursor.
SAP IQ supports inserts only on updatable cursors where all nonnullable, nonidentity columns are both
selected and updatable.
In SAP IQ, COMMIT and ROLLBACK are not allowed inside an open updatable cursor, even if the cursor is opened
as a hold cursor. SAP IQ does support ROLLBACK TO SAVEPOINT inside an updatable cursor.
Any failure that occurs after the cursor is open results in a rollback of all operations that have been performed
through this open cursor.
● The data extraction facility is enabled with the TEMP_EXTRACT_NAME1 option set to a pathname
● ANSI_CLOSE_CURSORS_ON_ROLLBACK is set OFF
● CHAINED is set OFF
● The statement is INSERT SELECT or SELECT INTO
If SAP IQ fails to set an updatable cursor when requested, see the .iqmsg file for related information.
There is a limitation regarding updatable cursors and ODBC. A maximum of 65535 rows or records can be
updated, deleted, or inserted at a time using these ODBC functions:
An implementation-specific limitation caps the statement attribute that controls the number of affected rows at the largest value of an unsigned small integer, which is 65535:
SQLSetStmtAttr(HANDLE,SQL_ATTR_ROW_ARRAY_SIZE, VALUE,0)
SAP IQ updatable cursors differ from ANSI SQL3 standard behavior as follows:
Note
Use the sp_iqcursorinfo system procedure to display detailed information about cursors currently
open on the server.
Privileges
(back to top)
None
Standards
(back to top)
Examples
(back to top)
● The following example declares and uses a cursor within a compound statement:
BEGIN
DECLARE cur_employee CURSOR FOR
SELECT emp_lname
FROM Employees;
DECLARE name CHAR(40);
OPEN cur_employee;
LOOP
FETCH NEXT cur_employee INTO name;
...
END LOOP;
CLOSE cur_employee;
END
Related Information
Syntax
DECLARE <cursor-name>
… CURSOR FOR <select-statement>
…[ FOR { READ ONLY | UPDATE } ]
SAP IQ supports a DECLARE CURSOR syntax that is not supported in SAP ASE. For information on the full
DECLARE CURSOR syntax, see DECLARE CURSOR Statement [ESQL] [SP].
Note
Use the sp_iqcursorinfo system procedure to display detailed information about cursors currently
open on the server.
Privileges
None
Standards
● SQL – the FOR UPDATE and FOR READ ONLY options are Transact-SQL extensions to ISO/ANSI SQL
grammar.
● SAP database products – there are some features of the SAP ASE DECLARE CURSOR statement that are
not supported in SAP IQ.
○ In the SAP IQ dialect, DECLARE CURSOR in a procedure or batch must immediately follow the BEGIN
keyword. In the Transact-SQL dialect, there is no such restriction.
○ In SAP ASE, when a cursor is declared in a procedure or batch, it exists for the duration of the
procedure or batch. In SAP IQ, if a cursor is declared inside a compound statement, it exists only for
the duration of that compound statement (whether it is declared in an SAP IQ or Transact-SQL
compound statement).
Related Information
Syntax
Go to:
● Remarks
● Privileges
● Standards
● Examples
Remarks
(back to top)
A local temporary table and the rows in it are visible only to the connection that created the table and inserted
the rows. By default, the rows of a temporary table are deleted on COMMIT.
Declared local temporary tables within compound statements exist within the compound statement.
Otherwise, the declared local temporary table exists until the end of the connection.
Once you create a local temporary table, either implicitly or explicitly, you cannot create another temporary
table of that name for as long as the temporary table exists. For example, you can create a local temporary
table implicitly:
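For example (a sketch; the source table name is illustrative, while #tmp matches the discussion that follows):

SELECT * INTO #tmp FROM Employees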
Alternatively, you can create a local temporary table with an explicit declaration:
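For example (a sketch; the column definition is illustrative, while foo matches the discussion that follows):

DECLARE LOCAL TEMPORARY TABLE foo ( c1 INT )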
Then if you try to select into #tmp or foo, or declare #tmp or foo again, you receive an error indicating that
#tmp or foo already exists.
If the owner name is omitted, then the error Item temp already exists is reported:
An attempt to create a base table or a global temporary table fails, if a local temporary table of the same name
exists on that connection, as the new table cannot be uniquely identified by <owner.table>.
You can, however, create a local temporary table with the same name as an existing base table or global
temporary table. References to the table name access the local temporary table, as local temporary tables are
resolved first.
The result returned is 8. Any reference to t1 refers to the local temporary table t1 until the local temporary
table is dropped by the connection.
Privileges
(back to top)
None
Standards
(back to top)
Examples
(back to top)
BEGIN
DECLARE LOCAL TEMPORARY TABLE TempTab (
number INT
);
...
END
Related Information
Deletes all the rows from the named table that satisfy the search condition. If no WHERE clause is specified, all
rows from the named table are deleted.
Syntax
DELETE
[ FROM ] [ <owner>.]<table-name> [ [ AS ] <correlation-name> ]
...[ FROM <table-expression> ]
[ WHERE <search-condition> ]

<table-expression> ::=
<table-spec>
| <table-expression> <join-type> <table-spec> [ ON <condition> ]
| <table-expression>, ...
Go to:
● Remarks
● Privileges
● Standards
● Examples
(back to top)
FROM clause
Indicates the table from which rows will be deleted. The optional second FROM clause in the DELETE
statement determines the rows to be deleted from the specified table based on joins with other tables. If
the second FROM clause is present, the WHERE clause qualifies the rows of this second FROM clause.
Rows are deleted from the table name given in the first FROM clause.
Note
You cannot use the DELETE statement on a join virtual table. If you attempt to delete from a join virtual
table, an error is reported.
WHERE clause
If specified, only rows satisfying the search condition are deleted. If no WHERE clause is specified, every
row is deleted.
Remarks
(back to top)
DELETE can be used on views provided the SELECT statement defining the view has only one table in the FROM
clause and does not contain a GROUP BY clause, an aggregate function, or involve a UNION operation.
If the same table name from which you are deleting rows is used in both FROM clauses, they are considered to
reference the same table if one of the following is true:
In cases where the server cannot determine whether the table references are identical, an error is reported. This
prevents unintended semantics, such as deleting unintended rows.
For example, consider this statement:
DELETE
FROM table_1
FROM table_1 AS alias_1, table_2 AS alias_2
WHERE ...
table_1 is identified without a correlation name in the first FROM clause, but with a correlation name in the
second FROM clause. The use of a correlation name for table_1 in the second FROM clause ensures that only
one instance of table_1 exists in the statement. This is an exception to the general rule that where the same
table is identified with and without a correlation name in the same statement, two instances of the table are
considered.
In contrast, consider this statement:
DELETE
FROM table_1
FROM table_1 AS alias_1, table_1 AS alias_2
WHERE ...
There are two instances of table_1 in the second FROM clause. Since there is no way to identify which
instance the first FROM clause should be identified with, the general rule of correlation names means that
table_1 in the first FROM clause is identified with neither instance of table_1 in the second clause: there are
three instances of table_1 in the statement.
Privileges
(back to top)
Requires the DELETE object-level privilege on the table. See GRANT Object-Level Privilege Statement [page
1502] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● The following example removes employee 105 from the Employees table:
DELETE
FROM Employees
WHERE EmployeeID = 105
● The following example removes all data prior to 1993 from the FinancialData table:
DELETE
FROM FinancialData
WHERE Year < 1993
● The following example removes all names from the Contacts table if they are already present in the
Customers table:
DELETE
FROM Contacts
FROM Contacts, Customers
WHERE Contacts.Surname = Customers.Surname
Related Information
Syntax
DELETE [ FROM ] <table-spec>
WHERE CURRENT OF <cursor-name>

<table-spec> ::=
[ <owner>.]<correlation-name>

<cursor-name> ::=
<identifier> | <hostvar>
Parameters
FROM
This form of the DELETE statement deletes the current row of the specified cursor. The current row is defined
to be the last row fetched from the cursor.
The positioned DELETE statement can be used on a cursor open on a view as long as the view is updatable.
Changes effected by positioned DELETE statements are visible in the cursor result set, except where client-side
caching prevents seeing these changes.
Privileges
Requires DELETE object-level privilege on tables used in the cursor. See GRANT Object-Level Privilege
Statement [page 1502] for assistance with granting privileges.
Standards
● SQL – the range of cursors that can be updated may contain vendor extensions to ISO/ANSI SQL grammar
if the ANSI_UPDATE_CONSTRAINTS option is set to OFF.
● SAP database products – Embedded SQL use is supported by Open Client/Open Server. Procedure and
trigger use is supported in SAP SQL Anywhere.
Examples
The following example removes the current row from the database:
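A sketch (assuming an updatable cursor named cur_employee is open on the Employees table and positioned on a row):

DELETE FROM Employees
WHERE CURRENT OF cur_employee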
Related Information
Gets information about the host variables required to store data retrieved from the database or host variables
used to pass data to the database.
Syntax
DESCRIBE
…[ USER TYPES ]
…[ { ALL | BIND VARIABLES FOR | INPUT
| OUTPUT | SELECT LIST FOR } ]
…[ { LONG NAMES [ <long-name-spec> ] | WITH VARIABLE RESULT } ]
…[ FOR ] { <statement-name> | CURSOR <cursor-name> }
…INTO <sqlda-name>
<long-name-spec> ::=
{ OWNER.TABLE.COLUMN
| TABLE.COLUMN
| COLUMN }
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
USER TYPES
Returns information about user-defined data types of a column. Typically, such a DESCRIBE is done when a
previous DESCRIBE returns an indicator of DT_HAS_USERTYPE_INFO.
The information returned is the same as for a DESCRIBE without the USER TYPES clause, except that the
sqlname field holds the name of the user-defined data type, instead of the name of the column.
If DESCRIBE uses the LONG NAMES clause, the sqldata field holds this information.
ALL
Describes INPUT and OUTPUT with one request to the database server. This has a performance benefit in
a multiuser environment. The INPUT information is filled in the SQLDA first, followed by the OUTPUT
information. The sqld field contains the total number of INPUT and OUTPUT variables. The
DT_DESCRIBE_INPUT bit in the indicator variable is set for INPUT variables and clear for OUTPUT
variables.
BIND VARIABLES FOR
Equivalent to the INPUT clause. When used with the INPUT clause, DESCRIBE BIND VARIABLES does not
set up the data types in the SQLDA: this needs to be done by the application.
INPUT
DESCRIBE uses the indicator variables in the SQLDA to provide additional information.
DT_PROCEDURE_IN and DT_PROCEDURE_OUT are bits that are set in the indicator variable when a CALL
statement is described. DT_PROCEDURE_IN indicates an IN or INOUT parameter and
DT_PROCEDURE_OUT indicates an INOUT or OUT parameter. Procedure RESULT columns have both bits
clear. After a DESCRIBE OUTPUT, these bits can be used to distinguish between statements that have
result sets (need to use OPEN, FETCH, RESUME, CLOSE) and statements that do not (need to use EXECUTE).
DESCRIBE INPUT sets DT_PROCEDURE_IN and DT_PROCEDURE_OUT appropriately only when a bind
variable is an argument to a CALL statement; bind variables within an expression that is an argument in a
CALL statement do not set the bits.
OUTPUT
Fills in the data type and length in the SQLDA for each select list item. The name field is also filled in with a
name for the select list item. If an alias is specified for a select list item, the name is that alias. Otherwise,
the name derives from the select list item: if the item is a simple column name, it is used; otherwise, a
substring of the expression is used. DESCRIBE also puts the number of select list items in the sqld field of
the SQLDA.
● If the statement being described is a UNION of two or more SELECT statements, the column names
returned for DESCRIBE OUTPUT are the same column names which would be returned for the first
SELECT statement.
● If you describe a CALL statement, DESCRIBE OUTPUT fills in the data type, length, and name in the
SQLDA for each INOUT or OUT parameter in the procedure. DESCRIBE OUTPUT also puts the number
of INOUT or OUT parameters in the sqld field of the SQLDA.
● If you describe a CALL statement with a result set, OUTPUT fills in the data type, length, and name in
the SQLDA for each RESULT column in the procedure definition. DESCRIBE OUTPUT also puts the
number of result columns in the sqld field of the SQLDA.
SELECT LIST FOR
Retrieves column names for a statement or cursor. Without this clause, there is a 29-character limit on the
length of column names: with the clause, names of an arbitrary length are supported. If LONG NAMES is
used, the long names are placed into the SQLDATA field of the SQLDA, as if you were fetching from a
cursor. None of the other fields (SQLLEN, SQLTYPE, and so on) are filled in. The SQLDA must be set up like
a FETCH SQLDA: it must contain one entry for each column, and the entry must be a string type. The
default specification for the long names is TABLE.COLUMN.
WITH VARIABLE RESULT
Describes procedures that might have more than one result set, with different numbers or types of
columns. If WITH VARIABLE RESULT is used, the database server sets the SQLCOUNT value after the
describe to one of these values:
● 0 – the result set may change. The procedure call should be described again following each OPEN
statement.
● 1 – the result set is fixed. You need not describe again.
CURSOR cursor-name
Declared cursor. The cursor must have been previously declared and opened. The default action is to
describe the OUTPUT. Only SELECT statements and CALL statements have OUTPUT. A DESCRIBE
OUTPUT on any other statement, or on a cursor that is not a dynamic cursor, indicates no output by
setting the sqld field of the SQLDA to zero.
INTO sqlda-name
Identifier
Remarks
(back to top)
DESCRIBE sets up the named SQLDA to describe either the OUTPUT (equivalently SELECT LIST) or the
INPUT (BIND VARIABLES) for the named statement.
Privileges
(back to top)
None
Standards
(back to top)
Examples
(back to top)
sqlda = alloc_sqlda( 3 );
EXEC SQL DESCRIBE OUTPUT
FOR employee_statement
INTO sqlda;
Related Information
Syntax
Parameters
ALL
Drops all of the application's connections.
Remarks
The DISCONNECT statement drops a connection with the database server and releases all resources used by it.
If the connection to be dropped was named on the CONNECT statement, the name can be specified.
Privileges
None
Standards
Examples
● The following example uses DISCONNECT from dbisql to disconnect all connections:
DISCONNECT ALL
Related Information
Syntax
DROP
{ DBSPACE <dbspace-name>
| { DATATYPE [ IF EXISTS ]
| DOMAIN } <datatype-name>
| EVENT [ IF EXISTS ] <event-name>
| INDEX [ IF EXISTS ] [ [ <owner>].<table-name>.]<index-name>
| MESSAGE <message-number>
| TABLE [ IF EXISTS ] [ <owner>.]<table-name>
| VIEW [ IF EXISTS ] [ <owner>.]<view-name>
| MATERIALIZED VIEW [ IF EXISTS ] [ <owner>.]<view-name>
| PROCEDURE [ IF EXISTS ] [ <owner>.]<procedure-name>
| FUNCTION [ IF EXISTS ] [ <owner>.]<function-name> }
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
DBSPACE
DROP DBSPACE is prevented whenever the statement affects a table that is currently being used by
another connection.
IF EXISTS
Use if you do not want an error returned when the DROP statement attempts to remove a database object
that does not exist.
INDEX
DROP INDEX deletes any explicitly created index. It deletes an implicitly created index only if there are no
unique or foreign-key constraints or associated primary key.
DROP INDEX is prevented whenever the statement affects a table that is currently being used by another
connection.
For a nonunique HG index, DROP INDEX fails if an associated unenforced foreign key exists.
Caution
Do not delete views owned by the DBO user. Deleting such views or changing them into tables might
cause problems.
TABLE
DROP TABLE is prevented whenever the statement affects a table that is currently being used by another
connection or if the primary table has foreign-key constraints associated with it, including unenforced
foreign-key constraints. It is also prevented if the table has an IDENTITY column and IDENTITY_INSERT is
set to that table. To drop the table, you must clear IDENTITY_INSERT, that is, set IDENTITY_INSERT to '
' (an empty string), or set to another table name.
A foreign key can have either a nonunique single or a multicolumn HG index. A primary key may have
unique single or multicolumn HG indexes. You cannot drop the HG index implicitly created for an existing
foreign key, primary key, and unique constraint.
The four initial dbspaces are SYSTEM, IQ_SYSTEM_MAIN, IQ_SYSTEM_TEMP, and IQ_SYSTEM_MSG. You
cannot drop these initial dbspaces, but you may drop dbspaces from the IQ main store or catalog store,
which may contain multiple dbspaces, as long as at least one dbspace remains in read-write mode.
You must drop tables in the dbspace before you can drop the dbspace. An error is returned if the dbspace
still contains user data; other structures are automatically relocated when the dbspace is dropped. You can
drop a dbspace only after you make it read-only.
A dbspace may contain data at any point after it is used by a command, thereby preventing a DROP
DBSPACE on it.
DROP DATATYPE is prevented if the data type is used in a table. You must change data types on all columns
defined on the user-defined data type to drop the data type. It is recommended that you use DROP DOMAIN
rather than DROP DATATYPE, as DROP DOMAIN is the syntax used in the ANSI/ISO SQL3 draft.
Remarks
(back to top)
DROP removes the definition of the indicated database structure. If the structure is a dbspace, then all tables
with any data in that dbspace must be dropped or relocated prior to dropping the dbspace; other structures
are automatically relocated. If the structure is a table, all data in the table is automatically deleted as part of the
dropping process. Also, all indexes and keys for the table are dropped by DROP TABLE.
Global temporary tables cannot be dropped unless all users that have referenced the temporary table have
disconnected.
Privileges
(back to top)
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
DBSPACE Requires the DROP ANY OBJECT system privilege and the user must be the
only connection to the database.
DBA or users with the appropriate privilege can drop an index on tables that
are owned by other users without using a fully-qualified name. All other users
must provide a fully-qualified index name to drop an index on a base table
owned by the DBA.
Side Effects
(back to top)
● Automatic commit. Clears the Data window in dbisql. DROP TABLE and DROP INDEX close all cursors for
the current connection.
● Local temporary tables are an exception; no commit is performed when one is dropped.
Standards
(back to top)
Examples
(back to top)
● The following example drops the Departments table from the database:
● The following example drops the emp_dept view from the database:
● The following example drops the myDAS main cache from the simplex or multiplex node you are connected
to:
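The first two statements could be sketched as follows (the third, involving the myDAS cache, depends on syntax not shown here):

DROP TABLE Departments

DROP VIEW emp_dept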
Related Information
Syntax
DROP AGENT removes the association between an SAP IQ agent and a server.
The SYS.ISYSIQMPXSERVERAGENT system table stores the agent connection definitions for the server.
Privileges
Side Effects
Automatic commit
Related Information
Syntax
Parameters
connection-id
Obtained using the CONNECTION_PROPERTY function to request the connection number. This statement
returns the connection ID of the current connection:
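For example (a sketch; the Number property name is assumed from SQL Anywhere-family servers):

SELECT CONNECTION_PROPERTY( 'Number' )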
You cannot drop your current connection; you must first create another connection, then drop your first
connection.
Privileges
Requires the DROP CONNECTION system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
DROP CONNECTION 4
Related Information
Syntax
db-filename
Corresponds to the database file name you defined for the database using CREATE DATABASE. If you
specified a directory path for this value in the CREATE DATABASE command, you must also specify the
directory path for DROP DATABASE. Otherwise, SAP IQ looks for the database files in the default directory
where the server files reside.
key-spec
A string, including mixed cases, numbers, letters, and special characters. It might be necessary to protect
the key from interpretation or alteration by the command shell.
Remarks
DROP DATABASE drops all the database segment files associated with the IQ store and temporary store before
it drops the catalog store files.
You must stop a database before you can drop it. If the connection parameter AUTOSTOP=no is used, you may
need to issue a STOP DATABASE statement.
You cannot execute a DROP DATABASE statement to drop an IQ database that has a DatabaseStart event
defined for it.
Privileges
The permissions required to execute this statement are set using the -gu server command line option, as
follows:
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example drops the encrypted database marvin.db, which was created with the key is!seCret:
● The following example drops the database temp.db from the /s1/temp directory on a UNIX system:
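Sketches of these two statements, assuming the db-filename and key-spec parameters described above:

DROP DATABASE 'marvin.db' KEY 'is!seCret'

DROP DATABASE '/s1/temp/temp.db'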
Related Information
Syntax
Parameters
login-name
Specifies the name of the remote server. The alternate login name of the local user and password for that
server is the external login that is deleted.
Changes made by DROP EXTERNLOGIN do not take effect until the next connection to the remote server.
Note
For required parameters that accept variable names, the database server returns an error if any of the
following conditions is true:
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
Automatic commit
Standards
Examples
The following example drops the login dba from the remote database mydb1:
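A sketch, assuming a DROP EXTERNLOGIN <login-name> TO <remote-server> form:

DROP EXTERNLOGIN dba TO mydb1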
Related Information
Removes the named LDAP server configuration object from the SYSLDAPSERVER system view after verifying
that the LDAP server configuration object is not in a READY or ACTIVE state.
Syntax
Parameters
WITH DROP ALL REFERENCES
Allows the removal of an LDAP server configuration object from service that has a reference in a login
policy.
WITH SUSPEND
Allows an LDAP server configuration object to be dropped even if in a READY or ACTIVE state.
Remarks
The DROP LDAP SERVER statement fails when it is issued against an LDAP server configuration object that is
in a READY or ACTIVE state. This ensures that an LDAP server configuration object in active use cannot be
accidentally dropped. The DROP LDAP SERVER statement also fails if a login policy exists with a reference to
the LDAP server configuration object.
Privileges
Requires the MANAGE ANY LDAP SERVER system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
Standards
In the following example, assuming that references to the LDAP server configuration object have been removed
from all login policies, the following two sets of commands are equivalent:
DROP LDAP SERVER ldapserver1 WITH DROP ALL REFERENCES WITH SUSPEND

ALTER LDAP SERVER ldapserver1 WITH SUSPEND
DROP LDAP SERVER ldapserver1 WITH DROP ALL REFERENCES

Using the WITH DROP ALL REFERENCES and WITH SUSPEND parameters eliminates the need to execute an
ALTER LDAP SERVER statement before the DROP LDAP SERVER statement.
Related Information
Drops a user-defined logical server. This statement enforces consistent shared system temporary store
settings across physical nodes shared by logical servers.
Syntax
Parameters
logical-server-name
Automatically shuts down all servers in the logical server when the TEMP_DATA_IN_SHARED_TEMP option
is changed directly or indirectly.
Remarks
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
Related Information
Syntax
Remarks
A DROP LOGIN POLICY statement fails if you attempt to drop a policy that is assigned to a user. You can use
either the ALTER USER statement to change the policy assignment of the user or DROP USER to drop the user.
Privileges
Requires the MANAGE ANY LOGIN POLICY system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
Examples
The following example creates, then deletes, the Test11 login policy:
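A sketch (the password_life_time option is illustrative):

CREATE LOGIN POLICY Test11 password_life_time = 180;
DROP LOGIN POLICY Test11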
Related Information
Syntax
Parameters
ls-policy-name
Any policy name except ROOT; the name must refer to a policy not currently used for any logical server.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
Related Information
Syntax
Parameters
Fails with an error, when one or more logical server memberships exist for the multiplex server being
dropped. Use the WITH DROP MEMBERSHIP clause to drop the multiplex server along with all of its
memberships.
WITH DROP LOGICAL SERVER
Note
The WITH DROP LOGICAL SERVER clause is only valid when dropping the last secondary server. An
error is reported otherwise.
Remarks
Shut down each multiplex server before dropping it. This statement automatically commits.
If not already stopped as recommended, the dropped server automatically shuts down after executing this
statement.
Dropping the last secondary server converts the multiplex back to a simplex SAP IQ server. After dropping the
last secondary server within the multiplex, the coordinator automatically shuts down. If required, restart it.
Privileges
Requires the MANAGE MULTIPLEX system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Examples
Related Information
Syntax
Parameters
owner
The owner of the mutex. <owner> can also be specified using an indirect identifier (for example,
[@<variable-name>]).
mutex-name
The name of the mutex. <mutex-name> can also be specified using an indirect identifier (for example,
[@<variable-name>]).
IF EXISTS clause
Use this clause to drop a mutex only if it exists. If a mutex does not exist and this clause is specified, then
nothing happens and no error is returned.
Remarks
If the mutex is locked by another connection, the drop operation proceeds without blocking, but the mutex
persists in the namespace until it is released. Connections waiting on the mutex immediately receive an error
indicating that the object has been dropped.
Privileges
For a temporary mutex, you must be the connection that created the mutex.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
Example
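A minimal sketch; the mutex name protected_section is hypothetical:

```sql
-- Create a mutex, then drop it. IF EXISTS suppresses the
-- error if the mutex has already been dropped.
CREATE MUTEX protected_section;
DROP MUTEX IF EXISTS protected_section;
```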
Related Information
Removes a user-defined role from the database or converts a user-extended role to a regular user.
Syntax
Parameters
role_name
The <role_name> must exist in the database.
FROM USER clause
Required to convert a user-extended role back to acting as a regular user rather than removing it from the
database.
WITH REVOKE clause
Required when dropping a standalone or user-extended role to which users have been granted the
underlying system privileges of the role. The grant can have been made with either the WITH ADMIN
OPTION or WITH NO ADMIN OPTION clause.
Remarks
A user-defined role can be dropped from the database or converted back to a regular user at any time, as long
as all remaining dependent roles meet the minimum required number of administrative users with active passwords.
Privileges
Requires administrative rights over the role being dropped. If the role being dropped owns objects, none of
those objects can be in use by any user in any session at the time the DROP ROLE statement is executed.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example converts a user-extended role named Joe that has not been granted to other users
or roles back to a regular user:
● The following example drops a user-extended role named Jack that has not been granted to other users or
roles from the database:
● The following example converts a user-extended role named Sam that has been granted to other users or
roles back to a regular user:
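The three examples above might be sketched as follows, assuming the FROM USER and WITH REVOKE clauses of the DROP ROLE syntax:

```sql
-- Convert user-extended role Joe back to a regular user:
DROP ROLE FROM USER Joe;

-- Drop user-extended role Jack from the database:
DROP ROLE Jack;

-- Convert Sam back to a regular user, revoking the role's
-- underlying system privileges from its grantees:
DROP ROLE FROM USER Sam WITH REVOKE;
```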
Related Information
Drops a semaphore.
Syntax
Parameters
owner
The owner of the semaphore. <owner> can also be specified using an indirect identifier (for example,
'[@<variable-name>]').
semaphore-name
The name of the semaphore. <semaphore-name> can also be specified using an indirect identifier (for
example, '[@<variable-name>]').
IF EXISTS clause
Use this clause to drop a semaphore only if it exists. If a semaphore does not exist and this clause is
specified, then nothing happens and no error is returned.
Remarks
For a temporary semaphore, you must be the connection that created the semaphore.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
Standards
Example
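A minimal sketch; the semaphore name work_queue is hypothetical:

```sql
-- Create a counting semaphore, then drop it. IF EXISTS
-- suppresses the error if it does not exist.
CREATE SEMAPHORE work_queue START WITH 2;
DROP SEMAPHORE IF EXISTS work_queue;
```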
Related Information
Drops a sequence. This statement applies to SAP IQ catalog store tables only.
Syntax
Remarks
If the named sequence cannot be located, an error message is returned. When you drop a sequence, all
synonyms for the name of the sequence are dropped automatically by the database server.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
None
Standards
Example
The following example creates and then drops a sequence named Test:
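A sketch of the example described above:

```sql
CREATE SEQUENCE Test;
DROP SEQUENCE Test;
```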
Related Information
Syntax
Remarks
Before DROP SERVER succeeds, drop all the proxy tables that have been defined for the remote server.
Privileges
Requires the SERVER OPERATOR system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Side Effects
Automatic commit
Examples
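A hedged sketch, assuming a remote server named ase_prod previously defined with CREATE SERVER:

```sql
-- Any proxy tables defined for the remote server must be
-- dropped before DROP SERVER succeeds.
DROP SERVER ase_prod;
```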
Related Information
Syntax
Remarks
Privileges
Requires the MANAGE ANY WEB SERVICE system privilege. See GRANT System Privilege Statement [page
1511] for assistance with granting privileges.
Examples
Related Information
Syntax
Parameters
IF EXISTS
Prevents an error from being returned when the DROP SPATIAL REFERENCE SYSTEM statement
attempts to remove a spatial reference system that does not exist.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Related Information
Syntax
Parameters
IF EXISTS
Prevents an error from being returned when the DROP SPATIAL UNIT OF MEASURE statement attempts
to remove a spatial unit of measure that does not exist.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Examples
The following example drops a fictitious spatial unit of measure named Test:
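A sketch of the example described above:

```sql
-- IF EXISTS suppresses the error when the unit is absent.
DROP SPATIAL UNIT OF MEASURE IF EXISTS "Test";
```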
Related Information
Frees resources used by the named prepared statement. These resources are allocated by a successful
PREPARE statement, and are normally not freed until the database connection is released.
Syntax
Parameters
statement-name
Identifier or host-variable
Remarks
To drop the statement, you must first have prepared the statement.
Privileges
None
Standards
Examples
Related Information
Note
Syntax
Remarks
Privileges
The privilege required to drop a text configuration depends on ownership. See GRANT System Privilege
Statement [page 1511] for assistance with granting privileges.
Side Effects
Automatic commit
Examples
The following example creates and drops the mytextconfig text configuration object:
Related Information
Note
Syntax
Parameters
ON
Remarks
You must drop dependent TEXT indexes before you can drop a text configuration object.
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Side Effects
Automatic commit
The following example creates and drops the TextIdx TEXT index:
Related Information
Removes a trigger from the database. This statement applies to SAP IQ catalog store tables only.
Syntax
Remarks
Use the IF EXISTS clause if you do not want an error returned when the DROP statement attempts to remove a
database object that does not exist.
Privileges
The privilege required to drop a trigger depends on ownership. See GRANT System Privilege Statement [page
1511] or GRANT Object-Level Privilege Statement [page 1502] for assistance with granting privileges.
Side effects
Automatic commit.
DROP TRIGGER comprises part of optional ANSI/ISO SQL Language Feature T211, "Basic trigger
capability". The IF EXISTS clause is not in the standard.
Example
This example creates, and then drops, a trigger called emp_upper_postal_code to ensure that postal codes
are in upper case before updating the Employees table. If the trigger does not exist, an error is returned.
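A hedged sketch of the example described above, assuming the sample Employees table with a PostalCode column:

```sql
CREATE TRIGGER emp_upper_postal_code
BEFORE UPDATE OF PostalCode ON Employees
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
    SET new_row.PostalCode = UPPER( new_row.PostalCode );
END;

-- Returns an error if the trigger does not exist:
DROP TRIGGER emp_upper_postal_code;
```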
Related Information
Removes a user.
Syntax
Parameters
user-name
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Note
When dropping a user, any objects owned by this user and any permissions granted by this user are also
removed.
Standards
Examples
The following example drops the user SQLTester from the database:
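The example described above:

```sql
DROP USER SQLTester;
```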
Related Information
Syntax
Parameters
identifier
Specify the owner of the database-scope variable. If <owner> is not specified, the database server looks
for a database-scope variable named <identifier> owned by the user executing the statement. If none
is found, the database server looks for a database-scope variable named <identifier> owned by
PUBLIC.
IF EXISTS clause
Specify this clause to allow the statement to complete without returning an error if a variable with the
specified name (and/or owner, if specified) is not found.
Remarks
Connection-scope variables are also automatically dropped when the database connection is terminated.
Database-scope variables must be explicitly dropped.
If a statement is still accessing a database-scope variable at the time it is dropped, then the variable is still
available in memory for that statement only.
Variables are often used for large objects, so dropping them after use or setting them to NULL can free up
significant resources such as disk space and memory.
Privileges
The privilege varies by the variable scope and ownership. See GRANT System Privilege Statement [page 1511]
for assistance with granting privileges.
Side effects
Connection-scope variables: No side effects are associated with dropping a connection-scope variable.
Standards
Related Information
Syntax
EXECUTE <statement-name>
... [ { USING DESCRIPTOR <sqlda-name> | USING <host-variable-list> } ]
... [ { INTO DESCRIPTOR <into-sqlda-name> | INTO <into-host-variable-list> } ]
... [ ARRAY :<nnn> ]
Syntax 2 – Short Form to PREPARE and EXECUTE a Statement Not Containing Bind Variables or
Output
statement-name
Identifier or host-variable.
sqlda-name
Identifier.
into-sqlda-name
Identifier.
statement
String or host-variable.
USING
OUTPUT from a SELECT statement or a CALL statement is put either into the variables in the variable list
or into the program data areas described by the named SQLDA. The correspondence is one to one from
the OUTPUT (selection list or parameters) to either the host variable list or the SQLDA descriptor array.
INTO
If used with an INSERT statement, the inserted row is returned in the second descriptor. For example,
when using autoincrement primary keys that generate primary-key values, EXECUTE provides a
mechanism to refetch the row immediately and determine the primary-key value assigned to the row.
ARRAY
Used with prepared INSERT statements to allow wide inserts, which insert more than one row at a time and
which might improve performance. The value nnn is the number of rows to be inserted. The SQLDA must
contain nnn * (columns per row) variables. The first row is placed in SQLDA variables 0 to (columns per
row)-1, and so on. Similarly, the ARRAY clause can be used for wide updates, deletes, and merges using
prepared UPDATE, DELETE, and MERGE statements.
Remarks
Syntax 1 – If the dynamic statement contains host variable placeholders, which supply information for the
request (bind variables), then either the <sqlda-name> must specify a C variable, which is a pointer to a
SQLDA containing enough descriptors for all bind variables occurring in the statement, or the bind variables
must be supplied in the <host-variable-list>.
Syntax 2 – The SQL statement contained in the string or host variable is immediately executed and is dropped
on completion.
EXECUTE can be used for any SQL statement that can be prepared. Cursors are used for SELECT statements or
CALL statements that return many rows from the database.
Note
After successful execution of an INSERT, UPDATE, or DELETE statement, the sqlerrd[2] field of the SQLCA
(SQLCOUNT) is filled in with the number of rows affected by the operation.
Standards
Examples
Related Information
Invokes a procedure, as an SAP Adaptive Server Enterprise-compatible alternative to the CALL statement.
Syntax
Remarks
EXECUTE executes a stored procedure, optionally supplying procedure parameters and retrieving output values
and return status information.
EXECUTE is implemented for Transact-SQL compatibility, but can be used in either Transact-SQL or SAP IQ
batches and procedures.
Note
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Examples
● Execute the procedure, supplying the input value of 23 for the parameter. If you are connected from an
Open Client application, PRINT messages are displayed on the client window. If you are connected from an
ODBC or Embedded SQL application, messages display on the database server window:
EXECUTE p1 23
● An alternative way of executing the procedure, which is useful if there are several parameters:
EXECUTE p1 @var = 23
EXECUTE p1
● Execute the procedure and store the return value in a variable for checking return status:
EXECUTE @status = p1 23
Related Information
Extends the range of statements that can be executed from within procedures. It lets you execute dynamically
prepared statements, such as statements that are constructed using the parameters passed in to a procedure.
Syntax
Syntax 1
<execute-option> ::=
WITH QUOTES [ ON | OFF ]
| WITH ESCAPES { ON | OFF }
| WITH RESULT SET { ON | OFF }
Syntax 2
EXECUTE ( <string-expression> )
Parameters
WITH QUOTES
Any double quotes in the string expression are assumed to delimit an identifier. When not specified, the
treatment of double quotes in the string expression depends on the current setting of the
QUOTED_IDENTIFIER database option.
WITH QUOTES is useful when an object name that is passed into the stored procedure is used to construct
the statement that is to be executed, but the name might require double quotes and the procedure might
be called when QUOTED_IDENTIFIER is set to OFF.
WITH ESCAPES
When set to OFF, causes any escape sequences (such as \n, \x, or \\) in the string expression to be ignored.
For example, two consecutive backslashes remain as two backslashes, rather than being converted to a single
backslash. The default setting is ON.
You can use WITH ESCAPES OFF for easier execution of dynamically constructed statements referencing
file names that contain backslashes.
string-expression
In some contexts, escape sequences in the <string-expression> are transformed before EXECUTE
IMMEDIATE is executed. For example, compound statements are parsed before being executed, and
escape sequences are transformed during this parsing, regardless of the WITH ESCAPES setting. In these
contexts, WITH ESCAPES OFF prevents further translations from occurring. For example:
BEGIN
DECLARE String1 LONG VARCHAR;
DECLARE String2 LONG VARCHAR;
EXECUTE IMMEDIATE
'SET String1 = ''One backslash: \\\\ ''';
EXECUTE IMMEDIATE WITH ESCAPES OFF
'SET String2 = ''Two backslashes: \\\\ ''';
SELECT String1, String2
END
WITH RESULT SET
When specified with ON, the EXECUTE IMMEDIATE statement returns a result set. With this clause, the
containing procedure is marked as returning a result set. If you do not include this clause and the statement
produces a result set, an error is reported when the procedure is called.
Note
The default option is OFF, meaning that no result set is produced when the statement is executed.
Remarks
Literal strings in the statement must be enclosed in single quotes, and must differ from any existing statement
name in a PREPARE or EXECUTE IMMEDIATE statement. The statement must be on a single line.
The statement is executed with the permissions of the owner of the procedure, not with the permissions of the
user who calls the procedure.
Privileges
None
Side effects
None. However, if the statement is a data definition statement with an automatic commit as a side effect, then
that commit does take place.
Standards
Examples
The following example creates a table, where the table name is supplied as a parameter to the procedure. The
full EXECUTE IMMEDIATE statement must be on a single line:
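A hedged sketch of the example described above; the procedure and parameter names are illustrative:

```sql
CREATE PROCEDURE CreateTableProc( IN tablename CHAR(30) )
BEGIN
    -- The dynamic statement must be on a single line.
    EXECUTE IMMEDIATE 'CREATE TABLE ' || tablename || ' ( column1 INT PRIMARY KEY )';
END;

CALL CreateTableProc( 'mytable' );
```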
Related Information
Syntax
Remarks
Closes the Interactive SQL window, if you are running Interactive SQL as a windowed program, or terminates
Interactive SQL altogether when run in command-prompt (batch) mode. In both cases, the database
connection is also closed. Before closing the database connection, Interactive SQL automatically executes a
COMMIT statement, if the COMMIT_ON_EXIT option is set to ON. If this option is set to OFF, Interactive SQL
performs an implicit ROLLBACK. By default, the COMMIT_ON_EXIT option is set to ON.
The optional return code can be used in batch files to indicate success or failure of the commands in an
Interactive SQL command file. The default return code is 0.
Privileges
None
Side Effects
● Automatically performs a commit, if option COMMIT_ON_EXIT is set to ON (the default); otherwise this
statement performs an implicit rollback.
● On Windows operating systems, the optional return value is available as ERRORLEVEL.
Standards
Examples
● The following example sets the Interactive SQL return value to 1 if there are any rows in table T, or to 0 if T
contains no rows:
Note
You cannot write the following statement, because EXIT is an Interactive SQL statement (not a SQL
statement), and you cannot include any Interactive SQL statement in other SQL block statements:
Related Information
Retrieves one row from the named cursor. The cursor must have been previously opened.
Syntax
FETCH
{ NEXT | PRIOR | FIRST | LAST
| ABSOLUTE <row-count> | RELATIVE <row-count> }
... <cursor-name>
... { INTO <host-variable-list>
| USING DESCRIPTOR <sqlda-name>
| INTO <variable-list> }
... [ PURGE ] [ BLOCK <n> ] [ ARRAY <fetch-count> ]
... [ IQ CACHE <row-count> ]
Go to:
● Remarks
● Privileges
● Standards
● Examples
(back to top)
NEXT
(Default) Causes the cursor to advance one row before the row is fetched.
PRIOR
Causes the cursor to move back one row before the row is fetched.
ABSOLUTE
Used to go to a particular row. A zero indicates the position before the first row.
A one (1) indicates the first row, and so on. Negative numbers are used to specify an absolute position from
the end of the cursor. A negative one (-1) indicates the last row of the cursor. FIRST is a short form for
ABSOLUTE 1. LAST is a short form for ABSOLUTE -1.
Note
SAP IQ handles the FIRST, LAST, ABSOLUTE, and negative RELATIVE clauses less efficiently than some
other DBMS products, so there is a performance impact when using them.
RELATIVE
Moves the cursor by a specified number of rows in either direction before fetching.
A positive number indicates moving forward and a negative number indicates moving backwards. Thus, a
NEXT is equivalent to RELATIVE 1 and PRIOR is equivalent to RELATIVE -1. RELATIVE 0 retrieves the same
row as the last fetch statement on this cursor.
row-count
If it is not specified, then FETCH positions the cursor only. OPEN initially positions the cursor before the
first row. An optional positional parameter can be specified that allows the cursor to be moved before a row
is fetched.
PURGE
(Embedded SQL only) Causes the client to flush its buffers of all rows and then send the fetch request to
the server. This fetch request may return a block of rows.
BLOCK n
ARRAY fetch-count
(Embedded SQL only) Allows wide fetches, which retrieve more than one row at a time, and which might
improve performance. To use wide fetches in Embedded SQL, include the FETCH statement in your code,
where ARRAY nnn is the last item of the FETCH statement:
The fetch count nnn can be a host variable. The SQLDA must contain nnn * (columns per row) variables.
The first row is placed in SQLDA variables 0 to (columns per row) -1, and so on.
IQ CACHE row-count
Specifies the maximum number of rows buffered in the FIFO queue. If you do not specify a value for IQ
CACHE, the value of the CURSOR_WINDOW_ROWS database option is used. The default setting of
CURSOR_WINDOW_ROWS is 200.
Remarks
(back to top)
One row from the result of SELECT is put into the variables in the variable list. The correspondence from the
select list to the host variable list is one-to-one.
One or more rows from the result of SELECT are put either into the variables in the variable list or into the
program data areas described by the named SQLDA. In either case, the correspondence from the select list to
either the host variable list or the SQLDA descriptor array is one-to-one.
A cursor declared FOR READ ONLY sees the version of table(s) on which the cursor is declared when the cursor
is opened, not the version of table(s) at the time of the first FETCH.
If the FETCH includes a positioning parameter and the position is outside the allowable cursor positions, then
the SQLE_NOTFOUND warning is issued.
DECLARE CURSOR must appear before FETCH in the C source code, and the OPEN statement must be executed
before FETCH. If a host variable is being used for the cursor name, then the DECLARE statement actually
generates code and thus must be executed before FETCH.
In the multiuser environment, rows can be fetched by the client more than one at a time. This is referred to as
block fetching or multirow fetching. The first fetch causes several rows to be sent back from the server.
If the SQLSTATE_NOTFOUND warning is returned on the fetch, then the sqlerrd[2] field of the SQLCA
(SQLCOUNT) contains the number of rows that the attempted fetch exceeded the allowable cursor positions.
(A cursor can be on a row, before the first row or after the last row.) The value is 0 if the row was not found but
the position is valid, for example, executing FETCH with a RELATIVE 1 clause when positioned on the last row of
a cursor. The value is positive if the attempted fetch was further beyond the end of the cursor, and negative if
the attempted fetch was further before the beginning of the cursor.
After successful execution of the FETCH statement, the sqlerrd[1] field of the SQLCA (SQLIOCOUNT) is
incremented by the number of input/output operations required to perform the fetch. This field is actually
incremented on every database statement.
The server returns in SQLCOUNT the number of records fetched and always returns a SQLCOUNT greater than
zero unless there is an error. Older versions of the server only return a single row and the SQLCOUNT is set to
zero. Thus a SQLCOUNT of zero with no error condition indicates one valid row has been fetched.
Privileges
(back to top)
The cursor must be opened and the user must have SELECT object-level permission on the tables referenced in
the declaration of the cursor.
See GRANT Object-Level Privilege Statement [page 1502] for assistance with granting privileges
Standards
(back to top)
Examples
(back to top)
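A minimal SQL sketch, assuming the sample Employees table (the typical FETCH examples are Embedded SQL, so this compound-statement form is illustrative):

```sql
BEGIN
    DECLARE emp_name CHAR(40);
    DECLARE cur CURSOR FOR
        SELECT Surname FROM Employees ORDER BY Surname;
    OPEN cur;
    FETCH NEXT cur INTO emp_name;  -- advance to and fetch the first row
    CLOSE cur;
END;
```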
Related Information
Repeats the execution of a statement list once for each row in a cursor.
Syntax
[ <statement-label>: ]
FOR <for-loop-name> AS <cursor-name> [ <cursor-type> ] CURSOR
{ FOR <statement>
... [ { FOR UPDATE <cursor-concurrency> | FOR READ ONLY } ]
| USING <variable-name> }
DO <statement-list>
END FOR [ <statement-label> ]
<cursor-type> ::=
NO SCROLL
| DYNAMIC SCROLL
| SCROLL
| INSENSITIVE
| SENSITIVE
<cursor-concurrency> ::=
BY { VALUES | TIMESTAMP | LOCK }
NO SCROLL
A cursor declared NO SCROLL is restricted to moving forward through the result set using FETCH NEXT
and FETCH RELATIVE 0 seek operations. Because a row cannot be revisited once the cursor leaves it,
there are no sensitivity restrictions on the cursor. When a NO SCROLL cursor is requested, the database
server supplies the most efficient kind of cursor, which is an asensitive cursor.
DYNAMIC SCROLL
DYNAMIC SCROLL is the default cursor type. DYNAMIC SCROLL cursors can use all formats of the FETCH
statement. When a DYNAMIC SCROLL cursor is requested, the database server supplies an asensitive
cursor. When using cursors there is always a trade-off between efficiency and consistency. Asensitive
cursors provide efficient performance at the expense of consistency.
SCROLL
A cursor declared SCROLL can use all formats of the FETCH statement. When a SCROLL cursor is
requested, the database server supplies a value-sensitive cursor. The database server must execute value-
sensitive cursors in such a way that result set membership is guaranteed. DYNAMIC SCROLL cursors are
more efficient and should be used unless the consistent behavior of SCROLL cursors is required.
INSENSITIVE
A cursor declared INSENSITIVE has its values and membership fixed over its lifetime. The result set of the
SELECT statement is materialized when the cursor is opened. FETCHING from an INSENSITIVE cursor
does not see the effect of any other INSERT, UPDATE, MERGE, PUT, or DELETE statement from any
connection, including the connection that opened the cursor.
SENSITIVE
A cursor declared SENSITIVE is sensitive to changes to membership or values of the result set.
Remarks
FOR is a control statement that lets you execute a list of SQL statements once for each row in a cursor.
The FOR statement is equivalent to a compound statement with a DECLARE for the cursor and a DECLARE of a
variable for each column in the result set of the cursor, followed by a loop that fetches one row from the cursor
into the local variables and executes <statement-list> once for each row in the cursor.
The name and data type of the local variables that are declared are derived from the <statement> used in the
cursor. With a SELECT statement, the data type is the data type of the expressions in the select list. The names
are the select list item aliases where they exist; otherwise, they are the names of the columns. Any select list
item that is not a simple column reference must have an alias. With a CALL statement, the names and data
types are taken from the RESULT clause in the procedure definition.
The LEAVE statement can be used to resume execution at the first statement after the END FOR. If the ending
<statement-label> is specified, it must match the beginning <statement-label>.
Privileges
None
Standards
Example
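A minimal sketch, assuming the sample Employees table:

```sql
BEGIN
    -- The local variable name is derived from the select-list alias.
    FOR names AS cur CURSOR FOR
        SELECT GivenName AS name FROM Employees
    DO
        MESSAGE name TO CLIENT;
    END FOR;
END;
```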
Related Information
Sends native syntax to a remote server, enabling users to specify the server to which a passthrough connection
is required.
Syntax
FORWARD TO [ <server-name> ]
Parameters
server-name
The name of the remote server to which the passthrough connection is made.
statement
A command in the native syntax of the remote server. The command or group of commands is enclosed in
curly braces ({}) or single quotes.
Remarks
If you specify a <server-name>, but do not specify a statement in the FORWARD TO query, your session enters
passthrough mode, and all subsequent queries are passed directly to the remote server. To turn passthrough
mode OFF, issue the FORWARD TO statement without a <server-name> specification.
Note
The FORWARD TO statement is a server directive and cannot be used in stored procedures, triggers, events,
or batches.
FORWARD TO enables users to specify the server to which a passthrough connection is required. The statement
can be used:
When establishing a connection to <server-name> on behalf of the user, the server uses:
If the connection cannot be made to the server specified, the reason is contained in a message returned to the
user.
After statements are passed to the requested server, any results are converted into a form that can be
recognized by the client program.
Privileges
None
The remote connection is set to AUTOCOMMIT (unchained) mode for the duration of the FORWARD TO session.
Any work that was pending prior to the FORWARD TO statement is automatically committed.
Standards
Examples
The following example shows a passthrough session with the remote server aseprod:
FORWARD TO aseprod
SELECT * from titles
SELECT * from authors
FORWARD TO
Related Information
Syntax
<table-name> ::=
[ <userid>. ]<table-name>
[ [ AS ] <correlation-name> ]
[ FORCE INDEX ( <index-name> ) ]
<view-name> ::=
[ <userid>.]<view-name> [ [ AS ] <correlation-name> ]
<procedure-name> ::=
[ <owner>. ]<procedure-name> ( [ <parameter>, ... ] )
[ WITH ( <column-name datatype>, ... ) ]
[ [ AS ] <correlation-name> ]
<parameter> ::=
<scalar-expression> | <table-parameter>
<table-parameter> ::=
TABLE ( <select-statement> ) [ OVER ( <table-parameter-over> ) ]
<table-parameter-over> ::=
[ PARTITION BY { ANY | NONE | <table-expression> } ]
[ ORDER BY { <expression> | <integer> }
[ ASC | DESC ] [, ...] ]
<derived-table> ::=
( <select-statement> )
[ AS ] <correlation-name> [ ( <column-name>, ... ) ]
<join-expression> ::=
<table-expression> <join-operator> <table-expression>
[ ON <join-condition> ]
<join-operator> ::=
[ KEY | NATURAL ] [ <join-type> ] JOIN | CROSS JOIN
<join-type> ::=
INNER
| LEFT [ OUTER ]
| RIGHT [ OUTER ]
| FULL [ OUTER ]
<openstring-expression> ::=
OPENSTRING ( { FILE | VALUE } <string-expression> )
WITH ( <rowset-schema> )
[ OPTION ( <scan-option> ... ) ]
[ AS ] <correlation-name>
<apply-expression> ::=
<table-expression> { CROSS | OUTER } APPLY <table-expression>
<contains-expression> ::=
{ <table-name> | <view-name> } CONTAINS
( <column-name> [,...], <contains-query> )
[ [ AS ] <score-correlation-name> ]
<column-schema-list> ::=
{ <column-name user-or-base-type> | filler( ) } [ , ... ]
<column-list> ::=
{ <column-name> | filler( ) } [ , ... ]
<scan-option> ::=
BYTE ORDER MARK { ON | OFF }
| COMMENTS INTRODUCED BY <comment-prefix>
| DELIMITED BY <string>
| ENCODING <encoding>
| ESCAPE CHARACTER <character>
| ESCAPES { ON | OFF }
| FORMAT { TEXT | BCP }
| HEXADECIMAL { ON | OFF }
| QUOTE <string>
| QUOTES { ON | OFF }
| ROW DELIMITED BY <string>
| SKIP <integer>
| STRIP { ON | OFF | LTRIM | RTRIM | BOTH }
<dml-derived-table> ::=
( <dml-statement> ) REFERENCING ( [ <table-version-names> | NONE ] )
<dml-statement> ::=
<insert-statement>
| <update-statement>
| <delete-statement>
<table-version-names> ::=
OLD [ AS ] <correlation-name> [ FINAL [ AS ] <correlation-name> ]
| FINAL [ AS ] <correlation-name>
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
table-name
The name of a base or temporary table to include in the query.
view-name
Specifies a view to include in the query. As with tables, views owned by a different user can be qualified by
specifying the user ID. Views owned by groups to which the current user belongs are found by default
without specifying the user ID. Although the syntax permits table hints on views, these hints have no effect.
procedure-name
A stored procedure that returns a result set. This clause applies to the FROM clause of SELECT statements
only. The parentheses following the procedure name are required even if the procedure does not take
parameters. DEFAULT can be specified in place of an optional parameter.
parameter
If a subquery is used to define the TABLE parameter, then the following restrictions must hold:
Note
PARTITION BY
Logically specifies how the invocation of the function will be performed by the execution engine. The
execution engine must invoke the function for each partition and the function must process a whole
partition in each invocation. PARTITION BY or ORDER BY clauses must refer to the columns of the derived
table and outer references. An expression in the expression-list can be an integer K, which refers to the Kth
column of the TABLE input parameter.
PARTITION BY clause also specifies how the input data must be partitioned such that each invocation of
the function will process exactly one partition of data. The function must be invoked the number of times
equal to the number of partitions. For a TPF (table parameterized function), the parallelism characteristics
are established through dynamic negotiation between the server and the UDF at runtime. If the TPF can be
executed in parallel, for N input partitions the function can be instantiated M times, with M <= N. Each
instantiation of the function can be invoked more than once, each invocation consuming exactly one partition.
You can specify only one TABLE input parameter with a PARTITION BY <expression-list> or PARTITION
BY ANY clause. For all other TABLE input parameters, you must specify an explicit or implicit PARTITION BY
NONE clause.
The execution engine can invoke the function in any order of the partitions and the function is assumed
to return the same result sets regardless of the partitions order. Partitions cannot be split among two
invocations of the function.
ORDER BY
Specifies that the input data in each partition is expected to be sorted by <expression-list> by the
execution engine. The UDF expects each partition to have this physical property. If only one partition exists,
the whole input data is ordered based on the ORDER BY specification. An ORDER BY clause can be specified
for any of the TABLE input parameters with a PARTITION BY NONE clause or without a PARTITION BY clause.
derived-table
derived-table
You can supply a SELECT statement instead of a table or view name in the FROM clause. A SELECT
statement used in this way is called a derived table, and it must be given an alias.
join-expression, join-operator, join-type
● CROSS JOIN – returns the Cartesian product (cross product) of the two source tables
● NATURAL JOIN – compares for equality all corresponding columns with the same names in two tables
(a special case equijoin; columns are of same length and data type)
● KEY JOIN – restricts foreign-key values in the first table to be equal to the primary-key values in the
second table
● INNER JOIN – discards all rows from the result table that do not have corresponding rows in both
tables
● LEFT OUTER JOIN – preserves unmatched rows from the left table, but discards unmatched rows from
the right table
● RIGHT OUTER JOIN – preserves unmatched rows from the right table, but discards unmatched rows
from the left table
● FULL OUTER JOIN – retains unmatched rows from both the left and the right tables
Do not mix comma-style joins and keyword-style joins in the FROM clause. The same query can be written
in either join style. The ANSI keyword-style join syntax is preferable.
The ON clause filters the data of inner, left, right, and full joins. Cross joins do not have an ON clause. In an
inner join, the ON clause is equivalent to a WHERE clause. In outer joins, however, the ON and WHERE
clauses are different. The ON clause in an outer join filters the rows of a cross product and then includes in
the result the unmatched rows extended with nulls. The WHERE clause then eliminates rows from both the
matched and unmatched rows produced by the outer join. You must take care to ensure that unmatched
rows you want are not eliminated by the predicates in the WHERE clause.
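As an illustrative sketch of this difference, using the sample Customers and SalesOrders tables (the Region predicate is an assumption about the data):

```sql
-- Predicate in ON: unmatched customers are kept, extended with NULLs.
SELECT c.ID, o.ID AS OrderID
FROM Customers c
LEFT OUTER JOIN SalesOrders o
  ON c.ID = o.CustomerID AND o.Region = 'Eastern';

-- Predicate in WHERE: the NULL-extended rows fail the test and are
-- eliminated, so the outer join behaves like an inner join here.
SELECT c.ID, o.ID AS OrderID
FROM Customers c
LEFT OUTER JOIN SalesOrders o ON c.ID = o.CustomerID
WHERE o.Region = 'Eastern';
```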
Specify an OPENSTRING clause to query within a file or a BLOB, treating the content of these sources as a
set of rows. When doing so, you also specify information about the schema of the file or BLOB for the result
set to be generated, since you are not querying a defined structure such as a table or view. This clause
applies to the FROM clause of a SELECT statement. It is not supported for UPDATE or DELETE statements.
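For example, a sketch along these lines reads a delimited file as a rowset (the file name and schema are assumptions):

```sql
SELECT *
FROM OPENSTRING ( FILE 'employees.txt' )
     WITH ( emp_id INT, emp_name VARCHAR(128) ) AS E;
```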
apply-expression
Use the CONTAINS clause after a table name to filter the table, and return only those rows matching the
full text query specified with contains-query. Every matching row of the table is returned, along with a
score column that can be referred to using score-correlation-name, if it is specified. If score-correlation-
name is not specified, then the score column can be referred to by the default correlation name, contains.
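A hedged sketch of the CONTAINS clause (it assumes a MarketingInformation table with a TEXT index on its Description column; the names are illustrative):

```sql
-- ct is the score-correlation-name; ct.score ranks each match.
SELECT M.ID, ct.score
FROM MarketingInformation M CONTAINS ( M.Description, 'cotton' ) ct
ORDER BY ct.score DESC;
```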
dml-derived-table
Supports the use of a DML statement (INSERT, UPDATE, or DELETE) as a table expression in a query's
FROM clause.
Remarks
The SELECT statement requires a table list to specify which tables are used by the statement.
Note
Although this description refers to tables, it also applies to views unless otherwise noted.
The FROM table list creates a result set consisting of all the columns from all the tables specified. Initially, all
combinations of rows in the component tables are in the result set, and the number of combinations is usually
reduced by join conditions and/or WHERE conditions.
Tables owned by a different user can be qualified by specifying the <userid>. Tables owned by roles to which
the current user belongs are found by default without specifying the user ID.
The correlation name is used to give a temporary name to the table for this SQL statement only. This is useful
when referencing columns that must be qualified by a table name but the table name is long and cumbersome
to type. The correlation name is also necessary to distinguish between table instances when referencing the
same table more than once in the same query. If no correlation name is specified, then the table name is used
as the correlation name for the current statement.
If the same correlation name is used twice for the same table in a table expression, that table is treated as if it
were only listed once. For example, in:
SELECT *
FROM SalesOrders
KEY JOIN SalesOrderItems,
SalesOrders
KEY JOIN Employees
The two instances of the SalesOrders table are treated as one instance that is equivalent to:
SELECT *
FROM SalesOrderItems
KEY JOIN SalesOrders
KEY JOIN Employees
For example, the following query uses correlation names to reference two instances of the same Person table:
SELECT *
FROM Person HUSBAND, Person WIFE
For information on using the FROM clause with TEXT indexes, see SAP IQ Administration: Unstructured Data
Analytics.
Performance Considerations
Depending on the query, SAP IQ allows between 16 and 64 tables in the FROM clause with the optimizer turned
on; however, performance might suffer if you have more than 16 to 18 tables in the FROM clause in very complex
queries.
Note
If you omit the FROM clause, or if all tables in the query are in the SYSTEM dbspace, the query is processed
by SAP SQL Anywhere instead of SAP IQ and might behave differently, especially with respect to syntactic
and semantic restrictions and the effects of option settings.
If you have a query that does not require a FROM clause, you can force the query to be processed by SAP IQ
by adding the clause FROM iq_dummy, where iq_dummy is a one-row, one-column table that you create in
your database.
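For example (a sketch; the column name and the NOW() expression are illustrative):

```sql
CREATE TABLE iq_dummy ( dummy_col INT );
INSERT INTO iq_dummy VALUES ( 1 );

-- Forces processing by SAP IQ rather than SAP SQL Anywhere:
SELECT NOW() FROM iq_dummy;
```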
Privileges
The FILE clause of <openstring-expression> requires the READ FILE system privilege.
The TABLE clause of <openstring-expression> requires the user to own the referenced tables, or to have
the SELECT ANY TABLE privilege.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
...
FROM Employees
...
...
FROM Employees NATURAL JOIN Departments
...
...
FROM Customers
KEY JOIN SalesOrders
KEY JOIN SalesOrderItems
KEY JOIN Products
...
● The following example shows a query that illustrates how to use derived tables in a query:
● The following example shows a query that illustrates a valid FROM clause where the two references to the
same table T are treated as two different instances of the same table T:
● The following example uses a table parameterized function (TPF) and illustrates a valid FROM clause:
● The following example contains a derived table, MyDerivedTable, which ranks products in the Products
table by UnitPrice:
SELECT TOP 3 *
FROM ( SELECT Description,
Quantity,
UnitPrice,
RANK() OVER ( ORDER BY UnitPrice ASC )
AS Rank
FROM Products ) AS MyDerivedTable
ORDER BY Rank;
SELECT *
FROM Products pr, SalesOrders so, SalesOrderItems si
WHERE pr.ProductID = so.ProductID
AND pr.ProductID = si.ProductID;
SELECT *
FROM Products pr INNER JOIN SalesOrders so
ON (pr.ProductID = so.ProductID)
INNER JOIN SalesOrderItems si
ON (pr.ProductID = si.ProductID);
Related Information
Retrieves information about variables within a descriptor area, or retrieves actual data from a variable in a
descriptor area.
Syntax
<assignment> ::=
<hostvar> = { TYPE
| LENGTH
| PRECISION
| SCALE
| DATA
| INDICATOR
| NAME
| NULLABLE
| RETURNED_LENGTH }
Remarks
The value <n> specifies the variable in the descriptor area about which information is retrieved.
Type checking is performed when doing GET DESCRIPTOR ... DATA to ensure that the host variable and the
descriptor variable have the same data type. LONG VARCHAR and LONG BINARY are not supported by GET
DESCRIPTOR ... DATA.
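A hedged Embedded SQL sketch (the descriptor and host-variable names are assumptions):

```sql
EXEC SQL GET DESCRIPTOR mydesc VALUE 2 :v_type = TYPE, :v_length = LENGTH;
EXEC SQL GET DESCRIPTOR mydesc VALUE 2 :v_data = DATA;
```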
Privileges
None
Standards
Examples
Related Information
Syntax
<label> :
<sql-statement(s)>
GOTO <label_name>
Remarks
Statements in a procedure or batch can be labeled using a valid identifier followed by a colon (for example
mylabel:), provided that the label is at the beginning of a loop, conditional, or block. The label can then be
referenced in a GOTO statement, causing the execution point to move to the top of the loop/condition or the
first statement within the block.
If you nest compound statements, then you can only go to labels within the current compound statement and
any of its ancestor compound statements. You cannot go to labels located in other compound statements that
are nested within the ancestors.
The label use is not restricted to the beginning of loops, conditionals, or blocks; they can occur on any
statement. However, the same restrictions apply to using the GOTO statement within nested compound
statements.
Privileges
None
Standards
Examples
In the following example, if the GotoTest procedure is executed, then the GOTO lbl1 repositions execution to the
SET i2 = 200 statement. The returned values for column i2 in the result are 203 for all 5 rows in the result set:
id   i     i2
1    100   203
2    101   203
3    102   203
4    103   203
5    104   203
If the GotoTest procedure is changed to use GOTO lbl2 instead of GOTO lbl1, then the GOTO statement
repositions execution to the SET i2 = i2 + 1 statement immediately after the lbl2: BEGIN statement, and the
returned values in column i2 become 203, 205, 207, up to 221.
If the GotoTest procedure is changed to use GOTO lbl3, then the GOTO statement repositions execution to the
SET i2 = i2 +1 statement immediately after the lbl3: BEGIN statement, and the returned values in column i2
become 203, 204, 205, up to 212.
In the following example, the Transact-SQL batch prints the message “yes” on the server window four times:
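A Transact-SQL batch along these lines (variable and label names are assumptions) behaves that way:

```sql
DECLARE @count INT
SELECT @count = 4
restart:
PRINT 'yes'
SELECT @count = @count - 1
IF @count > 0 GOTO restart
```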
Related Information
Allows users to manage passwords for other users and administer the CHANGE PASSWORD system privilege.
Syntax
target_user_list
Users whose passwords the grantee can potentially manage. The list must consist of existing users or user-
extended roles with login passwords. Separate the user_IDs in the list with commas.
ANY
All database users with login passwords become potential target users whose passwords each grantee can
manage.
ANY WITH ROLES target_role_list
List of target roles for each grantee. Any users who are granted any of the target roles become potential
target users for each grantee. The <target_role_list> must consist of existing roles and the users who
are granted said roles must consist of database users with login passwords. Use commas to separate
multiple user_IDs.
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_ids with
commas.
WITH ADMIN OPTION
(Valid with the ANY clause only) The user can both manage passwords and grant the CHANGE PASSWORD
system privilege to another user.
WITH ADMIN ONLY OPTION
(Valid with the ANY clause only) The user can grant the CHANGE PASSWORD system privilege to another
user, but cannot manage passwords of other users.
WITH NO ADMIN OPTION
The user can manage passwords, but cannot grant the CHANGE PASSWORD system privilege to another
user.
Remarks
A user can be granted the ability to manage the password of any user in the database (ANY) or only specific
users (<target_users_list>) or members of specific roles (ANY WITH ROLES <target_roles_list>).
Administrative rights to the CHANGE PASSWORD system privilege can only be granted when using the ANY
clause.
If no clause is specified, ANY is used by default. If no administrative clause is specified in the grant statement,
the WITH NO ADMIN OPTION clause is used.
By default, the CHANGE PASSWORD system privilege is granted to the SYS_AUTH_SA_ROLE compatibility role
with the WITH NO ADMIN OPTION clause and to the SYS_AUTH_SSO_ROLE compatibility role with the ADMIN
ONLY OPTION clause, if they exist.
Each target user specified (target_users_list) must be an existing user or user-extended role with a login
password. Each target role specified (target_roles_list) must be an existing user-extended or user-defined role.
Requires the CHANGE PASSWORD system privilege granted with administrative rights. See GRANT System
Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example grants Sally and Laurel the ability to manage the password of Bob, Sam, and
Peter:
GRANT CHANGE PASSWORD (Bob, Sam, Peter) TO Sally, Laurel
● The following example grants Mary the right to grant the CHANGE PASSWORD system privilege to any
user in the database. However, since the system privilege is granted with the WITH ADMIN ONLY OPTION
clause, Mary cannot manage the password of any other user.
GRANT CHANGE PASSWORD (ANY) TO Mary WITH ADMIN ONLY OPTION
● The following example grants Steve and Joe the ability to manage the password of any member of Role1
or Role2:
GRANT CHANGE PASSWORD (ANY WITH ROLES Role1, Role2) TO Steve, Joe
Related Information
Creates a new user, and can also be used to change a password. However, it is recommended that you use the
CREATE USER statement, rather than the GRANT CONNECT statement, to create users.
Syntax
GRANT CONNECT
TO <userID> [, …]
IDENTIFIED BY <password> [, …]
userID
The user ID to create, or the existing user ID whose password is being changed. Separate multiple user_IDs
with commas.
Remarks
GRANT CONNECT can be used to create a new user or be used by any user to change their own password.
Tip
Use the CREATE USER statement rather than the GRANT CONNECT statement to create users.
If you inadvertently enter the user ID of an existing user when you are trying to add a new user, you are
actually changing the password of the existing user. You do not receive a warning because this behavior is
considered normal.
The stored procedures sp_addlogin and sp_adduser can also be used to add users. These procedures
display an error if you try to add an existing user ID.
Note
Use system procedures, not GRANT and REVOKE statements to add and remove user IDs.
A user without a password cannot connect to the database. This is useful when you are creating groups and
you do not want anyone to connect to the role user ID. To create a user without a password, do not include the
IDENTIFIED BY clause.
When specifying a password, it must be a valid identifier. Passwords have a maximum length of 255 bytes. If
the VERIFY_PASSWORD_FUNCTION database option is set to a value other than the empty string, the GRANT
CONNECT TO statement calls the function identified by the option value. The function returns NULL to indicate
that the password conforms to rules. If the VERIFY_PASSWORD_FUNCTION option is set, you can specify only
one <userid> and <password> with the GRANT CONNECT statement.
Invalid names for database user IDs and passwords include those that:
Privileges
To change your own password requires no additional privilege. To change another user's password requires the
CHANGE PASSWORD system privilege.
To create a new user requires the MANAGE ANY USER system privilege.
Examples
● The following example creates two new users for the database named Laurel and Hardy:
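A sketch of such a statement (the passwords shown are placeholders, not from the original example):

```sql
GRANT CONNECT TO Laurel, Hardy
IDENTIFIED BY pwd_laurel, pwd_hardy
```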
Related Information
Grants CREATE privilege on a specified dbspace to the specified users and roles.
Syntax
GRANT CREATE
ON <dbspace_name>
TO <user_id> [, …]
Parameters
dbspace_name
Must be the name of an existing dbspace.
user_id
Must be the name of an existing user or role. Separate multiple user_IDs with commas.
Privileges
Requires the MANAGE ANY DBSPACE system privilege. See GRANT System Privilege Statement [page 1511].
Standards
Examples
The following example grants CREATE privilege on dbspace DspHist to users Fiona and Ciaran:
GRANT CREATE ON DspHist TO Fiona, Ciaran
Related Information
Syntax
GRANT EXECUTE
ON [ <owner>.] {<procedure-name> | <user-defined-function-name> }
TO <user_id> [, …]
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
Standards
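Examples
The following sketch grants EXECUTE on a procedure (the owner, procedure, and grantee names are illustrative):

```sql
GRANT EXECUTE ON GROUPO.ShowCustomerProducts TO Laurel
```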
Related Information
Creates an explicit integrated login mapping between one or more Windows user profiles and an existing
database user ID. This allows a user who successfully logged in to their local machine to connect to a database
without having to provide a user ID or password.
Syntax
user_profile_name
Must be the name of an existing Windows user profile.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511].
Standards
Related Information
Creates a Kerberos-authenticated login mapping from one or more Kerberos principals to an existing database
user ID. This allows a user who has successfully logged in to Kerberos (user who has a valid Kerberos ticket-
granting ticket) to connect to a database without having to provide a user ID or password.
Syntax
userID
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511].
Standards
Related Information
Syntax
GRANT <object-level-privilege> [, …]
ON [ <owner>.]<object-name>
TO <user_id> [, …]
[ WITH GRANT OPTION ]
<object-level-privilege> ::=
{ ALL [ PRIVILEGES ]
| ALTER
| DELETE
| INSERT
| LOAD
| REFERENCE [ ( <column-name> [, …] ) ]
| SELECT [ ( <column-name> [, …] ) ]
| TRUNCATE
| UPDATE [ ( <column-name>, …) ] }
user_id
Must be the name of an existing user or immutable role. The list must consist of existing users with login
passwords. Separate the user_ids in the list with commas.
object-level-privilege
ALL
Grants all of the privileges listed below.
ALTER
Users can alter this table with the ALTER TABLE statement. This privilege is not allowed for views.
DELETE
Users can delete rows from this table or view.
INSERT
Users can insert rows into this table or view.
LOAD
Users can load data into this table.
REFERENCE
Users can create indexes on this table, and foreign keys that reference this table. If column names are
specified, users can reference only those columns.
SELECT
Users can look at information in this view or table. If column names are specified, then the users can
look at only those columns. SELECT privileges on columns cannot be granted for views, only for
tables.
TRUNCATE
Users can truncate this table.
UPDATE
Users can update rows in this view or table. If column names are specified, users can update only
those columns. UPDATE privileges on columns cannot be granted for views, only for tables. To update
a table, users must have both SELECT and UPDATE privilege on the table.
WITH GRANT OPTION
The named user ID is also given privileges to grant the same privileges to other user IDs.
Remarks
You can list the table privileges, or specify ALL to grant all privileges at once.
Privileges
If you own the object or have been granted the specific object privilege with the WITH GRANT OPTION clause
on the object, no additional privilege is required to grant an object-level privilege.
Standards
Related Information
Syntax
<role_name> ::=
dbo
| diagnostics
| PUBLIC
| rs_systabgroup
| SA_DEBUG
| SYS
| SYS_AUTH_SA_ROLE
| SYS_AUTH_SSO_ROLE
| SYS_AUTH_DBA_ROLE
| SYS_AUTH_RESOURCE_ROLE
| SYS_AUTH_BACKUP_ROLE
| SYS_AUTH_VALIDATE_ROLE
| SYS_AUTH_WRITEFILE_ROLE
| SYS_AUTH_WRITEFILECLIENT_ROLE
| SYS_AUTH_READFILE_ROLE
| SYS_AUTH_READFILECLIENT_ROLE
| SYS_AUTH_PROFILE_ROLE
| SYS_AUTH_USER_ADMIN_ROLE
| SYS_AUTH_SPACE_ADMIN_ROLE
| SYS_AUTH_MULTIPLEX_ADMIN_ROLE
| SYS_AUTH_OPERATOR_ROLE
| SYS_AUTH_PERMS_ADMIN_ROLE
| SYS_REPLICATION_ADMIN_ROLE
| SYS_RUN_REPLICATION_ROLE
| SYS_SPATIAL_ADMIN_ROLE
| <user-defined role name>
Parameters
role_name
Must already exist in the database. Separate multiple role names with commas.
grantee
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
WITH NO ADMIN OPTION
Each <grantee> is granted the underlying system privileges of each <role_name>, but cannot grant
<role_name> to another user.
WITH ADMIN ONLY OPTION
Each <userID> is granted administrative privileges over each <role_name>, but not the underlying
system privileges of <role_name>.
WITH ADMIN OPTION
Each userID is granted the underlying system privileges of each <role_name>, along with the ability to
grant <role_name> to another user.
WITH NO SYSTEM PRIVILEGE INHERITANCE
The underlying system privileges of the granting role are not inherited by the members of the receiving
role. However, if the receiving role is a user-extended role, the underlying system privileges are granted to
the extended user.
Remarks
● The WITH NO SYSTEM PRIVILEGE INHERITANCE clause can be used when granting select compatibility
roles to other roles. It prevents automatic inheritance of the compatibility role's underlying system
privileges by members of the role. When granted to user-extended roles, the WITH NO SYSTEM PRIVILEGE
INHERITANCE clause applies to members of the role only. The user acting as a role automatically inherits
the underlying system privileges regardless of the clause.
● The WITH NO ADMIN OPTION WITH NO SYSTEM PRIVILEGE INHERITANCE and WITH NO SYSTEM
PRIVILEGE INHERITANCE clauses are semantically equivalent.
Use of the WITH ADMIN OPTION or WITH ADMIN ONLY OPTION clause allows the grantee to grant or revoke
the role, but does not allow the grantee to drop the role.
By default, if no administrative clause is specified in the grant statement, each compatibility role is granted
with these default administrative rights:
WITH ADMIN OPTION WITH ADMIN ONLY OPTION WITH NO ADMIN OPTION
SYS_AUTH_VALIDATE_ROLE
SYS_AUTH_WRITEFILE_ROLE
SYS_AUTH_WRITEFILECLIENT_ROLE
SYS_AUTH_READFILE_ROLE
SYS_AUTH_READFILECLIENT_ROLE
SYS_AUTH_PROFILE_ROLE
SYS_AUTH_USER_ADMIN_ROLE
SYS_AUTH_SPACE_ADMIN_ROLE
SYS_AUTH_MULTIPLEX_ADMIN_ROLE
SYS_AUTH_OPERATOR_ROLE
SA_DEBUG
SYS_RUN_REPLICATION_ROLE
The SYS_AUTH_PERMS_ADMIN_ROLE role grants these underlying roles with these default administrative
rights:
SYS_AUTH_SPACE_ADMIN_ROLE
SYS_AUTH_MULTIPLEX_ADMIN_ROLE
SYS_AUTH_RESOURCE_ROLE
SYS_AUTH_VALIDATE_ROLE
SYS_AUTH_PROFILE_ROLE
SYS_AUTH_WRITEFILE_ROLE
SYS_AUTH_WRITEFILECLIENT_ROLE
SYS_AUTH_READFILE_ROLE
SYS_AUTH_READFILECLIENT_ROLE
Privileges
To grant the following system roles requires the MANAGE ROLES system privilege. See GRANT System
Privilege Statement [page 1511] for assistance with granting privileges.
● dbo
● diagnostics
● PUBLIC
● rs_systabgroup
● SA_DEBUG
● SYS
● SYS_REPLICATION_ADMIN_ROLE
● SYS_RUN_REPLICATION_ROLE
● SYS_SPATIAL_ADMIN_ROLE
To grant the following compatibility roles, you must have been granted the specific compatibility role with
administrative privilege. See Grant Compatibility Roles in the SAP IQ Installation and Update Guide for your
platform for assistance in granting compatibility roles.
● SYS_AUTH_SA_ROLE
● SYS_AUTH_SSO_ROLE
● SYS_AUTH_DBA_ROLE
● SYS_AUTH_RESOURCE_ROLE
● SYS_AUTH_BACKUP_ROLE
● SYS_AUTH_VALIDATE_ROLE
Standards
Examples
● The following example grants Sales_Role to Sally, with administrative privileges, which means she can
grant or revoke Sales_Role to other users as well as perform any authorized tasks granted by the role:
GRANT ROLE Sales_Role TO Sally WITH ADMIN OPTION
● The following example grants the compatibility role SYS_AUTH_PROFILE_ROLE to the role Sales_Admin
with no administrative rights:
GRANT ROLE SYS_AUTH_PROFILE_ROLE TO Sales_Admin WITH NO ADMIN OPTION
Sales_Admin is a standalone role, and Mary and Peter have been granted Sales_Admin. Since
SYS_AUTH_PROFILE_ROLE is an inheritable compatibility role, Mary and Peter are granted the
underlying system privileges of SYS_AUTH_PROFILE_ROLE. Since the role is granted with no
administrative rights, they cannot grant or revoke the role.
● The following example grants the compatibility role SYS_AUTH_BACKUP_ROLE to Tom with no
administrative rights:
GRANT ROLE SYS_AUTH_BACKUP_ROLE TO Tom WITH NO ADMIN OPTION
Tom is a user-extended role to which Betty and Laurel have been granted. Since
SYS_AUTH_BACKUP_ROLE is a non-inheritable compatibility role, the underlying system privileges of the
role are not granted to Betty and Laurel. However, since Tom is an extended user, the underlying system
privileges are granted directly to Tom.
Grants the ability for one user to impersonate another user and to administer the SET USER system privilege.
Syntax
Parameters
target_users_list
Must consist of existing users with login passwords; this is the potential list of target users whom grantee
users can impersonate. Separate the user IDs in the list with commas.
ANY
The potential list of target users for each grantee consists of all database users with login passwords.
ANY WITH ROLES target_roles_list
The <target_role_list> must consist of existing roles, and the potential list of target users for each
grantee must consist of database users with login passwords that have a subset of roles in
<target_role_list>. Separate the list of roles with commas.
user_id
Each <user_id> must be the name of an existing user or immutable role. The list must consist of existing
users with login passwords. Separate the user_ids in the list with commas.
WITH ADMIN OPTION
(Valid in conjunction with the ANY clause only) The user can both issue the SETUSER command to
impersonate another user and grant the SET USER system privilege to another user.
WITH ADMIN ONLY OPTION
(Valid in conjunction with the ANY clause only) The user can grant the SET USER system privilege to
another user, but cannot issue the SETUSER command to impersonate another user.
WITH NO ADMIN OPTION
The user can issue the SETUSER command to impersonate another user, but cannot grant the SET USER
system privilege to another user.
Remarks
A user can be granted the ability to impersonate any user in the database (ANY) or only specific users
(<target_users_list>) or members of specific roles (ANY WITH ROLES <target_roles_list>).
Administrative rights to the SET USER system privilege can only be granted when using the ANY clause.
If no clause is specified, ANY is used by default. If no administrative clause is specified in the grant statement,
the WITH NO ADMIN OPTION clause is used.
If regranting the SET USER system privilege to a user, the effect of the regrant is cumulative.
By default, the SET USER system privilege is granted to the SYS_AUTH_SSO_ROLE compatibility role with the
WITH NO ADMIN OPTION clause, if it exists.
Granting the SET USER system privilege to a user only grants the potential to impersonate another user.
Validation of the at-least criteria required to successfully impersonate another user does not occur until the
SETUSER statement is issued.
Each target user specified (target_users_list) must be an existing user or user-extended role with a login
password. Each target role specified (target_roles_list) must be an existing user-extended or user-defined role.
Privileges
Requires the SET USER system privilege granted with administrative rights. See GRANT System
Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example grants Sally and Laurel the ability to impersonate Bob, Sam, and Peter:
GRANT SET USER (Bob, Sam, Peter) TO Sally, Laurel
● The following example grants Mary the right to grant the SET USER system privilege to any user in the
database. However, since the system privilege is granted with the WITH ADMIN ONLY OPTION clause,
Mary cannot impersonate any other user.
GRANT SET USER (ANY) TO Mary WITH ADMIN ONLY OPTION
● The following example grants Steve and Joe the ability to impersonate any member of Role1 or Role2:
GRANT SET USER (ANY WITH ROLES Role1, Role2) TO Steve, Joe
Grants specific system privileges to users or roles, with or without administrative rights.
Syntax
GRANT <system_privilege_name> [, …]
TO <user_id> [, …]
[ { WITH NO ADMIN
| WITH ADMIN [ ONLY ] } OPTION ]
Parameters
system_privilege_name
Must be the name of an existing system privilege.
user_id
Must be the name of an existing user or immutable role. The list must consist of existing users with login
passwords. Separate multiple user_ids with commas.
WITH NO ADMIN OPTION
Each <user_id> is granted the system privilege, but cannot grant the system privilege to another user.
WITH ADMIN ONLY OPTION
If the WITH ADMIN ONLY OPTION clause is used, each <user_id> is granted administrative privileges
over each <system_privilege>, but not the <system_privilege> itself.
WITH ADMIN OPTION
Each <user_id> is granted administrative privileges over each <system_privilege> in addition to all
underlying system privileges of <system_privilege>.
Remarks
By default, if no administrative clause is specified in the grant statement, the WITH NO ADMIN OPTION clause
is used.
You must have been granted the specific system privilege with administrative privilege.
Standards
Examples
● The following example grants the DROP CONNECTION system privilege to Joe with administrative
privileges:
GRANT DROP CONNECTION TO Joe WITH ADMIN OPTION
● This example grants the CHECKPOINT system privilege to Sally with no administrative privileges:
GRANT CHECKPOINT TO Sally WITH NO ADMIN OPTION
● This example grants the MONITOR system privilege to Jane with administrative privileges only:
GRANT MONITOR TO Jane WITH ADMIN ONLY OPTION
In this section:
Related Information
System privileges control the rights of users to perform authorized database tasks.
ACCESS SERVER LS Allows logical server connection using the SERVER logical server Multiplex
context.
ACCESS USER PASSWORD Allows a user to access views that contain password hashes, and User and Login Man
perform operations that involve accessing passwords, such as un agement
loading, extracting, or comparing database
ALTER ANY INDEX Allows a user to alter and comment on indexes and text indexes Indexes
on tables and views owned by any user.
ALTER ANY MATERIALIZED Allows a user to alter and comment on materialized views owned Materialized Views
VIEW by any user.
ALTER ANY OBJECT Allows a user to alter and comment on the following types of ob Objects
jects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
ALTER ANY OBJECT OWNER Allows a user to alter the owner of any type of table object. This Objects
privilege does not allow changing of the owner of other objects,
such as procedures, materialized views, and so on.
ALTER ANY PROCEDURE Allows a user to alter and comment on procedures and functions Procedures
owned by any user.
ALTER ANY SEQUENCE Allows a user to alter sequence generators owned by any user. Sequence
ALTER ANY TEXT CONFIGURA Allows a user to alter and comment on text configuration objects Text Configuration
TION owned by any user.
ALTER ANY VIEW Allows a user to alter and comment on views owned by any user. Views
● Upgrade a database.
● Perform cost model calibration.
● Load database statistics.
● Alter transaction logs (also requires the SERVER OPERATOR
system privilege).
● Change ownership of the database (also requires the MAN
AGE ANY MIRROR SERVER system privilege).
CHANGE PASSWORD Allows a user to manage user passwords for any user. User and Login Man
agement
This system privilege can apply to all users, or it can be limited to
a set of specified users, or users who are granted one or more
specified roles.
CHECKPOINT Allows a user to force the database server to execute a check Database
point.
COMMENT ANY OBJECT Allows a user to comment on any type of object owned by any Objects
user that can be created using the CREATE ANY OBJECT system
privilege.
CREATE ANY INDEX Allows a user to create and comment on indexes and text indexes Indexes
on tables and views owned by any user.
CREATE ANY MATERIALIZED Allows a user to create and comment on materialized views Materialized Views
VIEW owned by any user.
CREATE ANY MUTEX SEMA Allows a user to create a mutex or semaphore owned by any user. Mutex and Sema
PHORE phores
CREATE ANY OBJECT Allows a user to create and comment on the following types of ob Objects
jects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
CREATE ANY PROCEDURE Allows a user to create and comment on procedures and func Procedure
tions owned by any user.
CREATE ANY SEQUENCE Allows a user to create sequence generators, regardless of owner. Sequence
CREATE ANY TEXT CONFIGU Allows a user to alter and comment on text configuration objects Text Configuration
RATION owned by any user.
CREATE ANY TRIGGER Allows a user to create and comment (also requires the ALTER Triggers
object level privilege on the table) on tables and views.
CREATE ANY VIEW Allows a user to create and comment on views owned by any user. Views
CREATE DATABASE VARIABLE Allows a user to create, select from, update, and drop self-owned Database Variables
database-scope variables.
CREATE EXTERNAL REFER Allows a user to create external references in the database. External Environment
ENCE
You must have the system privileges required to create specific
database objects before you can create external references.
CREATE MATERIALIZED VIEW Allows a user to create and comment on self-owned materialized Materialized Views
views.
CREATE PROCEDURE Allows a user to create and comment on self-owned procedures Procedure
and functions. create a self-owned stored procedure or function.
CREATE PROXY TABLE Allows a user to create self-owned proxy tables. Table
CREATE TEXT CONFIGURA Allows a user to create and comment on self-owned text configu- Text Configuration
TION ration objects.
CREATE VIEW Allows a user to create and comment on self-owned views. Re Views
quired to create self-owned views.
DEBUG ANY PROCEDURE Allows a user to debug any database object. Miscellaneous
DELETE ANY TABLE Allows a user to delete rows in tables and views owned by any Table
user.
DROP ANY INDEX Allows a user to drop indexes and text indexes on tables and views Indexes
owned by any user.
DROP ANY MATERIALIZED Allows a user to drop materialized views owned by any user. Materialized View
VIEW
DROP ANY MUTEX SEMA Allows a user to drop a mutex or semaphore owned by any user. Mutex and Sema
PHORE phores
DROP ANY OBJECT Allows a user to drop the following types of objects owned by any Objects
user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
DROP ANY PROCEDURE Allows a user to drop procedures and functions owned by any user. Procedure
DROP ANY SEQUENCE Allows a user to drop sequence generators owned by any user. Sequence
DROP ANY TABLE Allows a user to drop tables (including proxy tables) owned by any user. Table
DROP ANY TEXT CONFIGURATION Allows a user to drop text configuration objects owned by any user. Text Configuration
DROP ANY VIEW Allows a user to drop views owned by any user. Views
DROP CONNECTION Allows a user to drop any connections to the database. Database
EXECUTE ANY PROCEDURE Allows a user to execute procedures and functions owned by any user. Procedure
INSERT ANY TABLE Allows a user to insert rows into tables and views owned by any user. Table
LOAD ANY TABLE Allows a user to load data into tables owned by any user. Table
MANAGE ANY DATABASE VARIABLE Allows a user to create and drop database-scope variables owned by self or by PUBLIC. Database Variables
MANAGE ANY EVENT Allows a user to create, alter, drop, trigger, and comment on events. Miscellaneous
MANAGE ANY EXTERNAL ENVIRONMENT Allows a user to alter, comment on, start, and stop external environments. External Environment
MANAGE ANY EXTERNAL OBJECT Allows a user to install, comment on, and remove external environment objects. External Environment
MANAGE ANY LDAP SERVER Allows a user to create, alter, drop, validate, and comment on LDAP servers. Miscellaneous
MANAGE ANY LOGIN POLICY Allows a user to create, alter, drop, and comment on login policies. User and Login Management
MANAGE ANY PROPERTY HISTORY Allows a user to turn on and configure the tracking of database server property values. Server Operator
MANAGE ANY SPATIAL OBJECT Allows a user to create, alter, drop, and comment on spatial reference systems and spatial units of measure. Miscellaneous
MANAGE ANY STATISTICS Allows a user to create, alter, drop, and update database statistics for any table. Miscellaneous
MANAGE ANY USER Allows a user to: User and Login Management
● Create, alter, drop, and comment on database users (including assigning an initial password).
● Force a password change on next login for any user.
● Assign and reset the login policy for any user.
● Create, drop, and comment on integrated logins and Kerberos logins.
● Create and drop external logins.
MANAGE ANY WEB SERVICE Allows a user to create, alter, drop, and comment on web services. Miscellaneous
MANAGE AUDITING Allows a user to run the sa_audit_string stored procedure. Procedure
MANAGE LISTENERS Allows a user to start and stop network listeners. Server Operator
MANAGE PROFILING Allows a user to manage database server tracing. The DIAGNOSTICS system role is also required to fully utilize diagnostics functionality for user information. Database
MANAGE ROLES Allows a user to create new roles and act as a global administrator for new and existing roles. By default, MANAGE ROLES is granted administrative rights on each newly created role. A user requires administrative rights on the role to delete it. Roles
READ CLIENT FILE Allows a user to read files on the client computer. Files
READ FILE Allows a user to read files on the database server computer. Files
REORGANIZE ANY OBJECT Allows a user to reorganize tables and materialized views owned by any user. Objects
SELECT ANY TABLE Allows a user to query tables and views owned by any user. Table
SELECT PUBLIC DATABASE VARIABLE Allows a user to select the value of a database-scope variable owned by PUBLIC. Database Variables
SET ANY PUBLIC OPTION Allows a user to set PUBLIC database options that do not require the SET ANY SECURITY OPTION or the SET ANY SYSTEM OPTION system privileges. Database Options
SET ANY SECURITY OPTION Allows a user to set any PUBLIC security database options. Database Options
SET ANY SYSTEM OPTION Allows a user to set PUBLIC system database options. Database Options
SET ANY USER DEFINED OPTION Allows a user to set user-defined database options. Database Options
SET USER (granted with administrative rights only) Allows a user to temporarily assume the roles and privileges of another user. User and Login Management
TRUNCATE ANY TABLE Allows a user to truncate data for tables and materialized views owned by any user. Table
UPDATE ANY MUTEX SEMAPHORE Allows a user to update a mutex or semaphore owned by any user. Mutex and Semaphores
UPDATE ANY TABLE Allows a user to update rows in tables and views owned by any user. Table
UPDATE PUBLIC DATABASE VARIABLE Allows a user to update database-scope variables owned by PUBLIC. Database Variables
UPGRADE ROLE Allows a user to be a default administrator of any system privilege that is introduced when upgrading an SAP IQ database from version 16.0. By default, the UPGRADE ROLE system privilege is granted to the SYS_AUTH_SA_ROLE role, if it exists. Roles
USE ANY SEQUENCE Allows a user to use sequence generators owned by any user. Sequence
VALIDATE ANY OBJECT Allows a user to validate tables, materialized views, indexes, and text indexes owned by any user. Objects
WRITE CLIENT FILE Allows a user to write files to the client computer. Files
WRITE FILE Allows a user to write files on the database server computer. Files
Database Options SET ANY PUBLIC OPTION Allows a user to set PUBLIC database options that do not require the SET ANY SECURITY OPTION or the SET ANY SYSTEM OPTION system privileges.
SET ANY SECURITY OPTION Allows a user to set any PUBLIC security database options.
SET ANY SYSTEM OPTION Allows a user to set PUBLIC system database options.
SET ANY USER DEFINED OPTION Allows a user to set user-defined database options.
Database Variables CREATE DATABASE VARIABLE Allows a user to create, select from, update, and drop self-owned database-scope variables.
MANAGE ANY DATABASE VARIABLE Allows a user to create and drop database-scope variables owned by self or by PUBLIC.
SELECT PUBLIC DATABASE VARIABLE Allows a user to select the value of a database-scope variable owned by PUBLIC.
UPDATE PUBLIC DATABASE VARIABLE Allows a user to update database-scope variables owned by PUBLIC.
Database CHECKPOINT Allows a user to force the database server to execute a checkpoint.
MANAGE PROFILING Allows a user to manage database server tracing. The DIAGNOSTICS system role is also required to fully utilize diagnostics functionality for user information.
ALTER DATABASE Allows a user to:
● Upgrade a database.
● Perform cost model calibration.
● Load database statistics.
● Alter transaction logs (also requires the SERVER OPERATOR system privilege).
● Change ownership of the database (also requires the MANAGE ANY MIRROR SERVER system privilege).
External Environment CREATE EXTERNAL REFERENCE Allows a user to create external references in the database.
You must have the system privileges required to create specific database objects before you can create external references.
MANAGE ANY EXTERNAL ENVIRONMENT Allows a user to alter, comment on, start, and stop external environments.
MANAGE ANY EXTERNAL OBJECT Allows a user to install, comment on, and remove external environment objects.
Files READ CLIENT FILE Allows a user to read files on the client computer.
READ FILE Allows a user to read files on the database server computer.
WRITE CLIENT FILE Allows a user to write files to the client computer.
WRITE FILE Allows a user to write files on the database server computer.
Indexes ALTER ANY INDEX Allows a user to alter and comment on indexes and text indexes on tables and views owned by any user.
CREATE ANY INDEX Allows a user to create and comment on indexes and text indexes on tables and views owned by any user.
DROP ANY INDEX Allows a user to drop indexes and text indexes on tables and views owned by any user.
Materialized View DROP ANY MATERIALIZED VIEW Allows a user to drop materialized views owned by any user.
ALTER ANY MATERIALIZED VIEW Allows a user to alter and comment on materialized views owned by any user.
CREATE ANY MATERIALIZED VIEW Allows a user to create and comment on materialized views owned by any user.
CREATE MATERIALIZED VIEW Allows a user to create and comment on self-owned materialized views.
Miscellaneous MANAGE ANY EVENT Allows a user to create, alter, drop, trigger, and comment on events.
MANAGE ANY LDAP SERVER Allows a user to create, alter, drop, validate, and comment on LDAP servers.
MANAGE ANY SPATIAL OBJECT Allows a user to create, alter, drop, and comment on spatial reference systems and spatial units of measure.
MANAGE ANY STATISTICS Allows a user to create, alter, drop, and update database statistics for any table.
MANAGE ANY WEB SERVICE Allows a user to create, alter, drop, and comment on web services.
Multiplex ACCESS SERVER LS Allows logical server connection using the SERVER logical server context.
Mutex and Semaphores CREATE ANY MUTEX SEMAPHORE Allows a user to create a mutex or semaphore owned by any user.
DROP ANY MUTEX SEMAPHORE Allows a user to drop a mutex or semaphore owned by any user.
UPDATE ANY MUTEX SEMAPHORE Allows a user to update a mutex or semaphore owned by any user.
Objects ALTER ANY OBJECT OWNER Allows a user to alter the owner of any type of table object. This privilege does not allow changing the owner of other objects, such as procedures, materialized views, and so on.
ALTER ANY OBJECT Allows a user to alter and comment on the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
COMMENT ANY OBJECT Allows a user to comment on any type of object owned by any user that can be created using the CREATE ANY OBJECT system privilege.
CREATE ANY OBJECT Allows a user to create and comment on the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
DROP ANY OBJECT Allows a user to drop the following types of objects owned by any
user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
REORGANIZE ANY OBJECT Allows a user to reorganize tables and materialized views owned
by any user.
VALIDATE ANY OBJECT Allows a user to validate tables, materialized views, indexes, and
text indexes owned by any user.
Procedures ALTER ANY PROCEDURE Allows a user to alter and comment on procedures and functions owned by any user.
CREATE ANY PROCEDURE Allows a user to create and comment on procedures and functions owned by any user.
DROP ANY PROCEDURE Allows a user to drop procedures and functions owned by any user.
EXECUTE ANY PROCEDURE Allows a user to execute procedures and functions owned by any user.
Roles MANAGE ROLES Allows a user to create new roles and act as a global administrator
for new and existing roles. By default, MANAGE ROLES is granted
administrative rights on each newly created role. A user requires
administrative rights on the role to delete it.
Sequence ALTER ANY SEQUENCE Allows a user to alter sequence generators owned by any user.
CREATE ANY SEQUENCE Allows a user to create sequence generators, regardless of owner.
DROP ANY SEQUENCE Allows a user to drop sequence generators owned by any user.
USE ANY SEQUENCE Allows a user to use sequence generators owned by any user.
Server Operator MANAGE ANY PROPERTY HISTORY Allows a user to turn on and configure the tracking of database server property values.
Table CREATE PROXY TABLE Allows a user to create self-owned proxy tables.
DELETE ANY TABLE Allows a user to delete rows in tables and views owned by any
user.
DROP ANY TABLE Allows a user to drop tables (including proxy tables) owned by any
user.
INSERT ANY TABLE Allows a user to insert rows into tables and views owned by any
user.
LOAD ANY TABLE Allows a user to load data into tables owned by any user.
SELECT ANY TABLE Allows a user to query tables and views owned by any user.
TRUNCATE ANY TABLE Allows a user to truncate data for tables and materialized views
owned by any user.
UPDATE ANY TABLE Allows a user to update rows in tables and views owned by any
user.
Text Configuration ALTER ANY TEXT CONFIGURATION Allows a user to alter and comment on text configuration objects owned by any user.
CREATE TEXT CONFIGURATION Allows a user to create and comment on self-owned text configuration objects.
DROP ANY TEXT CONFIGURATION Allows a user to drop text configuration objects owned by any user.
CREATE ANY TEXT CONFIGURATION Allows a user to create and comment on text configuration objects owned by any user.
Triggers CREATE ANY TRIGGER Allows a user to create and comment on triggers on tables and views (also requires the ALTER object-level privilege on the table).
User and Login Management ACCESS USER PASSWORD Allows a user to access views that contain password hashes, and perform operations that involve accessing passwords, such as unloading, extracting, or comparing databases.
CHANGE PASSWORD Allows a user to manage user passwords for any user.
MANAGE ANY LOGIN POLICY Allows a user to create, alter, drop, and comment on login policies.
SET USER (granted with administrative rights only) Allows a user to temporarily assume the roles and privileges of another user.
Views ALTER ANY VIEW Allows a user to alter and comment on views owned by any user.
CREATE ANY VIEW Allows a user to create and comment on views owned by any user.
CREATE VIEW Allows a user to create and comment on self-owned views.
DROP ANY VIEW Allows a user to drop views owned by any user.
Syntax
Parameters
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Related Information
9.4.127 IF Statement
Lets you conditionally execute the first list of SQL statements whose <search-condition> evaluates to
TRUE.
Syntax
Remarks
If no <search-condition> evaluates to TRUE, and an ELSE clause exists, the <statement-list> in the
ELSE clause is executed. If no <search-condition> evaluates to TRUE, and there is no ELSE clause, the
expression returns a NULL value.
When comparing variables to the single value returned by a SELECT statement inside an IF statement, you
must first assign the result of the SELECT to another variable.
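A minimal sketch of that pattern (the Employees table comes from the documentation's sample schema; the threshold and message are assumptions):

```sql
BEGIN
  DECLARE cnt INT;
  -- Assign the single-row SELECT result to a variable first...
  SELECT COUNT(*) INTO cnt FROM Employees;
  -- ...then compare the variable in the IF condition
  IF cnt > 100 THEN
    MESSAGE 'large table' TO CLIENT;
  END IF;
END
```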
Note
Do not confuse the syntax of the IF statement with that of the IF expression. You cannot nest the IF
statement.
Privileges
None
Standards
Examples
BEGIN
DECLARE X INT;
SET X = 1;
IF X = 1 THEN
PRINT '1';
ELSEIF X = 2 THEN
PRINT '2';
ELSE
PRINT 'something else';
END IF;
END
Related Information
IF Statement [T-SQL]
Syntax
IF <expression>
... <statement>
... [ ELSE [ IF <expression> ] <statement> ]...
Remarks
The Transact-SQL IF conditional and the ELSE conditional each control the performance of only a single SQL
statement or compound statement (between the keywords BEGIN and END).
In contrast to the SAP IQ IF statement, the Transact-SQL IF statement has no THEN. The Transact-SQL
version also has no ELSEIF or END IF keywords.
When comparing variables to the single value returned by a SELECT statement inside an IF statement, you
must first assign the result of the SELECT to another variable.
Note
Privileges
None
Examples
BEGIN
DECLARE @X INT
SET @X = 1
IF @X = 1
PRINT '1'
ELSE IF @X = 2
PRINT '2'
ELSE
PRINT 'something else'
END
Related Information
INCLUDE Statement [ESQL]
Includes a file into a source program to be scanned by the SQL source language preprocessor.
Syntax
INCLUDE <filename>
filename
Identifier
Remarks
The INCLUDE statement is very much like the C preprocessor #include directive.
However, the SQL preprocessor reads the given file, inserting its contents into the output C file. Thus, if an
include file contains information that the SQL preprocessor requires, it should be included with the Embedded
SQL INCLUDE statement.
Two file names are specially recognized: SQLCA and SQLDA. Any C program using Embedded SQL must contain this statement before any Embedded SQL statements:
EXEC SQL INCLUDE SQLCA;
This statement must appear at a position in the C program where static variable declarations are allowed.
Many Embedded SQL statements require variables (invisible to the programmer) which are declared by the
SQL preprocessor at the position of the SQLCA include statement. The SQLDA file must be included if any
SQLDAs are used.
Privileges
None
Standards
Related Information
INSERT Statement
Inserts a single row or a selection of rows, from elsewhere in the current database, into the table. This
command can also insert a selection of rows from another database into the table.
Syntax
Syntax 1
Syntax 2
Syntax 3
<insert-load-options> ::=
[ LIMIT <number-of-rows> ]
[ NOTIFY <number-of-rows> ]
[ SKIP <number-of-rows> ]
<insert-select-load-options> ::=
[ WORD SKIP <number> ]
[ IGNORE CONSTRAINT <constraint-type> [, …] ]
[ MESSAGE LOG '<string>' ROW LOG '<string>' [ ONLY LOG <logwhat> [, …] ] ]
[ LOG DELIMITED BY '<string>' ]
<constraint-type> ::=
{ CHECK <integer>
| UNIQUE <integer>
| NULL <integer>
| FOREIGN KEY <integer>
| DATA VALUE <integer>
| ALL <integer> }
<logwhat> ::=
{ CHECK
| ALL
| NULL
| UNIQUE
| DATA VALUE
| FOREIGN KEY
| WORD }
Go to:
● Remarks
Parameters
(back to top)
insert-load-options
● LIMIT – specifies the maximum number of rows to insert into the table from a query. The default is 0
for no limit. The maximum is 2 GB -1.
● NOTIFY – specifies that you be notified with a message each time the number of rows are successfully
inserted into the table. The default is every 100,000 rows.
● SKIP – defines the number of rows to skip at the beginning of the input tables for this insert. The
default is 0.
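As a sketch of these options in a Syntax 3 statement (the server, database, and table names are hypothetical):

```sql
-- Skip the first 10 input rows, insert at most 1000, and report every 500 rows
INSERT INTO mytab
LOCATION 'remsrv.remdb'
SKIP 10 LIMIT 1000 NOTIFY 500
{ SELECT * FROM remote_tab };
```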
WORD SKIP
Allows the load to continue when it encounters data longer than the limit specified when the word index
was created. The <number> parameter specifies the number of times to ignore the error. Setting this
option to 0 means there is no limit.
If a row is not loaded because a word exceeds the maximum permitted size, a warning is written to
the .iqmsg file. WORD size violations can be optionally logged to the MESSAGE LOG file. If the option is
not specified, the operation rolls back on the first occurrence of a word that is longer than the specified
limit.
IGNORE CONSTRAINT
Determines whether the load engine ignores CHECK, UNIQUE, NULL, DATA VALUE, and FOREIGN KEY
integrity constraint violations that occur during a load and the maximum number of violations to ignore
before initiating a rollback.
If <limit> is zero, the number of CHECK constraint violations to ignore is infinite. If CHECK is not specified, the first occurrence of any CHECK constraint violation causes the load to roll back. If <limit> is nonzero, then the <limit> + 1 occurrence of a CHECK constraint violation causes the load to roll back.
MESSAGE LOG
Specifies the file names where the load engine logs integrity constraint violations. Timestamps indicating
the start and completion of the load are logged in both the MESSAGE LOG and the ROW LOG files. Both
MESSAGE LOG and ROW LOG must be specified, or no information about integrity violations is logged.
Information is logged on all integrity constraint-type violations specified in the ONLY LOG clause or for all
word index-length violations if the keyword WORD is specified. If the ONLY LOG clause is not specified, no
information on integrity constraint violations is logged. Only the timestamps indicating the start and
completion of the load are logged.
LOG DELIMITED BY
Specifies the separator between data values in the ROW LOG file. The default separator is a comma.
ENCRYPTED PASSWORD
To enable the SAP IQ server to accept a jConnect connection with an encrypted password, set the jConnect
ENCRYPT_PASSWORD connection property to true.
PACKETSIZE
Specifies the TDS packet-size in bytes. The default TDS packet-size on most platforms is 512 bytes. If the
packet size is not specified or is specified as zero, then the default packet size value for the platform is
used.
The packet-size value must be a multiple of 512, either equal to the default network packet size or between
the default network packet size and the maximum network packet size. The maximum network packet size
and the default network packet size are multiples of 512 in the range 512 – 524288 bytes. The maximum
network packet size is always greater than or equal to the default network packet size.
QUOTED_IDENTIFIER
Sets the QUOTED_IDENTIFIER option on the remote server. The default setting is OFF. You set
QUOTED_IDENTIFIER to ON only if any of the identifiers in the SELECT statement are enclosed in double
quotes, as in this example using "c1":
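The example the text refers to might look like the following sketch (the server, database, and table names are assumptions):

```sql
-- "c1" is a quoted identifier, so QUOTED_IDENTIFIER must be ON at the remote server
INSERT INTO localtab
LOCATION 'remsrv.remdb' QUOTED_IDENTIFIER ON
{ SELECT "c1" FROM remote_tab };
```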
ISOLATION LEVEL
Specifies an isolation level for the connection to a remote server. The levels and their characteristics are:
● READ UNCOMMITTED
○ Isolation level 0
○ Read permitted on row with or without write lock
○ No read locks are applied
○ No guarantee that concurrent transaction will not modify row or roll back changes to row
● READ COMMITTED
○ Isolation level 1
○ Read only permitted on row with no write lock
○ Read lock acquired and held for read on current row only, but released when cursor moves off the
row
○ No guarantee that data will not change during transaction
● SERIALIZABLE
○ Isolation level 3
○ Read only permitted on rows in result without write lock
○ Read locks acquired when cursor is opened and held until transaction ends
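A hedged sketch of requesting one of these levels on the remote connection (the object names are hypothetical):

```sql
-- Read remote rows at isolation level 0 (READ UNCOMMITTED)
INSERT INTO lineitem_copy
LOCATION 'remsrv.remdb' ISOLATION LEVEL READ UNCOMMITTED
{ SELECT * FROM lineitem };
```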
Note
For additional information on the insert-select-load-options and location-options, as well as the constraint-type and logwhat parameters, see LOAD TABLE Statement.
(back to top)
Syntax 1 allows the insertion of a single row with the specified expression values. If the list of column names is
not specified, the values are inserted into the table columns in the order they were created (the same order as
retrieved with SELECT *). The row is inserted into the table at an arbitrary position. (In relational databases,
tables are not ordered.)
Syntax 2 allows the user to perform a mass insertion into a table using the results of a fully general SELECT
statement. Insertions are done in an arbitrary order unless the SELECT statement contains an ORDER BY
clause. The columns from the select list are matched ordinally with the columns specified in the column list, or
sequentially in the order in which the columns were created.
Note
The NUMBER(*) function is useful for generating primary keys with Syntax 2 of the INSERT statement.
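A sketch of Syntax 2 using NUMBER(*) to generate key values (the table and column names are assumptions):

```sql
INSERT INTO archive_orders ( order_id, customer, amount )
SELECT NUMBER(*), customer, amount
FROM orders
ORDER BY order_date;  -- ORDER BY makes the generated numbering deterministic
```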
Syntax 3 INSERT...LOCATION is a variation of Syntax 2 that allows you to insert data from an SAP Adaptive
Server Enterprise or SAP IQ database. The <servername.dbname> specified in the LOCATION clause
identifies the remote server and database for the table in the FROM clause. To use Syntax 3, the SAP ASE or
SAP IQ remote server to which you are connecting must exist in the SAP Open Client interfaces or sql.ini
file on the local machine.
The SELECT statement can be delimited by either curly braces or straight single quotation marks.
Note
Curly braces represent the start and end of an escape sequence in the ODBC standard, and might generate
errors in the context of ODBC or SAP IQ Cockpit. The workaround is to use single quotes to escape the
SELECT statement.
The local SAP IQ server connects to the server and database you specify in the LOCATION clause. The results
from the queries on the remote tables are returned and the local server inserts the results in the current
database. If you do not specify a server name in the LOCATION clause, SAP IQ ignores any database name you
specify, since the only choice is the current database on the local server.
When SAP IQ connects to the remote server, INSERT...LOCATION uses the remote login for the user ID of the
current connection, if a remote login has been created with CREATE EXTERNLOGIN and the remote server has
been defined with a CREATE SERVER statement. If the remote server is not defined, or if a remote login has not
been created for the user ID of the current connection, SAP IQ connects using the user ID and password of the
current connection.
Note
If you rely on the user ID and password of the current connection, and a user changes the password, you
must stop and restart the server before the new password takes effect on the remote server. Remote logins
created with CREATE EXTERNLOGIN are unaffected by changes to the password for the default user ID.
Creating a remote login with the CREATE EXTERNLOGIN statement and defining a remote server with a
CREATE SERVER statement sets up an external login and password for INSERT...LOCATION such that any
For example, user russid connects to the SAP IQ database and executes this statement:
On server ase1, there exists user ID ase1user with password mydatabase. The owner of the table
SQL_Types is ase1user. The remote server is defined on the IQ server as:
INSERT...LOCATION connects to the remote server ase1 using the user ID ase1user and the password
mydatabase for user russid.
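The statements this example refers to are not reproduced above; they might look like the following sketch (the server class, connection string, and remote database name are assumptions):

```sql
-- Define the remote ASE server on the IQ side (class and connection info assumed)
CREATE SERVER ase1 CLASS 'aseodbc' USING 'ase1_dsn';

-- Map IQ user russid to remote user ase1user with password mydatabase
CREATE EXTERNLOGIN russid TO ase1 REMOTE LOGIN ase1user IDENTIFIED BY mydatabase;

-- russid's INSERT...LOCATION now connects to ase1 as ase1user
INSERT INTO SQL_Types
LOCATION 'ase1.remdb' { SELECT * FROM ase1user.SQL_Types };
```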
Use the ENCRYPTED PASSWORD parameter to specify the use of Open Client Library default password
encryption when connecting to a remote server. If ENCRYPTED PASSWORD is specified and the remote server
does not support Open Client Library default password encryption, an error is reported indicating that an
invalid user ID or password was used.
When used as a remote server, SAP IQ supports TDS password encryption. The SAP IQ server accepts a
connection with an encrypted password sent by the client. For information on connection properties to set for
password encryption, see Security Handshaking: Encrypted Password in the Client-Library/C Reference
Manual.
Note
Password encryption requires Open Client 15.0. TDS password encryption requires Open Client 15.0 ESD
#7 or later.
When INSERT...LOCATION is transferring data between an SAP IQ server and a remote SAP IQ or SAP ASE
server, the value of the INSERT...LOCATION TDS PACKETSIZE parameter is always 512 bytes, even if you
specify a different value for PACKETSIZE.
Note
If you specify an incorrect packet size (for example 933, which is not a multiple of 512), the connection
attempt fails with an Open Client ct_connect “Connection failed” error. Any unsuccessful connection
attempt returns a generic “Connection failed” message. The SAP ASE error log might contain more specific
information about the cause of the connection failure.
SAP IQ does not support the SAP ASE data type TEXT, but you can execute INSERT...LOCATION (Syntax 3)
from both an IQ CHAR or VARCHAR column whose length is greater than 255 bytes, and from an ASE database
column of data type TEXT. ASE TEXT and IMAGE columns can be inserted into columns of other SAP IQ data
types, if SAP IQ supports the internal conversion. By default, if a remote data column contains over 2 GB, SAP
IQ silently truncates the column value to 2 GB.
SAP IQ does not support the SAP ASE data types UNICHAR, UNIVARCHAR, or UNITEXT. An
INSERT...LOCATION command from UNICHAR or UNITEXT to CHAR or CLOB columns in the ISO_BINENG
collation may execute without error; if this happens, the data in the columns may be inconsistent. An error
is reported in this situation only if the conversion fails.
Users must be specifically licensed to use the large object functionality of the Unstructured Data Analytics
Option.
Note
If you use INSERT...LOCATION to insert data selected from a VARBINARY column, set
ASE_BINARY_DISPLAY to OFF on the remote database.
INSERT...LOCATION (Syntax 3) does not support the use of variables in the SELECT statement.
Inserts can be done into views, provided the SELECT statement defining the view has only one table in the
FROM clause and does not contain a GROUP BY clause, an aggregate function, or involve a UNION operation.
Character strings inserted into tables are always stored in the case they are entered, regardless of whether the database is case-sensitive or not. Thus, a string 'Value' inserted into a table is always held in the database with an uppercase V and the remainder of the letters lowercase. SELECT statements return the string as 'Value'. If the database is not case-sensitive, however, all comparisons treat 'Value' the same as 'value', 'VALUE', and so on. Further, if a single-column primary key already contains an entry 'Value', an INSERT of 'value' is rejected, as it would make the primary key not unique.
Whenever you execute an INSERT...LOCATION statement, SAP IQ loads the localization information needed
to determine language, collation sequence, character set, and date/time format. If your database uses a
nondefault locale for your platform, you must set an environment variable on your local client to ensure that
SAP IQ loads the correct information.
If you set the LC_ALL environment variable, SAP IQ uses its value as the locale name. If LC_ALL is not set, SAP
IQ uses the value of the LANG environment variable. If neither variable is set, SAP IQ uses the default entry in
the locales file.
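The precedence just described can be sketched as a small shell function (illustrative only; SAP IQ performs this lookup internally):

```shell
# Resolve the locale name the way the text describes:
# LC_ALL wins, then LANG, then the default entry in the locales file.
resolve_locale() {
  if [ -n "$LC_ALL" ]; then
    echo "$LC_ALL"
  elif [ -n "$LANG" ]; then
    echo "$LANG"
  else
    echo "default"
  fi
}
```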
Use the (DEFAULT), DEFAULT VALUES or VALUES() clauses to insert rows with all default values. Assuming
that there are 3 columns in table t2, these examples are semantically equivalent:
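The equivalent forms might look like this sketch (the exact spelling of each form follows the clause names given in the text):

```sql
-- Each statement inserts one row of all default values into the 3-column table t2
INSERT INTO t2 ( DEFAULT );
INSERT INTO t2 DEFAULT VALUES;
INSERT INTO t2 VALUES();
```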
INSERT...VALUES also supports multiple rows. The following example inserts 3 rows into table t1:
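A sketch of the 3-row insert, assuming t1 has a single integer column:

```sql
INSERT INTO t1 VALUES ( 1 ), ( 2 ), ( 3 );
```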
SAP IQ treats all loads and inserts as full-width inserts. For columns not explicitly specified in the load or insert statement, the value loaded is either the column's DEFAULT value (if one is defined) or NULL (if no DEFAULT value is defined for the column).
An INSERT from a stored procedure or function is not permitted if the procedure or function uses COMMIT, ROLLBACK, or ROLLBACK TO SAVEPOINT statements.
The result of a SELECT…FROM may be slightly different from the result of an INSERT…SELECT…FROM due to an
internal data conversion of an imprecise data type, such as DOUBLE or NUMERIC, for optimization during the
insert. If a more precise result is required, a possible workaround is to declare the column as a DOUBLE or
NUMERIC data type with a higher precision.
Privileges
(back to top)
Requires the INSERT object-level privilege on the table. See GRANT Object-Level Privilege Statement [page 1502] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● The following example fills the table dept_head with the names of department heads and their
departments:
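The statement itself is not reproduced above; it might look like this sketch, using the sample-schema names (the join condition is an assumption):

```sql
INSERT INTO dept_head ( name, dept )
SELECT Surname, DepartmentName
FROM Employees JOIN Departments
  ON Employees.EmployeeID = Departments.DepartmentHeadID;
```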
● The INSERT statement permits a list of values allowing several rows to be inserted at once:
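A sketch of the multi-row form (the column layout and values are hypothetical):

```sql
INSERT INTO dept_head ( name, dept )
VALUES ( 'Scott Evans', 'Sales' ),
       ( 'Kim Lee', 'Marketing' );
```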
Related Information
INSTALL JAVA Statement
Syntax
<source> ::=
{ FILE <file-name> | URL <url-value> }
Go to:
● Remarks
● Privileges
● Standards
● Examples
(back to top)
NEW
(Default) requires that the referenced Java classes be new classes, rather than updates of currently
installed classes. An error occurs if a class with the same name exists in the database and the NEW install
mode clause is used.
UPDATE
Specifies that the referenced Java classes may include replacements for Java classes already installed in the given database.
JAR
A character string value of up to 255 bytes that is used to identify the retained JAR in subsequent
INSTALL, UPDATE, and REMOVE statements. <jar-name> or text-pointer must designate a JAR file or a
column containing a JAR. JAR files typically have extensions of .jar or .zip.
Installed JAR and zip files can be compressed or uncompressed. However, JAR files produced by the Sun
JDK jar utility are not supported. Files produced by other zip utilities are supported.
If the JAR option is specified, then the JAR is retained as a JAR after the classes that it contains have been
installed. That JAR is the associated JAR of each of those classes. The set of JARs installed in a database
with the JAR clause are called the retained JARs of the database.
Retained JARs are referenced in INSTALL and REMOVE statements. Retained JARs have no effect on other
uses of Java-SQL classes. Retained JARs are used by the SQL system for requests by other systems for the
class associated with given data. If a requested class has an associated JAR, the SQL system can supply
that JAR, rather than the individual class.
source
Specifies the location of the Java classes to be installed and must identify either a class file or a JAR file.
The formats supported for <file-name> include fully qualified file names, such as 'c:\libs
\jarname.jar' and '/usr/u/libs/jarname.jar', and relative file names, which are relative to the
current working directory of the database server.
The class definition for each class is loaded by the VM of each connection the first time that class is used.
When you INSTALL a class, the VM on your connection is implicitly restarted. Therefore, you have
immediate access to the new class, whether the INSTALL uses an install-mode clause of NEW or UPDATE.
For other connections, the new class is loaded the next time a VM accesses the class for the first time. If
the class is already loaded by a VM, that connection does not see the new class until the VM is restarted for
that connection (for example, with a STOP JAVA and START JAVA).
Remarks
(back to top)
Only new connections established after installing the class, or that use the class for the first time after installing
the class, use the new definition. Once the Java VM loads a class definition, it stays in memory until the
connection closes.
Privileges
(back to top)
Requires the MANAGE ANY EXTERNAL OBJECT system privilege. See GRANT System Privilege Statement
[page 1511] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● The following example installs the user-created Java class named “Demo” by providing the file name and
location of the class:
After installation, the class is referenced using its name. Its original file path location is no longer used. For
example, this statement uses the class installed in the previous statement:
If the Demo class was a member of the package SAP.work, the fully qualified name of the class must be
used:
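The statements for this example are elided here. A hedged sketch, in which the class-file path is illustrative only:

```sql
-- Sketch: install the user-created class Demo (path is an assumption)
INSTALL JAVA NEW
FROM FILE 'C:\JavaClasses\Demo.class'
```

After installation, the class would be referenced simply by its name, Demo, or by the fully qualified name SAP.work.Demo if it belongs to the package SAP.work, as described in the text above.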
● The following example installs all the classes contained in a zip file and associates them within the database
with a JAR file name:
INSTALL JAVA
JAR 'Widgets'
FROM FILE 'C:\Jars\Widget.zip'
The location of the zip file is not retained and classes must be referenced using the fully qualified class
name (package name and class name).
Syntax
<monitor-options> ::=
{ -summary
| {-append | -truncate } -bufalloc
| -cache
| -cache_by_type
| -contention
| -debug
| -file_suffix <suffix>
| -io
| -interval <seconds>
| -threads }...
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
START MONITOR
Monitors all tables in the temp buffer cache of the temporary store.
dummy_table_name
Controls buffer cache monitor output. You can specify more than one option, and the options must be
enclosed in quotation marks. Valid <options> are:
-summary
Displays summary information for both the main and temp buffer caches. If you do not specify any
monitor options, you receive a summary report.
-cache
Displays main or temp buffer cache activity in detail. Critical fields are Finds, HR%, and BWaits.
-cache_by_type
Breaks -cache results down by IQ page type. (An exception is the BWaits column, which shows a total
only.) This format is most useful when you need to supply information to Technical Support.
-io
Displays main or temp (private) buffer cache I/O rates and compression ratios during the specified
interval. These counters represent all activity for the server; the information is not broken out by
device.
-bufalloc
Displays information on the main or temp buffer allocator, which reserves space in the buffer cache for
objects like sorts, hashes, and bitmaps.
-contention
Displays many key buffer cache and memory manager locks. These lock and mutex counters show the
activity within the buffer cache and heap memory and how quickly these locks were resolved. Timeout
numbers that exceed 20 percent indicate a problem.
-threads
Displays the processing thread manager counts. Values are server-wide (it does not matter whether
you select this option for main or private).
-interval
Specifies the reporting interval in seconds. The default is every 60 seconds; the minimum is every 2
seconds. You can usually get useful results by running the monitor at the default interval during a query
or time of day with performance problems. Short intervals may not give meaningful results. Intervals
should be proportional to the job time; one minute is generally more than enough.
-append | -truncate
Appends output to, or truncates, the existing output file. Truncate is the default.
STOP MONITOR
Similar to START MONITOR, except that you do not need to specify any options:
● To simplify monitor use, create a stored procedure to declare the dummy table, specify its output
location, and start the monitor.
● The interval, with two exceptions, applies to each line of output, not to each page. The exceptions are
the -cache_by_type and -debug clauses, where a new page begins for each display.
Remarks
(back to top)
Issue separate commands to monitor each buffer cache. Keep each session open while the monitor collects
results; a monitor run stops when you close its connection. A connection can run up to a maximum of two
monitor runs, one for the main and one for the temp buffer cache.
To control the directory placement of monitor output files, set the MONITOR_OUTPUT_DIRECTORY option. If
this option is not set, the monitor sends output to the same directory as the database. All monitor output files
are used for the duration of the monitor runs. They remain after a monitor run has stopped.
Either declare a temporary table for use in monitoring, or create a permanent dummy table when you create a
new database, before creating any multiplex query servers. These solutions avoid DDL changes, so that data
remains available on query servers during production runs.
On UNIX-like operating systems, you can watch monitor output as queries are running.
For example, starting the monitor with this command sends the output to an ASCII file with the name
dbname.conn#-[main|temp]-iqmon:
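The start command itself is elided in this extraction. A hedged sketch, assuming the IQ UTILITIES form of the statement and a dummy table named iq_dummy (both are assumptions for illustration):

```sql
-- Sketch: start the main buffer cache monitor on this connection
IQ UTILITIES MAIN INTO iq_dummy START MONITOR '-cache -interval 10'
```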
So, for the iqdemo database, the buffer monitor would send the results to iqdemo.2-main-iqmon.
The buffer cache monitor writes the results of each run to these logs:
The prefix <dbname.connection#> represents your database name and connection number. If you see more
than one connection number and are uncertain which is yours, you can run the catalog stored procedure
sa_conn_info. This procedure displays the connection number, user ID, and other information for each active
connection to the database. Use the -file_suffix clause to change the suffix iqmon to a suffix of your choice.
Use a text editor to display or print a file. Running the monitor again from the same database and connection
number overwrites the previous results. To save the results of a monitor run, copy the file to another location
or use the -append option.
Privileges
(back to top)
None
Standards
(back to top)
Examples
(back to top)
The following example starts the buffer cache monitor and records activity for the IQ temp buffer cache:
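The statement for this example is elided here. A hedged sketch, again assuming the IQ UTILITIES form and an illustrative dummy table name:

```sql
-- Sketch: monitor the temp (private) buffer cache
IQ UTILITIES PRIVATE INTO iq_dummy START MONITOR '-cache -interval 5'
```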
Related Information
Syntax
LEAVE <statement-label>
Remarks
LEAVE is a control statement that lets you leave a labeled compound statement or a labeled loop. Execution
resumes at the first statement after the compound statement or loop.
Privileges
None
Standards
Examples
● The following example shows how to use the LEAVE statement to leave a loop:
SET i = 1;
lbl:
LOOP
    INSERT INTO Counters ( number ) VALUES ( i );
    IF i >= 10 THEN
        LEAVE lbl;
    END IF;
    SET i = i + 1;
END LOOP lbl
● The following example uses LEAVE to exit an outer loop from within a nested loop:
outer_loop:
LOOP
    SET i = 1;
    inner_loop:
    LOOP
        ...
        SET i = i + 1;
        IF i >= 10 THEN
            LEAVE outer_loop
        END IF
    END LOOP inner_loop
END LOOP outer_loop
Related Information
Syntax
<load-specification> ::=
{ <column-name> [ <filler-type> <column-spec> ]
| FILLER ( <filler-type> ) }
<column-spec> ::=
{ ASCII ( <input-width> )
| PREFIX { 1 | 2 | 4 }
| BINARY [ WITH NULL BYTE ]
| PREFIX { 1 | 2 | 4 } BINARY [ WITH NULL BYTE ] [ VARYING ]
| '<delimiter-string>'
| DATE ( <input-date-format> )
| DATETIME ( <input-datetime-format> ) }
<filler-type> ::=
{ <input-width>
| PREFIX { 1 | 2 | 4 }
| '<delimiter-string>' }
<log-what> ::=
{ CHECK
| ALL
| NULL
| UNIQUE
| DATA VALUE
| FOREIGN KEY
| WORD }
Parameters
FROM
Identifies one or more files from which to load data. To specify more than one file, use a comma to separate
each filename-string. The <filename-string> is passed to the server as a string. The string is
therefore subject to the same formatting requirements as other SQL strings.
To indicate directory paths on Windows, represent the backslash character (\) with two backslashes.
Therefore, the statement to load data from the file c:\temp\input.dat into the Employees table is:
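The statement itself is elided in this extraction. A minimal sketch, using the Employees table named in the text; the trailing clauses are illustrative assumptions. Note the doubled backslashes in the path, per the rule above:

```sql
LOAD TABLE Employees
FROM 'c:\\temp\\input.dat'
QUOTES OFF
ESCAPES OFF
```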
The path name is relative to the database server, not to the client application. If you are running the
statement on a database server on some other computer, the directory names refer to directories on the
server machine, not on the client machine. When loading a multiplex database, use absolute (fully
qualified) paths in all file names. Do not use relative path names.
Because of resource constraints, SAP IQ does not guarantee that all the data can be loaded. If resource
allocation fails, the entire load transaction is rolled back. Any SKIP or LIMIT clause only applies in the
beginning of the load, not to each file. Multiple files are processed in parallel, except when using the SKIP
or LIMIT clauses. The rows being skipped are processed single threaded from the files in the order
specified in the LOAD statement. Once the SKIP completes, the rest of the files are processed in parallel if
there is no LIMIT clause. If a LIMIT clause is specified, the entire load process is single threaded, and the
number of rows are loaded from the files in the order specified in the LOAD statement.
USING FILE loads one or more files from the server. This clause is synonymous with specifying the FROM
<filename> clause.
USING CLIENT FILE bulk loads one or more files from a client. The character set of the file on the client
side must be the same as the server collation. Client-side bulk loading incurs no administrative overhead,
such as extra disk space, memory, or network-monitoring daemon requirements, but does force single-
threaded processing for each file.
When bulk loading large objects, the USING CLIENT FILE clause applies to both primary and secondary
files.
The LOAD TABLE statement can load compressed client and server files only in gzip format. Any file with
an extension ".gz" or ".gzip" is assumed to be a compressed file. Named pipes or secondary files are not
supported during a compressed file load. Compressed files and uncompressed files can be specified in the
same LOAD TABLE statement. Each compressed file in a load is processed by one thread.
During client-side loads, the IGNORE CONSTRAINT log files are created on the client host and any error
while creating the log files causes the operation to roll back.
Client-side bulk loading is supported by Interactive SQL and ODBC/JDBC clients using the Command
Sequence protocol. It is not supported by clients using the TDS protocol. For data security over a network,
use Transport Layer Security. To control who can use client-side bulk loads, use the secure feature (-sf)
server startup switch, enable the ALLOW_READ_CLIENT_FILE database option, and the READ CLIENT
FILE access control.
The FORMAT parquet clause does not support the use of client files. You may use USING FILE with
FORMAT parquet, but SAP IQ returns an error and rolls back the LOAD TABLE statement if you specify
USING CLIENT FILE with FORMAT parquet.
CHECK CONSTRAINTS { ON | OFF }
Evaluates check constraints, which you can ignore or log. CHECK CONSTRAINTS defaults to ON.
Setting CHECK CONSTRAINTS OFF causes SAP IQ to ignore all check constraint violations. This can be
useful, for example, during database rebuilding. If a table has check constraints that call user-defined
functions that are not yet created, the rebuild fails unless this option is set to OFF.
This option is mutually exclusive to the following options. If any of these options are specified in the same
load, an error results:
DEFAULTS { ON | OFF }
Uses a column's default value. This option is ON by default. If the DEFAULTS option is OFF, any column not
present in the column list is assigned NULL.
The setting for the DEFAULTS option applies to all column DEFAULT values, including AUTOINCREMENT.
QUOTES { ON | OFF }
Indicates that input strings are enclosed in quote characters. QUOTES is an optional parameter and is ON by
default. The first such character encountered in a string is treated as the quote character for the string.
String data must be terminated with a matching quote.
With QUOTES ON, column or row delimiter characters can be included in the column value. Leading and
ending quote characters are assumed not to be part of the value and are excluded from the loaded data
value.
To include a quote character in a value with QUOTES ON, use two quotes. For example, this line includes a
value in the third column that is a single quote character:
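The sample input line is elided here. A hypothetical line with three quoted columns, in which the doubled quotes in the third field load as a single quote character (the values are placeholders):

```
'Smith','John',''''
```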
With STRIP turned on (the default), trailing blanks are stripped from values before they are inserted.
Trailing blanks are stripped only for non-quoted strings. Quoted strings retain their trailing blanks. Leading
blank or TAB characters are trimmed only when the setting is ON.
The data extraction facility provides options for handling quotes (TEMP_EXTRACT_QUOTES,
TEMP_EXTRACT_QUOTES_ALL, and TEMP_EXTRACT_QUOTE). If you plan to extract data to be loaded into
an IQ main store table and the string fields contain column or row delimiter under default ASCII extraction,
use the TEMP_EXTRACT_BINARY option for the extract and the FORMAT binary and QUOTES OFF options
for LOAD TABLE.
Limits:
Exceptions:
● If LOAD TABLE encounters any nonwhite characters after the ending quote character for an enclosed
field, this error is reported and the load operation is rolled back:
QUOTE 'enclosure_character'
For TEXT data only; identifies the enclosure character to be placed around string values. If not specified,
the default QUOTE character is either a single (') or double (") quotation mark, depending on what is used in
the field. If QUOTES OFF is defined, QUOTE is ignored.
If the specified <enclosure_character> is multibyte, only the first byte is used; the remaining bytes are
ignored.
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the QUOTE
<enclosure_character> clause and issues a message with this information.
QUOTE ESCAPE 'escape_character'
Specifies the escape character used in the data. If not specified, the default QUOTE ESCAPE character is
the value of QUOTE. For example, if QUOTE is defined as percent (%), but QUOTE ESCAPE is not defined, the
default value for QUOTE ESCAPE becomes %. If neither QUOTE ESCAPE nor QUOTE are defined, QUOTE
defaults to either a single (') or double (") quotation mark, depending on what is used in the field, and
QUOTE ESCAPE defaults to match QUOTE.
If the specified ESCAPE character is multibyte, only the first byte is used; the remaining bytes are ignored.
If QUOTES ON and QUOTE ESCAPE is not defined, single quote becomes the ESCAPE character and must
be escaped by another quote.
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the QUOTE ESCAPE
clause and issues a message with this information.
ESCAPES
If you omit a <column-spec> definition for an input field and ESCAPES is ON (the default), characters
following the backslash character are recognized and interpreted as special characters by the database
server. You can include newline characters as the combination \n, and other characters as hexadecimal
ASCII codes, such as \x09 for the Tab character. A sequence of two backslash characters ( \\ ) is
interpreted as a single backslash.
Note
SAP IQ supports ASCII and binary input fields. The format is usually defined by the <column-spec>
described above. If you omit that definition for a column, by default SAP IQ uses the format defined by this
option. Input lines are assumed to have ASCII (the default) or binary fields, one row per line, with values
separated by the column delimiter character.
bcp
● The BCP data file loaded into SAP IQ tables using the LOAD TABLE FORMAT BCP statement must
be exported (BCP OUT) in cross-platform file format using the -c option.
● For FORMAT bcp, the default column delimiter for the LOAD TABLE statement is <tab> and the
default row terminator is <newline>.
● For FORMAT bcp, the last column in a row must be terminated by the row terminator, not by the
column delimiter. If the column delimiter is present before the row terminator, then the column
delimiter is treated as a part of the data.
● Data for columns that are not the last column in the load specification must be delimited by the
column delimiter only. If a row terminator is encountered before a column delimiter for a column
that is not the last column, then the row terminator is treated as a part of the column data.
● Column delimiter can be specified via the DELIMITED BY clause. For FORMAT bcp, the delimiter
must be less than or equal to 10 characters in length. An error is returned if the delimiter length is
more than 10.
● For FORMAT bcp, the load specification may contain only column names, NULL, and ENCRYPTED.
An error is returned if any other option is specified in the load specification.
For example, these LOAD TABLE load specifications are valid:
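The valid specifications themselves are elided here. A hedged sketch, with illustrative table and column names, that stays within the elements the text permits for FORMAT bcp (column names, NULL, and ENCRYPTED):

```sql
-- Sketch: a FORMAT bcp load specification (names are assumptions)
LOAD TABLE t1 ( c1, c2 NULL( BLANKS ), c3 ENCRYPTED )
FROM 'bcp_input.dat'
FORMAT bcp
```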
csv
A row in a CSV file must be terminated either by a row delimiter (default newline) or a column
delimiter (default comma) followed by a row delimiter. The maximum size of a delimiter is 4 bytes. An
error message appears if the delimiter exceeds 4 bytes.
A CSV file may contain partial rows, defined as any row with the number of fields less than the number
of columns specified (either explicitly or implicitly) in the LOAD TABLE statement. All fields missing
from a partial row are assigned a NULL value. If the column is not nullable, an error message appears,
and no data is imported.
If a table has K columns, a CSV file has M fields, and the LOAD TABLE statement indicates N columns
(either explicitly or implicitly), when N <= K and N < M, columns missing from the column list in the
load statement are assigned default values.
parquet
To load a Parquet format file into a table, use the FORMAT parquet clause and specify .parquet
or .parq as the file name extension in the LOAD TABLE statement.
Not all LOAD TABLE clauses work with FORMAT parquet; some are ignored, while others can cause
the LOAD TABLE statement to roll back. See Loading Parquet Files in SAP IQ Administration: Load
Management for details.
DELIMITED BY 'string'
To use the newline character as a delimiter, you can specify either the special combination '\n' or its ASCII
value '\x0a'. Although you can specify up to four characters in the <column-spec> <delimiter-
string>, you can specify only a single character in the DELIMITED BY clause.
When specifying the DATE column with the NULL clause, the date column is treated as a variable-width
date field if the DELIMITED BY clause is included. Otherwise, the date column is treated as a fixed-width
date field.
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the DELIMITED BY
clause and issues a message with this information.
STRIP { OFF | RTRIM }
Determines whether unquoted values should have trailing blanks stripped off before they are inserted. The
LOAD TABLE command accepts these STRIP keywords:
With STRIP turned on (the default), SAP IQ strips trailing blanks from values before inserting them. This is
effective only for VARCHAR data. STRIP OFF preserves trailing blanks.
Trailing blanks are stripped only for unquoted strings. Quoted strings retain their trailing blanks. If you do
not require blank sensitivity, you can use the FILLER option as an alternative to be more specific in the
number of bytes to strip, instead of all the trailing spaces. STRIP OFF is more efficient for SAP IQ, and it
adheres to the ANSI standard when dealing with trailing blanks. (CHAR data is always padded, so the STRIP
option only affects VARCHAR data.)
The STRIP option applies only to variable-length non-binary data and does not apply to ASCII fixed-width
inserts. For example, assume this schema:
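The example schema is elided here. A hedged sketch showing why STRIP affects only variable-length character data; the table, column, and file names are illustrative:

```sql
CREATE TABLE t ( c1 VARCHAR(20), c2 CHAR(20) );

-- STRIP RTRIM removes trailing blanks from unquoted values loaded
-- into c1 (VARCHAR); c2 (CHAR) is blank-padded regardless.
LOAD TABLE t ( c1, c2 )
FROM '/tmp/t_input.dat'
STRIP RTRIM
QUOTES OFF ESCAPES OFF
```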
WITH CHECKPOINT { ON | OFF }
Determines whether SAP IQ performs a checkpoint. This option is useful only when loading SAP SQL
Anywhere tables in an SAP IQ database.
The default setting is OFF. If this clause is set to ON, a checkpoint is issued after successfully completing
and logging the statement. If the server fails after a connection commits and before the next checkpoint,
the data file used to load the table must be present for the recovery to complete successfully. However, if
WITH CHECKPOINT ON is specified, and recovery is subsequently required, the data file doesn't need to be
present at the time of recovery.
Caution
If you set the CONVERSION_ERROR database option to OFF, you may load bad data into your table
without any error being reported. If you do not specify WITH CHECKPOINT ON, and the database
needs to be recovered, the recovery may fail as CONVERSION_ERROR is ON (the default value) during
recovery. It is recommended that you do not load tables when CONVERSION_ERROR is set to OFF and
WITH CHECKPOINT ON is not specified.
BYTE ORDER { NATIVE | HIGH | LOW }
Specifies the byte order during reads. This option applies to all binary input fields. If none are defined, this
option is ignored. You can specify:
● NATIVE – (default) SAP IQ always reads binary data in the format native to the machine it is running
on.
● HIGH – when multibyte quantities have the high-order byte first (for big-endian platforms like Sun, IBM
AIX, and HP).
● LOW – when multibyte quantities have the low-order byte first (for little-endian platforms like
Windows).
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the BYTE ORDER
clause and issues a message with this information.
LIMIT number-of-rows
Specifies the maximum number of rows to insert into the table. The default is 0 for no limit. The maximum
is 2^31 - 1 (2147483647) rows.
NOTIFY number-of-rows
Specifies that you be notified with a message each time the specified number of rows is successfully
inserted into the table. The default is 0, meaning no notifications are printed. The value of this option
overrides the value of the NOTIFY_MODULUS database option.
ON FILE ERROR { ROLLBACK | FINISH | CONTINUE }
Specifies the action SAP IQ takes when an input file cannot be opened because it does not exist or you
have incorrect privileges to read the file. You can specify one of the following:
PREVIEW { ON | OFF }
Displays the layout of input into the destination table including starting position, name, and data type of
each column. SAP IQ displays this information at the start of the load process. If you are writing to a log file,
this information is also included in the log.
ROW DELIMITED BY 'delimiter-string'
Specifies a string up to 4 bytes in length that indicates the end of an input record. You can use this option
only if all fields within the row are any of the following:
Always include ROW DELIMITED BY to ensure parallel loads. Omitting this clause from the LOAD
specification may cause SAP IQ to load serially rather than in parallel.
You cannot use this option if any input fields contain binary data. With this option, a row terminator causes
any missing fields to be set to NULL. All rows must have the same row delimiters, and it must be distinct
from all column delimiters. The row and field delimiter strings cannot be an initial subset of each other. For
example, you cannot specify "*" as a field delimiter and "*#" as the row delimiter, but you could specify "#"
as the field delimiter with that row delimiter.
If a row is missing its delimiters, SAP IQ returns an error and rolls back the entire load transaction. The only
exception is the final record of a file, for which SAP IQ rolls back that row and returns a warning message.
On Windows, a row delimiter is usually indicated by the carriage return character followed by the newline
character. You might need to specify this as the <delimiter-string> (see above for description) for
either this option or FILLER.
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the ROW DELIMITED
BY clause and issues a message with this information.
SKIP number-of-rows
Defines the number of rows to skip at the beginning of the input files for this load. The maximum number
of rows to skip is 2^31 - 1 (2147483647). The default is 0. SKIP runs in single-threaded mode as it reads the
rows to skip.
The FORMAT parquet clause does not support SKIP <number-of-rows>. If you specify FORMAT
parquet with SKIP <number-of-rows>, SAP IQ returns an error and rolls back the LOAD TABLE
statement.
HEADER SKIP [ALL] number … HEADER DELIMITED BY 'string'
When you include ALL in your load statement for a multiple-file load, LOAD TABLE skips the number of
header rows (that you specify with <number>) from the start of each file, while omitting ALL just skips the
number of header rows from just the first file. ALL does not change the results for a single-file load.
HEADER SKIP <number>, without ALL, specifies a number of lines at the beginning of the data file,
including header rows, for LOAD TABLE to skip. All LOAD TABLE column specifications and other load
options are ignored, until the specified number of rows is skipped.
● You cannot specify HEADER SKIP and HEADER SKIP ALL in the same LOAD TABLE statement; doing
so results in an error.
● You can specify HEADER SKIP ALL <number> and SKIP <number-of-rows> together. When you
do, the number of header rows (specified in <number>) is skipped first, then SKIP <number-of-
rows> is performed until the statement reaches the number of rows you specified.
● You can specify HEADER SKIP ALL <number> and LIMIT <number-of-rows>. When you do, the
number of header rows (specified in <number>) is skipped first, then rows are loaded for the number
of rows you specify.
WORD SKIP number
Allows the load to continue when it encounters data longer than the limit specified when the word index
was created.
If a row is not loaded because a word exceeds the maximum permitted size, a warning is written to
the .iqmsg file. WORD size violations can be optionally logged to the MESSAGE LOG file and rejected rows
logged to the ROW LOG file specified in the LOAD TABLE statement.
● If the option is not specified, LOAD TABLE reports an error and rolls back on the first occurrence of a
word that is longer than the specified limit.
● <number> specifies the number of times the “Words exceeding the maximum permitted word
length not supported” error is ignored.
● 0 (zero) means there is no limit.
ON PARTIAL INPUT ROW { ROLLBACK | CONTINUE }
Specifies the action to take when a partial input row is encountered during a load. You can specify one of
the following:
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the ON PARTIAL
INPUT ROW clause and issues a message with this information.
IGNORE CONSTRAINT constraint-type string
Specifies whether to ignore CHECK, UNIQUE, NULL, DATA VALUE, and FOREIGN KEY integrity constraint
violations that occur during a load. <string> is an integer that indicates the maximum number of
violations to ignore before initiating a rollback. Specifying each <constraint-type> has the following
result:
Whenever any of these limits is exceeded, the LOAD TABLE statement rolls back.
Note
A single row can have more than one integrity constraint violation. Every occurrence of an integrity
constraint violation counts towards the limit of that type of violation.
Set the IGNORE CONSTRAINT option limit to a nonzero value if you are logging the ignored
integrity constraint violations. Logging an excessive number of violations affects the performance
of the load.
If CHECK, UNIQUE, NULL, or FOREIGN KEY is not specified in the IGNORE CONSTRAINT clause, then the
load rolls back on the first occurrence of each of these types of integrity constraint violation.
If DATA VALUE is not specified in the IGNORE CONSTRAINT clause, then the load rolls back on the first
occurrence of this type of integrity constraint violation, unless the CONVERSION_ERROR database option is
OFF. If so, a warning is reported for any DATA VALUE constraint violation and the load continues.
When the load completes, an informational message regarding integrity constraint violations is logged in
the .iqmsg file. This message contains the number of integrity constraint violations that occurred during
the load and the number of rows that were skipped.
[ MESSAGE LOG 'string' ] [ ROW LOG 'string' ]
● If the ONLY LOG clause is not specified, no information on integrity constraint violations is logged. Only
the timestamps indicating the start and completion of the load are logged.
● Information is logged on all integrity constraint-type violations specified in the ONLY LOG clause or for
all word index-length violations if the keyword WORD is specified.
● If constraint violations are being logged, every occurrence of an integrity constraint violation generates
exactly one row of information in the MESSAGE LOG file.
The number of rows (errors reported) in the MESSAGE LOG file can exceed the IGNORE CONSTRAINT
option limit, because the load is performed by multiple threads running in parallel. More than one
thread might report that the number of constraint violations has exceeded the specified limit.
● If constraint violations are being logged, exactly one row of information is logged in the ROW LOG file
for a given row, regardless of the number of integrity constraint violations that occur on that row.
The number of distinct errors in the MESSAGE LOG file might not exactly match the number of rows in
the ROW LOG file. The difference in the number of rows is due to the parallel processing of the load
described above for the MESSAGE LOG.
● The MESSAGE LOG and ROW LOG files cannot be raw partitions or named pipes.
● If the MESSAGE LOG or ROW LOG file already exists, new information is appended to the file.
● Specifying an invalid file name for the MESSAGE LOG or ROW LOG file generates an error.
● Specifying the same file name for the MESSAGE LOG and ROW LOG files generates an error.
Various combinations of the IGNORE CONSTRAINT and MESSAGE LOG options result in different logging
actions:
When both an IGNORE CONSTRAINT limit and a MESSAGE LOG file are specified, all ignored integrity
constraint violations are logged, including the user-specified limit, before the rollback.
Tip
Set the IGNORE CONSTRAINT option limit to a nonzero value, if you are logging the ignored integrity
constraint violations. If a single row has more than one integrity constraint violation, a row for each
violation is written to the MESSAGE LOG file. Logging an excessive number of violations affects the
performance of the load.
LOG DELIMITED BY
Specifies the separator between data values in the ROW LOG file. The default separator is a comma.
● If the specified load format is not ascii, binary, or bcp, SAP IQ returns the message “Only ASCII,
BCP and BINARY are supported LOAD formats.”
● If the LOAD TABLE column specification contains anything other than column name, NULL, or
ENCRYPTED, then SAP IQ returns the error message “Invalid load specification for
LOAD ... FORMAT BCP.”
● If the column delimiter or row terminator size for the FORMAT bcp load is greater than 10 characters,
then SAP IQ returns the message “Delimiter '%2' must be 1 to %3 characters in
length.” (where %3 equals 10).
Messages corresponding to error or warning conditions, which can occur for FORMAT bcp as well as
FORMAT ascii, are the same for both formats.
● If the load default value specified is AUTOINCREMENT, IDENTITY, or GLOBAL AUTOINCREMENT, SAP IQ
returns the error “Default value %2 cannot be used as a LOAD default value. %1”
● If the LOAD TABLE specification does not contain any columns that need to be loaded from the file
specified, SAP IQ returns the error “The LOAD statement must contain at least one
column to be loaded from input file.” and the LOAD TABLE statement rolls back.
● If a load exceeds the limit on the maximum number of terms for a text document with TEXT indexes,
SAP IQ returns the error “Text document exceeds maximum number of terms. Support up
to 4294967295 terms per document.”
Remarks
The LOAD TABLE statement allows efficient mass insertion into a database table from a file with ASCII or
binary data.
The LOAD TABLE options also let you control load behavior when integrity constraints are violated and log
information about the violations.
You can use LOAD TABLE on a temporary table, but the temporary table must have been declared with ON
COMMIT PRESERVE ROWS, or the next COMMIT removes the rows you have loaded.
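For example, a load into a temporary table might be sketched as follows (the table, column, and file names are hypothetical):

```sql
-- The temporary table must preserve rows across COMMIT for the loaded rows to survive
DECLARE LOCAL TEMPORARY TABLE StagingSales (
    sale_id INT,
    amount  DECIMAL(10,2)
) ON COMMIT PRESERVE ROWS;

LOAD TABLE StagingSales ( sale_id, amount )
FROM '/tmp/sales.dat'
ESCAPES OFF QUOTES OFF;
```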
Column Options
SAP IQ supports loading from both ASCII and binary data, and it supports both fixed- and variable-length
formats. To handle all of these formats, you supply a <load-specification> to tell SAP IQ what kind of data
to expect from each “column” or field in the source file. The <column-spec> lets you define these formats:
● ASCII with a fixed length of bytes. The <input-width> value is an integer indicating the fixed width in
bytes of the input field in every record.
● Binary or non-binary fields that use a PREFIX clause, which comprises two parts: a prefix portion that
indicates the length of the data, and an associated data portion that holds the data itself.
When you perform a load of binary data, the length of the associated data portion differs based on whether
you specify the VARYING option with the PREFIX clause:
○ PREFIX <n> BINARY with VARYING – the length of the associated data portion is variable, and is the
same as the actual data length.
○ PREFIX <n> BINARY without VARYING – the length of the associated data portion is fixed, and is the
declared length for the varchar/varbinary column. For example, if the column is varchar(10), the
associated data portion is 10 bytes long. The prefix portion indicates the actual length of data in the
field, even if that length is shorter than the field in the file — in which case, the remaining data after the
actual data is ignored, and is not inserted in the column in the table.
If you plan to use PREFIX <n> BINARY for a varchar or varbinary column for a file that was generated by
the binary mode option for extraction, use the TEMP_EXTRACT_LENGTH_PREFIX option for extraction to
specify the length of the prefix portion, and TEMP_EXTRACT_VARYING to extract the associated data
portion with a variable length of actual data (instead of the declared length of varchar/varbinary).
Specifying TEMP_EXTRACT_VARYING allows you to extract the varchar or varbinary column without trailing
padding in the extracted file. With PREFIX <n> BINARY, trailing blanks for the varchar column (and
trailing zeros for the varbinary column) are not stripped from values when inserted into the column.
If the data is unloaded using the extraction facility with the TEMP_EXTRACT_BINARY option set ON, you
must use the BINARY WITH NULL BYTE parameter for each column when you load the binary data.
● Variable-length characters delimited by a separator. The separator (<delimiter-string>) can be any
string of up to 4 characters, including any combination of printable characters, and any 8-bit
hexadecimal ASCII code that represents a nonprinting character. For example, specify:
○ "\x09" to represent a tab as the terminator.
○ "\x00" for a null terminator (no visible terminator as in “C” strings).
○ "\x0a" for a newline character as the terminator. You can also use the special character combination
of '\n' for newline.
Note
The delimiter string can be from 1 to 4 characters long, but you can specify only a single character in
the DELIMITED BY clause. For BCP, the delimiter can be up to 10 characters.
● DATE or DATETIME string as ASCII characters. You must define the <input-date-format> or <input-
datetime-format> of the string using one of the corresponding formats for the date and datetime data
types supported by SAP IQ. Use DATE for DATE values and DATETIME for DATETIME and TIME values.
The date and time formatting symbols are:
yy or YY Represents the year.
mm or MM Represents the number of the month. Always use a leading zero or blank for the number of the
month where appropriate, for example, '05' for May. A DATE value must include a month. For
example, if the DATE value you enter is 1998, you receive an error. If you enter '03', SAP IQ
applies the default year and day and converts it to '1998-03-01'.
dd or DD Represents the number of the day. The default day is 01. Always use leading zeros for the
number of the day where appropriate, for example, '01' for the first day.
jjj or JJJ J or j indicates a Julian day (1 to 366) of the year.
hh or HH Represents the hour, based on a 24-hour clock. Always use leading zeros or blanks for the hour
where appropriate, for example, '01' for 1 a.m. '00' is also a valid value for the hour of 12 a.m.
nn Represents the minute. Always use leading zeros for the minute where appropriate, for
example, '08' for 8 minutes.
pp Represents the p.m. designation only if needed. (This is an incompatibility with SAP IQ
versions earlier than 12.0; previously, "pp" was synonymous with "aa".)
hh SAP IQ assumes zero for minutes and seconds. For example, if the value you enter is '03',
SAP IQ converts it to '03:00:00.0000'.
hh:nn or hh:mm SAP IQ assumes zero for seconds. For example, if the time value you enter is '03:25', SAP IQ
converts it to '03:25:00.0000'.
● The FILE NAME option allows you to insert the name of a file into a column when you perform a LOAD
TABLE statement. When you do, the name of the file (but not its contents) is loaded into the column
for each of the rows in the table.
○ You can only specify this option for VARCHAR and CHAR columns.
○ The length of the file name cannot be longer than the maximum length of the column you are
specifying.
○ You can only specify one column for use with FILE NAME.
○ After the load inserts the file name into a column, the system does not add any information to mark
the column as a FILE NAME column.
SAP IQ has built-in load optimizations for common date, time, and datetime formats. If your data to be loaded
matches one of these formats, you can significantly decrease load time by using the appropriate format.
You can also specify the date/time field as an ASCII fixed-width field (as described above) and use the
FILLER(1) option to skip the column delimiter.
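The column formats described above can be combined in one load specification. The following sketch (the table, column, and file names are assumptions) uses tab delimiters, a formatted ASCII date, and FILLER(1) to skip the delimiter after the date field:

```sql
LOAD TABLE Orders (
    order_id  '\x09',              -- variable-length field, tab terminator
    placed_on DATE('yyyy-mm-dd'),  -- date as fixed-format ASCII characters
    FILLER(1),                     -- skip the delimiter that follows the date
    notes     '\x0a'               -- last field, newline terminator
)
FROM '/tmp/orders.txt'
ESCAPES OFF QUOTES OFF;
```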
The NULL portion of the <column-spec> indicates how to treat certain input values as NULL values when
loading into the table column. These characters can include BLANKS, ZEROS, or any other list of literals you
define. When specifying a NULL value or reading a NULL value from the source file, the destination column
must be able to contain NULLs.
ZEROS are interpreted as follows: the cell is set to NULL if (and only if) the input data (before conversion, if
ASCII) is all binary zeros (and not character zeros).
For example, if your LOAD TABLE statement includes col1 date('yymmdd') null(zeros) and the date is
000000, you receive an error indicating that 000000 cannot be converted to a DATE(4). To get LOAD TABLE
to insert a NULL value in col1 when the data is 000000, either write the NULL clause as null('000000'), or
modify the data to equal binary zeros and use NULL (ZEROS).
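The two alternatives described above might be sketched as follows (the table and file names are hypothetical):

```sql
-- Treat the literal string '000000' as NULL for the date column
LOAD TABLE Events ( col1 DATE('yymmdd') NULL('000000') )
FROM '/tmp/events.dat'
ESCAPES OFF QUOTES OFF;

-- Or, if the field contains binary zeros rather than character zeros:
LOAD TABLE Events ( col1 DATE('yymmdd') NULL(ZEROS) )
FROM '/tmp/events.dat'
ESCAPES OFF QUOTES OFF;
```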
If the length of a VARCHAR cell is zero and the cell is not NULL, you get a zero-length cell. For all other data
types, if the length of the cell is zero, SAP IQ inserts a NULL. This is ANSI behavior. For non-ANSI treatment of
zero-length character data, set the NON_ANSI_NULL_VARCHAR database option.
Use the DEFAULT option to specify a load default column value. You can load a default value into a column, even
if the column does not have a default value defined in the table schema. This feature provides more flexibility at
load time.
● The LOAD TABLE DEFAULTS option must be ON in order to use the default value specified in the LOAD
TABLE statement. If the DEFAULTS option is OFF, the specified load default value is not used and a NULL
value is inserted into the column instead.
● The LOAD TABLE statement must contain at least one column that needs to be loaded from the file
specified in the LOAD TABLE statement. Otherwise, an error is reported and the load is not performed.
● The specified load default value must conform to the supported default values for columns and default
value restrictions. The LOAD TABLE DEFAULT option does not support AUTOINCREMENT, IDENTITY, or
GLOBAL AUTOINCREMENT as a load default value.
● The LOAD TABLE DEFAULT <default-value> must be of the same character set as that of the
database.
● Encryption of the default value is not supported for the load default values specified in the LOAD TABLE
DEFAULT clause.
● A constraint violation caused by evaluation of the specified load default value is counted for each row that
is inserted in the table.
Another important part of the <load-specification> is the FILLER option. This option indicates you want
to skip over a specified field in the source input file. For example, there may be characters at the end of rows,
or entire fields in the input file, that you do not want to load into the table.
Parquet Files
When you specify FORMAT parquet in the LOAD TABLE statement, SAP IQ ignores the following <column-
spec> options and issues a message with this information:
ASCII ( <input-width> )
PREFIX { 1 | 2 | 4 }
'<delimiter-string>'
The FORMAT parquet clause does not support the following <load-specification> options. If you specify
both, SAP IQ returns an error and rolls back the LOAD TABLE statement:
FILLER <filler-type>
<filler-type> ::=
{ <input-width>
| PREFIX { 1 | 2 | 4 }
| '<delimiter-string>'
}
Privileges
The privileges required depend on the database server -gl command line option, as follows. See GRANT
System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502] for assistance
with granting privileges.
-gl NONE: Execution of the LOAD TABLE statement is not permitted, regardless of privilege.
For more information on the -gl command line option, see SAP IQ Utility Reference > start_iq Database Server
Startup Utility > start_iq Server Options.
LOAD TABLE also requires a write lock on the table. When using the USING CLIENT FILE clause, you require:
Standards
Examples
● This example loads data from one file into the Products table on a Windows system. A tab is used as the
column delimiter following the Description and Color columns:
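A sketch of such a statement (the file name, the full column list, and the delimiters for the other columns are assumptions):

```sql
LOAD TABLE Products (
    ID          '\x09',
    Name        '\x09',
    Description '\x09',      -- tab delimiter follows Description
    Color       '\x0d\x0a'   -- Windows row terminator after the last column
)
FROM 'C:\\mydata\\products.dat'
ESCAPES OFF QUOTES OFF;
```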
● This example loads data from two files into the product_new table (which allows NULL values) on a UNIX
system. The tab character is the default column delimiter, and the newline character is the row delimiter:
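A sketch of a two-file load (the column list and file names are assumptions):

```sql
LOAD TABLE product_new (
    id          '\x09',
    name        '\x09',
    description '\x0a'   -- newline terminates each row
)
FROM '/data/prod1.dat', '/data/prod2.dat'
ESCAPES OFF QUOTES OFF;
```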
● This example ignores 10 word-length violations; on the 11th violation, the load reports the error and rolls back:
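A sketch, assuming the WORD SKIP clause is the mechanism used to ignore word-length violations (the table, columns, and file name are hypothetical):

```sql
LOAD TABLE t1 ( c1 '\x09', c2 '\x0a' )
FROM '/tmp/t1.dat'
WORD SKIP 10            -- tolerate up to 10 word-length violations
ESCAPES OFF QUOTES OFF;
```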
● This example loads data into table t1 from the BCP character file bcp_file.bcp using the FORMAT BCP
load option:
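A sketch of such a statement; for FORMAT BCP, the column specification may contain only column names (plus NULL or ENCRYPTED), as noted earlier:

```sql
LOAD TABLE t1 ( c1, c2, c3 )
FROM 'bcp_file.bcp'
FORMAT BCP;
```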
● This example loads the default value 12345 into c1 using the DEFAULT load option, and loads c2 and c3
with data from the LoadConst04.dat file:
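A sketch, assuming a DEFAULTS ON clause enables the load default value as described in the Remarks (the delimiters are assumptions):

```sql
LOAD TABLE t1 ( c1 DEFAULT '12345', c2 ',', c3 '\x0a' )
FROM 'LoadConst04.dat'
ESCAPES OFF QUOTES OFF
DEFAULTS ON;
```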
● This example loads c1 and c2 with data from the file bcp_file.bcp using the FORMAT BCP load option
and sets c3 to the value 10:
● This code fragment ignores one header row at the beginning of the data file, where the header row is
delimited by '&&':
LOAD TABLE
...HEADER SKIP 1 HEADER DELIMITED by '&&'
...
● This code fragment ignores 2 header rows at the beginning of the data file, where each header row is
delimited by '\n':
LOAD TABLE
...HEADER SKIP 2
...
● This example loads a table from a CSV file, using a double quotation mark for the QUOTE enclosure
character and the backslash (\) for the QUOTE ESCAPE character.
c1_val1, c3_value1,
c1_val2, c3_value2,
c1_val21, c3_value21,
c1_val22, c3_value22,
c1 c2 c3
========= ======== =========
c1_val1 datefile1.csv c3_value1
c1_val2 datefile1.csv c3_value2
c1_val21 datefile2.csv c3_value21
c1_val22 datefile2.csv c3_value22
● This example loads a table named test2 (which contains columns c1 and c2, both of which are
VARCHAR(20)), using files datafile1.csv and datafile2.csv, skipping the first header row from the
start of each file:
c1_val1, c3_value1,
c1_val2, c3_value2,
c1_val1, c3_value1,
c1_val2, c3_value2,
c1 c2
========= ======
c1_val2 c3_value2
c1_val22 c3_value22
● To execute a VARCHAR or VARBINARY load from a file generated by the data extraction facility, without
specifying the TEMP_EXTRACT_LENGTH_PREFIX option:
LOAD TABLE t1 (c1 BINARY WITH NULL BYTE) FROM 'yyy' FORMAT BINARY
You can specify this LOAD <column-spec>, BINARY, for any column data type; in this example, the
column is c1.
LOAD TABLE t1 (c1 PREFIX 2 BINARY WITH NULL BYTE) FROM 'xxx' FORMAT BINARY
You can specify PREFIX <n> BINARY only for VARCHAR and VARBINARY columns; in this example, the
column is c1.
Related Information
Syntax
Parameters
owner
The owner of the mutex. <owner> can also be specified using an indirect identifier (for example,
`[@<variable-name>]`).
mutex-name
The name of the mutex. <mutex-name> can also be specified using an indirect identifier (for example,
`[@<variable-name>]`).
IN { SHARE | EXCLUSIVE } MODE clause
Specifies whether the mutex is locked in shared or exclusive mode.
TIMEOUT clause
The amount of time, in milliseconds (greater than 0), to wait to acquire the lock. If the TIMEOUT clause is
not specified, then the connection waits indefinitely until the lock can be acquired.
Remarks
Recursive LOCK MUTEX statements are allowed; however, an equal number of releases (RELEASE MUTEX) are
required to release the mutex for connection-scope mutexes.
If a connection executes the LOCK MUTEX statement in SHARE MODE, and then again in EXCLUSIVE MODE, it
may be blocked if other connections have the mutex locked in SHARE MODE. If not, then the lock mode
changes to an exclusive lock and remains that way until the lock is completely released by the connection.
For transaction-scope mutexes (that is, the SCOPE TRANSACTION clause was specified at creation time), the
mutex is held until the end of the transaction. For connection-scope mutexes (that is, the SCOPE
CONNECTION clause was specified at creation time), the mutex is held until a RELEASE MUTEX statement is
executed, or the connection is terminated.
LOCK MUTEX statements benefit from the same deadlock detection used for table and row locks.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
None.
Standards
Example
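A minimal sketch (the mutex owner, name, and timeout are hypothetical):

```sql
-- Wait up to 5 seconds (5000 ms) for an exclusive lock on the mutex
LOCK MUTEX dba.order_mutex IN EXCLUSIVE MODE TIMEOUT 5000;
```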
Related Information
Prevents other concurrent transactions from accessing or modifying a table, waiting up to a specified time for
locks held by earlier transactions to be released.
Syntax
<table-list> ::=
[ <owner>. ] <table-name> [ , [ <owner>. ] <table-name>, ...]
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
table-name
LOCK TABLE either locks all tables in the table list, or none. The table must not be enabled for row-level
versioning (RLV). If obtaining a lock for an SAP SQL Anywhere table, or when obtaining SHARE or
EXCLUSIVE locks, you may only specify a single table. Standard SAP IQ object qualification rules are used
to parse <table-name>.
WITH HOLD
The lock is held until the end of the connection. If the clause is not specified, the lock is released when the
current transaction is committed or rolled back. Using the WITH HOLD clause in the same statement with
WRITE mode is unsupported and returns the error "Must be a base table, not a view. WRITE mode is only
valid for IQ base tables." (SQLCODE=-131, ODBC 3 State="42000").
SHARE
Prevents other transactions from modifying the table, but allows them read access. In this mode, you can
change data in the table as long as no other transaction has locked the row being modified, either
indirectly, or explicitly by using LOCK TABLE.
WRITE
Prevents other transactions from modifying a list of tables. Unconditionally commits the connection's
outermost transaction. The transaction's snapshot version is established not by the LOCK TABLE IN
WRITE MODE statement, but by the execution of the next command processed by SAP IQ.
WRITE mode locks are released when the transaction commits or rolls back, or when the connection
disconnects.
EXCLUSIVE
Prevents other transactions from accessing the table. In this mode, no other transaction can execute
queries, updates of any kind, or any other action against the table.
WAIT time
Specifies maximum blocking time for all lock types. This clause is mandatory when lock mode is WRITE.
When a time argument is given, the server locks the specified tables only if available within the specified
time. The time argument can be specified in the format hh:nn:ss:sss. If a date part is specified, the server
ignores it and converts the argument into a timestamp. When no time argument is given, the server waits
indefinitely until a WRITE lock is available or an interrupt occurs.
Remarks
(back to top)
sp_iqlocks on the coordinator confirms that the table coord1 has an exclusive (E) lock.
The result of sp_iqlocks run on a connection on a secondary server does not show the exclusive lock on table
coord1. The user on this connection can see updates to table coord1 on the coordinator.
Other connections on the coordinator can see the exclusive lock on table coord1, and attempting to select
from table coord1 from another connection on the coordinator returns the error "User DBA has the row in
coord1 locked".
LOCK TABLE on views is unsupported. Attempting to lock a view acquires a shared schema lock regardless of
the mode specified in the command. A shared schema lock prevents other transactions from modifying the
table schema.
The Transact-SQL (T-SQL) stored procedure dialect does not support LOCK TABLE. For example, this
statement returns Syntax error near LOCK:
The Watcom-SQL stored procedure dialect supports LOCK TABLE. The default command delimiter is a
semicolon (;). For example:
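A sketch of such a procedure (the procedure name, table, and wait time are assumptions):

```sql
CREATE PROCEDURE hold_customers()
BEGIN
    -- Watcom-SQL dialect: LOCK TABLE is permitted inside the procedure body
    LOCK TABLE Customers IN SHARE MODE WAIT '00:00:10';
    -- ... statements that read Customers under the share lock ...
END;
```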
Privileges
(back to top)
The privilege varies by lock mode. See GRANT System Privilege Statement [page 1511] or GRANT Object-Level
Privilege Statement [page 1502] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● This example obtains a WRITE lock on the Customers and Employees tables, if available within 5 minutes
and 3 seconds:
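A sketch of the statement described above, using the hh:nn:ss form of the WAIT argument:

```sql
LOCK TABLE Customers, Employees
IN WRITE MODE WAIT '00:05:03';
```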
● This example waits indefinitely until the WRITE lock on the Customers and Employees tables is available,
or an interrupt occurs:
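A sketch; when WAIT is given no time argument, the server blocks indefinitely:

```sql
LOCK TABLE Customers, Employees
IN WRITE MODE WAIT;
```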
Related Information
Syntax
[ <statement-label>: ]
... [ WHILE <search-condition> ] LOOP
... <statement-list>
... END LOOP [ <statement-label> ]
Remarks
The WHILE and LOOP statements are control statements that let you repeatedly execute a list of SQL
statements while a <search-condition> evaluates to TRUE. The LEAVE statement can be used to resume
execution at the first statement after the END LOOP.
Privileges
None
Standards
Examples
...
SET i = 1 ;
WHILE i <= 10 LOOP
INSERT INTO Counters( number ) VALUES ( i ) ;
SET i = i + 1 ;
END LOOP ;
...
SET i = 1;
lbl:
LOOP
INSERT
INTO Counters( number )
VALUES ( i ) ;
IF i >= 10 THEN
LEAVE lbl ;
END IF ;
SET i = i + 1 ;
END LOOP lbl
Related Information
Displays a message, which can be any expression. Clauses can specify where the message is displayed.
Syntax
MESSAGE <expression>, …
[ TYPE { INFO | ACTION | WARNING | STATUS } ]
[ TO { CONSOLE
| CLIENT [ FOR { CONNECTION <conn_id> [ IMMEDIATE ] | ALL } ]
| [ EVENT | SYSTEM ] LOG }
[ DEBUG ONLY ] ]
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
FOR
The FOR clause can be used to notify another application of an event detected on the server without the
need for the application to explicitly check for the event. When the FOR clause is used, recipients receive
the message the next time they execute a SQL statement. If the recipient is currently executing a SQL
statement, the message is received when the statement completes. If the statement being executed is a
stored procedure call, the message is received before the call is completed.
If an application requires notification within a short time after the message is sent and when the
connection is not executing SQL statements, you can use a second connection. This connection can
execute one or more WAITFOR DELAY statements. These statements do not consume significant
resources on the server or network (as would happen with a polling approach), but permit applications to
receive notification of the message shortly after it is sent.
TYPE
Has an effect only if the message is sent to the client. The client application must decide how to handle the
message. Interactive SQL displays messages in these locations:
DEBUG ONLY
Controls whether debugging messages added to stored procedures are enabled or disabled by changing
the setting of the DEBUG_MESSAGES database option. When DEBUG ONLY is specified, the MESSAGE
statement is executed only when the DEBUG_MESSAGES option is set to ON.
Note
DEBUG ONLY messages are inexpensive when the DEBUG_MESSAGES option is set to OFF, so these
statements can usually be left in stored procedures on a production system. However, they should be
used sparingly in locations where they would be executed frequently; otherwise, they might result in a
small performance penalty.
(back to top)
The procedure issuing a MESSAGE … TO CLIENT statement must be associated with a connection.
For example, a message box is not displayed when the statement executes within an event, because the event occurs outside of a connection.
Valid expressions can include a quoted string or other constant, variable, or function. However, queries are not
permitted in the output of a MESSAGE statement, even though the definition of an expression includes queries.
ESQL and ODBC clients receive messages via message callback functions. In each case, these functions must
be registered. To register ESQL message handlers, use the db_register_callback function.
ODBC clients can register callback functions using the SQLSetConnectAttr function.
Privileges
(back to top)
The privilege varies by clause. See GRANT System Privilege Statement [page 1511] for assistance with granting
privileges.
TO EVENT LOG or TO SYSTEM LOG Requires the SERVER OPERATOR system privilege
Standards
(back to top)
Examples
(back to top)
● The following example displays the string 'The current date and time: ', followed by the current date and
time, in the database server message window:
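A sketch of such a statement (the exact expression is an assumption):

```sql
MESSAGE 'The current date and time: ', CURRENT TIMESTAMP
TO CONSOLE;
```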
● The following example registers a callback in ODBC by first declaring the message handler:
rc = SQLSetConnectAttr(
dbc,
ASA_REGISTER_MESSAGE_CALLBACK,
(SQLPOINTER) &my_msgproc, SQL_IS_POINTER );
Related Information
Syntax
Parameters
owner
The owner of the semaphore. <owner> can also be specified using an indirect identifier (for example,
`[@<variable-name>]`).
semaphore-name
The name of the semaphore. <semaphore-name> can also be specified using an indirect identifier (for
example, `[@<variable-name>]`).
INCREMENT BY clause
Specify a positive integer to indicate how much to increment the counter associated with the semaphore. If
this clause is not specified, then the counter is incremented by 1.
If you set <number> to NULL, or if it is set to a variable and the variable value is NULL, the behavior is
equivalent to not specifying the clause.
Remarks
If the counter is 0, and a connection is blocked on a WAITFOR SEMAPHORE statement on this semaphore, the
NOTIFY SEMAPHORE statement notifies the connection.
If a connection that notified a semaphore is dropped or canceled, the counter increment persists, so your
application needs to be able to address this case.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
None.
Example
The following statement increments the counter for the license_counter semaphore by 1:
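A sketch; because INCREMENT BY defaults to 1, the clause can be omitted:

```sql
NOTIFY SEMAPHORE license_counter;
```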
Related Information
Syntax
OPEN <cursor-name>
... [ USING [ DESCRIPTOR { <sqlda-name> | <host-variable> [, …] } ] ]
... [ WITH HOLD ]
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
cursor-name
Identifier or host-variable.
If the cursor name is specified by an identifier or string, then the corresponding DECLARE CURSOR
statement must appear prior to the OPEN in the C program; if the cursor name is specified by a host
variable, then the DECLARE CURSOR statement must execute before the OPEN statement.
USING
Specifies the host variables that are bound to the placeholder bind variables in the SELECT statement for
which the cursor has been declared.
sqlda-name
Identifier
WITH HOLD
Keeps the cursor open for subsequent transactions. The cursor remains open until the end of the current
connection or until an explicit CLOSE statement is executed. Cursors are automatically closed when a
connection is terminated.
Remarks
(back to top)
By default, all cursors are automatically closed at the end of the current transaction (COMMIT or ROLLBACK).
A cursor declared using the FOR READ ONLY clause sees the version of table(s) on which the cursor is declared
when the cursor is opened, not the version of table(s) at the time of the first FETCH statement.
The USING DESCRIPTOR sqlda-name, host-variable, and BLOCK n clauses are for Embedded SQL only.
After successful execution of the OPEN statement, the sqlerrd[3] field of the SQLCA (SQLIOESTIMATE) is filled
in with an estimate of the number of input/output operations required to fetch all rows of the query. Also, the
sqlerrd[2] field of the SQLCA (SQLCOUNT) is filled in with either the actual number of rows in the cursor (a
value greater than or equal to 0), or an estimate thereof (a negative number whose absolute value is the
estimate). The sqlerrd[2] field is the actual number of rows, if the database server can compute this value
without counting the rows.
Privileges
(back to top)
● Must have SELECT object-level permission on all tables in a SELECT statement or EXECUTE object-level
permission on the procedure in a CALL statement.
● When the cursor is on a CALL statement, OPEN causes the procedure to execute until the first result set
(SELECT statement with no INTO clause) is encountered. If the procedure completes and no result set is
found, the SQLSTATE_PROCEDURE_COMPLETE warning is set.
Standards
(back to top)
Examples
(back to top)
BEGIN
DECLARE cur_employee CURSOR FOR
SELECT Surname
FROM Employees ;
DECLARE name CHAR(40) ;
OPEN cur_employee;
LOOP
FETCH NEXT cur_employee into name ;
...
END LOOP
CLOSE cur_employee;
END
Related Information
Syntax
OUTPUT TO <filename>
[ APPEND ] [ VERBOSE ]
[ FORMAT <output-format> ]
[ ESCAPE CHARACTER <character> ]
[ DELIMITED BY <string> ]
[ QUOTE <string> [ ALL ] ]
[ COLUMN WIDTHS ( <integer>, … ) ]
[ HEXADECIMAL { ON | OFF | ASIS } ]
[ ENCODING <encoding> ]
[ WITH COLUMN NAMES ]
<output-format> ::=
TEXT | FIXED | HTML | SQL | XML
Go to:
● Remarks
● Privileges
● Side Effects
● Standards
● Examples
Parameters
(back to top)
FORMAT
The output format. If no FORMAT clause is specified, the Interactive SQL OUTPUT_FORMAT database option
setting is used.
TEXT
Output is a TEXT format file with one row per line in the file. All values are separated by commas, and
strings are enclosed in apostrophes (single quotes). The delimiter and quote strings can be changed using
the DELIMITED BY and QUOTE clauses. If the ALL clause is specified in the QUOTE clause, all values (not
just strings) are quoted. TEXT is the default output format.
Three other special sequences are also used. The two characters \n represent a newline character, \\
represents a single \, and the sequence \xDD represents the character with hexadecimal code DD.
If you are exporting Java methods that have string return values, you must use the HEXADECIMAL OFF
clause.
FIXED
Output is fixed format, with each column having a fixed width. The width for each column can be specified
using the COLUMN WIDTHS clause. No column headings are output in this format.
If you omit the COLUMN WIDTHS clause, the width for each column is computed from the data type for the
column, and is large enough to hold any value of that data type. The exception is that LONG VARCHAR and
LONG BINARY data defaults to 32 KB.
HTML
Output is in HTML format.
SQL
Output is an Interactive SQL INPUT statement required to re-create the information in the table.
Note
SAP IQ does not support the INPUT statement. Change this statement to a valid LOAD TABLE (or
INSERT) statement to use it to load data back in.
XML
Output is an XML file encoded in UTF-8 and containing an embedded DTD. Binary values are encoded in
CDATA blocks with the binary data rendered as 2-hex-digit strings. The LOAD TABLE statement does not
accept XML as a file format.
APPEND
Appends the results of the query to the end of an existing output file without overwriting the previous
contents of the file. By default, if you do not use APPEND clause, the OUTPUT statement overwrites the
contents of the output file.
The APPEND clause is valid if the output format is TEXT, FIXED, or SQL.
VERBOSE
Error messages about the query, the SQL statement used to select the data, and the data itself are written
to the output file. By default, if you omit the VERBOSE clause, only the data is written to the file. The
VERBOSE clause is valid if the output format is TEXT, FIXED, or SQL.
ESCAPE CHARACTER
The default escape character for characters stored as hexadecimal codes and symbols is a backslash (\),
so \x0A is the line feed character, for example.
To change this default, use the ESCAPE CHARACTER clause. For example, to use the exclamation mark as
the escape character, enter:
DELIMITED BY
For the TEXT output format only. The delimiter string, by default a comma, is placed between columns.
QUOTE
For the TEXT output format only. The quote string, by default a single quote character, is placed around
string values. If ALL is specified in the QUOTE clause, the quote string is placed around all values, not just
around strings.
COLUMN WIDTHS
Specifies the column widths for the FIXED format output.
HEXADECIMAL
Specifies how binary data is to be unloaded for the TEXT format only. When set to ON, binary data is
unloaded in the format 0xabcd. When set to OFF, binary data is escaped when unloaded (\xab\xcd). When
set to ASIS, values are written without any escaping, even if the value contains control characters. ASIS is
useful for text that contains formatting characters such as tabs or carriage returns.
ENCODING
Specifies the encoding that is used to write the file. You can use the ENCODING clause only with the TEXT
format. Can be a string or identifier.
If you do not specify the ENCODING clause, Interactive SQL determines the code page that is used to write
the file as follows, where code page values occurring earlier in the list take precedence over those
occurring later:
● The code page specified with the DEFAULT_ISQL_ENCODING option (if this option is set)
● The default code page for the computer Interactive SQL is running on
Remarks
(back to top)
The current query is the SELECT or LOAD TABLE statement that generated the information that appears on
the Results tab in the Results pane. The OUTPUT statement reports an error if there is no current query.
Note
OUTPUT is especially useful in making the results of a query or report available to another application, but is
not recommended for bulk operations. For high-volume data movement, use the ASCII and BINARY data
extraction functionality with the SELECT statement. The extraction functionality provides much better
performance for large-scale data movement, and creates an output file you can use for loads.
Privileges
(back to top)
None
Side Effects
(back to top)
In Interactive SQL, the Results tab displays only the results of the current query. All previous query results are
replaced with the current query results.
Examples
● The following example places the contents of the Employees table in a text file:
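The statement itself is missing from this excerpt. A sketch consistent with the description (output path illustrative):

```sql
SELECT * FROM Employees;
OUTPUT TO 'employees.txt';
```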
● The following example places the contents of the Employees table at the end of an existing file, and
includes any messages about the query in this file as well:
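A sketch consistent with this description, assuming the APPEND and VERBOSE clauses of OUTPUT (path illustrative):

```sql
SELECT * FROM Employees;
OUTPUT TO 'employees.txt' APPEND VERBOSE;
```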
● The following example exports a value that contains an embedded line feed character. A line feed character
has the numeric value 10, which you can represent as the string '\x0a' in a SQL statement.
Execute this statement with HEXADECIMAL ON:
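The statement is missing from this excerpt; based on the surrounding description, it would take roughly this form (file name illustrative):

```sql
SELECT 'line1' || '\x0a' || 'line2';
OUTPUT TO 'file.txt' HEXADECIMAL ON;
```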
The result is a file with one line in it, containing this text:
line10x0aline2
If you execute the same statement with HEXADECIMAL OFF, the result is again a single line:
line1\x0aline2
If you set HEXADECIMAL to ASIS, the result is a file with two lines:
'line1
line2'
Using ASIS generates two lines, because the embedded line feed character has been exported without
being converted to a two-digit hex representation, and without a prefix.
Related Information
Syntax
Remarks
PARAMETERS specifies how many parameters there are to a command file and also names those parameters so
that they can be referenced later in the command file.
Parameters are referenced by putting the named parameter into the command file where you want the
parameter to be substituted:
{parameter1}
There can be no spaces between the braces and the parameter name.
If a command file is invoked with fewer than the required number of parameters, dbisql prompts for values of
the missing parameters.
Privileges
None
Standards
Examples
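The original examples are not included in this excerpt. A minimal sketch of a command file that declares and uses one parameter (file, column, and parameter names illustrative):

```sql
-- report.sql: expects one parameter, dept_id
PARAMETERS dept_id;
SELECT Surname, GivenName
FROM Employees
WHERE DepartmentID = {dept_id};
```

Invoking it with READ report.sql 100 substitutes 100 for {dept_id}; invoking it with no parameters causes dbisql to prompt for dept_id.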
Syntax
PREPARE <statement-name>
FROM <statement> [ FOR { READ ONLY | UPDATE [ OF <column-name-list> ] } ]
... [ DESCRIBE <describe-type> INTO [ [ SQL ] DESCRIPTOR ] <descriptor> ]
... [ WITH EXECUTE ]
<describe-type> ::=
   { ALL
   | BIND VARIABLES
   | INPUT
   | OUTPUT
   | SELECT LIST
   | LONG NAMES [ [ [ <owner>. ]<table>. ]<column> ]
   | WITH VARIABLE RESULT }
Parameters
statement-name
Referenced to execute the statement, or to open a cursor if the statement is a SELECT statement.
<statement-name> may be a host variable of type a_sql_statement_number defined in the sqlca.h
header file that is automatically included. If an identifier is used for the <statement-name>, only one
statement per module may be prepared with this <statement-name>.
FOR UPDATE | FOR READ ONLY
Defines the cursor updatability if the statement is used by a cursor. A FOR READ ONLY cursor cannot be
used in an UPDATE (positioned) or a DELETE (positioned) operation. FOR READ ONLY is the default. In
response to any request for a cursor that specifies FOR UPDATE, SAP IQ provides either a value-sensitive
cursor or a sensitive cursor. Insensitive and asensitive cursors are not updatable.
DESCRIBE <describe-type> INTO DESCRIPTOR
The prepared statement is described into the specified descriptor. The describe type may be any of the describe types allowed in the DESCRIBE statement.
The DESCRIBE INTO DESCRIPTOR clause might improve performance, as it decreases the required client/server communication.
WITH EXECUTE
The statement is executed if and only if it is not a CALL or SELECT statement, and it has no host variables.
The statement is immediately dropped after a successful execution. If PREPARE and DESCRIBE (if any) are
successful but the statement cannot be executed, a warning SQLCODE 111, SQLSTATE 01W08 is set,
and the statement is not dropped.
The WITH EXECUTE clause might improve performance, as it decreases the required client/server communication.
WITH VARIABLE RESULT
Describes procedures that may have more than one result set, with different numbers or types of columns.
If the WITH VARIABLE RESULT clause is used, the database server sets the SQLCOUNT value after the
describe to one of these values:
● 0 – the result set may change: the procedure call should be described again following each OPEN
statement.
● 1 – the result set is fixed. No re-describing is required.
Remarks
The PREPARE statement prepares a SQL statement from the <statement> and associates the prepared
statement with <statement-name>.
If a host variable is used for <statement-name>, it must have the type short int. There is a typedef for this
type in sqlca.h called a_sql_statement_number. This type is recognized by the SQL preprocessor and can
be used in a DECLARE section. The host variable is filled in by the database during the PREPARE statement and
need not be initialized by the programmer.
Statements that can be prepared include:
● ALTER
● CALL
● COMMENT ON
● CREATE
● DELETE
● DROP
● GRANT
● INSERT
● REVOKE
● SELECT
Preparing COMMIT, PREPARE TO COMMIT, and ROLLBACK statements is still supported for compatibility. However, you should perform all transaction management operations with static Embedded SQL: certain application environments may require it, and other Embedded SQL systems do not support dynamic transaction management operations.
Note
Make sure that you DROP the statement after use. If you do not, then the memory associated with the
statement is not reclaimed.
Privileges
None
Side Effects
Standards
Examples
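The original examples are not included in this excerpt. A minimal Embedded SQL sketch of preparing, executing, and dropping a statement (table and variable names illustrative):

```sql
EXEC SQL BEGIN DECLARE SECTION;
a_sql_statement_number stmt;
EXEC SQL END DECLARE SECTION;

EXEC SQL PREPARE :stmt FROM 'DELETE FROM Employees WHERE EmployeeID = 105';
EXEC SQL EXECUTE :stmt;
EXEC SQL DROP STATEMENT :stmt;
```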
Syntax
Remarks
The PRINT statement returns a message to the client window if you are connected from an Open Client
application or JDBC application. If you are connected from an Embedded SQL or ODBC application, the
message displays on the database server window.
The format string can contain placeholders for the arguments in the optional argument list. These placeholders
are of the form %<nn>!, where <nn> is an integer between 1 and 20.
Privileges
None
Standards
● This statement returns the string Procedure called successfully to the client:
EXECUTE print_test
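The definition of print_test is not shown in this excerpt; a definition consistent with the example might be:

```sql
CREATE PROCEDURE print_test
AS
PRINT 'Procedure called successfully'
```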
● The following example uses placeholders in the PRINT statement; execute these statements inside a
procedure:
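The statements themselves are missing from this excerpt. A sketch in the Transact-SQL dialect (variable names and values illustrative):

```sql
DECLARE @var1 INT, @var2 INT
SELECT @var1 = 3, @var2 = 5
PRINT 'Variable 1 = %1!, Variable 2 = %2!', @var1, @var2
```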
For an alternate way to disallow connections, use the LOGIN_PROCEDURE option or the
sp_iqmodifylogin system stored procedure.
Related Information
Syntax
Parameters
cursor-name
Identifier or hostvar
sqlda-name
Identifier
ARRAY :<nnn>
Can be used to carry out wide puts, which insert more than one row at a time and which might improve
performance. The value <nnn> is the number of rows to be inserted. The SQLDA must contain <nnn> *
(columns per row) variables. The first row is placed in SQLDA variables 0 to (columns per row) - 1, and so
on.
Note
For scroll (value-sensitive) cursors, the inserted row appears if the new row matches the WHERE
clause and the keyset cursor has not finished populating. For dynamic cursors, if the inserted row
matches the WHERE clause, the row might appear. Insensitive cursors cannot be updated.
Remarks
Inserts a row into the named cursor. Values for the columns are taken from the first SQLDA or the host variable
list, in a one-to-one correspondence with the columns in the INSERT statement (for an INSERT cursor) or the
columns in the select list (for a SELECT cursor).
The PUT statement can be used only on a cursor over an INSERT or SELECT statement that references a single
table in the FROM clause, or that references an updatable view consisting of a single base table.
If the sqldata pointer in the SQLDA is the null pointer, no value is specified for that column. If the column has a
DEFAULT VALUE associated with it, that is used; otherwise, a NULL value is used.
The second SQLDA or host variable list contains the results of the PUT statement.
Privileges
Requires the INSERT object-level privilege. See GRANT Object-Level Privilege Statement [page 1502] for assistance with granting privileges.
Side Effects
● Automatic commit.
● When inserting rows into a value-sensitive (keyset-driven) cursor, the inserted rows appear at the end of the result set, even when they do not match the WHERE clause of the query or when an ORDER BY clause would normally have placed them elsewhere in the result set.
Standards
Examples
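The original example is not included in this excerpt. A minimal Embedded SQL sketch (cursor and host-variable names illustrative):

```sql
EXEC SQL PUT cur_employee FROM :emp_id, :surname;
```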
Related Information
Syntax
Parameters
error-number
A 5-digit integer greater than 17000. The error number is stored in the global variable <@@error>.
format-string
If not supplied or is empty, the error number is used to locate an error message in the system tables. SAP
Adaptive Server Enterprise obtains messages 17000-19999 from the SYSMESSAGES table. In SAP IQ, this
table is an empty view, so errors in this range should provide a format string. Messages for error numbers
of 20000 or greater are obtained from the SYS.SYSUSERMESSAGES table.
The <format-string> can be up to 255 bytes long. This is the same as in SAP ASE.
The format string can contain placeholders for the arguments in the optional argument list. These
placeholders are of the form %nn!, where <nn> is an integer between 1 and 20.
Remarks
There is no comma between the <error-number> and the <format-string> parameters. The first item
following a comma is interpreted as the first item in the argument list.
The extended values supported by the SQL Server or SAP ASE RAISERROR statement are not supported in
SAP IQ.
Intermediate RAISERROR status and code information is lost after the procedure terminates. If at return time
an error occurs along with the RAISERROR, then the error information is returned and the RAISERROR
information is lost. The application can query intermediate RAISERROR statuses by examining the @@error
global variable at different execution points.
Privileges
None
Examples
The following example raises error 99999, which is in the range for user-defined errors, and sends a message to the client:
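The statement itself is missing from this excerpt. A sketch consistent with the description (message text illustrative); note that there is no comma between the error number and the format string:

```sql
RAISERROR 99999 'Invalid entry for this column.'
```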
Related Information
Syntax
Parameters
ENCODING
An identifier or string that specifies encoding that is used to read the file.
If <filename> has no file extension, Interactive SQL searches for the same file name with the
extension .sql.
If <filename> does not contain an absolute path, Interactive SQL searches for the file. The location of
<filename> is based on the location of the READ statement, as follows:
● If the READ statement is executed directly in Interactive SQL, Interactive SQL first attempts to resolve
the path to <filename> relative to the directory in which Interactive SQL is running. If unsuccessful,
Interactive SQL looks for <filename> in the directories specified in the environment variable
SQLPATH, then the directories specified in the environment variable PATH.
● If the READ statements reside in an external file (for example, a .sql file), Interactive SQL first
attempts to resolve the path to <filename> relative to the location of the external file. If unsuccessful,
Interactive SQL looks for <filename> in a path relative to the directory in which Interactive SQL is
running. If still unsuccessful, Interactive SQL looks in the directories specified in the environment
variable SQLPATH, then the directories specified in the environment variable PATH.
parameter
Can be listed after the name of the SQL script file, and correspond to the parameters named in the
PARAMETERS statement at the beginning of the statement file.
Parameter names must be enclosed in square brackets. Interactive SQL substitutes the corresponding
parameter wherever the source file contains { <parameter-name> }.
The parameters passed to a script file can be identifiers, numbers, quoted identifiers, or strings. Any
quotes around a parameter are placed into the text during the substitution. Parameters that are not
identifiers, numbers, or strings (contain spaces or tabs) must be enclosed in square brackets ([ ]). This
allows for arbitrary textual substitution in the script file.
If not enough parameters are passed to the script file, Interactive SQL prompts for values for the missing
parameters.
When executing a reload.sql file with Interactive SQL, you must specify the encryption key as a
parameter. If you do not provide the key in the READ statement, Interactive SQL prompts for the key.
Privileges
None
Examples
● The following example reads from the file status.rpt and birthday.sql and passes the parameter
values to the variables within the file:
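The statements themselves are missing from this excerpt. Sketches consistent with the description (parameter values illustrative):

```sql
READ status.rpt '160';
READ birthday.sql [Anne] [Smith];
```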
● The following example uses the PARAMETERS clause to pass parameters to a script file:
[test1.sql]
PARAMETERS par1, par2;
BEGIN
DECLARE v_par1 int;
DECLARE v_par2 varchar(200);
SET v_par1 = {par1};
SET v_par2 = {par2};
MESSAGE STRING('PAR1 Value: ', v_par1 ) TO CLIENT;
MESSAGE STRING('PAR2 Value: ', v_par2 ) TO CLIENT;
END;
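An invocation consistent with the parameter values discussed in the note that follows might be:

```sql
READ test1.sql 1 '041028';
```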
Note
You must enclose the second parameter value 041028 in quotes, as <v_par2> is declared as a
character data type.
Related Information
Syntax
Parameters
WITH
Use the WITH clause to specify what kind of locks to obtain on the underlying base tables during the
refresh. The types of locks obtained determine how the text index is populated and how concurrency for
transactions is affected. If you do not specify the WITH clause, the default is WITH ISOLATION LEVEL
READ UNCOMMITTED, regardless of any isolation level set for the connection.
● ISOLATION LEVEL <isolation-level> – changes the isolation level for the execution of the refresh
operation. The original isolation level of the connection is restored at the end of the statement
execution.
● EXCLUSIVE MODE – use if you do not want to change the isolation level, but want to guarantee that the
data is updated to be consistent with committed data in the underlying table. When using WITH
EXCLUSIVE MODE, exclusive table locks are placed on the underlying base table and no other
transaction can execute queries, updates, or any other action against the underlying table(s) until the
refresh operation is complete. If table locks cannot be obtained, the refresh operation fails and an error
is returned.
● SHARE MODE – use to give read access on the underlying table to other transactions while the refresh
operation takes place. When this clause is specified, shared table locks are obtained on the underlying
base table before the refresh operation is performed and are held until the refresh operation
completes.
FORCE { BUILD | INCREMENTAL }
Use this clause to specify the refresh method. If this clause is not specified, the database server decides
whether to do an incremental update or a full rebuild based on how much of the table has changed:
● FORCE BUILD – refreshes the text index by re-creating it. Use this clause to force a complete rebuild of
the text index.
● FORCE INCREMENTAL – refreshes the text index based only on what has changed in the underlying
table. An incremental refresh takes less time to complete if there have not been a significant amount of
updates to the underlying table. Use this clause to force an incremental update of the text index.
An incremental refresh does not remove deleted entries from the text index. As a result, the size of the
text index may be larger than expected to contain the current and historic data. Typically, this issue
occurs with text indexes that are always manually refreshed with the FORCE INCREMENTAL clause.
Remarks
This statement can only be used on text indexes defined as MANUAL REFRESH or AUTO REFRESH.
When using the FORCE clause, you can examine the results of the sa_text_index_stats system procedure
to decide whether a complete rebuild (FORCE BUILD), or incremental update (FORCE INCREMENTAL) is most
appropriate.
You cannot execute the REFRESH TEXT INDEX statement on a text index that is defined as IMMEDIATE
REFRESH.
For MANUAL REFRESH text indexes, use the sa_text_index_stats system procedure to determine whether
the text index should be refreshed. Divide pending_length by doc_length, and use the percentage as a guide for
deciding whether a refresh is required. To determine the type of rebuild required, use the same process for
deleted_length and doc_count.
This statement cannot be executed when there are cursors opened with the WITH HOLD clause that use either
statement or transaction snapshots.
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Standards
Examples
The following example refreshes a fictitious text index called MarketingTextIndex, forcing it to be rebuilt:
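The statement itself is missing from this excerpt; a sketch (the ON clause table name is illustrative):

```sql
REFRESH TEXT INDEX MarketingTextIndex ON MarketingInformation
FORCE BUILD;
```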
Related Information
Syntax
Parameters
owner
The owner of the mutex. <owner> can also be specified using an indirect identifier (for example, [@<variable-name>]).
mutex-name
The name of the mutex. <mutex-name> can also be specified using an indirect identifier (for example, [@<variable-name>]).
Remarks
The RELEASE MUTEX statement releases one instance of a lock on the mutex. If a connection has locked the mutex multiple times, only one lock on the mutex is released per RELEASE MUTEX statement.
An error is returned if the mutex was not locked by the current connection or if the release is being requested
for a transaction-scope mutex.
The RELEASE MUTEX statement will succeed on a dropped mutex that is locked by the current connection.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
None.
Standards
Example
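The original example is not included in this excerpt. A minimal sketch (mutex name illustrative):

```sql
RELEASE MUTEX protect_orders;
```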
Related Information
Syntax
savepoint-name
Remarks
Releasing a savepoint does not perform any type of COMMIT; it simply removes the savepoint from the list of
currently active savepoints.
There must have been a corresponding SAVEPOINT within the current transaction.
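As a sketch of the pairing described above (savepoint, table, and column names illustrative):

```sql
SAVEPOINT before_raise;
UPDATE Employees SET Salary = Salary * 1.05 WHERE DepartmentID = 100;
RELEASE SAVEPOINT before_raise;
COMMIT;
```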
Privileges
None
Standards
Related Information
Removes a class, a package, or a JAR file from a database. Removed classes are no longer available for use as a
variable type. Any class, package, or JAR to be removed must already be installed.
Syntax
Parameters
CLASS java_class_name
Specifies the name of one or more Java classes to be removed. Those classes must be installed classes in
the current database.
PACKAGE java_package_name
Specifies the name of one or more Java packages to be removed. Those packages must be the name of
packages in the current database.
JAR jar_name
Specifies a character string value of maximum length 255. Each <jar_name> must be equal to the
<jar_name> of a retained JAR in the current database. Equality of <jar_name> is determined by the
character string comparison rules of the SQL system.
RETAIN CLASSES
The specified JARs are no longer retained in the database, and the retained classes have no associated
JAR. If RETAIN CLASSES is specified, this is the only action of the REMOVE statement.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
The following example removes a Java class named "Demo" from the current database:
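The statement itself is missing from this excerpt; consistent with the description:

```sql
REMOVE JAVA CLASS Demo
```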
Related Information
Syntax
RESIGNAL [ <exception-name> ]
Remarks
Within an exception handler, RESIGNAL lets you quit the compound statement with the exception still active, or
quit reporting another named exception. The exception is handled by another exception handler or returned to
the application. Any actions by the exception handler before the RESIGNAL are undone.
Privileges
None
Standards
This code fragment returns all exceptions except for “Column Not Found” to the application:
...
DECLARE COLUMN_NOT_FOUND EXCEPTION
FOR SQLSTATE '52003';
...
EXCEPTION
WHEN COLUMN_NOT_FOUND THEN
SET message='Column not found' ;
WHEN OTHERS THEN
RESIGNAL ;
Related Information
Syntax
Syntax 1
Syntax 2
<restore-option> ::=
[MULTIPLEX]
READONLY <dbspace-or-file> [, … ]
KEY <key_spec>
RENAME <file-name> TO <new-file-path> …
Syntax 3
Parameters
db_file
Relative or absolute path of the database to be restored. Can be the original location, or a new location for
the catalog store file.
FROM archive_device
Specifies the name of the <archive_device> from which you are restoring, delimited with single quotation marks. If you are using multiple archive devices, specify them using separate FROM clauses. A comma-separated list is not allowed. Archive devices must be distinct. The number of FROM clauses determines the amount of parallelism SAP IQ attempts with regard to input devices.
KEY key_spec
Quoted string including mixed cases, numbers, letters, and special characters. It might be necessary to protect the key from interpretation or alteration by the command shell.
The backup and restore API DLL implementation lets you specify arguments to pass to the DLL when
opening an archive device. For third-party implementations, the <archive_device> string has this
format:
'<dll_name>::<vendor_specific_information>'
For example:
'spsc::workorder=12;volname=ASD002'
The <archive_device> string can be up to 1023 bytes long. The <dll_name> portion is 1 to 30 bytes
long and can only contain alphanumeric and underscore characters. The
<vendor_specific_information> portion of the string is passed to the third-party implementation
without checking its contents.
Only certain third-party products are certified with SAP IQ using this syntax. Before using any third-party
product to back up your SAP IQ database, make sure it is certified.
For the SAP IQ implementation of the backup and restore API, you need not specify information other than the tape device name or file name. However, if you use disk devices, you must specify the same number of archive devices on the restore as were given on the backup. A specific example of an archive device for the SAP IQ API DLL that specifies a non-rewinding tape device on a UNIX-like operating system is:
'/dev/rmt/0n'
CATALOG ONLY
Restores only the backup header record from the archive media. Cannot be used with the MULTIPLEX
keyword.
Instructs server to search along multiple paths for point-in-time recovery logs:
Use a comma as a delimiter between directory names. The log name does not need to be specified. If any
required files are missing, the server reports an error.
On multiplex servers, use the current transaction log from the coordinator node. Do not include transaction
logs from secondary nodes. Including transaction logs from secondary nodes causes point-in-time
recovery to fail, and return a Files are missing for Point-in-time-Recovery error.
RECOVER UNTIL . . .
Recovers data from the recovery logs up to the date and time specified by the timestamp, or transaction
log-offset:
<timestamptz> is a TIMESTAMP WITH TIMEZONE data type. <logoffset> is an UNSIGNED BIGINT that
represents a transaction log offset.
Always specify a timestamp that is greater than the backup time of the data backup specified in the FROM
clause of the restore command. This ensures that the database includes all committed transactions in the
recovery logs. If the specified point in time is earlier than the last checkpoint in the backup database, the
server returns an error.
Restriction
● A dbspace that you create after enabling point-in-time recovery may only be recorded in the point-
in-time recovery log. If this is the case, you cannot rename the dbspace during a RESTORE
DATABASE RECOVER UNTIL operation.
● If RESTORE DATABASE cannot locate the transaction log, RLV log, and point-in-time recovery logs
during a point-in-time recovery, the recovery operation fails. This can happen in cases when the
database is restored to a new location and there are no logs available in the new environment. In
cases like this, use the CLEAR LOG clause to ignore the current log area of the database you want
to restore and cancel the automatic log backups that normally occur during a recovery operation.
ON TIMELINE '<GUID>'
The ON TIMELINE clause requires a timeline GUID that identifies the alternate timeline. Point-in-time
recovery operations without this clause restore to the current timeline.
OVERWRITE EXISTING
Overwrites existing dbfiles and transaction logs during a point-in-time recovery operation.
Point-in-time recovery operations generally restore dbfiles to a different location than the current dbspace.
Use the OVERWRITE EXISTING clause to restore a database to a location that already has a database, and
overwrite any existing dbfiles with the same name.
CLEAR LOG
Ignores the current log area of the database during a restore and cancels automatic log backups during a point-in-time recovery operation.
If RESTORE DATABASE cannot locate the transaction log, RLV log, and point-in-time recovery logs during a
point-in-time recovery, the recovery operation fails. This can happen in cases when the database is
restored to a new location and there are no logs available in the new environment. In cases like this, use the
CLEAR LOG clause to ignore the current log area of the database you want to restore and cancel the
automatic log backups that normally occur during a recovery operation.
During point-in-time recovery, the restore automatically backs up the existing transaction log and RLV log.
Every point-in-time restore looks for the existing transaction log, RLV log, and point in time logs in the
existing database directories. The restore operation returns an error if it fails to locate these logs. If you
want to restore to a new environment, however, and have no existing logs to back up, supply a RESTORE
command with the CLEAR LOG clause to stop SAP IQ from seeking existing log files in the database
directories.
RENAME
Restores one or more SAP IQ database files to a new location. Specify each <dbspace-name> you are
moving as it appears in the table. Specify <new-dbspace-path> as the new raw partition, or the new full
or relative path name, for that dbspace.
If relative paths were used to create the database files, the files are restored by default relative to the
catalog store file (the SYSTEM dbspace), and a rename clause is not required. If absolute paths were used
to create the database files and a rename clause is not specified for a file, it is restored to its original
location.
Relative path names in the RENAME clause work as they do when you create a database or dbspace: the
main IQ store dbspace, temporary store dbspaces, and Message Log are restored relative to the location of
db_file (the catalog store); user-created IQ store dbspaces are restored relative to the directory that
holds the main IQ dbspace.
Do not use the RENAME clause to move the SYSTEM dbspace, which holds the catalog store. To move the
catalog store, and any files created relative to it and not specified in a RENAME clause, specify a new
location in the <db_file> parameter.
If the dbspace name contains a file extension such as .iq or .iqtmp, enclose the dbspace name in double quotation marks when specifying the name in a RESTORE DATABASE command RENAME clause, such as in the following two examples:
VERIFY [ COMPATIBLE ]
Directs the server to validate the specified SAP IQ database backup archives for a full, incremental, incremental since full, or virtual backup. The backup must be SAP IQ version 12.6 or later.
You cannot use the RENAME clause with the VERIFY clause; an error is reported.
The backup verification process can run on a different host than the database host. You must have the
BACKUP DATABASE system privilege to run RESTORE DATABASE VERIFY.
If the COMPATIBLE clause is specified with VERIFY, the compatibility of an incremental archive is checked
with the existing database files. If the database files do not exist on the system on which RESTORE
DATABASE…VERIFY COMPATIBLE is invoked, an error is returned. If COMPATIBLE is specified while
verifying a full backup, the keyword is ignored; no compatibility checks need to be made while restoring a
full backup.
You must have the database and log files (.db and .log) to validate the backup of a read-only dbspace
within a full backup. If you do not have these files, validate the entire backup by running RESTORE
DATABASE…VERIFY without the READONLY <dbspace> clause.
Note
The verification of a backup archive is different than the database consistency checker (DBCC) verify
mode (sp_iqcheckdb 'verify...'). RESTORE DATABASE VERIFY validates the consistency of
the backup archive to be sure it can be restored, whereas DBCC validates the consistency of the
database data.
Run sp_iqcheckdb 'verify...' before taking a backup. If an inconsistent database is backed up,
then restored from the same backup archive, the data continues to be in an inconsistent state, even if
RESTORE DATABASE VERIFY reports a successful validation.
Remarks
The RESTORE DATABASE command requires exclusive access to the database by a user with the SERVER OPERATOR system privilege. This exclusive access is achieved by setting the -gd switch to DBA, which is the default when you start the server engine.
Issue the RESTORE DATABASE command before you start the database (you must be connected to the utility_db database). Once you finish issuing RESTORE DATABASE commands, the database is left in the state of the CHECKPOINT of the last backup you restored. You can now specify a START DATABASE to allow other users to access the restored database.
The maximum size for a complete RESTORE DATABASE command, including all clauses, is 32KB.
When restoring to a raw device, make sure the device is large enough to hold the dbspace you are restoring.
SAP IQ RESTORE DATABASE checks the raw device size and returns an error, if the raw device is not large
enough to restore the dbspace.
BACKUP DATABASE allows you to specify full or incremental backups. There are two kinds of incremental
backups. INCREMENTAL backs up only those blocks that have changed and committed since the last backup
of any type (incremental or full). INCREMENTAL SINCE FULL backs up all the blocks that have changed since
the last full backup. If a restore of a full backup is followed by one or more incremental backups (of either type),
no modifications to the database are allowed between successive RESTORE DATABASE commands. This rule
prevents a restore from incremental backups on a database in need of crash recovery, or one that has been modified.
Before starting a full restore, you must delete two files: the catalog store file (default name dbname.db) and the
transaction log file (default name dbname.log).
If you restore an incremental backup, RESTORE DATABASE ensures that backup media sets are accessed in the
proper order. This order restores the last full backup tape set first, then the first incremental backup tape set,
then the next most recent set, and so forth, until the most recent incremental backup tape set. If a user with
the SERVER OPERATOR system privilege produced an INCREMENTAL SINCE FULL backup, only the full
backup tape set and the most recent INCREMENTAL SINCE FULL backup tape set is required; however, if there
is an INCREMENTAL backup made since the INCREMENTAL SINCE FULL backup, it also must be applied.
SAP IQ ensures that the restoration order is appropriate, or it displays an error. Any other errors that occur
during the restore results in the database being marked corrupt and unusable. To clean up a corrupt database,
do a restore from a full backup, followed by any additional incremental backups. Since the corruption probably
happened with one of those backups, you might need to ignore a later backup set and use an earlier set.
To restore read-only files or dbspaces from an archive backup, the database may be running and the
administrator may connect to the database when issuing the RESTORE DATABASE statement. The read-only
file pathnames need not match the names in the backup, provided they otherwise match the database system table information.
The database must not be running to restore a FULL, INCREMENTAL SINCE FULL, or INCREMENTAL restore of
either a READWRITE FILES ONLY or an all files backup. The database may or may not be running to restore a
backup of read-only files. When restoring specific files in a read-only dbspace, the dbspace must be offline.
When restoring read-only files in a read-write dbspace, the dbspace can be online or offline. The restore closes
the read-only files, restores the files, and reopens those files at the end of the restore.
You can use selective restore to restore a read-only dbspace, as long as the dbspace is still in the same read-
only state.
● RESTORE DATABASE to disk does not support raw devices as archival devices.
● SAP IQ does not rewind tapes before using them; on rewinding tape devices, it does rewind tapes after
using them. You must position each tape to the start of the SAP IQ data before starting the restore.
● During backup and restore operations, if SAP IQ cannot open the archive device (for example, when it
needs the media loaded) and the ATTENDED option is ON, it waits for ten seconds for you to put the next
tape in the drive, and then tries again. It continues these attempts indefinitely until either it is successful or
the operation is terminated with Ctrl + C .
● If you press Ctrl + C , RESTORE DATABASE fails and returns the database to its state before the
restoration began.
● If disk striping is used, the striped disks are treated as a single device.
● The file_name column in the SYSFILE system table for the SYSTEM dbspace is not updated during a
restore. For the SYSTEM dbspace, the file_name column always reflects the name when the database was
created. The file name of the SYSTEM dbspace is the name of the database file.
The permissions required to execute this statement are set using the -gu server command line option, as
follows:
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● (UNIX) This example restores the iqdemo database from tape devices /dev/rmt/0 and /dev/rmt/2 on a
Sun Solaris platform. On Solaris, a RESTORE from tape must specify the use of the rewinding device.
Therefore, do not include the letter n after the device name, which specifies no rewind on close. To
specify this feature with RESTORE DATABASE commands, use the naming convention appropriate for your
UNIX platform. (Windows does not support this feature.)
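A sketch of such a restore (device names are illustrative; check the clause layout against the full RESTORE DATABASE syntax):

RESTORE DATABASE 'iqdemo'
FROM '/dev/rmt/0'
FROM '/dev/rmt/2'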
● This example restores an encrypted database named marvin that was encrypted with the key <is!
seCret>:
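A plausible statement for this example (the archive path is an assumption; the KEY clause supplies the encryption key):

RESTORE DATABASE 'marvin'
FROM '/backup/marvin_backup'
KEY 'is!seCret'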
● This example shows the syntax of a BACKUP DATABASE statement and two possible RESTORE DATABASE
statements. (This example uses objects in the iqdemo database for illustration purposes. Note that
iqdemo includes a sample user dbspace named iq_main that may not be present in your database.)
Given this BACKUP DATABASE statement:
The dbspace iq_main can be restored using either of these RESTORE DATABASE statements:
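A hedged sketch of the pairing described above, assuming iq_main was made read-only before the backup (paths are illustrative):

BACKUP DATABASE
READONLY DBSPACES iq_main
TO '/backup/iqmain'

-- Restore only the read-only dbspace:
RESTORE DATABASE 'iqdemo'
READONLY DBSPACES iq_main
FROM '/backup/iqmain'

-- Or restore everything contained in the archive:
RESTORE DATABASE 'iqdemo'
FROM '/backup/iqmain'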
A selective backup backs up either all READWRITE dbspaces or specific read-only dbspaces or dbfiles.
Selective backups are a subtype of either full or incremental backups.
Notes:
○ You can take a READONLY selective backup and restore all objects from this backup (as in the second
example above).
○ You can take an all-inclusive backup and restore read-only files and dbspaces selectively.
○ You can take a READONLY selective backup of multiple read-only files and dbspaces and restore a
subset of read-only files and dbspaces selectively. See Permissions.
○ You can restore the read-only backup only if the read-only files have not changed since the backup.
Once the dbspace is made read-write again, the read-only backup is invalid, unless you restore the
entire read-write portion of the database back to the point at which the read-only dbspace was read-
only.
○ Decide which backup subtype to use (either selective or non-selective) and use it consistently. If you
must switch from a non-selective to a selective backup, or vice versa, always take a non-selective full
backup before switching to the new subtype, to ensure that you have all changes.
● This example validates the database archives using the VERIFY clause, without performing any write
operations:
● When you use validate, specify a different database name to avoid Database name not unique errors.
If the original database is iqdemo.db, for example, use iq_demo_new.db instead:
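A sketch combining the VERIFY clause with a renamed target database (the archive path is illustrative; VERIFY validates the archives without writing):

RESTORE DATABASE 'iq_demo_new.db'
FROM '/backup/iqdemo.full'
VERIFY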
● Point-in-time recovery using point-in-time recovery logs and point-in-time recovery log backup archives:
● Re-enabling point-in-time recovery logging after a multiplex failover. In this scenario, writer 1 becomes the
coordinator after the failover:
Related Information
Syntax 1
RESUME <cursor-name>
Syntax 2
RESUME [ ALL ]
Parameters
cursor-name
Identifier or host-variable
The procedure executes until the next result set (SELECT statement with no INTO clause) is encountered. If the
procedure completes and no result set is found, the SQLSTATE_PROCEDURE_COMPLETE warning is set. This
warning is also set when you RESUME a cursor for a SELECT statement.
Syntax 1 – supported in dbisqlc but not dbisql (Interactive SQL) or when connected to the database using
the SAP SQL Anywhere JDBC driver.
Syntax 2 – supported in dbisql. Resumes the current procedure. If ALL is not specified, executing RESUME
displays the next result set or, if no more result sets are returned, completes the procedure. In dbisql, the
RESUME ALL statement cycles through all result sets in a procedure, without displaying them, and completes
the procedure. This is useful mainly in testing procedures.
Privileges
None
Standards
Examples
CALL sample_proc();
RESUME ALL;
Related Information
Exits a function or procedure unconditionally, optionally providing a return value. Statements following RETURN
are not executed.
Syntax
RETURN [ ( <expression> ) ]
Parameters
expression
If supplied, the value of <expression> is returned as the value of the function or procedure.
Within a function, the expression should be of the same data type as the RETURN data type of the function.
Remarks
RETURN is used in procedures for Transact-SQL compatibility, to return an integer error code.
Privileges
None
Standards
Examples
A function call such as product(2,3,4) returns the value 24.
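A minimal sketch of a function that produces this result using RETURN (the function name product matches the fragment above; the definition is illustrative):

CREATE FUNCTION product (a INT, b INT, c INT)
RETURNS INT
BEGIN
    RETURN (a * b * c);
END;

SELECT product(2, 3, 4);
-- returns 24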
Related Information
Removes a user's ability to manage passwords for other users and to administer the CHANGE PASSWORD system privilege.
Syntax
Parameters
target_user_list
Users the grantee has the potential to impersonate. The list must consist of existing users or user-
extended roles with login passwords. Separate the user_IDs in the list with commas.
ANY
All database users with login passwords become potential target users to manage passwords for each
grantee.
ANY WITH ROLES target_role_list
List of target roles for each grantee. Any users who are granted any of the target roles become potential
target users for each grantee. The <target_role_list> must consist of existing roles and the users who
are granted said roles must consist of database users with login passwords. Use commas to separate
multiple user_IDs.
user_ID
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Remarks
Depending on how the CHANGE PASSWORD system privilege was initially granted, using the ADMIN OPTION
FOR clause when revoking CHANGE PASSWORD has different results:
● If CHANGE PASSWORD was originally granted using the WITH ADMIN OPTION clause, the revoke removes
only the ability to administer the CHANGE PASSWORD system privilege (that is, grant the system privilege
to another user); the ability to actually manage passwords for other users remains.
● If it was originally granted using the WITH ADMIN ONLY OPTION clause, the revoke is semantically
equivalent to revoking the entire CHANGE PASSWORD system privilege.
● If it was originally granted using the WITH NO ADMIN OPTION clause, nothing is revoked, because there
were no administrative rights granted in the first place.
You can revoke the CHANGE PASSWORD system privilege from any combination of users and roles granted.
Privileges
Requires the CHANGE PASSWORD system privilege granted with administrative rights. See GRANT System
Privilege Statement [page 1511] for assistance with granting privileges.
Standards
● The following example removes the ability of Joe to manage the passwords of Sally or Bob:
● In the following example, if the CHANGE PASSWORD system privilege was originally granted to Sam with
the WITH ADMIN OPTION clause, the statement removes the ability of Sam to grant the CHANGE PASSWORD
system privilege to another user, but still allows Sam to manage passwords for those users specified in the
original GRANT CHANGE PASSWORD statement. However, if the CHANGE PASSWORD system privilege was
originally granted to Sam with the WITH ADMIN ONLY OPTION clause, the statement removes all
permissions to the system privilege from Sam.
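Plausible statements for these two examples (the parenthesized target-user list follows the Parameters above; verify against the full syntax diagram):

REVOKE CHANGE PASSWORD (Sally, Bob) FROM Joe;
REVOKE ADMIN OPTION FOR CHANGE PASSWORD (Sally, Bob) FROM Sam;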
Related Information
Syntax
REVOKE CONNECT
FROM <user_id> [,...]
Parameters
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Remarks
Use system procedures or CREATE USER and DROP USER statements, not GRANT and REVOKE statements, to
add and remove user IDs.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Note
If revoking CONNECT permissions or table permissions from another user, the target user cannot be
connected to the database.
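A minimal sketch, using a hypothetical user mary:

REVOKE CONNECT FROM mary;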
Standards
Related Information
Removes CREATE privileges on the specified dbspace from the specified user IDs.
Syntax
Parameters
dbspace-name
Identifier.
user_id
Privileges
Requires the MANAGE ANY DBSPACE system privilege. See GRANT System Privilege Statement [page 1511]
for assistance with granting privileges.
Standards
Examples
● The following example revokes the CREATE privilege on dbspace DspHist from user Smith:
● The following example revokes the CREATE privilege on dbspace DspHist from user ID fionat from the
database:
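Sketches of the two statements described above, assuming the REVOKE CREATE ON <dbspace-name> FROM <user_id> form:

REVOKE CREATE ON DspHist FROM Smith;
REVOKE CREATE ON DspHist FROM fionat;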
Related Information
Removes EXECUTE permissions that were given using the GRANT statement.
Syntax
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Related Information
Removes the INTEGRATED LOGIN permissions that were given using the GRANT statement.
Syntax
userID
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Related Information
Removes KERBEROS LOGIN permissions that were given using the GRANT statement.
Syntax
Parameters
userID
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
Requires the MANAGE ANY USER system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Related Information
Removes object-level privileges that were given using the GRANT statement.
Syntax
Parameters
user_id
Must be the name of an existing user or immutable role. The list must consist of existing users with login
passwords. Separate the user_ids in the list with commas.
ALL
Revokes all object-level privileges from the specified users.
REFERENCES
Users can create indexes on the named tables, and foreign keys that reference the named tables. If column
names are specified, then users can reference only those columns. REFERENCES privileges on columns
cannot be granted for views, only for tables.
SELECT
Users can look at information in this view or table. If column names are specified, then the users can look
at only those columns. SELECT permissions on columns cannot be granted for views, only for tables.
TRUNCATE
Users can truncate this table.
UPDATE
Users can update rows in this view or table. If column names are specified, users can update only those
columns. UPDATE privileges on columns cannot be granted for views, only for tables. To update a table,
users must have both SELECT and UPDATE privilege on the table.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example prevents user Dave from inserting into the Employees table:
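A sketch of the statement described above:

REVOKE INSERT ON Employees FROM Dave;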
Related Information
Removes a user's membership in a role, or the user's ability to administer the role.
Syntax
<system-role> ::=
dbo†††
| DIAGNOSTICS†††
| PUBLIC†††
| rs_systabgroup†††
| SA_DEBUG†††
| SYS†††
| SYS_REPLICATION_ADMIN_ROLE
| SYS_RUN_REPLICATION_ROLE
| SYS_SPATIAL_ADMIN_ROLE
<grantee> ::=
{ <system-role> | <user_id> }
†††The ADMIN OPTION FOR clause is not supported for system roles.
Syntax 2 – Revokes User-Defined Roles
<grantee> ::=
{ <system-role> | <user_id> }
<compatibility-role-name> ::=
SYS_AUTH_BACKUP_ROLE
| SYS_AUTH_DBA_ROLE
| SYS_AUTH_PROFILE_ROLE
| SYS_AUTH_READCLIENTFILE_ROLE
| SYS_AUTH_READFILE_ROLE
<grantee> ::=
{ <system-role> | <user_id> }
Parameters
role_name
Must already exist in the database. Separate multiple role names with commas.
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
{ EXERCISE | ADMIN } OPTION FOR
Specify the ADMIN OPTION FOR clause to revoke administration rights for the role, but leave exercise
rights. Specify the EXERCISE OPTION FOR clause to revoke exercise rights for the role, but leave
administration rights. If the clause is not specified, both rights are revoked.
Remarks
If a role that is being revoked was not granted to <grantee>, then the statement does nothing, and does not
return an error.
REVOKE ROLE fails with an error if, as a consequence of executing the statement, the number of
administrators for the role being revoked would fall below the required minimum as set by the min_role_admins
database option.
When revoking a role from the MANAGE ROLES system privilege, you must use the special internal
representation SYS_MANAGE_ROLES_ROLE. For example, REVOKE ROLE <role-name> FROM
SYS_MANAGE_ROLES_ROLE;.
The REVOKE syntax related to authorities, permissions, and groups used in pre-16.0 versions of the software is
still supported but deprecated.
To revoke the following roles requires the MANAGE ROLES system privilege. See GRANT System Privilege
Statement [page 1511] for assistance with granting privileges.
● diagnostics
● dbo
● PUBLIC
● rs_systabgroup
● SA_DEBUG
● SYS
● SYS_RUN_REPLICATION_ROLE
● SYS_SPATIAL_ADMIN_ROLE
To revoke the following compatibility role requires you be granted the specific compatibility role with
administrative privilege. See Grant Compatibility Roles in the SAP IQ Installation and Update Guide for your
platform for assistance in granting compatibility roles.
● SYS_AUTH_SA_ROLE
● SYS_AUTH_SSO_ROLE
● SYS_AUTH_DBA_ROLE
● SYS_AUTH_RESOURCE_ROLE
● SYS_AUTH_BACKUP_ROLE
● SYS_AUTH_VALIDATE_ROLE
● SYS_AUTH_WRITEFILE_ROLE
● SYS_AUTH_WRITEFILECLIENT_ROLE
● SYS_AUTH_READFILE_ROLE
● SYS_AUTH_READFILECLIENT_ROLE
● SYS_AUTH_PROFILE_ROLE
● SYS_AUTH_USER_ADMIN_ROLE
● SYS_AUTH_SPACE_ADMIN_ROLE
● SYS_AUTH_MULTIPLEX_ADMIN_ROLE
● SYS_AUTH_OPERATOR_ROLE
● SYS_AUTH_PERMS_ADMIN_ROLE
● <user-defined role name>
Standards
● The following example revokes the user-defined (standalone) role role1 from user1:
After you execute this command, user1 no longer has the rights to perform any authorized tasks using
any system privileges granted to role1.
● The following example revokes the ability for user1 to administer the compatibility role
SYS_AUTH_WRITEFILE_ROLE:
user1 retains the ability to perform any authorized tasks granted by SYS_AUTH_WRITEFILE_ROLE.
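Plausible statements for the two examples above (verify against the full REVOKE ROLE syntax):

REVOKE ROLE role1 FROM user1;
REVOKE ADMIN OPTION FOR ROLE SYS_AUTH_WRITEFILE_ROLE FROM user1;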
Related Information
Removes the ability for one user to impersonate another user and to administer the SET USER system
privilege.
Syntax
Parameters
target_user_list
Must consist of existing users with login passwords and is the potential list of target users who can no
longer be impersonated by grantee users. Separate the user IDs in the list with commas.
ANY
The potential list of target users for each grantee consists of all database users with login passwords.
ANY WITH ROLES target_role_list
List of target roles for each grantee. Any users who are granted any of the target roles become potential
target users for each grantee.
user_id
Each <user_id> must be the name of an existing user or immutable role. The list must consist of existing
users with login passwords. Separate the user_ids in the list with commas.
Remarks
Depending on how the SET USER system privilege was initially granted, using the ADMIN OPTION FOR clause
when revoking the SET USER system privilege has different results. If the SET USER system privilege was
originally granted with the WITH ADMIN OPTION clause, including the ADMIN OPTION FOR clause in the
revoke statement revokes only the ability to administer the SET USER system privilege (that is, grant the
system privilege to another user). The ability to actually impersonate another user remains. However, if the
SET USER system privilege was originally granted with the WITH ADMIN ONLY OPTION clause, including the
ADMIN OPTION FOR clause in the revoke statement is semantically equivalent to revoking the entire SET USER
system privilege. Finally, if the SET USER system privilege was originally granted with the WITH NO ADMIN
OPTION clause, and the ADMIN OPTION FOR clause is included in the revoke statement, nothing is revoked
because there were no administrative system privileges granted in the first place.
Privileges
Requires the SET USER system privilege granted with administrative rights. See GRANT System Privilege
Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example stops Bob from being able to impersonate Sally or Bob:
● In the following example, if the SET USER system privilege was originally granted to Sam with the WITH
ADMIN OPTION clause, the statement removes the ability of Sam to grant the SET USER system privilege to
another user, but still allows Sam to impersonate those users already granted to him or her. However, if the
SET USER system privilege was originally granted to Sam with the WITH ADMIN ONLY OPTION clause, the
statement removes all permissions to the system privilege from Sam.
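Plausible statements for these examples (the target-user list syntax follows the Parameters above; verify against the full syntax diagram):

REVOKE SET USER (Sally, Bob) FROM Bob;
REVOKE ADMIN OPTION FOR SET USER ANY FROM Sam;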
Related Information
Removes specific system privileges from specific users and the right to administer the privilege.
Syntax
Parameters
ADMIN OPTION FOR
Each <system_privilege> must currently be granted to each <user_id> specified with administrative
privileges.
Note
This clause revokes only the administrative privileges of the system privilege; the system privilege itself
remains granted. However, if the system privilege was originally granted with the WITH ADMIN ONLY
OPTION clause, the ADMIN OPTION FOR clause completely revokes the system privilege. Under this
scenario, use of the ADMIN OPTION FOR clause is not required to revoke administrative privileges.
system_privilege_name
The name of the system privilege being revoked.
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Depending on how the system privilege was initially granted, using the ADMIN OPTION FOR clause when
revoking a system privilege has different results. If the system privilege was originally granted with the WITH
ADMIN OPTION clause, including the ADMIN OPTION FOR clause in the revoke statement revokes only the
ability to administer the system privilege (that is, grant the system privilege to another user). The ability to
actually use the system privilege remains. However, if the system privilege was originally granted with the WITH
ADMIN ONLY OPTION clause, including the ADMIN OPTION FOR clause in the revoke statement is semantically
equivalent to revoking the entire system privilege.
Finally, if the system privilege was originally granted with the WITH NO ADMIN OPTION clause, and the ADMIN
OPTION FOR clause is included in the revoke statement, nothing is revoked because there were no
administrative system privileges granted in the first place.
Privileges
Requires administrative privilege over the system privilege being revoked. See GRANT System Privilege
Statement [page 1511] for assistance with granting privileges.
Standards
Examples
● The following example revokes the BACKUP DATABASE system privilege from user Jim:
● In the following example, assuming the BACKUP DATABASE system privilege was originally granted to user
Jim with the WITH ADMIN OPTION clause, this example revokes the ability to administer the BACKUP
DATABASE system privilege from user Jim:
The ability to perform tasks authorized by the system privilege remains. However, if the BACKUP
DATABASE system privilege was originally granted to user Jim with the WITH ADMIN ONLY OPTION
clause, this example removes all permissions to the system privilege from user Jim.
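Sketches of the two statements described (verify against the full syntax diagram):

REVOKE BACKUP DATABASE FROM Jim;
REVOKE ADMIN OPTION FOR BACKUP DATABASE FROM Jim;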
In this section:
Related Information
System privileges control the rights of users to perform authorized database tasks.
ACCESS SERVER LS (Multiplex)
Allows logical server connection using the SERVER logical server context.
ACCESS USER PASSWORD (User and Login Management)
Allows a user to access views that contain password hashes, and perform operations that involve
accessing passwords, such as unloading, extracting, or comparing databases.
ALTER ANY INDEX (Indexes)
Allows a user to alter and comment on indexes and text indexes on tables and views owned by any user.
ALTER ANY MATERIALIZED VIEW (Materialized Views)
Allows a user to alter and comment on materialized views owned by any user.
ALTER ANY OBJECT (Objects)
Allows a user to alter and comment on the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
ALTER ANY OBJECT OWNER (Objects)
Allows a user to alter the owner of any type of table object. This privilege does not allow changing the
owner of other objects, such as procedures, materialized views, and so on.
ALTER ANY PROCEDURE (Procedures)
Allows a user to alter and comment on procedures and functions owned by any user.
ALTER ANY SEQUENCE (Sequence)
Allows a user to alter sequence generators owned by any user.
ALTER ANY TEXT CONFIGURATION (Text Configuration)
Allows a user to alter and comment on text configuration objects owned by any user.
ALTER ANY VIEW (Views)
Allows a user to alter and comment on views owned by any user.
ALTER DATABASE (Database)
Allows a user to:
● Upgrade a database.
● Perform cost model calibration.
● Load database statistics.
● Alter transaction logs (also requires the SERVER OPERATOR system privilege).
● Change ownership of the database (also requires the MANAGE ANY MIRROR SERVER system privilege).
CHANGE PASSWORD (User and Login Management)
Allows a user to manage user passwords for any user. This system privilege can apply to all users, or it can
be limited to a set of specified users, or users who are granted one or more specified roles.
CHECKPOINT (Database)
Allows a user to force the database server to execute a checkpoint.
COMMENT ANY OBJECT (Objects)
Allows a user to comment on any type of object owned by any user that can be created using the CREATE
ANY OBJECT system privilege.
CREATE ANY INDEX (Indexes)
Allows a user to create and comment on indexes and text indexes on tables and views owned by any user.
CREATE ANY MATERIALIZED VIEW (Materialized Views)
Allows a user to create and comment on materialized views owned by any user.
CREATE ANY MUTEX SEMAPHORE (Mutex and Semaphores)
Allows a user to create a mutex or semaphore owned by any user.
CREATE ANY OBJECT (Objects)
Allows a user to create and comment on the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
CREATE ANY PROCEDURE (Procedure)
Allows a user to create and comment on procedures and functions owned by any user.
CREATE ANY SEQUENCE (Sequence)
Allows a user to create sequence generators, regardless of owner.
CREATE ANY TEXT CONFIGURATION (Text Configuration)
Allows a user to create and comment on text configuration objects owned by any user.
CREATE ANY TRIGGER (Triggers)
Allows a user to create and comment on triggers (also requires the ALTER object-level privilege on the
table) on tables and views.
CREATE ANY VIEW (Views)
Allows a user to create and comment on views owned by any user.
CREATE DATABASE VARIABLE (Database Variables)
Allows a user to create, select from, update, and drop self-owned database-scope variables.
CREATE EXTERNAL REFERENCE (External Environment)
Allows a user to create external references in the database. You must have the system privileges required
to create specific database objects before you can create external references.
CREATE MATERIALIZED VIEW (Materialized Views)
Allows a user to create and comment on self-owned materialized views.
CREATE PROCEDURE (Procedure)
Allows a user to create and comment on self-owned procedures and functions. Required to create a
self-owned stored procedure or function.
CREATE PROXY TABLE (Table)
Allows a user to create self-owned proxy tables.
CREATE TEXT CONFIGURATION (Text Configuration)
Allows a user to create and comment on self-owned text configuration objects.
CREATE VIEW (Views)
Allows a user to create and comment on self-owned views. Required to create self-owned views.
DEBUG ANY PROCEDURE (Miscellaneous)
Allows a user to debug any database object.
DELETE ANY TABLE (Table)
Allows a user to delete rows in tables and views owned by any user.
DROP ANY INDEX (Indexes)
Allows a user to drop indexes and text indexes on tables and views owned by any user.
DROP ANY MATERIALIZED VIEW (Materialized View)
Allows a user to drop materialized views owned by any user.
DROP ANY MUTEX SEMAPHORE (Mutex and Semaphores)
Allows a user to drop a mutex or semaphore owned by any user.
DROP ANY OBJECT (Objects)
Allows a user to drop the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
DROP ANY PROCEDURE (Procedure)
Allows a user to drop procedures and functions owned by any user.
DROP ANY SEQUENCE (Sequence)
Allows a user to drop sequence generators owned by any user.
DROP ANY TABLE (Table)
Allows a user to drop tables (including proxy tables) owned by any user.
DROP ANY TEXT CONFIGURATION (Text Configuration)
Allows a user to drop text configuration objects owned by any user.
DROP ANY VIEW (Views)
Allows a user to drop views owned by any user.
DROP CONNECTION (Database)
Allows a user to drop any connections to the database.
EXECUTE ANY PROCEDURE (Procedure)
Allows a user to execute procedures and functions owned by any user.
INSERT ANY TABLE (Table)
Allows a user to insert rows into tables and views owned by any user.
LOAD ANY TABLE (Table)
Allows a user to load data into tables owned by any user.
MANAGE ANY DATABASE VARIABLE (Database Variables)
Allows a user to create and drop database-scope variables owned by self or by PUBLIC.
MANAGE ANY EVENT (Miscellaneous)
Allows a user to create, alter, drop, trigger, and comment on events.
MANAGE ANY EXTERNAL ENVIRONMENT (External Environment)
Allows a user to alter, comment on, start, and stop external environments.
MANAGE ANY EXTERNAL OBJECT (External Environment)
Allows a user to install, comment on, and remove external environment objects.
MANAGE ANY LDAP SERVER (Miscellaneous)
Allows a user to create, alter, drop, validate, and comment on LDAP servers.
MANAGE ANY LOGIN POLICY (User and Login Management)
Allows a user to create, alter, drop, and comment on login policies.
MANAGE ANY PROPERTY HISTORY (Server Operator)
Allows a user to turn on and configure the tracking of database server property values.
MANAGE ANY SPATIAL OBJECT (Miscellaneous)
Allows a user to create, alter, drop, and comment on spatial reference systems and spatial units of
measure.
MANAGE ANY STATISTICS (Miscellaneous)
Allows a user to create, alter, drop, and update database statistics for any table.
MANAGE ANY USER (User and Login Management)
Allows a user to:
● Create, alter, drop, and comment on database users (including assigning an initial password).
● Force a password change on next login for any user.
● Assign and reset the login policy for any user.
● Create, drop, and comment on integrated logins and Kerberos logins.
● Create and drop external logins.
MANAGE ANY WEB SERVICE (Miscellaneous)
Allows a user to create, alter, drop, and comment on web services.
MANAGE AUDITING (Procedure)
Allows a user to run the sa_audit_string stored procedure.
MANAGE LISTENERS (Server Operator)
Allows a user to start and stop network listeners.
MANAGE PROFILING (Database)
Allows a user to manage database server tracing. The DIAGNOSTICS system role is also required to fully
utilize diagnostics functionality for user information.
MANAGE ROLES (Roles)
Allows a user to create new roles and act as a global administrator for new and existing roles. By default,
MANAGE ROLES is granted administrative rights on each newly created role. A user requires administrative
rights on the role to delete it.
READ CLIENT FILE (Files)
Allows a user to read files on the client computer.
READ FILE (Files)
Allows a user to read files on the database server computer.
REORGANIZE ANY OBJECT (Objects)
Allows a user to reorganize tables and materialized views owned by any user.
SELECT ANY TABLE (Table)
Allows a user to query tables and views owned by any user.
SELECT PUBLIC DATABASE VARIABLE (Database Variables)
Allows a user to select the value of a database-scope variable owned by PUBLIC.
SET ANY PUBLIC OPTION (Database Options)
Allows a user to set PUBLIC database options that do not require the SET ANY SECURITY OPTION or the
SET ANY SYSTEM OPTION system privileges.
SET ANY SECURITY OPTION (Database Options)
Allows a user to set any PUBLIC security database options.
SET ANY SYSTEM OPTION (Database Options)
Allows a user to set PUBLIC system database options.
SET ANY USER DEFINED OPTION (Database Options)
Allows a user to set user-defined database options.
SET USER, granted with administrative rights only (User and Login Management)
Allows a user to temporarily assume the roles and privileges of another user.
TRUNCATE ANY TABLE (Table)
Allows a user to truncate data for tables and materialized views owned by any user.
UPDATE ANY MUTEX SEMAPHORE (Mutex and Semaphores)
Allows a user to update a mutex or semaphore owned by any user.
UPDATE ANY TABLE (Table)
Allows a user to update rows in tables and views owned by any user.
UPDATE PUBLIC DATABASE VARIABLE (Database Variables)
Allows a user to update database-scope variables owned by PUBLIC.
UPGRADE ROLE (Roles)
Allows a user to be a default administrator of any system privilege that is introduced when upgrading an
SAP IQ database from version 16.0. By default, the UPGRADE ROLE system privilege is granted to the
SYS_AUTH_SA_ROLE role, if it exists.
USE ANY SEQUENCE (Sequence)
Allows a user to use sequence generators owned by any user.
VALIDATE ANY OBJECT (Objects)
Allows a user to validate tables, materialized views, indexes, and text indexes owned by any user.
WRITE CLIENT FILE (Files)
Allows a user to write files to the client computer.
WRITE FILE (Files)
Allows a user to write files on the database server computer.
Database Options SET ANY PUBLIC OPTION Allows a user to set PUBLIC database options that do not require
the SET ANY SECURITY OPTION or the SET ANY SYSTEM OP
TION system privileges.
SET ANY SECURITY OPTION Allows a user to set any PUBLIC security database options.
SET ANY SYSTEM OPTION Allows a user to set PUBLIC system database options.
SET ANY USER DEFINED OP Allows a user to set user-defined database options.
TION
Database Variables CREATE DATABASE VARIABLE Allows a user to create, select from, update, and drop self-owned
database-scope variables.
MANAGE ANY DATABASE VARI Allows a user to create and drop database-scope variables owned
ABLE by self or by PUBLIC.
SELECT PUBLIC DATABASE Allows a user to select the value of a database-scope variable
VARIABLE owned by PUBLIC.
UPDATE PUBLIC DATABASE Allows a user to update database-scope variables owned by PUB
VARIABLE LIC.
Database CHECKPOINT Allows a user to force the database server to execute a check
point.
MANAGE PROFILING Allows a user to manage database server tracing. The DIAGNOS
TICS system role is also required to fully utilize diagnostics func
tionality for user information.
● Upgrade a database.
● Perform cost model calibration.
● Load database statistics.
● Alter transaction logs (also requires the SERVER OPERATOR
system privilege).
● Change ownership of the database (also requires the MAN
AGE ANY MIRROR SERVER system privilege).
External Environment CREATE EXTERNAL REFERENCE Allows a user to create external references in the database.
You must have the system privileges required to create specific database objects before you can create external references.
MANAGE ANY EXTERNAL ENVIRONMENT Allows a user to alter, comment on, start, and stop external environments.
MANAGE ANY EXTERNAL OBJECT Allows a user to install, comment on, and remove external environment objects.
Files READ CLIENT FILE Allows a user to read files on the client computer.
READ FILE Allows a user to read files on the database server computer.
WRITE CLIENT FILE Allows a user to write files to the client computer.
WRITE FILE Allows a user to write files on the database server computer.
Indexes ALTER ANY INDEX Allows a user to alter and comment on indexes and text indexes
on tables and views owned by any user.
CREATE ANY INDEX Allows a user to create and comment on indexes and text indexes
on tables and views owned by any user.
DROP ANY INDEX Allows a user to drop indexes and text indexes on tables and views
owned by any user.
Materialized View DROP ANY MATERIALIZED VIEW Allows a user to drop materialized views owned by any user.
ALTER ANY MATERIALIZED VIEW Allows a user to alter and comment on materialized views owned by any user.
CREATE ANY MATERIALIZED VIEW Allows a user to create and comment on materialized views owned by any user.
CREATE MATERIALIZED VIEW Allows a user to create and comment on self-owned materialized views.
MANAGE ANY EVENT Allows a user to create, alter, drop, trigger, and comment on events.
MANAGE ANY LDAP SERVER Allows a user to create, alter, drop, validate, and comment on LDAP servers.
MANAGE ANY SPATIAL OBJECT Allows a user to create, alter, drop, and comment on spatial reference systems and spatial units of measure.
MANAGE ANY STATISTICS Allows a user to create, alter, drop, and update database statistics for any table.
MANAGE ANY WEB SERVICE Allows a user to create, alter, drop, and comment on web services.
Multiplex ACCESS SERVER LS Allows logical server connection using the SERVER logical server context.
Mutex and Semaphores CREATE ANY MUTEX SEMAPHORE Allows a user to create a mutex or semaphore owned by any user.
DROP ANY MUTEX SEMAPHORE Allows a user to drop a mutex or semaphore owned by any user.
UPDATE ANY MUTEX SEMAPHORE Allows a user to update a mutex or semaphore owned by any user.
Objects ALTER ANY OBJECT OWNER Allows a user to alter the owner of any type of table object. This privilege does not allow changing the owner of other objects, such as procedures, materialized views, and so on.
ALTER ANY OBJECT Allows a user to alter and comment on the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
COMMENT ANY OBJECT Allows a user to comment on any type of object owned by any
user that can be created using the CREATE ANY OBJECT system
privilege.
CREATE ANY OBJECT Allows a user to create and comment on the following types of objects owned by any user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
DROP ANY OBJECT Allows a user to drop the following types of objects owned by any
user:
● Data types
● Events
● Functions
● Indexes
● Materialized views
● Messages
● Procedures
● Sequence generators
● Spatial reference systems
● Spatial units of measure
● Statistics
● Tables
● Text configuration objects
● Text indexes
● Triggers
● Views
REORGANIZE ANY OBJECT Allows a user to reorganize tables and materialized views owned
by any user.
VALIDATE ANY OBJECT Allows a user to validate tables, materialized views, indexes, and
text indexes owned by any user.
Procedures ALTER ANY PROCEDURE Allows a user to alter and comment on procedures and functions
owned by any user.
CREATE ANY PROCEDURE Allows a user to create and comment on procedures and functions owned by any user.
DROP ANY PROCEDURE Allows a user to drop procedures and functions owned by any
user.
EXECUTE ANY PROCEDURE Allows a user to execute procedures and functions owned by any
user.
Roles MANAGE ROLES Allows a user to create new roles and act as a global administrator
for new and existing roles. By default, MANAGE ROLES is granted
administrative rights on each newly created role. A user requires
administrative rights on the role to delete it.
Sequence ALTER ANY SEQUENCE Allows a user to alter sequence generators owned by any user.
CREATE ANY SEQUENCE Allows a user to create sequence generators, regardless of owner.
DROP ANY SEQUENCE Allows a user to drop sequence generators owned by any user.
USE ANY SEQUENCE Allows a user to use sequence generators owned by any user.
Server Operator MANAGE ANY PROPERTY HISTORY Allows a user to turn on and configure the tracking of database server property values.
Table CREATE PROXY TABLE Allows a user to create self-owned proxy tables.
DELETE ANY TABLE Allows a user to delete rows in tables and views owned by any
user.
DROP ANY TABLE Allows a user to drop tables (including proxy tables) owned by any
user.
INSERT ANY TABLE Allows a user to insert rows into tables and views owned by any
user.
LOAD ANY TABLE Allows a user to load data into tables owned by any user.
SELECT ANY TABLE Allows a user to query tables and views owned by any user.
TRUNCATE ANY TABLE Allows a user to truncate data for tables and materialized views
owned by any user.
UPDATE ANY TABLE Allows a user to update rows in tables and views owned by any
user.
Text Configuration ALTER ANY TEXT CONFIGURATION Allows a user to alter and comment on text configuration objects owned by any user.
CREATE TEXT CONFIGURATION Allows a user to create and comment on self-owned text configuration objects.
DROP ANY TEXT CONFIGURATION Allows a user to drop text configuration objects owned by any user.
CREATE ANY TEXT CONFIGURATION Allows a user to create and comment on text configuration objects owned by any user.
CREATE ANY TRIGGER Allows a user to create and comment on triggers on tables and views (also requires the ALTER object-level privilege on the table).
ACCESS USER PASSWORD Allows a user to access views that contain password hashes, and perform operations that involve accessing passwords, such as unloading, extracting, or comparing databases.
CHANGE PASSWORD Allows a user to manage user passwords for any user.
MANAGE ANY LOGIN POLICY Allows a user to create, alter, drop, and comment on login policies.
SET USER (granted with administrative rights only) Allows a user to temporarily assume the roles and privileges of another user.
Views ALTER ANY VIEW Allows a user to alter and comment on views owned by any user.
CREATE ANY VIEW Allows a user to create and comment on views owned by any user.
CREATE VIEW Allows a user to create and comment on self-owned views. Required to create self-owned views.
DROP ANY VIEW Allows a user to drop views owned by any user.
Syntax
Parameters
user_id
Must be the name of an existing user or role that has a login password. Separate multiple user_IDs with
commas.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Related Information
Syntax
ROLLBACK [ WORK ]
Remarks
ROLLBACK ends a logical unit of work (transaction) and undoes all changes made to the database during this
transaction. A transaction is the database work done between COMMIT or ROLLBACK statements on one
database connection.
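As a minimal sketch of this behavior (the table and column names below are illustrative, not from the SAP IQ sample database):

```sql
-- Assume a hypothetical table t with an integer column x.
INSERT INTO t( x ) VALUES ( 42 );
-- The new row is visible on this connection, but not yet committed.
ROLLBACK;
-- The transaction ends and the insert is undone; the row is gone.
```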
Privileges
None
Side Effects
Related Information
Cancels any changes made since a savepoint was established. Changes made prior to the savepoint are not
undone; they are still pending.
Syntax
Parameters
savepoint-name
An identifier that was specified on a SAVEPOINT statement within the current transaction. If <savepoint-
name> is omitted, the most recent savepoint is used. Any savepoints since the named savepoint are
automatically released.
Remarks
There must have been a corresponding SAVEPOINT within the current transaction.
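A minimal sketch of the savepoint behavior described above, using a hypothetical table t:

```sql
-- Assume a hypothetical table t with an integer column x.
INSERT INTO t( x ) VALUES ( 1 );
SAVEPOINT sp1;
INSERT INTO t( x ) VALUES ( 2 );
ROLLBACK TO SAVEPOINT sp1;   -- undoes only the second insert
COMMIT;                      -- commits the first insert
```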
Privileges
None
Related Information
Cancels any changes made since a savepoint was established using SAVE TRANSACTION. Changes made prior
to the SAVE TRANSACTION are not undone; they are still pending.
Syntax
Parameters
savepoint-name
An identifier that was specified on a SAVE TRANSACTION statement within the current transaction. If
<savepoint-name> is omitted, all outstanding changes are rolled back. Any savepoints since the named
savepoint are automatically released.
Remarks
Privileges
None
Examples
The following example returns five rows with values 10, 20, and so on. The effect of the delete, but not the prior
inserts or update, is undone by the ROLLBACK TRANSACTION statement:
BEGIN
SELECT row_num INTO #tmp
FROM sa_rowgenerator( 1, 5 )
UPDATE #tmp SET row_num=row_num*10
SAVE TRANSACTION before_delete
DELETE FROM #tmp WHERE row_num >= 3
ROLLBACK TRANSACTION before_delete
SELECT * FROM #tmp
END
Related Information
Syntax
Parameters
savepoint-name
An identifier that can be used in a ROLLBACK TRANSACTION statement. All savepoints are automatically
released when a transaction ends.
Privileges
None
Standards
Examples
The following example returns five rows with values 10, 20, and so on. The effect of the delete, but not the prior
inserts or update, is undone by the ROLLBACK TRANSACTION statement:
BEGIN
SELECT row_num INTO #tmp
FROM sa_rowgenerator( 1, 5 )
UPDATE #tmp SET row_num=row_num*10
SAVE TRANSACTION before_delete
DELETE FROM #tmp WHERE row_num >= 3
ROLLBACK TRANSACTION before_delete
SELECT * FROM #tmp
END
Related Information
Syntax
SAVEPOINT [ <savepoint-name> ]
Parameters
savepoint-name
Remarks
Savepoints that are established while a trigger is executing or while an atomic compound statement is
executing are automatically released when the atomic operation ends.
Privileges
None
Standards
Related Information
Syntax
<select-list> ::=
{ <column-name>
| <expression> [ [ AS ] <alias-name> ]
| * }
<row-limitation-option1> ::=
FIRST
| TOP {ALL | <limit-expression>} [START AT <startat-expression> ]
<limit-expression> ::=
<simple-expression>
<startat-expression> ::=
<simple-expression>
<row-limitation-option2> ::=
LIMIT { [ <offset-expression>, ] <limit-expression>
| <limit-expression> OFFSET <offset-expression> }
<offset-expression> ::=
<simple-expression>
<simple-expression> ::=
<integer>
| <variable>
| ( <simple-expression> )
| ( <simple-expression> { + | - | * } <simple-expression> )
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
ALL or DISTINCT
Filters query results. If neither is specified, all rows that satisfy the clauses of the SELECT statement are
retrieved. If DISTINCT is specified, duplicate output rows are eliminated. This is called the projection of the
result of the statement. In many cases, statements take significantly longer to execute when DISTINCT is
specified, so reserve the use of DISTINCT for cases where it is necessary.
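For example, the following sketch (assuming the City column of the sample Customers table) returns each city once instead of once per customer:

```sql
-- Without DISTINCT: one output row per customer, with repeated cities.
-- With DISTINCT: duplicate output rows are eliminated.
SELECT DISTINCT City
FROM Customers;
```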
FIRST or TOP
Specifies the number of rows returned from a query. FIRST returns the first row selected from the query.
TOP returns the specified number of rows from the query where <number-of-rows> is in the range 1 –
2147483647 and can be an integer constant or integer variable.
Note
FIRST and TOP are used primarily with the ORDER BY clause. If you use these keywords without an ORDER
BY clause, the result might vary from run to run of the same query, as the optimizer might choose a
different query plan.
FIRST and TOP are permitted only in the top-level SELECT of a query, so they cannot be used in derived
tables or view definitions. Using FIRST or TOP in a view definition might result in the keyword being ignored
when a query is run on the view.
Using FIRST is the same as setting the ROW_COUNT database option to 1. Using TOP is the same as setting
the ROW_COUNT option to the same number of rows. If both TOP and ROW_COUNT are set, then the value of
TOP takes precedence.
The ROW_COUNT option could produce inconsistent results when used in a query involving global variables,
system functions or proxy tables. See ROW_COUNT Option for details.
select-list
A comma-delimited list of expressions that specify what is retrieved from the database. If an asterisk (*) is specified, all columns of all tables in the FROM clause are selected (table-name.* selects all columns of the named table). Aggregate functions and analytical functions are allowed in the <select-list>.
Note
In SAP IQ, scalar subqueries (nested selects) are allowed in the select list of the top level SELECT, as in
SAP SQL Anywhere and SAP Adaptive Server Enterprise. Subqueries cannot be used inside a
conditional value expression (for example, in a CASE statement).
Subqueries can also be used in a WHERE or HAVING clause predicate (one of the supported predicate
types). However, inside the WHERE or HAVING clause, subqueries cannot be used inside a value
expression or inside a CONTAINS or LIKE predicate. Subqueries are not allowed in the ON clause of
outer joins or in the GROUP BY clause.
alias-names
Can be used throughout the query to represent the aliased expression. Alias names are also displayed by
Interactive SQL at the top of each column of output from the SELECT statement. If the optional <alias-
name> is not specified after an expression, Interactive SQL displays the expression. If you use the same
name or expression for a column alias as the column name, the name is processed as an aliased column,
not a table column name.
INTO { host-variable-list | variable-list | table-name }
● <host-variable-list> – specifies where the results of the SELECT statement go. There must be one <host-variable> item for each item in the <select-list>. Select list items are put into the host variables in order.
● <table-name> – Creates a local, temporary table and populates it with the results of the query. When you use this clause, you do not need to start the temporary table name with #.
FROM table-list
Retrieves rows from the tables and views specified in the <table-list>. Joins can be specified using join operators. For
more information, see FROM Clause. A SELECT statement with no FROM clause can be used to display the
values of expressions not derived from tables. For example, the following displays the value of the
@@version global variable:
SELECT @@version
SELECT @@version
FROM DUMMY
Note
If you omit the FROM clause, or if all tables in the query are in the SYSTEM dbspace, the query is
processed by SAP SQL Anywhere instead of SAP IQ and might behave differently, especially with
respect to syntactic and semantic restrictions and the effects of option settings.
WHERE search-condition
Specifies which rows are selected from the tables named in the FROM clause. It is also used to do joins
between multiple tables. This is accomplished by putting a condition in the WHERE clause that relates a
column or group of columns from one table with a column or group of columns from another table. Both
tables must be listed in the FROM clause.
The use of the same CASE statement is not allowed in both the SELECT and the WHERE clause of a
grouped query.
SAP IQ also supports the disjunction of subquery predicates. Each subquery can appear within the WHERE
or HAVING clause with other predicates and can be combined using the AND or OR operators.
GROUP BY
Groups columns, alias names, or functions. GROUP BY expressions must also appear in the select list. The
result of the query contains one row for each distinct set of values in the named columns, aliases, or
functions. The resulting rows are often referred to as groups since there is one row in the result for each
group of rows from the table list. In the case of GROUP BY, all NULL values are treated as identical.
Aggregate functions can then be applied to these groups to get meaningful results.
GROUP BY must contain more than a single constant. You do not need to add constants to the GROUP BY
clause to select the constants in grouped queries. If the GROUP BY expression contains only a single
constant, an error is returned and the query is rejected.
When GROUP BY is used, the select list, HAVING clause, and ORDER BY clause cannot reference any
identifiers except those named in the GROUP BY clause. This exception applies: The <select-list> and
HAVING clause may contain aggregate functions.
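A sketch of a grouped query combining these rules (assuming the DepartmentID and Salary columns of the sample Employees table): the select list contains only GROUP BY expressions and aggregates, and HAVING filters on a group value.

```sql
-- One result row per department; HAVING keeps only larger groups.
SELECT DepartmentID, COUNT(*) AS head_count, AVG( Salary ) AS avg_salary
FROM Employees
GROUP BY DepartmentID
HAVING COUNT(*) > 5
ORDER BY DepartmentID;
```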
ROLLUP operator
Subtotals GROUP BY expressions that roll up from a detailed level to a grand total.
CUBE operator
Analyzes data by forming the data into groups in more than one dimension. CUBE requires an ordered list
of grouping expressions (dimensions) as arguments and enables the SELECT statement to calculate
subtotals for all possible combinations of the group of dimensions. The CUBE operator is part of the
GROUP BY clause.
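A sketch of the difference between the two operators, assuming a fact table with Year and Quarter columns (the table name is illustrative):

```sql
-- ROLLUP: subtotals for (Year, Quarter), for Year alone, and a grand total.
SELECT Year, Quarter, SUM( Amount ) AS total
FROM FinancialData
GROUP BY ROLLUP ( Year, Quarter );

-- CUBE additionally produces subtotals for Quarter alone,
-- i.e. every combination of the listed dimensions.
SELECT Year, Quarter, SUM( Amount ) AS total
FROM FinancialData
GROUP BY CUBE ( Year, Quarter );
```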
HAVING search-condition
Based on the group values and not on the individual row values. The HAVING clause can be used only if
either the statement has a GROUP BY clause or if the select list consists solely of aggregate functions. Any
column names referenced in the HAVING clause must either be in the GROUP BY clause or be used as a
parameter to an aggregate function in the HAVING clause.
ORDER BY
Orders the results of a query. Each item in the ORDER BY list can be labeled as ASC for ascending order or
DESC for descending order. Ascending is assumed if neither is specified. If the expression is an integer n,
then the query results are sorted by the nth item in the select list.
You cannot include a Java class in the SELECT list, but you can, for example, create a function or variable
that acts as a wrapper for the Java class and then select it.
FOR XML
This clause specifies that the result set is to be returned as an XML document. The format of the XML
depends on the mode you specify. Cursors declared with FOR XML are implicitly READ ONLY.
When you specify RAW mode, each row in the result set is represented as an XML <row> element, and
each column is represented as an attribute of the <row> element.
AUTO mode returns the query results as nested XML elements. Each table referenced in the select-list is
represented as an element in the XML. The order of nesting for the elements is based on the order that
tables are referenced in the select-list.
EXPLICIT mode allows you to control the form of the generated XML document. Using EXPLICIT mode
offers more flexibility in naming elements and specifying the nesting structure than either RAW or AUTO
mode.
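For example, a RAW-mode sketch (assuming the sample Employees table); each result row becomes a <row> element whose attributes are the selected columns:

```sql
-- Returns the result set as a single XML value in RAW mode,
-- e.g. a sequence of elements of the form <row EmployeeID="..." Surname="..."/>
SELECT EmployeeID, Surname
FROM Employees
FOR XML RAW;
```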
row-limitation-option2
Returns a subset of rows that satisfy the WHERE clause. Only one row-limitation clause can be specified at
a time. When specifying this clause, an ORDER BY clause is required to order the rows in a meaningful
manner. The row limitation clause is valid only in the top query block of a statement.
The LIMIT argument must be an integer or integer variable. The OFFSET argument must evaluate to a value greater than or equal to 0. If <offset-expression> is not specified, the default is 0.
The LIMIT keyword is disabled by default. Use the RESERVED_KEYWORDS option to enable the LIMIT
keyword.
Remarks
(back to top)
You can use a SELECT statement in Interactive SQL to browse data in the database or to export data from the
database to an external file.
You can also use a SELECT statement in procedures or in Embedded SQL. The SELECT statement with an INTO
clause is used for retrieving results from the database when the SELECT statement returns only one row.
(Tables created with SELECT INTO do not inherit IDENTITY/AUTOINCREMENT columns.) For multiple-row
queries, you must use cursors. When you select more than one column and do not use <#table>, SELECT
INTO creates a permanent base table. SELECT INTO <#table> always creates a temporary table regardless of
the number of columns. SELECT INTO table with a single column selects into a host variable.
When writing scripts and stored procedures that SELECT INTO a temporary table, wrap any select list item
that is not a base column in a CAST expression. This guarantees that the column data type of the
temporary table is the required data type.
Tables with the same name but different owners require aliases; a query without aliases returns incorrect results.
In SELECT statements, a stored procedure call can appear anywhere a base table or view is allowed. Note that
CIS functional compensation performance considerations apply. For example, a SELECT statement can also
return a result set from a procedure.
● ROLLUP supports all of the aggregate functions available to the GROUP BY clause, but ROLLUP does not
currently support COUNT DISTINCT and SUM DISTINCT.
● ROLLUP can be used only in the SELECT statement; you cannot use ROLLUP in a SELECT subquery.
● A multiple grouping specification that combines ROLLUP, CUBE, and GROUP BY columns in the same
GROUP BY clause is not currently supported.
● Constant expressions as GROUP BY keys are not supported.
GROUPING is used with the ROLLUP operator to distinguish between stored NULL values and NULL values in
query results created by ROLLUP.
ROLLUP syntax:
● CUBE supports all of the aggregate functions available to the GROUP BY clause, but CUBE does not
currently support COUNT DISTINCT or SUM DISTINCT.
● CUBE does not currently support the inverse distribution analytical functions PERCENTILE_CONT and
PERCENTILE_DISC.
● CUBE can be used only in the SELECT statement; you cannot use CUBE in a SELECT subquery.
● A multiple GROUPING specification that combines ROLLUP, CUBE, and GROUP BY columns in the same
GROUP BY clause is not currently supported.
● Constant expressions as GROUP BY keys are not supported.
GROUPING is used with the CUBE operator to distinguish between stored NULL values and NULL values in
query results created by CUBE.
CUBE syntax:
When generating a query plan, the SAP IQ optimizer estimates the total number of groups generated by the GROUP BY CUBE hash operation. The MAX_CUBE_RESULT database option sets an upper boundary on the number of estimated rows for which the optimizer considers running a hash algorithm. If the actual number of rows exceeds the MAX_CUBE_RESULT option value, the optimizer stops processing the query and returns the error message "Estimate number: nnn exceed the DEFAULT_MAX_CUBE_RESULT of GROUP BY CUBE or ROLLUP," where <nnn> is the number estimated by the optimizer. See MAX_CUBE_RESULT Option for information on setting the MAX_CUBE_RESULT option.
In a few unusual circumstances, differences in semantics between SQL Anywhere and SAP IQ may produce
unexpected query results. These circumstances are:
In these circumstances, subtle differences between the semantics of SQL Anywhere and SAP IQ may be
exposed. These differences include:
● SAP IQ treats the CHAR and VARCHAR data types as distinct and different; SQL Anywhere treats CHAR data
as if it were VARCHAR.
● When the RAND function is passed an argument, the behavior is deterministic in SAP IQ and
nondeterministic in SAP SQL Anywhere.
Privileges
(back to top)
Requires SELECT object-level privilege on the named tables and views. See GRANT Object-Level Privilege Statement [page 1502] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● The following example lists all tables and views in the system catalog:
SELECT tname
FROM SYS.SYSCATALOG
WHERE tname LIKE 'SYS%' ;
● The following example lists all customers and the total value of their orders:
SELECT CompanyName,
CAST( sum(SalesOrderItems.Quantity *
Products.UnitPrice) AS INTEGER) VALUE
FROM Customers
LEFT OUTER JOIN SalesOrders
LEFT OUTER JOIN SalesOrderItems
LEFT OUTER JOIN Products
GROUP BY CompanyName
ORDER BY VALUE DESC
● The following example counts the rows in the Employees table:
SELECT count(*)
FROM Employees;
● The following example lists the total sales by year, model, and color:
● The following example selects all items with a certain discount into a temporary table:
● The following example returns information about the employee that appears first when employees are
sorted by last name:
SELECT FIRST *
FROM Employees
ORDER BY Surname;
● The following examples return the first five employees when their names are sorted by last name:
SELECT TOP 5 *
FROM Employees
ORDER BY Surname;
SELECT *
FROM Employees
ORDER BY Surname
LIMIT 5;
● The following example lists the fifth and sixth employees sorted in descending order by last name:
SELECT *
FROM Employees
ORDER BY Surname DESC
LIMIT 4,2;
In this section:
Related Information
The FIRST, TOP, and LIMIT clauses allow you to return a subset of the rows that satisfy the WHERE clause. The
FIRST, TOP, and LIMIT clauses can be used within any SELECT query block that includes an ORDER BY clause.
FIRST, TOP, and LIMIT can only be used in the top query block in a statement.
Syntax
The FIRST, TOP, and LIMIT clauses are row-limitation clauses and they have the following syntax:
<row-limitation-option-1> ::=
FIRST | TOP { ALL | <limit-expression> } [ START AT <startat-expression> ]
<row-limitation-option-2> ::=
LIMIT { [ <offset-expression>, ] <limit-expression> | <limit-expression>
OFFSET <offset-expression> }
<simple-expression> ::=
<integer>
| <variable>
| ( <simple-expression> )
| ( <simple-expression> { + | - | * } <simple-expression> )
Only one row limitation clause can be specified for a SELECT clause. When specifying these clauses, an
ORDER BY clause is required to order the rows in a meaningful manner.
Parameters
row-limitation-option-1
row-limitation-option-2
This type of clause can be used in SELECT query blocks only. The LIMIT and OFFSET arguments can be simple arithmetic expressions over host variables, integer constants, or integer variables. The LIMIT argument must evaluate to a value greater than or equal to 0. The OFFSET argument must evaluate to a value greater than or equal to 0. If offset-expression is not specified, the default is 0. The expression limit-expression + offset-expression must evaluate to a value less than 9223372036854775807 = 2^63-1.
The LIMIT keyword is disabled by default. Use the reserved_keywords option to enable the LIMIT keyword.
Related Information
Syntax
Go to:
● Privileges
● Standards
● Examples
(back to top)
The SET statement assigns a new value to a variable. The variable must have been previously created by using
a CREATE VARIABLE statement or DECLARE statement, or it must be an OUTPUT parameter for a procedure.
The variable name can optionally use the Transact-SQL convention of an @ sign preceding the name. For
example: SET @localvar = 42.
A variable can be used in a SQL statement anywhere a column name is allowed. If a column name exists with
the same name as the variable, then the column value is used.
The <owner> specification is only for use when setting owned database-scope variables.
Variables are necessary for creating large text or binary objects for INSERT or UPDATE statements from
Embedded SQL programs because Embedded SQL host variables are limited to 32767 bytes.
Variables are local to the current connection and disappear when you disconnect from the database or use the
DROP VARIABLE statement. They are not affected by COMMIT or ROLLBACK statements.
If you set a database-scope variable, however, the variable persists after a disconnect. When the database is
restarted, the value of a database-scope variable reverts to NULL or its default, if defined. The
SYSDATABASEVARIABLE system view contains a list of all database-scope variables and their initial values.
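A sketch of the contrast between the two scopes described above (the variable names are illustrative):

```sql
-- Connection-scope variable: disappears on disconnect or DROP VARIABLE.
CREATE VARIABLE conn_var INT;
SET conn_var = 1;

-- Database-scope variable: persists after a disconnect; its value
-- reverts to NULL (or its default, if defined) when the database restarts.
CREATE DATABASE VARIABLE db_var INT;
SET db_var = 2;
```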
Privileges
(back to top)
If you own the database-scope variable, no additional privilege is required. To set a database-scope variable
owned by PUBLIC, you must have the UPDATE PUBLIC DATABASE VARIABLE system privilege.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
(back to top)
Examples
(back to top)
● This code fragment inserts a large binary value into the database:
● This simple example shows the creation of a variable called birthday, and sets the date to CURRENT DATE:
CREATE VARIABLE birthday DATE;
SET birthday = CURRENT DATE;
● The following code fragment inserts a large text value into the database:
size_t size;
FILE * fp;
EXEC SQL BEGIN DECLARE SECTION;
DECL_VARCHAR( 5000 ) buffer;
EXEC SQL END DECLARE SECTION;
fp = fopen( "blob.dat", "r" );
EXEC SQL CREATE VARIABLE hold_blob LONG VARCHAR;
EXEC SQL SET hold_blob = '';
for(;;) {
size = fread( (void *)buffer.array, 1, 5000, fp );
if( size <= 0 ) break;
buffer.len = (a_sql_ulen) size;
EXEC SQL SET hold_blob = hold_blob || :buffer;
}
EXEC SQL INSERT INTO some_table VALUES( 1, hold_blob );
EXEC SQL COMMIT;
EXEC SQL DROP VARIABLE hold_blob;
fclose( fp );
Syntax
<option-value> ::=
ANSINULL [ ON | OFF ]
| ANSI_PERMISSIONS [ ON | OFF ]
| CLOSE_ON_ENDTRANS ON
| QUOTED_IDENTIFIER [ ON | OFF ]
| ROWCOUNT <integer>
| STRING_RTRUNCATION [ ON | OFF ]
| TRANSACTION ISOLATION LEVEL [ 0 | 1 | 2 | 3 ]
Parameters
ANSINULL
The default behavior for comparing values to NULL in SAP IQ and SAP ASE is different. Setting ANSINULL
to OFF provides Transact-SQL compatible comparisons with NULL.
ANSI_PERMISSIONS
The default behavior in SAP IQ and SAP ASE regarding permissions required to carry out a DELETE
containing a column reference is different. Setting ANSI_PERMISSIONS to OFF provides Transact-SQL-
compatible permissions on DELETE.
CLOSE_ON_ENDTRANS
When set to ON (the default and only allowable value), cursors are closed at the end of a transaction. With
the option set ON, CLOSE_ON_ENDTRANS provides Transact-SQL-compatible behavior.
QUOTED_IDENTIFIER
Controls whether strings enclosed in double quotes are interpreted as identifiers (ON) or as literal strings
(OFF).
ROWCOUNT
In Transact-SQL, limits the number of rows fetched for any cursor to the specified integer. This includes rows fetched by repositioning the cursor. Any fetches beyond this maximum return a warning. The setting is considered when returning the estimate of the number of rows for a cursor on an OPEN request.
SAP IQ supports the <@@rowcount> global variable. SELECT, INSERT, DELETE, and UPDATE statements affect the value of <@@rowcount>. The ROWCOUNT clause has no effect on cursor operation, the IF statement, or creating or dropping a table or procedure.
In SAP IQ, if ROWCOUNT is greater than the number of rows that dbisql can display, dbisql may do
extra fetches to reposition the cursor. The number of rows actually displayed may be less than the number
requested. Also, if any rows are refetched due to truncation warnings, the count might be inaccurate.
STRING_RTRUNCATION
The default behavior in SAP IQ and SAP ASE when nonspace characters are truncated on assigning SQL string data is different. Setting STRING_RTRUNCATION to ON provides Transact-SQL-compatible string comparisons, including hexadecimal string (binary data type) comparisons.
TRANSACTION ISOLATION LEVEL
Sets the locking isolation level for the current connection. For SAP ASE, only 1 and 3 are valid options. For SAP IQ, only 3 is a valid option.
SET PREFETCH
Remarks
Database options in SAP IQ are set using the SET OPTION statement. However, SAP IQ also provides support
for the SAP ASE SET statement for a set of options particularly useful for compatibility.
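A sketch of these compatibility options in use on one connection (the specific values are illustrative):

```sql
-- Transact-SQL-compatible settings for the current connection:
SET ROWCOUNT 10;                    -- cursors fetch at most 10 rows
SET ANSINULL OFF;                   -- T-SQL style comparisons with NULL
SET STRING_RTRUNCATION ON;          -- T-SQL style truncation behavior
SET TRANSACTION ISOLATION LEVEL 3;  -- the only level valid in SAP IQ
SET ROWCOUNT 0;                     -- removes the row limit again
```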
Privileges
None
Standards
Related Information
Syntax
Remarks
The current connection state is saved, and resumed when it again becomes the active connection. If you omit
<connection-name>, but a connection exists that was not named, that connection becomes the active
connection.
Note
When cursors are opened in Embedded SQL, they are associated with the current connection. When the connection is changed, you cannot access the cursor names. The cursors remain active and in position and can be accessed again when the associated connection becomes active.
Privileges
None
Standards
● SQL – vendor extension to ISO/ANSI SQL grammar. Embedded SQL is a full-level feature.
● SAP database products – supported by Open Client/Open Server.
Examples
The following example sets the current connection to the connection named "conn1" from dbisql:
SET CONNECTION conn1;
Describes the variables in a SQL descriptor area, and places data into the descriptor area.
Syntax
<assignment> ::=
{ { TYPE
| SCALE
| PRECISION
| LENGTH
| INDICATOR } = { <integer>
| <hostvar> }
| DATA = <hostvar> }
Parameters
COUNT
Sets the number of described variables within the descriptor area. The value for count cannot exceed the
number of variables specified when the descriptor area was allocated.
VALUE
The value <n> specifies the variable in the descriptor area upon which the assignments are performed.
DATA
Type checking is performed when using the DATA clause to ensure that the variable in the descriptor area
has the same type as the host variable. If an error occurs, the code is returned in the SQLCA.
Privileges
None
Examples
Related Information
Changes options that affect the behavior of the database and its compatibility with Transact-SQL. Setting the
value of an option can change the behavior for all users or an individual user, in either a temporary or
permanent scope.
Syntax
Parameters
option-value
A host-variable (indicator allowed), string, identifier, or number. The maximum length of <option-value>
when set to a string is 127 bytes.
Note
For all database options that accept integer values, SAP IQ truncates any decimal <option-value>
setting to an integer value. For example, the value 3.8 is truncated to 3.
EXISTING
Option values cannot be set for an individual user ID unless there is already a PUBLIC user ID setting for
that option.
TEMPORARY
Changes the duration for which the option change takes effect. Without the TEMPORARY clause, an option
change is permanent: it does not change until it is explicitly changed using the SET OPTION statement.
When the TEMPORARY clause is applied to an individual user ID, the new option value is in effect as
long as that user is logged in to the database.
When the TEMPORARY clause is used with the PUBLIC user ID, the change is in place for as long as the
database is running. When the database is shut down, TEMPORARY options for the PUBLIC user ID revert
to their permanent value.
If a TEMPORARY option is deleted, the option setting reverts to the permanent setting.
Remarks
Specifying either a user ID or the PUBLIC user ID determines whether the option is set for an individual user, a
role represented by <user_id>, or the PUBLIC user ID (the role of which all users are members). If the option
applies to a role ID, option settings are not inherited by members of the role — the change is applied only to the
role ID. If no role is specified, the option change is applied to the currently logged-in user ID that issued the SET
OPTION statement. For example, this statement applies an option change to the PUBLIC user ID:
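The example statement is missing from this extract; a plausible sketch (LOGIN_MODE is borrowed from the security discussion below, and the value shown is illustrative):

```sql
-- Apply an option change for the PUBLIC user ID
SET OPTION PUBLIC.LOGIN_MODE = 'Standard';
```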
Changing the value of an option for the PUBLIC user ID sets the value of the option for any user that has not set
its own value. Option values cannot be set for an individual user ID unless there is already a PUBLIC user ID
setting for that option.
Temporarily setting an option for the PUBLIC user ID, as opposed to setting the value of the option
permanently, offers a security advantage. For example, when the LOGIN_MODE option is enabled, the database
relies on the login security of the system on which it is running. Enabling the option temporarily means a
database relying on the security of a Windows domain is not compromised if the database is shut down and
copied to a local machine.
Caution
Changing option settings while fetching rows from a cursor is not supported, as it can lead to unpredictable
behavior. For example, changing the DATE_FORMAT setting while fetching from a cursor returns different
date formats among the rows in the result set. Do not change option settings while fetching rows.
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Standards
Examples
Syntax
Syntax 1
Syntax 2
SET PERMANENT
Syntax 3
SET
Remarks
Syntax 2 (SET PERMANENT) stores all current dbisql options in the SYSOPTION system table. These settings
are automatically established every time dbisql is started for the current user ID.
Syntax 3 (SET) shows all current option settings. If there are temporary options set for dbisql or the database
server, these are shown; otherwise, permanent option settings are shown.
If you enter the name of an option incorrectly when you are setting the option, the incorrect name is saved in
the SYSOPTION table. You can remove the incorrectly entered name from the SYSOPTION table by setting the
option for PUBLIC with an equals sign after the option name and no value:
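The statement referred to above is not shown in this extract; a sketch, assuming a misspelled option name QUOTED_IDENTIFER was saved:

```sql
-- An equality with no value removes the stored setting from SYSOPTION
SET OPTION PUBLIC.QUOTED_IDENTIFER =;
```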
The SET ANY PUBLIC OPTION system privilege is required to set database options for another user.
The SET ANY SYSTEM OPTION system privilege is required to set a SYSTEM option for the PUBLIC user ID.
The SET ANY SECURITY OPTION system privilege is required to set a SECURITY option for the PUBLIC user ID.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Related Information
Tells the SQL preprocessor to use a SQLCA other than the default global <sqlca>.
Syntax
Parameters
sqlca
Identifier or string
Remarks
The current SQLCA pointer is implicitly passed to the database interface library on every Embedded SQL
statement. All Embedded SQL statements that follow this statement in the C source file use the new SQLCA.
This statement is necessary only when you are writing code that is reentrant. The <sqlca> should reference a
local variable. Any global or module static variable is subject to being modified by another thread.
Privileges
None
Standards
Examples
The following example shows a function that can be found in a Windows DLL. Each application that uses the
DLL has its own SQLCA:
Related Information
Note
The SET USER system privilege is two words; the SETUSER statement is one word.
Syntax
SETUSER <user_id>
user_id
Must be the name of an existing user or role that has a login password.
Remarks
At-least criteria validation occurs when the SETUSER statement is executed, not when the SET USER system
privilege is granted.
To terminate a successful impersonation, issue the SETUSER statement without specifying a <user_id>.
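As a sketch of the two forms (the user name Joe_Smith is hypothetical):

```sql
-- Begin impersonating the user Joe_Smith
SETUSER Joe_Smith;

-- Terminate the impersonation and revert to the original user
SETUSER;
```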
Privileges
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
● The impersonator has been granted the right to impersonate the target user.
● The impersonator has, at minimum, all the roles and system privileges granted to the target user.
● The impersonator has been granted the said roles and system privileges with similar or higher
administrative rights.
Note
For the purposes of meeting administrative rights criteria, the WITH ADMIN OPTION and WITH ADMIN
ONLY OPTION clauses are considered to grant similar administrative rights. They are also considered
to grant higher administrative rights than the WITH NO ADMIN OPTION clause. For example, User1 is
granted Role1 with the WITH ADMIN OPTION clause, User2 is granted Role1 with the WITH ADMIN
ONLY clause, and User3 is granted Role1 with the WITH NO ADMIN OPTION clause. User1 and
User2 are said to be granted Role1 with similar administrative rights. User1 and User2 are also said
to be granted Role1 with higher administrative rights than User3.
● If the target user has been granted a system privilege that supports extensions, the clauses used to grant
the system privilege to the impersonator are a super-set of those used for the target user. Only the SET
USER and CHANGE PASSWORD system privileges support extensions.
○ The ANY clause is considered a super-set of the <target_roles_list> and
<target_users_list> clauses. If the target user has been granted the SET USER system privilege
with an ANY grant, the impersonator must also have the ANY grant.
○ If the target user has been granted the SET USER system privilege with both the
<target_roles_list> and <target_users_list> clauses, the impersonator must also have been
granted the system privilege with the two clauses, and the target list of each clause must be equal to,
or a super-set of, the corresponding clause grant of the target user. For example, if the target lists of
both the impersonator and target user contain User1, User2 and Role1, Role2, respectively, the
target list grants for each clause are said to be equal. Alternately, if the target list grants of the
impersonator contain User1, User2, and Role1, Role2, respectively, while the target list grants of the
target user contain only User1 and Role1, the impersonator's target list grants are said to be a
super-set of those of the target user.
Standards
Related Information
Syntax
SIGNAL <exception-name>
Privileges
None
Related Information
Syntax
Parameters
AS database-name
(Optional) If not specified, the statement assigns a default name to the database. This default name is the
root of the database file. For example, a database in file c:\SAP\16_1\demo\iqdemo.db is given the
default name iqdemo.
ON engine-name
(Optional) If not specified, the statement uses the default database server. The default database server is
the first started server among those that are currently running.
AUTOSTOP { YES | NO }
(Optional) When set to YES (default), the database is unloaded when the last connection to it is dropped.
When set to NO, the database is not unloaded.
KEY key
(Optional) Specify to enter the KEY value (password) for strongly encrypted databases.
Note
The database server must be running. The full path must be specified for the database file unless the file is
located in the current directory.
The START DATABASE statement does not connect dbisql to the specified database; you must also issue a
CONNECT statement to make a connection.
Privileges
Requires the SERVER OPERATOR system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
● (UNIX) This example starts the database file /s1/IQ/sample_2.db on the current server:
● (Windows) This example starts the database file c:\IQ\sample_2.db as sam2 on the server eng1:
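The example statements themselves are missing from this extract; sketches matching the descriptions above:

```sql
-- (UNIX) Start /s1/IQ/sample_2.db on the current server
START DATABASE '/s1/IQ/sample_2.db';

-- (Windows) Start c:\IQ\sample_2.db as sam2 on the server eng1
START DATABASE 'c:\IQ\sample_2.db'
AS sam2
ON eng1;
```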
Related Information
Syntax
START ENGINE
AS <engine-name> [ STARTLINE <command-string> ]
Parameters
STARTLINE
Specifies valid command strings that conform to the database server command line description. See
start_iq Database Server Startup Utility in the SAP IQ Utility Reference.
Remarks
Several server options are required for SAP IQ to operate well. To ensure that you are using the best options,
start your server by using either SAP IQ Cockpit or a configuration file with the start_iq command.
Privileges
None
Standards
Examples
● The following example starts a database server named eng1 without starting any databases on it:
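A sketch of the described example (the STARTLINE string is illustrative; see start_iq in the SAP IQ Utility Reference for the supported server options):

```sql
-- Start a server named eng1 without starting any databases on it
START ENGINE AS eng1 STARTLINE 'start_iq -n eng1';
```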
Related Information
Syntax
<environment-name> :
C_ESQL32
| C_ESQL64
| C_ODBC32
| C_ODBC64
| JAVA
| JS
| PERL
| PHP
Parameters
environment-name
Remarks
The START EXTERNAL ENVIRONMENT statement can be used to ensure that the external environment
module can be located and started. Since an external environment is automatically started, this statement is
not required.
Privileges
None
Side effects
None
Standards
Example
Related Information
Loads the Java VM at a convenient time, so that when the user starts to use Java functionality, there is no initial
pause while the Java VM is loaded.
Syntax
None
Standards
Examples
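The statement takes no arguments; loading the VM ahead of time is simply:

```sql
-- Preload the Java VM so later Java calls incur no startup pause
START JAVA;
```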
Related Information
Syntax
Parameters
database-name
The name specified in the -n parameter when the database is started, or specified in the DBN
(DatabaseName) connection parameter. This name is typically the file name of the database file that holds
the catalog store, without the .db extension, but can be any user-defined name.
ON engine-name
(Optional) If not specified, all running engines are searched for a database of the specified name.
UNCONDITIONALLY
(Optional) If specified, the database is stopped, even if there are connections to the database. If not
specified, the database is not stopped if there are connections to it.
Privileges
Requires the SERVER OPERATOR system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Examples
The following example stops the database named sample on the default server:
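A sketch of the described example:

```sql
-- Stop the database named sample on the default server
STOP DATABASE sample;
```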
Related Information
Syntax
Parameters
UNCONDITIONALLY
If specified, the database server is stopped, even if there are connections to the server. If not specified, the
database server is not stopped if there are connections to it.
Privileges
None
Standards
Examples
Related Information
Syntax
<environment-name> :
C_ESQL32
| C_ESQL64
Parameters
environment-name
Remarks
None.
Privileges
None
Side effects
None
Standards
Example
Releases resources associated with the Java VM to economize on the use of system resources.
Syntax
Privileges
None
Standards
Related Information
Triggers a named event. The event may be defined for event triggers or be a scheduled event.
Syntax
Actions are tied to particular trigger conditions or schedules by a CREATE EVENT statement. You can use
TRIGGER EVENT to force the event handler to execute, even when the scheduled time or trigger condition has
not occurred. TRIGGER EVENT does not execute disabled event handlers.
When a triggering condition causes an event handler to execute, the database server can provide context
information to the event handler using the event_parameter function. TRIGGER EVENT allows you to
explicitly supply these parameters, to simulate a context for the event handler.
When you trigger an event, specify the event name. You can list event names by querying the system table
SYSEVENT. For example:
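The query referred to above is missing from this extract; a sketch (the column name event_name is assumed from the SYSEVENT catalog, and the event name LowDiskSpace is hypothetical):

```sql
-- List the names of all defined events
SELECT event_name FROM SYS.SYSEVENT;

-- Force the handler of the event LowDiskSpace to run now
TRIGGER EVENT LowDiskSpace;
```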
Privileges
Requires the MANAGE ANY EVENT system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Related Information
Deletes all rows from a table or materialized view without deleting the table definition.
Syntax
TRUNCATE
TABLE [ <owner>.]<table-name>
[ PARTITION <partition-name>
| SUBPARTITION <subpartition-name> ]
| MATERIALIZED VIEW [ <owner>.]<materialized-view-name>
Parameters
PARTITION
Note
SUBPARTITION
Note
Remarks
TRUNCATE is equivalent to a DELETE statement without a WHERE clause, except that each individual row
deletion is not entered into the transaction log. After a TRUNCATE TABLE statement, the table structure and all
of the indexes continue to exist until you issue a DROP TABLE statement. The column definitions and
constraints remain intact, and permissions remain in effect.
The TRUNCATE statement is entered into the transaction log as a single statement, like data definition
statements. Each deleted row is not entered into the transaction log.
Note
If the table you are truncating contains an identity column, the TRUNCATE statement does not reset the
identity number sequence. If you need to reset the identity number sequence, call the stored procedure
sp_iq_reset_identity. See sp_iq_reset_identity Procedure [page 746].
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
For both temporary and base tables, you can execute TRUNCATE TABLE while other users have read access to
the table. This behavior differs from SAP SQL Anywhere, which requires exclusive access to truncate a base
table. SAP IQ table versioning ensures that TRUNCATE TABLE can occur while other users have read access;
however, the version of the table these users see depends on when the read and write transactions commit.
Examples
The following example deletes all rows from the Sale table:
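A sketch of the described example:

```sql
-- Remove all rows from Sale; the table definition,
-- indexes, constraints, and permissions remain
TRUNCATE TABLE Sale;
```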
Related Information
Syntax
Parameters
ON
Remarks
Use the TRUNCATE TEXT INDEX statement when you want to delete data from a manual text index without
dropping the text index definition. For example, to alter the text configuration object for the text index to
change the stoplist, truncate the text index, change the text configuration object it refers to, and then refresh
the text index to populate it with new data.
The TRUNCATE TEXT INDEX requires exclusive access to the table. Any open cursors that reference the table
being truncated must be closed, and a COMMIT or ROLLBACK statement must be executed to release the
reference to the table.
Privileges
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Standards
Examples
The first statement creates the txt_index_manual text index. The second statement populates the text index
with data. The third statement truncates the text index data:
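The three statements are missing from this extract; a sketch, assuming a hypothetical Customers.CompanyName column and a default text configuration:

```sql
-- 1. Create a manually refreshed text index
CREATE TEXT INDEX txt_index_manual
ON Customers ( CompanyName )
MANUAL REFRESH;

-- 2. Populate the text index with data
REFRESH TEXT INDEX txt_index_manual ON Customers;

-- 3. Truncate the text index data, leaving its definition intact
TRUNCATE TEXT INDEX txt_index_manual ON Customers;
```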
The truncated text index is repopulated with data the next time it is refreshed.
Related Information
Syntax
<select-without-order-by>
… UNION [ ALL ] <select-without-order-by>
… [ UNION [ ALL ] <select-without-order-by> ]…
… [ ORDER BY <integer> [ ASC | DESC ] [, …] ]
Parameters
ALL
The results of UNION ALL are the combined results of the component SELECT statements. The results of
UNION are the same as UNION ALL, except that duplicate rows are eliminated. Eliminating duplicates
requires extra processing, so UNION ALL should be used instead of UNION where possible.
ORDER BY
Only integers are allowed in the order by list. These integers specify the position of the columns to be
sorted.
Remarks
The results of several SELECT statements can be combined into a larger result using a UNION clause. The
component SELECT statements must each have the same number of items in the select list, and cannot
contain an ORDER BY clause. See FROM Clause.
If corresponding items in two select lists have different data types, SAP IQ chooses a data type for the
corresponding column in the result, and automatically converts the columns in each component SELECT
statement appropriately.
The column names displayed are the same column names that display for the first SELECT statement.
Note
When SELECT statements include constant values and UNION ALL views but omit the FROM clause, use
iq_dummy to avoid errors. See FROM Clause for details.
Privileges
Requires SELECT object-level privilege for each component of the SELECT statements. See GRANT Object-
Level Privilege Statement [page 1502] for assistance with granting privileges.
Examples
SELECT Surname
FROM Employees
UNION
SELECT Surname
FROM Customers
Related Information
Modifies existing rows of a single table, or a view that contains only one table.
Syntax
UPDATE <table-name>
... SET <column-name> = <expression>, ...
...[ FROM <table-expression> ]
...[ WHERE <search-condition> ]
...[ ORDER BY <expression> [ ASC | DESC ] , …]
<table-name> ::=
[ <owner>.]<table-name> [ [ AS ] <correlation-name> ]
| [ <owner>.]<view-name> [ [ AS ] <correlation-name> ]
<table-expression> ::=
<table-spec>
| <table-expression> <join-type> <table-spec> [ ON <condition> ]
| <table-expression>, ...
Parameters
SET
Use the SET clause to set column names or variables to the specified expression.
Use the SET clause to set the column to a computed column value by using this format:
Each specified column is set to the value of the expression. There are no restrictions on <expression>. If
<expression> is a <column-name>, then the previous value from that column is used.
If a column has a default defined, then use the SET clause to set a column to its default value.
You can also use the SET clause to assign a variable by using the following format:
When assigning a value to a variable, the variable must already be declared, and its name must begin with
the at sign (@). If the variable name matches the name of a column in the table to be updated, then the
UPDATE statement updates the column value and leaves the variable unchanged. Variable and column
assignments can be combined in any order.
FROM
Allows tables to be updated based on joins. If the FROM clause is present, <table-name> must specify the
sole table to be updated, and it must qualify the name in the same way as it appears in the FROM clause. If
correlation names are used in the FROM clause, the identical correlation name must be specified as
<table-name>.
This statement illustrates a potential ambiguity in table names in UPDATE statements using a FROM
clause that contain table expressions, which use correlation names:
UPDATE table_1
SET column_1 = ...
FROM table_1 AS alias_1, table_1 AS alias_2
WHERE ...
Each instance of table_1 in the FROM clause has a correlation name, denoting a self-join of table_1 to
itself. However, the UPDATE statement fails to specify which of the rows that make up the self-join are to be
updated. This can be corrected by specifying the correlation name in the UPDATE statement as follows:
UPDATE alias_1
SET column_1 = ...
FROM table_1 AS alias_1, table_1 AS alias_2
If the table in which you are updating rows also appears in the FROM clause, the two references are
considered to reference the same table if one of the following is true:
In cases where the server cannot determine whether the table references are identical, a SQL error appears.
This prevents unintended semantics, such as updating unintended rows.
WHERE clause
If specified, only rows satisfying the search condition are updated. If no WHERE clause is specified, every
row is updated.
ORDER BY clause
Normally, the order in which rows are updated does not matter. However, with the FIRST or TOP clause, the
order can be significant.
To use the ORDER BY clause, you cannot set the ansi_update_constraints option to Strict.
To update columns that appear in the ORDER BY clause, set the ansi_update_constraints option to Off.
Remarks
The table referenced in the UPDATE statement can be a base table or a temporary table.
Defaults on updates are honored for current user, user, current timestamp, and timestamp only.
Each named column is set to the value of the expression on the right-hand side of the equal sign. Even
<column-name> can be used in the expression—the old value is used.
The FROM clause can contain multiple tables with join conditions and returns all the columns from all the
tables specified and filtered by the join condition and/or WHERE condition.
Using the wrong join condition in a FROM clause causes unpredictable results. If the FROM clause specifies a
one-to-many join and the SET clause references a cell from the “many” side of the join, the cell is updated from
the first value selected. In other words, if the join condition causes multiple rows of the table to be updated per
row ID, the first row returned becomes the update result. For example:
UPDATE T1
SET T1.c2 = T2.c2
FROM T1 JOIN T2
ON T1.c1 = T2.c1
If table T2 has more than one row per T2.c1, results might be as follows:
SAP IQ rejects any UPDATE statement in which the table being updated is on the null-supplying side of an outer
join. In other words:
● In a left outer join, the table on the left side of the join cannot be missing any rows on joined columns.
● In a right outer join, the table on the right side of the join cannot be missing any rows on joined columns.
● In a full outer join, neither table can be missing any rows on joined columns.
For example, in this statement, table T1 is on the left side of a left outer join, and thus cannot be
missing any rows:
UPDATE T1
SET T1.c2 = T2.c4
FROM T1 LEFT OUTER JOIN T2
ON T1.rowid = T2.rowid
Normally, the order in which rows are updated does not matter. However, in conjunction with the NUMBER(*)
function, an ordering can be useful to get increasing numbers added to the rows in some specified order. If you
are not using the NUMBER(*) function, avoid using the ORDER BY clause, because the UPDATE statement
performs better without it.
In an UPDATE statement, if the NUMBER(*) function is used in the SET clause and the FROM clause specifies a
one-to-many join, NUMBER(*) generates unique numbers that increase, but do not increment sequentially due
to row elimination.
You can use the ORDER BY clause to control the result from an UPDATE statement when the FROM clause
contains multiple joined tables.
SAP IQ ignores the ORDER BY clause in the UPDATE statement and returns a message that the syntax is not
valid ANSI syntax.
The left side of each SET clause must be a column in a base table.
Views can be updated provided the SELECT statement defining the view does not contain a GROUP BY clause
or an aggregate function, or involve a UNION operation. The view should contain only one table.
Character strings inserted into tables are always stored in the case they are entered, regardless of whether the
database is case-sensitive or not. Thus a character data type column updated with the string 'Value' is always
held in the database with an uppercase V and the remainder of the letters lowercase. SELECT statements
return the string as 'Value.' If the database is not case-sensitive, however, all comparisons make 'Value' the
same as 'value,' 'VALUE,' and so on. The IQ server may return results in any combination of lowercase and
uppercase, so you cannot expect case-sensitive results in a database that is case-insensitive (CASE IGNORE).
Further, if a single-column primary key already contains an entry 'Value,' an INSERT of 'value' is rejected, as it
would make the primary key not unique.
If the update violates any check constraints, the whole statement is rolled back.
SAP IQ supports scalar subqueries within the SET clause, for example:
UPDATE r
SET r.o= (SELECT MAX(t.o)
FROM t ... WHERE t.y = r.y),
SAP IQ supports DEFAULT column values in UPDATE statements. If a column has a DEFAULT value, this
DEFAULT value is used as the value of the column in any UPDATE statement that does not explicitly modify the
value for the column.
See CREATE TABLE Statement for details about updating IDENTITY/AUTOINCREMENT columns, which are
another type of DEFAULT column.
When updating database-scope variables using the SET clause, the setting does not persist between restarts of
the database, even though the variable does. When a database is restarted, the value of a database-scope
variable reverts to NULL or its default, if defined. The SYSDATABASEVARIABLE system view contains a list of all
database-scope variables and their default values.
Privileges
No privileges are required to update a database-scope variable you own. To update a database-scope variable
owned by PUBLIC, requires the UPDATE PUBLIC DATABASE VARIABLE system privilege.
See GRANT System Privilege Statement [page 1511] or GRANT Object-Level Privilege Statement [page 1502]
for assistance with granting privileges.
Standards
Examples
● This example transfers employee 129 to the Marketing Department (400):
UPDATE Employees
SET DepartmentID = 400
WHERE EmployeeID = 129;
● In this example, the Marketing Department (400) increases bonuses from 4% to 6% of each employee’s
base salary:
UPDATE Employees
SET bonus = base * 6/100
WHERE DepartmentID =400;
● In this example, each employee gets a pay increase with the department bonus:
UPDATE Employees
SET emp.Salary = emp.Salary + dept.bonus
FROM Employees emp, Departments dept
WHERE emp.DepartmentID = dept.DepartmentID;
● This example shows another way to give each employee a pay increase with the department bonus:
UPDATE Employees
SET emp.salary = emp.salary + dept.bonus
FROM Employees emp JOIN Departments dept
ON emp.DepartmentID = dept.DepartmentID;
Related Information
Syntax
UPDATE <table-list>
SET <set-item>, ...
WHERE CURRENT OF <cursor-name>
<set-item> ::=
<column-name> [.<field-name>…] = <scalar-value>
cursor-name
Identifier or hostvar.
SET
The columns that are referenced in set-item must be in the base table that is updated. They cannot refer to
aliases, nor to columns from other tables or views. If the table you are updating is given a correlation name
in the cursor specification, you must use the correlation name in the SET clause. The expression on the
right side of the SET clause may reference columns, constants, variables, and expressions from the
SELECT clause of the query.
set-item
Avoid using ORDER BY in the query of a cursor that is updated with WHERE CURRENT OF. The ORDER BY
columns may be updated, but the result set is not reordered, so rows appear to be fetched out of order
and the results may seem incorrect.
Remarks
This form of the UPDATE statement updates the current row of the specified cursor. The current row is defined
to be the last row successfully fetched from the cursor, and the last operation on the cursor cannot have been a
positioned DELETE statement.
The requested columns are set to the specified values for the row at the current row of the specified query. The
columns must be in the select list of the specified open cursor.
Changes effected by positioned UPDATE statements are visible in the cursor result set, except where client-side
caching prevents seeing these changes. Rows that are updated so that they no longer meet the requirements
of the WHERE clause of the open cursor are still visible.
Since SAP IQ does not support the CREATE VIEW... WITH CHECK OPTION, positioned UPDATE does not
support this option. The WITH CHECK OPTION clause does not allow an update that creates a row that is not
visible by the view.
SAP IQ supports repeatedly updating the same row in the result set.
Privileges
Requires UPDATE object-level permission on the columns being modified. See GRANT Object-Level Privilege
Statement [page 1502] for assistance with granting privileges.
Standards
● The range of cursors that can be updated may contain vendor extensions to ISO/ANSI SQL grammar if the
ANSI_UPDATE_CONSTRAINTS option is set to OFF.
● Embedded SQL use is supported by Open Client/Open Server, and procedure and trigger use is supported
in SAP SQL Anywhere.
Related Information
Validates the current database, or a single table, materialized view, or index in the IQ catalog (system) store.
Caution
Perform the validation of a table or an entire database only while there are no connections that are making
changes to the database; otherwise, errors may be reported indicating some form of database corruption
even though no corruption actually exists.
Syntax
VALIDATE {
TABLE [ <owner>.]<table-name>
| MATERIALIZED VIEW [ <owner>.]<materialized-view-name> }
[ WITH EXPRESS CHECK ]
VALIDATE {
INDEX <index-name>
| [ INDEX ] FOREIGN KEY <role-name>
| [ INDEX ] PRIMARY KEY }
ON [ <owner>.]<object-name>
<object-name> ::=
<table-name> | <materialized-view-name>
Parameters
CHECKSUM
Validates the checksum on each page of a database. The CHECKSUM clause ensures that database pages
have not been modified on disk. When a database is created with checksums enabled, a checksum is
calculated for each database page before it is written to disk. CHECKSUM reads each database page
directly from disk — not via the database server's cache — and calculates the checksum for each page. If
the calculated checksum for a page does not match the stored checksum for that page, an error occurs
and information about the invalid page appears in the database server messages window.
The CHECKSUM clause is not recommended for databases that have checksums disabled because it reads
the entire database from disk.
DATABASE
Ensures that the free map correctly identifies pages as either allocated or free and that no BLOBs have
been orphaned. The DATABASE clause also performs checksum validation and verifies that each database
page belongs to the correct object. For example, on a table page, the table ID must identify a valid table
whose definition must include the current page in its set of table pages.
The DATABASE clause brings pages into the database server's cache in sequential order. This results in
their validation, as the database server always verifies the contents and checksums of pages brought into
the cache. If you start database validation while the database cleaner is running, the validation does not
run until the database cleaner is finished running.
TABLE
Validates the specified table and all of its indexes by checking that the set of all rows and values in the base
table matches the set of rows and values contained in each index. The TABLE clause also traverses all the
table's BLOBs, verifies BLOB allocation maps, and detects orphaned BLOBs. The TABLE clause checks the
physical structure of the table's index pages and verifies the order of the index hash values, and the index's
uniqueness requirements (if any are specified).
For foreign key indexes, unless the WITH EXPRESS CHECK clause is specified, each value is looked up in
the primary key table to verify that referential integrity is intact. Because the TABLE clause, like the
DATABASE clause, uses the database server's cache, the database server also verifies the checksums and
basic validity of all pages in use by a table and its indexes.
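As a sketch of the TABLE clause described above (the table name Departments is illustrative, not taken from this document), a table and all of its indexes can be validated with:

```sql
-- Validate the Departments table and all of its indexes; checks that the
-- rows and values in the base table match those in each index.
-- The table name is a placeholder for any base table you own.
VALIDATE TABLE Departments;
```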
INDEX
Performs the same operations as the TABLE clause, except that it validates only the specified index and its
underlying table; other indexes are not checked.
For foreign key indexes, unless the WITH EXPRESS CHECK clause is specified, each value is looked up in
the primary key table to verify that referential integrity is intact. Specifying the WITH EXPRESS CHECK
clause disables referential integrity checking and can therefore significantly improve performance. If the
specified index is not a foreign key index, WITH EXPRESS CHECK has no effect.
TEXT INDEX
Verifies that the positional information for the terms in the index is intact. If the positional information is
not intact, an error is generated and you must rebuild the text index. If the text index is either auto or
manual, you can rebuild the text index by executing the REFRESH TEXT INDEX statement. If the
generated error concerns an immediate text index, you must drop the immediate index and create a new
one.
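If validation reports a damaged auto or manual text index, it can be rebuilt along these lines (the index and table names are illustrative):

```sql
-- Rebuild a damaged auto or manual text index; names are placeholders.
REFRESH TEXT INDEX MyTextIndex ON Customers;
```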
Privileges
Requires VALIDATE ANY OBJECT system privilege. See GRANT System Privilege Statement [page 1511] for
assistance with granting privileges.
Standards
Related Information
Validates changes to the settings of existing LDAP server configuration objects before applying them.
Syntax
<ldapua-server-attributes> ::=
   SEARCH DN URL { '<URL_string>' | NULL }
   | ACCESS ACCOUNT { '<DN_string>' | NULL }
Go to:
● Remarks
● Privileges
● Standards
● Examples
Parameters
(back to top)
ldapua-server-name
The name of the LDAP server configuration object to validate.
SEARCH DN URL { 'URL_string' | NULL }
Identifies the host (by name or by IP address), port number, and the search to be performed for the DN
lookup for a given user ID. This value is validated for correct LDAP URL syntax before it is stored in the
ISYSLDAPSERVER system table. The maximum size for this string is 1024 bytes.
ACCESS ACCOUNT { 'DN_string' | NULL }
A user created on the LDAP server for use by SAP IQ, not a user within SAP IQ. The distinguished name
(DN) for this user is used to connect to the LDAP server. This user has permissions within the LDAP server
to search for DNs by user ID in the locations specified by the SEARCH DN URL. The maximum size for this
string is 1024 bytes.
IDENTIFIED BY { 'password' | NULL }
Provides the password associated with the ACCESS ACCOUNT user. The password is stored using
symmetric encryption on disk. Use the value NULL to clear the password and set it to none. The maximum
size of a clear text password is 255 bytes.
IDENTIFIED BY ENCRYPTED { encrypted-password | NULL }
Configures the password associated with the ACCESS ACCOUNT distinguished name in an encrypted
format. The binary value is the encrypted password and is stored on disk as is. Use the value NULL to clear
the password and set it to none. The maximum size of the binary is 289 bytes.
AUTHENTICATION URL { 'URL_string' | NULL }
Identifies the host (by name or IP address) and the port number of the LDAP server to use for
authentication of the user. This is the value defined for <URL_string> and is validated for correct LDAP
URL syntax before it is stored in the ISYSLDAPSERVER system table. The DN of the user obtained from a prior
DN search and the user password bind a new connection to the authentication URL. A successful
connection to the LDAP server is considered proof of the identity of the connecting user. The maximum
size for this string is 1024 bytes.
CONNECTION RETRIES retry_value
Specifies the number of retries on connections from SAP IQ to the LDAP server for both DN searches and
authentication. The valid range of values is 1 through 60, with a default value of 3.
TLS
Defines whether the TLS or Secure LDAP protocol is used for connections to the LDAP server for both DN
searches and authentication. When set to ON, the TLS protocol is used and the URL begins with "ldap://".
When set to OFF (or not specified), the Secure LDAP protocol is used and the URL begins with "ldaps://".
When using the TLS protocol, specify the database security option TRUSTED_CERTIFICATES_FILE with a
file name containing the certificate of the Certificate Authority (CA) that signed the certificate used by the
LDAP server.
CHECK userid
Remarks
(back to top)
This statement is useful for an administrator when setting up a new server to use LDAP user authentication,
and for diagnosing problems between the LDAP server configuration object and the external LDAP server. Any
connection made by the VALIDATE LDAP SERVER statement is temporary and is closed by the end of the
statement.
When validating the LDAP server configuration object by name, definitions from prior CREATE LDAP SERVER
and ALTER LDAP SERVER statements are used. Alternately, when <ldapua-server-attributes> are
specified instead of the LDAP server configuration object name, the specified attributes are validated. When
<ldapua-server-attributes> are specified, the URLs are parsed to identify syntax errors, and statement
processing stops if a syntax error is detected.
Whether using an LDAP server configuration object name or a successfully parsed set of <ldapua-server-
attributes>, a connection to the external LDAP server is attempted. If the ACCESS ACCOUNT parameter
and a password are specified, those values are used to establish the connection to the SEARCH DN URL;
that is, the connection uses the SEARCH DN URL, the ACCESS ACCOUNT, and the ACCESS ACCOUNT
password.
When using the optional CHECK clause, the userID is used in the search to validate the existence of the user on
the external LDAP server. When the expected DN value for a given user is known, the value can be specified,
and is compared with the result of the search to determine success or failure.
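A hedged sketch of that CHECK usage (the server name, user ID, and distinguished name are illustrative placeholders, not values from this document):

```sql
-- Validate that myusername exists on the external LDAP server and that the
-- DN found by the search matches the expected value; all names are placeholders.
VALIDATE LDAP SERVER apps_primary
CHECK 'myusername' 'cn=myusername,ou=users,dc=mycompany,dc=com';
```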
Privileges
(back to top)
Standards
(back to top)
Examples
(back to top)
● This example assumes the apps_primary LDAP server configuration object was created as follows:
● This example validates the existence of a userID myusername by using the optional CHECK clause to
compare the userID to the expected user distinguished name (enclosed in quotation marks) on the
apps_primary LDAP server configuration object:
● In this example, the name of the LDAP server configuration object does not have to be defined in the
VALIDATE LDAP SERVER statement if you include the search attributes:
Related Information
Delays processing for the current connection for a specified amount of time or until a given time.
Syntax
WAITFOR {
DELAY <time_value> | TIME <time_value> }
[ CHECK EVERY <integer> ]
[ AFTER MESSAGE BREAK ]
Parameters
DELAY
Processing is suspended for the length of time specified by <time_value>.
TIME
Processing is suspended until the server time reaches the <time_value> specified.
time_value
String.
CHECK EVERY integer
Controls how often the WAITFOR statement wakes up. By default, WAITFOR wakes up every 5 seconds. The
value is in milliseconds, and the minimum value is 250 milliseconds.
AFTER MESSAGE BREAK
The WAITFOR statement can be used to wait for a message from another connection. In most cases, when
a message is received it is forwarded to the application that executed the WAITFOR statement and the
WAITFOR statement continues to wait. If the AFTER MESSAGE BREAK clause is specified, when a message
is received from another connection, the WAITFOR statement completes. The message text is not
forwarded to the application, but it can be accessed by obtaining the value of the MessageReceived
connection property.
Remarks
The WAITFOR statement wakes up periodically (every 5 seconds by default) to check if it has been canceled or
if messages have been received. If neither of these has happened, the statement continues to wait.
If the current server time is greater than the time specified, processing is suspended until that time on the
following day.
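For instance (the time values are illustrative):

```sql
-- Suspend this connection for three seconds:
WAITFOR DELAY '00:00:03';

-- Suspend this connection until 8:00 PM server time
-- (or 8:00 PM the following day if that time has already passed):
WAITFOR TIME '20:00:00';
```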
WAITFOR provides an alternative to the following statement, and might be useful for customers who choose not
to enable Java in the database:
Privileges
None
Side Effects
The implementation of this statement uses a worker thread while it is waiting. This uses up one of the threads
specified by the -gn server command line option.
Standards
Examples
Related Information
Syntax
Parameters
owner
The owner of the semaphore. <owner> can also be specified using an indirect identifier (for example,
`[@<variable-name>]`).
semaphore-name
The name of the semaphore. <semaphore-name> can also be specified using an indirect identifier (for
example, `[@<variable-name>]`).
TIMEOUT clause
Specify the duration of time, in milliseconds, to wait to decrement the counter associated with the
semaphore. If this clause is not specified, then the connection waits indefinitely until the count can be
decremented, or until an error is returned.
Remarks
The WAITFOR SEMAPHORE statement decrements the counter associated with the semaphore. If the counter
is a positive integer, then the count is decremented and the statement completes. If the counter is 0, then the
connection waits until the counter is a positive integer, or until the duration specified by the TIMEOUT clause
passes, at which point an error is returned indicating the timeout.
An error is returned if the current connection is identified during deadlock detection while waiting on the
semaphore. An error is also returned if the semaphore is dropped.
If a connection that notified a semaphore is dropped or canceled, the counter decrement persists, so your
application needs to be able to address this case.
See GRANT System Privilege Statement [page 1511] for assistance with granting privileges.
Side effects
None.
Standards
Example
The following statement decrements the counter for the license_counter semaphore by 1. If the semaphore
count is 0, then the statement waits indefinitely until the counter is incremented.
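A sketch of that statement, assuming the license_counter semaphore was created earlier:

```sql
-- Decrement the license_counter semaphore by 1; with no TIMEOUT clause,
-- the connection waits indefinitely while the count is 0.
WAITFOR SEMAPHORE license_counter;
```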
Related Information
Syntax
WHENEVER
{ SQLERROR | SQLWARNING | NOTFOUND }
… { GOTO <label> | STOP | CONTINUE | <C code;> }
Remarks
WHENEVER can be put anywhere in an Embedded SQL C program, and does not generate any code. The
preprocessor generates code following each successive SQL statement. The error action remains in effect for
all Embedded SQL statements from the source line of the WHENEVER statement until the next WHENEVER
statement with the same error condition, or the end of the source file.
WHENEVER is provided for convenience in simple programs. Most of the time, checking the sqlcode field of the
SQLCA (SQLCODE) directly is the easiest way to check error conditions. In this case, WHENEVER is not used.
The WHENEVER statement causes the preprocessor to generate an <if ( SQLCODE )> test after each
statement.
Note
The error conditions are in effect based on positioning in the C language source file and not on when the
statements are executed.
Privileges
None
Standards
● The following example executes done when the NOTFOUND clause is met:
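A minimal Embedded SQL sketch of that pattern (the cursor, host variable, and label names are illustrative):

```c
/* Jump to the done label when a FETCH returns no more rows. */
EXEC SQL WHENEVER NOTFOUND GOTO done;
for( ;; ) {
    EXEC SQL FETCH NEXT cur INTO :emp_name;  /* cursor assumed already open */
    /* process the fetched row ... */
}
done:
EXEC SQL CLOSE cur;
```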
Related Information
Syntax
WHILE <expression>
... <statement>
Remarks
The WHILE conditional affects the performance of only a single SQL statement, unless statements are grouped
into a compound statement between the keywords BEGIN and END.
The BREAK statement and CONTINUE statement can be used to control execution of the statements in the
compound statement. The BREAK statement terminates the loop, and execution resumes after the END
keyword, marking the end of the loop. The CONTINUE statement causes the WHILE loop to restart, skipping any
statements after the CONTINUE.
Privileges
None
Examples
The following example shows a BREAK statement that breaks the WHILE loop, if the most expensive product
has a price less than $50. Otherwise, the loop continues until the average price is greater than $30:
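A sketch matching that description, in Transact-SQL style with the compound statement between BEGIN and END (the table and column names are illustrative):

```sql
-- Double prices until the average exceeds $30, but break out early
-- if the most expensive product is still priced under $50.
WHILE ( SELECT AVG( UnitPrice ) FROM Products ) <= 30
BEGIN
    UPDATE Products SET UnitPrice = UnitPrice * 2
    IF ( SELECT MAX( UnitPrice ) FROM Products ) < 50
        BREAK
END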
Related Information
Database options and Interactive SQL options customize and modify database behavior.
SAP IQ database options are divided into three classes: general, Transact-SQL compatibility, and Interactive
SQL.
In this section:
Database options control many aspects of database behavior including compatibility, error handling, and
concurrency.
For example, you can use database options for the purposes such as:
● Compatibility – lets you control how much like SAP Adaptive Server Enterprise your SAP IQ database
operates, and whether SQL that does not conform to SQL92 generates errors.
● Error handling – lets you control what happens when errors, such as dividing by zero or overflow errors,
occur.
● Concurrency and transactions – lets you control the degree of concurrency and details of COMMIT
behavior using options.
You set options with the SET OPTION statement, which has this general syntax:
Specify a user ID or role name to set the option only for that user or role. Every user belongs to the PUBLIC
role. If no user ID or role is specified, the option change is applied to the currently logged on user ID that issued
the SET OPTION statement.
For example, this statement applies a change to the PUBLIC user ID, a role to which all users belong:
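A hedged sketch of such a statement (the option name and value are illustrative):

```sql
-- Permanently set an option for the PUBLIC role; the setting applies to
-- every user who has not overridden it at the user level.
SET OPTION PUBLIC.ANSINULL = 'OFF';
```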
Note
When you set an option to TEMPORARY without specifying a user or role, the new option value takes effect
only for the currently logged-on user ID that issued the statement, and only for the duration of the
connection. When you set an option to TEMPORARY for the PUBLIC role, the change remains in place for as
long as the database is running — when the database shuts down, TEMPORARY options for the PUBLIC role
revert back to their permanent value.
When you set an option without issuing the TEMPORARY keyword, the new option value is permanent for the
user or role who issued the statement.
See Scope and Duration of Database Options, Temporary Options, and SET OPTION Statement for more
information on temporary versus permanent option values.
Note
For all database options that accept integer values, SAP IQ truncates any decimal <option-value>
setting to an integer value. For example, the value 3.8 is truncated to 3.
Caution
In this section:
Related Information
You can obtain a list of option settings, or the values of individual options, using sp_iqcheckoptions,
sa_conn_properties, the SET statement, SAP IQ Cockpit, and the SYSOPTIONS system view.
● For the connected user, the sp_iqcheckoptions stored procedure displays a list of the current value and
the default value of database options that have been changed from the default. sp_iqcheckoptions
considers all SAP IQ and SAP SQL Anywhere database options. SAP IQ modifies some SAP SQL Anywhere
option defaults, and these modified values become the new default values. Unless the new SAP IQ default
value is changed again, sp_iqcheckoptions does not list the option.
sp_iqcheckoptions also lists server start-up options that have been changed from the default values.
When a DBA runs sp_iqcheckoptions, he or she sees all options set on a permanent basis for all roles
and users and sees temporary options set for DBA. Users who are not DBAs see their own temporary
options. All users see nondefault server start-up options.
The sp_iqcheckoptions stored procedure requires no parameters. In Interactive SQL, run:
sp_iqcheckoptions
The system table DBA.SYSOPTIONDEFAULTS contains all of the names and default values of the SAP IQ
and SAP SQL Anywhere options. You can query this table to see all option default values.
● Current option settings for your connection are available as a subset of connection properties. You can list
all connection properties using the sa_conn_properties system procedure:
call sa_conn_properties
● In Interactive SQL, the SET statement with no arguments lists the current setting of options:
SET
SELECT *
FROM SYSOPTIONS
This shows all PUBLIC values, and those USER values that have been explicitly set.
● Use the connection_property system function to obtain an individual option setting. For example, this
statement returns the value of the Ansinull option:
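That query can be sketched as:

```sql
-- Return the current connection's setting of the Ansinull option.
SELECT CONNECTION_PROPERTY( 'Ansinull' );
```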
You can set options at three levels of scope: public, user, and temporary.
Temporary options take precedence over user and public settings. User-level options take precedence over
public settings. If you set a user-level option for the current user, the corresponding temporary option is set as
well.
Some options, such as COMMIT behavior, are database-wide in scope. Setting these options requires DBA
permissions. Other options, such as ISOLATION_LEVEL, can also be applied to only the current connection,
and need no special permissions.
Changes to option settings take place at different times, depending on the option. Changing a global option
such as RECOVERY_TIME takes place the next time the server is started. Some of the options that take effect
after the server is restarted:
● CACHE_PARTITIONS
● CHECKPOINT_TIME
● OS_FILE_CACHE_BUFFERING
● OS_FILE_CACHE_BUFFERING_TEMPDB
● PREFETCH_BUFFER_LIMIT
● PREFETCH_BUFFER_PERCENT
● RECOVERY_TIME
● SWEEPER_THREADS_PERCENT
● WASH_AREA_BUFFERS_PERCENT
Options that affect only the current connection generally take place immediately. For example, you can change
option settings in the middle of a transaction.
Caution
Changing options when a cursor is open can lead to unreliable results. For example, changing
DATE_FORMAT might not change the format for the next row when a cursor is opened. Depending on the
way the cursor is being retrieved, it might take several rows before the change works its way to the user.
Adding the TEMPORARY keyword to the SET OPTION statement changes the duration of the change.
Ordinarily an option change is permanent: it will not change until it is explicitly changed using the SET OPTION
statement.
When the SET TEMPORARY OPTION statement is executed, the new option value takes effect only for the
current connection, and only for the duration of the connection.
When the SET TEMPORARY OPTION is used to set a PUBLIC option, the change is in place for as long as the
database is running. When the database is shut down, TEMPORARY options for the PUBLIC user ID revert back
to their permanent value.
Setting an option for the PUBLIC user ID temporarily offers a security advantage. For example, when the
LOGIN_MODE option is enabled, the database relies on the login security of the system on which it is running.
Enabling LOGIN_MODE temporarily means that a database relying on the security of a Windows domain will not
be compromised if the database is shut down and copied to a local machine. In this case, the LOGIN_MODE
option reverts to its permanent value, which could be Standard, a mode where integrated logins are not
permitted.
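For example (the option value shown is an assumption about a typical integrated-login configuration):

```sql
-- Allow integrated logins only while this server instance runs;
-- the setting reverts to the permanent value at shutdown.
SET TEMPORARY OPTION PUBLIC.LOGIN_MODE = 'Standard,Integrated';
```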
PUBLIC options can be set for a user, user extended role, or the PUBLIC role. They can be set for self or for
another user or role.
Setting a PUBLIC option for the PUBLIC role sets the value for all users who do not already have the PUBLIC
option set at the user level. Setting a PUBLIC option for a user or user-extended role overrides any value
defined at the PUBLIC role level.
No system privilege is required to set a PUBLIC option for self, but setting one for another user, a user-
extended role, or the PUBLIC role requires the SET ANY PUBLIC OPTION system privilege. PUBLIC options
cannot be set for user-defined roles. PUBLIC database options take effect immediately; no shutdown and
restart of the database server is required for the change to take effect.
SECURITY options are a special category of options relevant to the security of the database. Depending on
the option, they can be set at the user level or the PUBLIC level.
Changes to SECURITY database options take effect immediately. Requires the SET ANY SECURITY OPTION
system privilege to set SECURITY database options.
No shut down and restart of the database server is required for the change to take effect.
SYSTEM options are a special category of options relevant to the operation of the database server. They can
be set at the user level or the PUBLIC level.
Requires the SET ANY SYSTEM OPTION system privilege to set SYSTEM options. Takes effect immediately.
Omit the <option-value> to delete the option setting from the database.
If <option-value> is omitted, the specified option setting is deleted from the database. If <option-value>
is a personal option setting, the value reverts back to the PUBLIC setting. If a TEMPORARY option is deleted, the
option setting reverts back to the permanent setting.
If you incorrectly type the name of an option when you are setting the option, the incorrect name is saved in the
SYSOPTION table. You can remove the incorrectly typed name from the SYSOPTION table by setting the option
PUBLIC with an equality after the option name and no value:
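A sketch of such a removal, assuming the mistyped name saved in the SYSOPTION table was a_mistyped_name:

```sql
-- Delete the mistyped option setting: option name followed by an
-- equality sign and no value.
SET OPTION PUBLIC.a_mistyped_name =;
```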
For example, if you set an option and incorrectly type the name, you can verify that the option was saved by
selecting from the SYSOPTIONS view:
PUBLIC a_mistyped_name ON
PUBLIC Abort_On_Error_File
PUBLIC Abort_On_Error_Line 0
PUBLIC Abort_On_Error_Number 0
...
Remove the incorrectly typed option by setting the option to no value, then verify that the option is removed:
PUBLIC Abort_On_Error_File
PUBLIC Abort_On_Error_Line 0
PUBLIC Abort_On_Error_Number 0
...
If you remove the PUBLIC option and then try to add the USER option, an error message displays:
Couldn't execute the statement.
Invalid option 'chained' -- no PUBLIC setting exists
SQLCODE=-200, ODBC 3 State="42000"
Line 1, Column 29
To reset the PUBLIC option to the default value, explicitly set the default value:
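For example, to restore the default for the chained option mentioned in the error message above (the default value 'On' is an assumption here):

```sql
-- Explicitly set the PUBLIC option back to its default value;
-- 'On' as the CHAINED default is an assumption, not from this document.
SET OPTION PUBLIC.chained = 'On';
```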
You can use stored procedures to configure the initial database option settings of a user.
You can connect to SAP IQ through the TDS (tabular data stream) protocol (Open Client and jConnect for JDBC
connections) or through the SAP IQ protocol (ODBC, Embedded SQL).
If users have both TDS and the SAP IQ-specific protocol, you can configure their initial settings using stored
procedures. As it is shipped, SAP IQ uses this method to set Open Client connections and jConnect
connections to reflect default SAP Adaptive Server Enterprise behavior.
The initial settings are controlled using the LOGIN_PROCEDURE option, which is called after all the checks have
been performed to verify that the connection is valid. The LOGIN_PROCEDURE option names a stored
procedure to run when users connect. The default setting is to use the sp_login_environment system
stored procedure. You can specify a different stored procedure.
The sp_login_environment procedure checks to see if the connection is being made over TDS. If it is, it calls
the sp_tsql_environment procedure, which sets several options to new default values for the current
connection.
Related Information
See New Features Summary SAP IQ 16.1 for information about database options deprecated in this release.
Procedure
Issue the SET OPTION statement, which has this general syntax:
Related Information
General database options is the class of options consisting of all options except Transact-SQL compatibility
options and Interactive SQL options.
In this section:
Related Information
The data extraction facility allows you to extract data from a database by redirecting the output of a SELECT
statement from the standard interface to one or more disk files or named pipes.
The TEMP_EXTRACT_... database options are used to control the data extraction feature.
Transact-SQL compatibility options allow SAP IQ behavior to be compatible with SAP Adaptive Server
Enterprise, or to both support old behavior and allow ISO SQL92 behavior.
For further compatibility with SAP ASE, you can set some of these options for the duration of the current
connection using the Transact-SQL SET statement instead of the SAP IQ SET OPTION statement.
In this section:
Related Information
The default setting for some options differs from the SAP Adaptive Server Enterprise default setting. To ensure
compatible behavior, you should explicitly set the options.
When a connection is made using the Open Client or JDBC interfaces, some option settings are explicitly set
for the current connection to be compatible with SAP ASE.
ALLOW_NULLS_BY_DEFAULT OFF
ANSINULL OFF
CHAINED OFF
CONTINUE_AFTER_RAISERROR ON
DATE_FORMAT YYYY-MM-DD
DATE_ORDER MDY
ESCAPE_CHARACTER OFF
ISOLATION_LEVEL 1
ON_TSQL_ERROR CONDITIONAL
QUOTED_IDENTIFIER OFF
TIME_FORMAT HH:NN:SS.SSS
TSQL_VARIABLES OFF
Interactive SQL options change how Interactive SQL interacts with the database.
Syntax
Syntax 1
Syntax 2
SET PERMANENT
Syntax 3
SET
Parameters
<userid> ::=
<identifier>, <string> or <host-variable>
<option-name> ::=
<identifier>, <string> or <host-variable>
Remarks
In Syntax 1, you cannot use the TEMPORARY keyword between the BEGIN and END keywords of a compound
statement.
In Syntax 2, SET PERMANENT stores all current Interactive SQL options in the SYSOPTIONS system table.
These settings are automatically established every time Interactive SQL is started for the current user ID.
Syntax 3 is used to display all of the current option settings. If there are temporary options set for Interactive
SQL or the database server, these are displayed; otherwise, the permanent option settings are displayed.
Related Information
Descriptions of general, Transact-SQL compatibility, and Interactive SQL database options. Some option
names are followed by a class indicator in square brackets.
● [Interactive SQL] – The option changes how Interactive SQL interacts with the database.
● [TSQL] – The option allows SAP IQ behavior to be made compatible with SAP Adaptive Server Enterprise,
or to both support old behavior and allow ISO SQL92 behavior.
In this section:
Decrypts encrypted data upgraded or imported from databases prior to SAP IQ 15.4.
Importing or upgrading data encrypted in a previous version of SAP IQ to a later release can result in
decryption errors. Use this option to ensure that your encrypted data is decrypted properly during an import or
upgrade.
Allowed Values
Default
-1
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
In some cases, SAP IQ may not be able to read encrypted data in a database upgraded from a previous release:
There was an error reading the results of the SQL statement.
The displayed results may be incorrect or incomplete.
Decryption error: Incorrect CAST type varchar(16) for decrypt data of
type numeric(16,0).
-- (hos_encrypt.cxx 359)
SQLCODE=-1001064, ODBC 3 State="HY000"
This issue applies to encrypted data upgraded or imported from databases prior to SAP IQ 15.4. If this error
occurs, contact technical support. Diagnosing encryption issues requires the assistance of an SAP IQ product
support engineer. Incorrect usage can result in decryption errors.
Related Information
The amount of time before SAP IQ removes a shut down node from the affinity map and reassigns its partitions
to other nodes.
Allowed Values
Default
10 minutes
Remarks
The amount of time before SAP IQ removes a shut down node from the affinity map and reassigns its partitions
to other nodes.
Related Information
Allowed Values
-6 to 6
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
For aggregation (GROUP BY, DISTINCT, SET functions) within a query, the SAP IQ optimizer has a choice of
several algorithms for processing the aggregate. AGGREGATION_PREFERENCE lets you override the costing
decision of the optimizer when choosing the algorithm. The option does not override internal rules that
determine whether an algorithm is legal within the query engine.
This option is normally used for internal testing and for manually tuning queries that the optimizer does not
handle well. Only experienced DBAs should use it. Inform SAP Technical Support if you need to set
AGGREGATION_PREFERENCE, as the need to set this option might indicate that a change to the optimizer
is appropriate.
Related Information
Controls whether new columns created without specifying either NULL or NOT NULL are allowed to contain
NULL values.
Allowed Values
ON, OFF
Default
● ON
● OFF for Open Client and JDBC connections
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SECURITY OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Enable this option to read from files on a client computer, for example by using the READ_CLIENT_FILE
function.
Related Information
Controls what values you can set the SNAPSHOT_VERSIONING option to. Applies to RLV-enabled tables only.
Allowed Values
Default
any
Scope
Related Information
Controls whether cursors that were opened WITH HOLD are closed when a ROLLBACK is performed.
Allowed Values
ON
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The ANSI SQL/3 standard requires all cursors be closed when a transaction is rolled back. This option forces
that behavior and cannot be changed. The CLOSE_ON_ENDTRANS option overrides this option.
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
Remarks
With ANSI_PERMISSIONS ON, SQL92 permission requirements for DELETE and UPDATE statements are
checked. The default value is OFF in SAP Adaptive Server Enterprise. This table outlines the differences:
UPDATE
● With ANSI_PERMISSIONS OFF – UPDATE permission on the columns where values are being set.
● With ANSI_PERMISSIONS ON – UPDATE permission on the columns where values are being set,
SELECT permission on all columns appearing in the WHERE clause, and SELECT permission on all
columns on the right side of the SET clause.
Controls the behavior of the SUBSTRING (SUBSTR) function when negative values are provided for the start or
length parameters.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the ANSI_SUBSTRING option is set to ON, the behavior of the SUBSTRING function corresponds to
ANSI/ISO SQL/2003 behavior. A negative or zero start offset is treated as if the string were padded on the left
with non-characters, and gives an error if a negative length is provided.
When this option is set to OFF, the behavior of the SUBSTRING function is the same as in earlier versions of SAP
IQ: a negative start offset means an offset from the end of the string, and a negative length means the
substring ends the specified number of characters to the left of the starting offset.
Avoid using non-positive start offsets or negative lengths with the SUBSTRING function. Where possible, use
the LEFT or RIGHT functions instead.
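As a sketch of that advice, these calls avoid negative arguments entirely and behave the same under either setting of ANSI_SUBSTRING:

```sql
-- Instead of SUBSTRING( 'abcdefgh', -2, 4 ) to take trailing characters:
SELECT RIGHT( 'abcdefgh', 2 );   -- 'gh'
-- Instead of SUBSTRING( 'abcdefgh', 0, 4 ) to take leading characters:
SELECT LEFT( 'abcdefgh', 4 );    -- 'abcd'
```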
Example
These examples show the difference in the values returned by the SUBSTRING function based on the setting of
the ANSI_SUBSTRING option:
SUBSTRING( 'abcdefgh',-2,4 );
ansi_substring = Off ==> 'gh'
// substring starts at second-last character
ansi_substring = On ==> 'a'
// takes the first 4 characters of
// ???abcdefgh and discards all ?
SUBSTRING( 'abcdefgh',4,-2 );
ansi_substring = Off ==> 'cd'
ansi_substring = On ==> value -2 out of range for destination
SUBSTRING( 'abcdefgh',0,4 );
ansi_substring = Off ==> 'abcd'
ansi_substring = On ==> 'abcd'
Related Information
Allowed Values
OFF, CURSORS, STRICT
Default
CURSORS
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
SAP IQ provides several extensions that allow updates that are not permitted by the ANSI SQL standard. These
extensions provide powerful, efficient mechanisms for performing updates. However, in some cases, they
cause behavior that is not intuitive. This behavior might produce anomalies such as lost updates if the user
application is not designed to expect the behavior of these extensions.
ANSI_UPDATE_CONSTRAINTS controls whether updates are restricted to those permitted by the SQL92
standard.
If the option is set to STRICT, updates not permitted by the SQL92 standard are prevented. If the option is set
to CURSORS, these same restrictions are in place, but only for cursors. If a cursor is not
opened with FOR UPDATE or FOR READ ONLY, the database server determines whether updates are
permitted based on the SQL92 standard.
Example
Option 1
Set ANSI_UPDATE_CONSTRAINTS to STRICT:
This results in an error indicating that the attempted update operation is not allowed.
Option 2
Set ANSI_UPDATE_CONSTRAINTS to CURSORS or OFF:
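A minimal sketch of the two settings (the update statement being tested is omitted in this text, so only the option statements are shown):

```sql
-- Option 1: reject all updates not permitted by the SQL92 standard
SET TEMPORARY OPTION ANSI_UPDATE_CONSTRAINTS = 'STRICT';
-- Option 2: restrict only cursor-based updates, or lift the restriction
SET TEMPORARY OPTION ANSI_UPDATE_CONSTRAINTS = 'CURSORS';
SET TEMPORARY OPTION ANSI_UPDATE_CONSTRAINTS = 'OFF';
```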
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
With ANSINULL ON, results of comparisons with NULL using '=' or '!=' are unknown. This includes results of
comparisons implied by other operations such as CASE.
Setting ANSINULL to OFF allows comparisons with NULL to yield results that are not unknown, for
compatibility with SAP Adaptive Server Enterprise.
Note
Unlike SAP SQL Anywhere, SAP IQ does not generate the warning “null value eliminated in
aggregate function” (SQLSTATE=01003) for aggregate functions on columns containing NULL values.
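As an illustrative sketch (the ManagerID column is hypothetical), the effect on a simple predicate:

```sql
SET TEMPORARY OPTION ANSINULL = 'OFF';
-- With ANSINULL OFF, '= NULL' matches rows whose value is NULL,
-- mirroring Transact-SQL behavior in SAP ASE.
SELECT * FROM Employees WHERE ManagerID = NULL;
-- With ANSINULL ON (the default), that comparison is unknown and
-- returns no rows; use IS NULL instead:
SELECT * FROM Employees WHERE ManagerID IS NULL;
```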
Related Information
Specifies that the display of SAP IQ binary columns is consistent with the display of SAP Adaptive Server
Enterprise binary columns.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option affects only columns in the IQ store. It does not affect variables, catalog store columns or SAP SQL
Anywhere columns. When this option is ON, SAP IQ displays the column in readable ASCII format; for example,
0x1234567890abcdef. When this option is OFF, SAP IQ displays the column as binary output (not ASCII).
Set ASE_BINARY_DISPLAY OFF to support bulk copy operations on binary data types. SAP IQ supports bulk
loading of remote data via the LOAD TABLE USING CLIENT FILE statement.
Related Information
Specifies that output of SAP IQ functions, including INTTOHEX and HEXTOINT, is consistent with the output of
SAP Adaptive Server Enterprise functions.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When ASE_FUNCTION_BEHAVIOR is ON, some of the SAP IQ data type conversion functions, including
HEXTOINT and INTTOHEX, return output that is consistent with the output of SAP ASE functions. The
differences in the SAP ASE and SAP IQ output, with respect to formatting and length, exist because SAP ASE
primarily uses signed 32-bit values as the default and SAP IQ primarily uses unsigned 64-bit values as the
default. In this mode, SAP IQ does not provide 64-bit integer output for these functions, as SAP ASE does not
use a 64-bit integer default.
In this example, the HEXTOINT function returns a different value based on whether ASE_FUNCTION_BEHAVIOR
is ON or OFF.
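A sketch of the difference (exact return types depend on the server version; the signed reading of 0xFFFFFFFF as -1 is ordinary two's-complement arithmetic):

```sql
SET TEMPORARY OPTION ASE_FUNCTION_BEHAVIOR = 'ON';
-- ASE-compatible: 0xFFFFFFFF is interpreted as a signed 32-bit value (-1)
SELECT HEXTOINT( '0xFFFFFFFF' );
SET TEMPORARY OPTION ASE_FUNCTION_BEHAVIOR = 'OFF';
-- IQ default: the same input is treated as an unsigned 64-bit value
SELECT HEXTOINT( '0xFFFFFFFF' );
```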
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
Remarks
Auditing is the recording of details about many events in the database in the transaction log. Auditing provides
some security features, at the cost of some performance. When you turn on auditing for a database, you
cannot stop using the transaction log. You must turn auditing off before you turn off the transaction log.
Databases with auditing on cannot be started in read-only mode.
For the AUDITING option to work, you must set the auditing option to ON, and use the
sa_enable_auditing_type system procedure to indicate the types of information to audit, including any
combination of permission checks, connection attempts, DDL statements, public options, triggers. Auditing
will not take place if either of these conditions is true:
● The AUDITING option is set to OFF.
● All auditing types have been disabled (for example, with the sa_disable_auditing_type system procedure).
If you set the AUDITING option to ON, and do not specify auditing options, all types of auditing information are
recorded.
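A sketch of enabling auditing; the auditing-type names accepted by the procedure vary by release, so treat the argument shown as illustrative:

```sql
-- Turn on auditing for the database
SET OPTION "PUBLIC".AUDITING = 'ON';
-- Record only connection attempts and permission checks
CALL sa_enable_auditing_type( 'connect,permissions' );
```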
Related Information
Allowed values
On, Off
Default
Off
Remarks
If this option is set to On, then the database server automatically commits after every request. This option can
only be set temporarily for a connection.
When an application enables automatic commit using the specific driver API, the SQL Anywhere JDBC, ODBC,
ADO.NET, and OLE DB drivers automatically set the auto_commit option to On if they are connected to an
SAP IQ 16.1 database server. For earlier versions, the driver reverts to handling automatic
commits on the client side. By default, automatic commit is enabled for these drivers.
Note
Do not set the auto_commit server option directly when using an API such as JDBC, ODBC, ADO.NET, or
OLE DB. Use the API-specific mechanism for enabling or disabling automatic commit. For example, in
ODBC set the SQL_ATTR_AUTOCOMMIT connection attribute using SQLSetConnectAttr. When you use
the API, the driver can track the current setting of automatic commit.
Note
Use a BEGIN block to set the database option from an Interactive SQL session to avoid setting the
Interactive SQL option of the same name:
BEGIN
SET TEMPORARY OPTION AUTO_COMMIT = 'ON';
END;
Use this Interactive SQL command to verify the new setting of the database option:
SET;
Note
The auto_commit option is different from the chained option. Setting auto_commit to On forces the
database server to commit after every request. Setting the chained option to Off forces the database server
to commit after each statement. This distinction is most important when executing a stored procedure.
Setting the chained option to Off will result in a commit request after the execution of each individual
statement within the procedure. Setting the auto_commit option to On will result in a single commit
request once the entire procedure finishes executing. In cases where automatic commit is necessary, it is
much better to use the auto_commit option rather than the chained option.
Registers newly created tables in the RLV store, enabling row-level versioning. RLV-enabled tables are eligible
for multiple-writer concurrent access. You can override this setting at the table level using the CREATE TABLE
statement.
Allowed Values
ON, OFF
Default
OFF
Scope
Remarks
When set to ON, newly created tables are registered in the RLV store. RLV-enabled tables are optimized for
real-time updates.
The { ENABLE | DISABLE } RLV STORE clause of the CREATE TABLE statement always overrides the
BASE_TABLES_IN_RLV_STORE option.
Once the BASE_TABLES_IN_RLV_STORE option is enabled, any newly created IQ base tables are automatically
RLV-enabled. Enabling this option has no impact on existing IQ base tables.
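A minimal sketch of the interaction (table and column names are illustrative):

```sql
SET OPTION "PUBLIC".BASE_TABLES_IN_RLV_STORE = 'ON';
-- New base tables are now RLV-enabled by default:
CREATE TABLE Orders ( id INT, amount INT );
-- The table-level clause always wins over the option:
CREATE TABLE Archive ( id INT, amount INT ) DISABLE RLV STORE;
```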
Maximum percentage of a user’s temp memory that a persistent bit-vector object can pin.
Allowed Values
0 to 100
Default
40
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option is primarily for use by Technical Support. If you change the value of
BIT_VECTOR_PINNABLE_CACHE_PERCENT, do so with extreme caution; first analyze the effect on a wide
variety of queries.
Controls the behavior in response to locking conflicts. BLOCKING is not supported on secondary nodes of a
multiplex.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When BLOCKING is off, a transaction receives an error when it attempts a write operation and is blocked by the
read lock of another transaction.
Related Information
Controls the length of time a transaction waits to obtain a lock. BLOCKING_TIMEOUT is not supported on
secondary nodes of a multiplex.
Allowed Values
Integer, in milliseconds.
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the blocking option is on, any transaction attempting to obtain a lock that conflicts with an existing lock
waits for the indicated number of milliseconds for the conflicting lock to be released. If the lock is not released
within blocking_timeout milliseconds, an error is returned for the waiting transaction.
Set the option to 0 to force all transactions attempting to obtain a lock to wait until all conflicting transactions
release their locks.
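For example, to make waiting transactions give up after five seconds (a sketch; set at whichever scope fits your deployment):

```sql
SET OPTION "PUBLIC".BLOCKING = 'ON';
-- Wait at most 5000 milliseconds for a conflicting lock to be released
SET OPTION "PUBLIC".BLOCKING_TIMEOUT = 5000;
```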
Related Information
Controls the way SAP IQ determines whether to continue prefetching B-tree pages for a given query.
Allowed Values
0 to 1000
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Use only if instructed to do so by SAP Technical Support. For queries that use HG (High_Group) indexes, SAP IQ
prefetches B-tree pages sequentially until it determines that prefetching is no longer useful. For some queries,
it might turn off prefetching prematurely. Increasing the value of BT_PREFETCH_MAX_MISS makes it more
likely that SAP IQ continues prefetching, but might also increase I/O unnecessarily.
If queries using HG indexes run more slowly than expected, try gradually increasing the value of
BT_PREFETCH_MAX_MISS.
Experiment with different settings to find the setting that gives the best performance. For most queries, useful
settings are in the range of 1 to 10.
Related Information
Restricts the size of the read-ahead buffer for the High_Group B-tree.
Allowed Values
0 to 100
Default
10
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
B-tree prefetch is activated by default for any sequential access to the High_Group index such as INSERT, large
DELETE, range predicates, and DBCC (Database Consistency Checker) commands.
BT_PREFETCH_SIZE limits the size of the read-ahead buffer for B-tree pages. Reducing prefetch size frees
buffers, but also degrades performance at some point. Increasing prefetch size might have marginal returns.
This option should be used in conjunction with the options PREFETCH_GARRAY_PERCENT,
GARRAY_INSERT_PREFETCH_SIZE, and GARRAY_RO_PREFETCH_SIZE for non-unique High_Group indexes.
Related Information
Determines per-page fill factor during page splits for B-tree structures.
Allowed Values
0 to 90
Default
50
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
B-tree structures are used by the HG, DT, TIME, and DTTM indexes. Splits of a B-tree page try to leave the
specified percentage empty to avoid splitting when new keys are inserted into the index.
Indexes reserve storage at the page level that can be allocated to new keys as additional data is inserted.
Reserving space consumes additional disk space, but can help the performance of incremental inserts. If future
plans include incremental inserts, and the new rows do not have values that are already present in the index, a
nonzero value for BTREE_PAGE_SPLIT_PAD_PERCENT may improve incremental insert performance.
If you do not plan to incrementally update the index, you can reduce the value of this option to save disk space.
Related Information
The maximum percentage of main buffer cache to use for affinity. Non-affinity data can use this area if
insufficient affinity data exists.
Allowed Values
0 to 100
Default
75
Scope
Remarks
This option defines the percentage of the buffer cache used for affinitized data buffers. SAP IQ buffer caches
are organized as a long MRU/LRU chain. Non-affinitized data buffers are put into the chain after affinitized
buffers when this percentage is non-zero, so that affinitized data stay in the cache longer than non-affinitized
data. If there are insufficient affinitized data buffers to fill this entire percentage, non-affinitized data may
consume the remainder.
Note
Before changing this option, check the value of the WASH_AREA_BUFFERS_PERCENT option.
WASH_AREA_BUFFERS_PERCENT affects the LRU side of the buffer cache and CACHE_AFFINITY_PERCENT
affects the MRU side. The total of these two values cannot exceed 100 percent.
Related Information
Sets the number of partitions to be used for the main and temporary buffer caches.
Allowed Values
0, 1, 2, 4, 8, 16, 32, 64
Default
0
Scope
Remarks
Partitioning the buffer cache can sometimes improve performance on systems with multiple CPUs by reducing
lock contention. Normally, you should rely on the value that SAP IQ calculates automatically, which is based on
the number of CPUs on your system. However, if you find that load or query performance in a multi-CPU
configuration is slower than expected, you might be able to improve it by setting a different value for
CACHE_PARTITIONS.
Both the number of CPUs and the platform can influence the ideal number of partitions. Experiment with
different values to determine the best setting for your configuration.
The value you set for CACHE_PARTITIONS applies to both the main and temp buffer caches. The absolute
maximum number of partitions is 64, for each buffer cache.
The number of partitions does not affect other buffer cache settings. It also does not affect statistics collected
by the IQ monitor; statistics for all partitions are rolled up and reported as a single value.
Example
In a system with 100 CPUs, if you do not set CACHE_PARTITIONS, SAP IQ automatically sets the number of
partitions to 16:
With this setting, there are 16 partitions for the main buffer cache and 16 partitions for the temp cache.
In the same system with 100 CPUs, to explicitly set the number of partitions to 8, specify:
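The explicit setting described above can be sketched as follows (the automatic value requires no statement):

```sql
-- Use 8 partitions each for the main and temp buffer caches
SET OPTION "PUBLIC".CACHE_PARTITIONS = 8;
```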
Related Information
Allowed Values
ON, OFF
Default
● ON
● OFF for Open Client and JDBC connections
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Controls the Transact-SQL transaction mode. In unchained mode (CHAINED = OFF) each statement is
committed individually unless an explicit BEGIN TRANSACTION statement is executed to start a transaction. In
chained mode (CHAINED = ON) a transaction is implicitly started before any data retrieval or modification
statement. For SAP Adaptive Server Enterprise, the default setting is OFF.
Related Information
Sets the maximum length of time, in minutes, that the database server runs without doing a checkpoint.
Allowed Values
Integer
Default
60
Remarks
This option is used with the RECOVERY_TIME option to decide when checkpoints should be done.
Related Information
Sets the number of rows that are returned from remote servers for each fetch.
Allowed Values
Integer
Default
50
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option sets the ODBC FetchArraySize value when you are using ODBC to connect to a remote database
server.
Related Information
Allowed Values
ON
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
When CLOSE_ON_ENDTRANS is set to ON (the default and only value allowed), cursors are closed at the end of
a transaction, which is Transact-SQL compatible behavior.
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● When set at the database level, the value becomes the default for any new user, but has no impact on
existing users.
● When set at the user level, overrides the PUBLIC value for that user only.
Requires the SET ANY PUBLIC OPTION system privilege to set this option.
Can be set temporary for an individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
To control statistics collection based on query execution time, you can use the QUERY_PLAN_MIN_TIME
option. Statistics for any query having an execution time less than the value of QUERY_PLAN_MIN_TIME are
not recorded.
Example
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
Intermediate RAISERROR statuses and codes are lost after the procedure terminates. If, at return time, an error
occurs along with the RAISERROR, then the error information is returned and the RAISERROR information is
lost. The application can query intermediate RAISERROR statuses by examining the @@error global variable at
different execution points.
Related Information
Controls reporting of data type conversion failures on fetching information from the database.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option controls whether data type conversion failures, when data is fetched from the database or inserted
into the database, are reported by the database as errors (CONVERSION_ERROR set to ON), or as warnings
(CONVERSION_ERROR set to OFF).
If the option is set to OFF, the warning SQLE_CANNOT_CONVERT is produced. Each thread doing data
conversion for a LOAD statement writes at most one warning message to the .iqmsg file.
If conversion errors are reported as warnings only, the NULL value is used in place of the value that could not
be converted. In Embedded SQL, an indicator variable is set to -2 for the column or columns that cause the
error.
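A sketch of relaxing the option for a bulk load (table, column, and file names are illustrative):

```sql
-- Report conversion failures as warnings for this connection only;
-- values that cannot be converted are loaded as NULL.
SET TEMPORARY OPTION CONVERSION_ERROR = 'OFF';
LOAD TABLE Sales ( region, amount )
    FROM '/tmp/sales.csv'
    DELIMITED BY ','
    ESCAPES OFF QUOTES OFF;
```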
Note
SAP IQ does not silently truncate the conversion result of NUMERIC and DATE data types to CHAR and
VARCHAR. A conversion error is generated when the following data types are converted to a string whose
length is less than the column width:
The CONVERSION_ERROR option controls SAP IQ behavior in cases of conversion error. If you set the
CONVERSION_ERROR option to:
Restricts implicit conversion between binary data types (BINARY, VARBINARY, and LONG BINARY) and other
non-binary data types (BIT, TINYINT, SMALLINT, INT, UNSIGNED INT, BIGINT, UNSIGNED BIGINT, CHAR,
VARCHAR, and LONG VARCHAR) on various operations. Also allows all explicit conversions to be permitted as
implicit conversions on various operations.
Allowed Values
0, 1, 2
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The default CONVERSION_MODE value of 0 maintains implicit conversion behavior prior to version 12.7.
Setting CONVERSION_MODE to 1 restricts implicit conversion of binary data types to any other non-binary data
type on INSERT, UPDATE, and in queries. The restrict binary conversion mode also applies to LOAD TABLE.
Setting the CONVERSION_MODE option to 2 allows all explicit conversions to be permitted as implicit conversions
on various operations. If this option is not set to 2, the user must use CAST or CONVERT in queries that require
explicit conversions.
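A sketch of the restricted mode, using hypothetical tables with a VARBINARY column and an INT column:

```sql
SET TEMPORARY OPTION CONVERSION_MODE = 1;
-- Fails under mode 1: implicit VARBINARY-to-INT conversion is restricted
-- INSERT INTO t ( int_col ) SELECT varbinary_col FROM t2;
-- Succeeds: the conversion is made explicit
INSERT INTO t ( int_col ) SELECT CAST( varbinary_col AS INT ) FROM t2;
```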
Users must be specifically licensed to use the encrypted column functionality of the SAP IQ Advanced Security
Option.
The CONVERSION_MODE option value of 1 (CONVERSION_MODE = 1) restricts implicit conversion for these
operations:
In this section:
Restrict Implicit Binary Conversion Mode for LOAD TABLE [page 1791]
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to LOAD TABLE with
CHECK constraint or default value.
Restrict Implicit Binary Conversion Mode for Positioned INSERT and Positioned UPDATE via Updatable
Cursor [page 1793]
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to certain types of
INSERT and UPDATE via updatable cursor.
Related Information
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to LOAD TABLE with CHECK
constraint or default value.
Example
The request in this example fails, and returns the following message:
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to INSERT...SELECT,
INSERT...VALUE, and INSERT...LOCATION.
Example
The query in this example fails, and returns the following message:
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to certain types of UPDATE.
The query in this example fails, and returns the following message:
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to certain types of INSERT
and UPDATE via updatable cursor.
The restrict implicit binary conversion mode (CONVERSION_MODE set to 1) applies to all aspects of queries in
general.
Comparison Operators
● WHERE clause
● HAVING clause
● CHECK clause
● ON phrase in a join
● IF and CASE expressions
The query in this example fails, and returns the following message:
● CHAR
● CHAR_LENGTH
● DIFFERENCE
● LCASE
● LEFT
● LOWER
● LTRIM
● PATINDEX
● RIGHT
● RTRIM
● SIMILAR
● SORTKEY
● SOUNDEX
● SPACE
● STR
● TRIM
● UCASE
● UPPER
The query in this example fails, and returns the following message:
The following functions allow either a string argument or a binary argument. When CONVERSION_MODE = 1, the
restriction applies to mixed type arguments, that is, one argument is string and the other argument is binary.
● INSERTSTR
● LOCATE
● REPLACE
● STRING
● STUFF
In this query, the column cvb is defined as VARBINARY and the column cvc is defined as VARCHAR. When
executed, the query fails, and returns the following message:
● BIT_LENGTH
When CONVERSION_MODE = 1, the restriction applies to these operators used in arithmetic operations: +, -, *, /
The restriction applies to these bitwise operators used in bitwise expressions: & (AND), | (OR), ^ (XOR)
● ROUND
● “TRUNCATE”
● TRUNCNUM
The query in this example fails, and returns the following message:
● ARGN
● SUBSTRING
● DATEADD
● YMD
The query in this example fails, and returns the following message:
When CONVERSION_MODE = 1, no further restriction applies to analytical functions, aggregate functions, and
numeric functions that require numeric expressions as arguments.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Takes effect when you run
sp_iqcheckdb in any mode.
Remarks
Helps further compress data and improve performance, especially for databases with many variable character
strings.
Set this option and then run sp_iqcheckdb only once, and only for VARCHAR columns that were created
before version 12.4.2.
Allowed Values
Integer, in milliseconds
Default
250
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option only has meaning when COOPERATIVE_COMMITS is set to ON. The database server waits for the
specified number of milliseconds for other connections to fill a page of the log before writing to disk. The
default setting is 250 milliseconds.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If COOPERATIVE_COMMITS is set to OFF, a COMMIT is written to disk as soon as the database server receives it,
and the application is then allowed to continue.
If COOPERATIVE_COMMITS is set to ON, the default, the database server does not immediately write the
COMMIT to disk. Instead, it requires the application to wait for at most the length of time set by the
COOPERATIVE_COMMIT_TIMEOUT option.
Related Information
10.6.36 CREATE_HG_AND_FORCE_PHYSICAL_DELETE Option
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Set CREATE_HG_AND_FORCE_PHYSICAL_DELETE before creating a tiered HG column index. It does not affect
preexisting HG indexes. It has no effect on sp_iqrebuildindex. This option persists through the life of the
tiered HG index, and cannot be changed or modified unless the index is dropped and the option toggled before
re-creating the index (sp_iqrebuildindex cannot modify the status of the index).
Note
sp_iqrebuildindex output includes a Force Physical Delete column that identifies the status of
this option.
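Because the setting persists for the life of a tiered HG index, changing it for an existing index means dropping and re-creating that index. A minimal sketch, using illustrative table and index names:

```sql
-- Illustrative names; adjust for your schema.
DROP INDEX orders.orders_hg;
SET OPTION PUBLIC.CREATE_HG_AND_FORCE_PHYSICAL_DELETE = 'OFF';
CREATE HG INDEX orders_hg ON orders ( order_id );
```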
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option is ON by default in all newly created 16.1 databases, and in all 16.1 databases upgraded from SAP IQ
15.x. To take advantage of the new tiered structure, set this option to OFF. Use sp_iqrebuildindex to
convert non-tiered HG indexes to tiered HG and vice versa.
Related Information
Allowed Values
20 to 100,000
Default
200
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set only for individual
connections or the PUBLIC role. You must shut down and restart the database server for the change to
take effect.
Remarks
When an application opens a cursor, SAP IQ creates a FIFO (first-in, first-out) buffer to hold the data rows
generated by the query. CURSOR_WINDOW_ROWS defines how many rows can be put in the buffer. If the cursor is
opened in any mode other than NO SCROLL, SAP IQ allows for backward scrolling for up to the total number of
rows allowed in the buffer before it must restart the query. This is not true for NO SCROLL cursors, as they do
not allow backward scrolling.
For example, with the default value for this option, the buffer initially holds rows 1 through 200 of the query
result set. If you fetch the first 300 rows, the buffer holds rows 101 through 300. You can scroll backward or
forward within that buffer with very little overhead cost. If you scroll before row 101, SAP IQ restarts that query
until the required row is back in the buffer. This can be an expensive operation to perform, so your application
should avoid it where possible. An alternative is to increase the value for CURSOR_WINDOW_ROWS to
accommodate a larger possible scrolling area; however, the default setting of 200 is sufficient for most
applications.
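For example, to allow backward scrolling over a larger window without restarting the query:

```sql
-- Hold up to 1000 rows per cursor instead of the default 200.
-- Requires a database server restart to take effect.
SET OPTION PUBLIC.CURSOR_WINDOW_ROWS = 1000;
```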
Related Information
Allowed Values
0 to 6
Default
0 (Sunday)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set for an individual
connection or for the PUBLIC role. You must shut down and restart the database server for the change to
take effect.
Remarks
By default, Sunday is day 1, Monday is day 2, Tuesday is day 3, and so on. This option specifies which day is the
first day of the week:
● 0 – Sunday
● 1 – Monday
● 2 – Tuesday
● 3 – Wednesday
● 4 – Thursday
● 5 – Friday
● 6 – Saturday
For example, if you change the value of DATE_FIRST_DAY_OF_WEEK to 3, Wednesday becomes day 1,
Thursday becomes day 2, and so on. This option only affects the DOW and DATEPART functions.
The SAP SQL Anywhere option FIRST_DAY_OF_WEEK performs the same function, but assigns the values 1
through 7 instead of 0 through 6. 1 stands for Monday and 7 for Sunday (the default).
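For example, to make Wednesday the first day of the week:

```sql
-- Wednesday becomes day 1 for the DOW and DATEPART functions.
-- Requires a database server restart to take effect.
SET OPTION PUBLIC.DATE_FIRST_DAY_OF_WEEK = 3;
SELECT DOW( CAST( '2011-05-21' AS DATE ) );  -- numbered relative to the new first day
```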
Sets the format used for dates retrieved from the database.
Allowed Values
String
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
● yy – 2-digit year
● mmmm[m...] – character long form for months; as many characters as there are m's, until the number of m's
specified exceeds the number of characters in the month's name
● dddd[d...] – character long form for day of the week; as many characters as there are d's, until the number
of d's specified exceeds the number of characters in the day's name
Note
Multibyte characters are not supported in date format strings. Only single-byte characters are allowed,
even when the collation order of the database is a multibyte collation order like 932JPN. Use the
concatenation operator to include multibyte characters in date format strings. For example, if '<?>'
represents a multibyte character, use the concatenation operator to move the multibyte character outside
of the date format string:
Each symbol is substituted with the appropriate data for the date being formatted. Any format symbol that
represents character rather than digit output can be put in uppercase, which causes the substituted characters
to also be in uppercase. For numbers, using mixed case in the format string suppresses leading zeros.
You can control the padding of numbers by changing the case of the symbols. Same-case values (MM, mm, DD,
or dd) pad the number with zeros. Mixed-case values (Mm, mM, Dd, or dD) cause the number not to be
zero-padded; the value takes as much room as required. For example, with mixed-case month and day symbols,
January 1, 2011 is formatted as:
2011/1/1
Examples
This table illustrates DATE_FORMAT settings, together with the corresponding output, for a statement
executed on Saturday, May 21, 2011:
● yyyy/mm/dd/ddd – 2011/05/21/sat
● jjj – 141
● mm-yyyy – 05-2011
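The same format strings can be tried directly with the DATEFORMAT function before changing the option; for example:

```sql
-- Try a format string against a known date (Saturday, May 21, 2011).
SELECT DATEFORMAT( CAST( '2011-05-21' AS DATE ), 'yyyy/mm/dd/ddd' );
-- Apply a format to dates retrieved on this connection:
SET TEMPORARY OPTION DATE_FORMAT = 'yyyy/mm/dd';
```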
Related Information
Allowed Values
'MDY', 'YMD', 'DMY'
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
DATE_ORDER is used to determine whether 10/11/12 is Oct 11 1912, Nov 12 1910, or Nov 10 1912. The option can
have the value 'MDY', 'YMD', or 'DMY'.
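For example, to have an ambiguous string such as 10/11/12 read as day/month/year on the current connection:

```sql
SET TEMPORARY OPTION DATE_ORDER = 'DMY';
SELECT CAST( '10/11/12' AS DATE );  -- day 10, month 11 (November)
```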
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When DBCC_LOG_PROGRESS is ON, the sp_iqcheckdb system stored procedure sends progress messages to
the IQ message file. These messages allow the user to follow the progress of the sp_iqcheckdb operation.
Examples
Sample progress log output of the command sp_iqcheckdb 'allocation table nation':
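A sketch of enabling progress logging for the current connection before running the check:

```sql
SET TEMPORARY OPTION DBCC_LOG_PROGRESS = 'ON';
sp_iqcheckdb 'allocation table nation';
-- Progress messages appear in the IQ message file as the check runs.
```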
Related Information
Controls the percent of the cache used by the sp_iqcheckdb system stored procedure.
Allowed Values
0 to 100
Default
50
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect at the next execution of sp_iqcheckdb.
Remarks
The sp_iqcheckdb system stored procedure works with a fixed number of buffers, as determined by this
option. By default, a large percentage of the cache is reserved to maximize sp_iqcheckdb performance.
Related Information
Controls whether or not MESSAGE statements that include a DEBUG ONLY clause are executed.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option allows you to control the behavior of debugging messages in stored procedures that contain a
MESSAGE statement with the DEBUG ONLY clause specified. By default, this option is set to OFF and debugging
messages do not appear when the MESSAGE statement is executed. By setting DEBUG_MESSAGES to ON, you
can enable the debugging messages in all stored procedures.
Note
DEBUG ONLY messages are inexpensive when the DEBUG_MESSAGES option is set to OFF, so these
statements can usually be left in stored procedures on a production system. However, avoid placing them in
locations where they would be executed frequently; otherwise, they might cause a small performance penalty.
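A sketch of a stored procedure with a DEBUG ONLY message that surfaces only when the option is ON:

```sql
CREATE PROCEDURE debug_demo()
BEGIN
  MESSAGE 'debug_demo: entered' TYPE INFO TO CLIENT DEBUG ONLY;
  -- ... procedure body ...
END;

SET TEMPORARY OPTION DEBUG_MESSAGES = 'ON';
CALL debug_demo();  -- the debugging message is now emitted
```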
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set as a temporary option only, for an individual connection or for the PUBLIC role, for the
duration of the current connection.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Takes effect immediately.
Remarks
When the DEDICATED_TASK connection option is set to ON, a request handling task is dedicated exclusively to
handling requests for the connection. By pre-establishing a connection with this option enabled, you can gather
information about the state of the database server if it becomes otherwise unresponsive.
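For example, on a connection reserved for diagnostics:

```sql
-- Dedicate a request-handling task to this connection so it can still
-- gather server state if the server becomes otherwise unresponsive.
SET TEMPORARY OPTION DEDICATED_TASK = 'ON';
```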
Related Information
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
DEFAULT_DBSPACE allows an administrator to set the default dbspace for a user or role, and allows users to
set their own default dbspace.
IQ_SYSTEM_TEMP is always used for global temporary tables unless a table IN clause is used that specifies
SYSTEM, in which case an SA global temporary table is created.
At database creation, the system dbspace, IQ_SYSTEM_MAIN, is created and is implied when the
PUBLIC.DEFAULT_DBSPACE option setting is empty or explicitly set to IQ_SYSTEM_MAIN. Immediately after
creating the database, create a second main dbspace, revoke CREATE privilege in dbspace IQ_SYSTEM_MAIN
from PUBLIC, grant CREATE in dbspace for the new main dbspace to selected users or PUBLIC, and set
PUBLIC.DEFAULT_DBSPACE to the new main dbspace. For example:
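A sketch of the sequence just described, with illustrative dbspace and file names:

```sql
-- Illustrative names (user_main, user_main1.iq); size assumed to be in MB.
CREATE DBSPACE user_main USING FILE user_main1 'user_main1.iq' SIZE 10000;
REVOKE CREATE ON IQ_SYSTEM_MAIN FROM PUBLIC;
GRANT CREATE ON user_main TO PUBLIC;
SET OPTION PUBLIC.DEFAULT_DBSPACE = 'user_main';
```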
In this example, CONNECT and RESOURCE privileges on all dbspaces are granted to users usrA and usrB, and
each of these users is granted CREATE privilege on a particular dbspace:
UsrA connects:
UsrB connects:
DBA connects:
sp_iqindexinfo result:
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
Remarks
By default, disk striping is ON for all dbspaces in the IQ main store. This option is used only by CREATE
DBSPACE and defines the default striping value, if CREATE DBSPACE does not specify striping.
Related Information
Provides default selectivity estimates (in parts per million) to the optimizer for most HAVING clauses.
Allowed Values
0 to 1,000,000
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Allowed Values
<identifier> or <string>
Default
Scope
Can only be set as a temporary option, for the duration of the current connection.
Remarks
DEFAULT_ISQL_ENCODING specifies the code page to use when reading or writing files. It cannot be set
permanently. The default code page is the default code page for the platform you are running on.
Interactive SQL determines the code page that is used for a particular OUTPUT or READ statement as follows,
where code page values occurring earlier in the list take precedence over those occurring later in the list:
● The code page specified in the ENCODING clause of the OUTPUT or READ statement
● The code page specified with the DEFAULT_ISQL_ENCODING option (if this option is set)
● The default code page for the computer on which Interactive SQL is running
Example
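For example, to have READ and OUTPUT statements default to UTF-8 on the current connection:

```sql
-- Applies to subsequent READ and OUTPUT statements that do not
-- specify an ENCODING clause.
SET TEMPORARY OPTION DEFAULT_ISQL_ENCODING = 'UTF-8';
```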
Sets an upper threshold in KB on the amount to write to a stripe before write operations move on to the next
stripe.
This setting is the default size for all dbspaces in the IQ main store.
Allowed Values
1 to maximum integer
Default
1
Scope
Remarks
The default value of 1 KB means that one page is compressed and that the compressed page is written to disk
as a single operation. Whatever the chosen page size, the next operation writes to the next dbfile in that
dbspace.
To write multiple pages to the same stripe before moving to the next stripe, increase the
DEFAULT_KB_PER_STRIPE setting. For example, if the page size is 128 KB and DEFAULT_KB_PER_STRIPE is set
to a larger value, several compressed pages are written to the same stripe before write operations move to the
next dbfile.
This option is used only by CREATE DBSPACE and defines the default disk striping size for dbspaces in the IQ
main store, if CREATE DBSPACE does not specify a stripe size.
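A sketch, assuming a 128 KB page size, of writing several compressed pages per stripe in dbspaces created afterward (the dbspace and file names are illustrative):

```sql
-- With 128 KB pages, a 512 KB stripe threshold lets several compressed
-- pages go to the same stripe before moving to the next dbfile.
SET OPTION PUBLIC.DEFAULT_KB_PER_STRIPE = 512;
CREATE DBSPACE user_main2 USING FILE um2 'user_main2.iq' SIZE 5000;  -- illustrative
```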
Related Information
Provides default selectivity estimates (in parts per million) to the optimizer for most LIKE predicates.
Allowed Values
0 to 1,000,000
Default
150,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
DEFAULT_LIKE_MATCH_SELECTIVITY_PPM sets the default selectivity for generic LIKE predicates, for
example, LIKE '<string%string>' where % is a wildcard character.
The optimizer relies on this option when other selectivity information is not available and the match string does
not start with a set of constant characters followed by a single wildcard.
If the column has a 1-, 2-, or 3-byte FP index, the optimizer can get exact information and does not need to
use this value.
Related Information
Provides default selectivity estimates (in parts per million) to the optimizer for leading constant LIKE
predicates.
Allowed Values
1 to 1,000,000
Default
150,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
DEFAULT_LIKE_RANGE_SELECTIVITY_PPM sets the default selectivity for LIKE predicates, of the form
LIKE '<string%>' where the match string is a set of constant characters followed by a single wildcard
character (%). The optimizer relies on this option when other selectivity information is not available.
If the column has a 1-, 2-, or 3-byte FP index, the optimizer can get exact information and does not need to
use this value.
You can also specify selectivity (user-supplied condition hints) in the query.
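For example, if leading-constant LIKE predicates in your workload typically match far fewer rows than the default of 150,000 ppm (15 percent) implies:

```sql
-- 10,000 ppm corresponds to an estimated selectivity of 1 percent.
SET TEMPORARY OPTION DEFAULT_LIKE_RANGE_SELECTIVITY_PPM = 10000;
```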
Related Information
Enables you to override the default estimate of the number of rows to return from a proxy table.
Allowed Values
0 to 4,294,967,295
Default
200,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Related Information
Enables you to override the default estimate of the number of rows to return from a table UDF (either a C, C++,
or Java table UDF).
Allowed Values
0 to 4,294,967,295
Default
200,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
A table UDF or TPF can use the DEFAULT_TABLE_UDF_ROW_COUNT option to give the query processor an
estimate of the number of rows that the UDF will return. This is the only way a Java table UDF can convey
this information. For a C or C++ table UDF, however, the UDF developer should publish the number of rows it
expects to return in the describe phase, using the EXTFNAPIV4_DESCRIBE_PARM_TABLE_NUM_ROWS describe
parameter. The value of EXTFNAPIV4_DESCRIBE_PARM_TABLE_NUM_ROWS always overrides the value of the
DEFAULT_TABLE_UDF_ROW_COUNT option.
Related Information
Allowed Values
Integer, in milliseconds.
Default
500
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option is ignored by SAP IQ, since DELAYED_COMMITS can only be set OFF.
Related Information
Allowed Values
OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When set to OFF (the only value allowed by SAP IQ), the application must wait until the COMMIT is written to
disk. This option must be set to OFF for ANSI/ISO COMMIT behavior.
Related Information
Allows load, insert, update, or delete operations to bypass the referential integrity check, improving
performance.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Users are responsible for ensuring that no referential integrity violation occurs during requests while
DISABLE_RI_CHECK is set to ON.
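A sketch of bypassing the check around a bulk operation whose integrity has been verified externally; the table name is illustrative:

```sql
SET TEMPORARY OPTION DISABLE_RI_CHECK = 'ON';
-- Bulk delete verified externally to leave no dangling references:
DELETE FROM sales_history WHERE order_year < 2000;
SET TEMPORARY OPTION DISABLE_RI_CHECK = 'OFF';
```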
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option indicates whether division by zero is reported as an error. If the option is set ON, division by zero
results in an error with SQLSTATE 22012.
If the option is set OFF, division by zero is not an error; a NULL is returned.
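For example:

```sql
SET TEMPORARY OPTION DIVIDE_BY_ZERO_ERROR = 'OFF';
SELECT 1 / 0;  -- returns NULL instead of raising SQLSTATE 22012
```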
Related Information
Controls whether zone maps are used during query processing to potentially improve performance.
Allowed Values
● 0 – production mode. Zone map predicates are considered for query processing by the optimizer.
● 1 – off. Zone map predicates are not used.
● 2 – diagnostic mode. Zone map predicates are created but are validated only; they are not used in the
computation of the query’s result.
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Temporary database option DQP_ENABLED allows you to enable or disable distributed query processing at the
connection level.
Allowed Values
● ON – enable DQP for the current connection (the default)
● OFF – disable DQP for the current connection
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
You can set the temporary database option DQP_ENABLED to OFF to disable DQP for the current connection.
You can set the option to ON (the default value) to enable DQP for the current connection, but only when DQP
is enabled for the user by that user's login policy for the logical server of the current connection.
Setting DQP_ENABLED to ON results in an error if DQP is disabled based upon the user's login policy:
Invalid setting for option 'DQP_ENABLED'
Note
Any changes you make to a user's login policy options affect new connections only. Login policy option
settings for existing connections are based upon the time the connection was initially established.
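For example, to keep a long-running administrative query on the local node only:

```sql
-- Disable distributed query processing for this connection only.
SET TEMPORARY OPTION DQP_ENABLED = 'OFF';
```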
Related Information
Temporary database option DQP_ENABLED_OVER_NETWORK allows you to enable or disable distributed query
processing over the network at the connection level.
Note
This option is deprecated and will be removed from the documentation in a future release.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option for PUBLIC or for any user or role
other than self. Can be set temporary for an individual connection or for the PUBLIC role.
Remarks
You can set the temporary database option DQP_ENABLED_OVER_NETWORK to ON to enable DQP over the
network for the current connection. The OFF (default) setting has no effect, and the setting of the
DQP_ENABLED logical server policy option determines whether or not DQP is used over the network for
queries on the current connection.
Note
Any changes you make to a logical server policy option affect new connections only. Logical server policy
options for existing connections are based on the time that the connection was initially established.
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The DQP_OPTIONS13 option enables the use of remote procedure calls based on TCP/IP. When the option
value is OFF, SAP IQ uses MIPC for DQP communications among worker and leader nodes. This option is only
valid when the temporary database option DQP_ENABLED_OVER_NETWORK is set ON, or when you enable
DQP by setting the logical server policy as follows:
Remember that changes made to a logical server policy option affect new connections only. Logical server
policy options for existing connections are based on the time that the connection was initially established.
Related Information
Specifies a timeout value (in seconds) that influences the system-calculated timeout value for individual TCP
remote procedure calls (RPCs) used in multiplex internal communications for distributed query processing.
Allowed Values
1 to 4,294,967,295 (seconds)
Default
60 (seconds)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The supplied value influences the timeout value of an RPC; it does not define the actual system-calculated
timeout value. SAP IQ considers the value of DQP_TCP_TIMEOUT, but also considers the amount of data to be
processed, and the data transfer speed. SAP IQ may override your DQP_TCP_TIMEOUT value based on its
calculation.
If the timeout computed using the value you supplied in DQP_TCP_TIMEOUT is less than the timeout computed
using the default value, then SAP IQ chooses the default.
Related Information
Controls whether simple local predicates are executed before query optimization.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If this option is ON (the default), the optimizer finds, prepares, and executes predicates containing only local
columns and constraints before query optimization, including join ordering, join algorithm selection, and
grouping algorithm selection, so that the values of “Estimated Result Rows” in the query plan are more precise.
If this option is OFF, the optimizer finds and prepares the simple predicates, but does not execute them before
query optimization. The resulting values of “Estimated Result Rows” are less precise when the predicates are
not executed.
In general, EARLY_PREDICATE_EXECUTION should always be left ON, as this results in improved query plans
for many queries.
This information is included in the query plan for the root node:
● Threads used for executing local invariant predicates – if greater than 1, indicates parallel execution of local
invariant predicates.
● Early_Predicate_Execution – indicates if the option is OFF.
● Time of Cursor Creation – the time of cursor creation.
The simple predicates for which execution is controlled by this option are referred to as invariant predicates in
the query plan. This information is included in the query plan for a leaf node, if there are any local invariant
predicates on the node:
● Generated Post Invariant Predicate Rows – actual result after executing local invariant predicate
● Estimated Post Invariant Predicate Rows – calculated by using estimated local invariant predicates
selectivity
● Time of Condition Start – starting time of the execution of local invariant predicates
● Time of Condition Done – ending time of the execution of local invariant predicates
● Elapsed Condition Time – elapsed time for executing local invariant predicates
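To compare query plans with and without early predicate execution, the option can be disabled for the current connection only (a sketch):

```sql
-- Disable early execution of simple (invariant) predicates for this
-- connection only; "Estimated Result Rows" values in query plans
-- become less precise as a result.
SET TEMPORARY OPTION EARLY_PREDICATE_EXECUTION = 'OFF';
```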
Related Information
Allows a DBA to enable or disable the asynchronous IO used by the RLV persistence log.
Allowed Values
ON, OFF
Default
ON
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. If permitted, can be set for an arbitrary other user or role, or for all
users via the role. Takes effect immediately.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When ENABLE_LOB_VARIABLES is OFF, large object variables less than 32 K are implicitly converted; an error
is reported if a large object variable is greater than or equal to 32 K. A LONG VARCHAR variable is implicitly
converted to VARCHAR, and a LONG BINARY variable is implicitly converted to BINARY.
When ENABLE_LOB_VARIABLES is ON, large object variables of any size retain their original data type and size.
Example
Retain the data type and size of large object variables greater than 32 K:
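A sketch of a statement matching this description:

```sql
-- Allow large object variables of 32 K or more to retain their
-- original data type and size, for this connection only.
SET TEMPORARY OPTION ENABLE_LOB_VARIABLES = 'ON';
```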
Related Information
Controls whether queries with an ambiguous syntax for multi-table joins are allowed or are reported as an
error.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
Remarks
This option reports a syntax error for those queries containing outer joins that have ambiguous syntax due to
the presence of duplicate correlation names on a null-supplying table.
This join clause illustrates the kind of query that is reported where C1 is a condition:
If EXTENDED_JOIN_SYNTAX is set to ON, this query is interpreted as follows, where C1 and C2 are conditions:
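A hypothetical sketch of the two forms (A, B, and C are placeholder tables; C1 and C2 are placeholder join conditions, none of which appear in the original):

```sql
-- Ambiguous, unparenthesized multi-table outer join:
SELECT * FROM A LEFT OUTER JOIN B JOIN C ON C1;

-- With EXTENDED_JOIN_SYNTAX set to ON, the query is accepted and
-- interpreted with an implied nesting along these lines:
SELECT * FROM A LEFT OUTER JOIN (B JOIN C ON C2) ON C1;
```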
Related Information
10.6.68 FILE_PREALLOCATE_SAMPLING_THRESHOLD Option
Allowed Values
0 to 10,000 (milliseconds)
Default
10 (milliseconds)
● You can only set this option at the database (PUBLIC) level.
● You need the DBA privilege to set this option.
● You need the SET ANY SYSTEM OPTION system privilege to set this option.
Remarks
If you suspect a slow file system, check the .iqmsg file for messages like these:
Related Information
Allowed Values
1, 2, 3
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Setting 1 (fast accumulator) is faster and uses less space for floats and doubles than setting 2. This setting
uses a single double precision variable to add double and float numbers, and is subject to the known accuracy
limitations of such an approach.
Setting 2 (default) (medium accumulator) uses multiple double precision variables to accumulate floats and
doubles. It is very accurate for addends in the range of magnitudes 1e-20 to 1e20. While it loses some accuracy
outside of this range, it is still accurate enough for most applications. Setting 2 allows the optimizer to choose
hash for faster performance more easily than setting 3.
Setting 3 (large accumulator) is highly accurate for all floats and doubles, but its size often precludes the use of
hash optimization, which will be a performance limitation for most applications.
Related Information
Causes SAP IQ to leak, rather than reclaim, database disk space during a DROP command.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set as a temporary option only, for an individual connection or for the PUBLIC role, for the
duration of the current connection.
● Requires SET ANY SYSTEM OPTION system privilege to set this option. Takes effect immediately.
Remarks
To drop a corrupt index, column, or table, set the FORCE_DROP option to ON. This prevents the free list
from being incorrectly updated from incorrect or suspect file space allocation information in the object
being dropped. After dropping corrupt objects, you can reclaim the file space using the -iqfrec and
-iqdroplks server switches.
When force dropping objects, you must ensure that only the DBA is connected to the database. The server
must be restarted immediately after a force drop.
Caution
Do not attempt to force drop objects unless SAP Technical Support has instructed you to do so.
FORCE_DROP procedures for system recovery and database repair are described in SAP IQ Administration:
Backup, Restore, and Data Recovery.
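Because FORCE_DROP is a temporary-only option, a sketch of its use looks like this:

```sql
-- Temporary-only option: enable before dropping a corrupt object.
-- Ensure only the DBA is connected, and restart the server
-- immediately after the force drop.
SET TEMPORARY OPTION FORCE_DROP = 'ON';
```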
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
By default, all cursors are scrolling. Scrolling cursors with no host variable declared cause SAP IQ to create a
buffer for temporary storage of results. Each row in the result set is stored to allow for backward scrolling.
Controls whether cursors that have not been declared as updatable can be updated.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When FORCE_UPDATABLE_CURSORS is ON, cursors that have not been declared as updatable can be updated.
This option allows updatable cursors to be used in front-end applications without specifying the FOR UPDATE
clause of the DECLARE CURSOR statement.
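A sketch of enabling this behavior for the current connection:

```sql
-- Allow cursors declared without the FOR UPDATE clause to be
-- updated, for this connection only.
SET TEMPORARY OPTION FORCE_UPDATABLE_CURSORS = 'ON';
```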
Caution
Specifies the number of lookup pages and cache memory allocated for Lookup FP indexes in SAP IQ 15
databases.
Allowed Values
1 to 4096 (MB)
Default
16 (MB)
Scope
Dependencies
Remarks
Note
Related Information
Controls the amount of main buffer cache allocated to FP indexes in SAP IQ 15 databases.
Allowed Values
1 to 1,000,000
Default
2500
Scope
Remarks
Note
Related Information
Limits the number of distinct values in columns that implicitly load as NBit FP.
Allowed Values
0 to 2,147,483,647
Default
1,048,576
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Dependencies
Remarks
FP_NBIT_AUTOSIZE_LIMIT limits the number of distinct values in all newly created columns without an
explicit IQ UNIQUE setting. Columns constrained by the FP_NBIT_AUTOSIZE_LIMIT option load with a Flat
FP or NBit FP index:
● If FP_NBIT_AUTOSIZE_LIMIT is greater than 0 and less than 2,147,483,647, columns load with an NBit
FP index
● If FP_NBIT_AUTOSIZE_LIMIT equals 0, columns load with a Flat FP index
FP_NBIT_AUTOSIZE_LIMIT and FP_NBIT_LOOKUP_MB establish a ceiling for sizing NBit columns during
data loads. As long as the number of distinct values is less than FP_NBIT_AUTOSIZE_LIMIT and the total
dictionary size (values and counts) per column is less than FP_NBIT_LOOKUP_MB, the column loads as an NBit
FP.
If the load exceeds the FP_NBIT_AUTOSIZE_LIMIT but is less than FP_NBIT_ROLLOVER_MAX_MB, the column
rolls over to Flat FP.
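A sketch of lowering the ceiling database-wide (the value shown is illustrative):

```sql
-- New columns without an explicit IQ UNIQUE load as NBit FP only
-- up to 100,000 distinct values; beyond that, the rollover rules
-- described above apply.
SET OPTION PUBLIC.FP_NBIT_AUTOSIZE_LIMIT = 100000;
```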
● SAP IQ SQL Reference > System Procedures > Alphabetical List of System Stored Procedures >
sp_iqrebuildindex
● SAP IQ SQL Reference > System Procedures > Alphabetical List of System Stored Procedures >
sp_iqindexmetadata
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Dependencies
Remarks
DML operations check the FP_NBIT_ENFORCE_LIMITS option when the number of distinct values in a column
exceeds the explicit limit set in an IQ UNIQUE constraint that is above the FP_NBIT_AUTOSIZE_LIMIT value, or
when the dictionary size for an implicit NBit rollover exceeds the FP_NBIT_ROLLOVER_MAX_MB limit.
Using sp_iqrebuildindex to increase the number of distinct values beyond current limits for a Flat FP
column when FP_NBIT_ENFORCE_LIMITS is set to ON, returns an error. If FP_NBIT_ENFORCE_LIMITS is OFF,
sp_iqrebuildindex rebuilds the index to the maximum token, which is the largest distinct value.
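A sketch of enabling strict enforcement database-wide:

```sql
-- DML operations that exceed IQ UNIQUE or dictionary limits now
-- return an error instead of rolling over or growing the dictionary.
SET OPTION PUBLIC.FP_NBIT_ENFORCE_LIMITS = 'ON';
```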
Related Information
Provides support for tokenized FP indexes similar to that available in SAP IQ 15.
Allowed Values
ON, OFF
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The FP_NBIT_IQ15_COMPATIBILITY option provides tokenized FP support similar to that available in SAP IQ
15. All newly created and modified tokenized FP indexes in 16.1 will be NBit. The only 15-style FP(1), FP(2),
and FP(3) byte FP indexes available in 16.1 are those from an upgraded database that have had only read-only
activity.
The FP_NBIT_IQ15_COMPATIBILITY ON/OFF setting only pertains to tokenized FP creation and cut-off
behavior:
Note
Related Information
Limits the total dictionary size per column for implicit NBit FP columns.
Allowed Values
1 to 4,294,967,295
Default
64 (MB)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Dependencies
Remarks
FP_NBIT_AUTOSIZE_LIMIT and FP_NBIT_LOOKUP_MB establish a ceiling for sizing implicit NBit columns. As
long as the number of distinct values is less than FP_NBIT_AUTOSIZE_LIMIT and the total dictionary size
(values and counts) per column is less than FP_NBIT_LOOKUP_MB, the column loads with an NBit FP index.
Limits are enforced by the FP_NBIT_ENFORCE_LIMITS option.
DML operations that exceed the FP_NBIT_LOOKUP_MB limit rollover to a Flat FP index.
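A sketch of raising the per-column dictionary ceiling (the value shown is illustrative):

```sql
-- Allow a 128 MB dictionary per column before a column stops
-- loading as an implicit NBit FP and rolls over to Flat FP.
SET OPTION PUBLIC.FP_NBIT_LOOKUP_MB = 128;
```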
Additional Information
Sets a threshold for the total dictionary size for implicit NBit rollovers to Flat FP.
Allowed Values
1 to 4,294,967,295
Default
16384 KB
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
● If the total dictionary size per column does not exceed the FP_NBIT_ROLLOVER_MAX_MB, the NBit column
rolls over to a Flat FP.
● If the dictionary size exceeds the FP_NBIT_ROLLOVER_MAX_MB limit and
FP_NBIT_ENFORCE_LIMITS='ON', DML operations throw an error and roll back.
● If the dictionary size exceeds the FP_NBIT_ROLLOVER_MAX_MB limit and
FP_NBIT_ENFORCE_LIMITS='OFF' (default), DML operations keep running, and the NBit dictionary
continues to grow.
● If FP_NBIT_ROLLOVER_MAX_MB='0', the NBit column rolls over to Flat FP.
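A sketch of adjusting the rollover threshold (the value shown is illustrative):

```sql
-- Raise the dictionary-size threshold at which implicit NBit
-- columns roll over to Flat FP.
SET OPTION PUBLIC.FP_NBIT_ROLLOVER_MAX_MB = 32768;
```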
Additional Information
Related Information
Allowed Values
Integer
Default
200
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The default index calculates some predicates such as SUM, RANGE, MIN, MAX and COUNT DISTINCT in
parallel. FP_PREDICATE_WORKUNIT_PAGES affects the degree of parallelism used by specifying the number of
pages worked on by each thread. To increase the degree of parallelism, decrease the value of this option.
Related Information
Controls the use of memory for the optimization of queries involving functional expressions against columns
having enumerated storage.
Allowed Values
0 to 20,000
Default
1024 (KB)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
FPL_EXPRESSION_MEMORY_KB controls the use of memory for the optimization of queries involving functional
expressions against columns having enumerated storage. The option enables the DBA to constrain the
memory used by this optimization and balance it with other SAP IQ memory requirements, such as caches.
Setting this option to 0 switches off optimization.
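A sketch of constraining this memory, or switching the optimization off (values illustrative):

```sql
-- Constrain the memory used by this optimization to 512 KB.
SET OPTION PUBLIC.FPL_EXPRESSION_MEMORY_KB = 512;

-- Or switch the optimization off entirely.
SET OPTION PUBLIC.FPL_EXPRESSION_MEMORY_KB = 0;
```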
Related Information
Specifies the percent of space on each HG garray page to reserve for future incremental inserts into existing
groups.
Allowed Values
0 to 1000
Default
25
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The garray tries to pad out each group with empty space equal to the percentage set by this option. This space
is used for rows added to existing index groups.
An HG index can reserve some storage on a per-group basis (where group is defined as a group of rows with
equivalent values). Reserving space consumes additional disk space, but can help the performance of
incremental inserts into the HG index.
If you plan to do future incremental inserts into an HG index, and those new rows have values that are already
present in the index, a nonzero value for this option might improve incremental insert performance.
If you do not plan to incrementally update the index, you can reduce the values of this option to save disk
space.
Allowed Values
0 to 100
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option defines the number of database pages read ahead during an insert to a column that has an HG
index.
Determines per-page fill factor during page splits on the garray and specifies the percent of space on each HG
garray page to reserve for future incremental inserts.
Allowed Values
0 to 100
Default
25
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Splits of a garray page try to leave that percentage empty. This space is used for rows added to new index
groups.
If future plans include incremental inserts into an HG index, and the new rows do not have values that are
already present in the index, a nonzero value for GARRAY_PAGE_SPLIT_PAD_PERCENT could improve
incremental insert performance.
If you do not plan to incrementally update the index, you can reduce the values of this option to save disk
space.
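A sketch of reducing the page-split pad when no incremental inserts of new groups are planned:

```sql
-- Leave no empty space on garray page splits, trading incremental
-- insert performance for disk space savings.
SET OPTION PUBLIC.GARRAY_PAGE_SPLIT_PAD_PERCENT = 0;
```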
Related Information
Allowed Values
0 to 100
Default
10
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
This option defines the number of database pages read ahead during a query to a column that has an HG index.
Related Information
Controls the maximum percentage of a user’s temp memory that a hash object can pin.
Allowed Values
0 to 100
Default
20
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
HASH_PINNABLE_CACHE_PERCENT controls the percentage of a user’s temp memory allocation that any one
hash object can pin in memory. The default is 20%, but you should reduce this number to 10% if you are
running complex queries, or increase this number to 50% if you have simple queries that need a single large
hash object to run, such as a large IN subquery.
HASH_PINNABLE_CACHE_PERCENT is primarily for use by Technical Support. If you change its value, do so
with extreme caution; first analyze the effect on a wide variety of queries.
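For example, a complex-query workload might lower the limit for the current connection, as suggested above (a sketch):

```sql
-- Lower the per-hash-object pin limit from the default 20% to 10%
-- for this connection only.
SET TEMPORARY OPTION HASH_PINNABLE_CACHE_PERCENT = 10;
```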
Related Information
Specifies the percent of hard disk I/Os allowed during the execution of a statement that includes a query
involving hash algorithms, before the statement is rolled back and an error message is reported.
Allowed Values
0 to 100
Default
10
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If a query that uses hash algorithms causes an excessive number of hard disk I/Os (paging buffers from
memory to disk), query performance is negatively affected, and server performance might also be affected.
HASH_THRASHING_PERCENT controls the percentage of hard disk I/Os allowed before the statement is rolled
back, and you see one of the following error messages:
The default value of HASH_THRASHING_PERCENT is 10%. Increasing this value permits more paging to disk
before a rollback and decreasing this value permits less paging before a rollback.
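A sketch of permitting more paging before rollback (the value shown is illustrative):

```sql
-- Allow up to 20% hard disk I/O for hash-based statements on this
-- connection before the statement is rolled back with an error.
SET TEMPORARY OPTION HASH_THRASHING_PERCENT = 20;
```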
Related Information
Allowed Values
0 to 3
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option chooses the algorithm used by the HG index during a delete operation. The cost model considers
both CPU- and I/O-related costs in selecting the appropriate delete algorithm, taking the following into
account:
● Rows deleted
● Index size
● Width of index data type
● Cardinality of index data
● Available temporary cache
● Machine related I/O and CPU characteristics
● Available CPUs and threads
● Referential integrity costs
Related Information
Specifies the maximum number of Btree pages used in evaluating a range predicate in the HG index.
Allowed Values
Integer
Default
10
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option effectively controls the amount of time the optimizer spends searching for the best index to use for
a range predicate. Setting this option higher may cause a query to spend more time in the optimizer, but as a
result may choose a better index to resolve a range predicate.
Related Information
Specifies the amount of time, in minutes, that the client waits for an HTTP session to time out before giving up.
Allowed Values
0 to 525,600
Default
30
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Takes effect immediately.
Remarks
This option provides variable session timeout control for Web service applications. A Web service application
can change the timeout value from within any request that owns the HTTP session, but a change to the timeout
value can impact subsequent queued requests if the HTTP session times out. The Web application must
include logic to detect whether a client is attempting to access an HTTP session that no longer exists. This can
be done by examining the value of the SessionCreateTime connection property to determine whether a
timestamp is valid: if the HTTP request is not associated with the current HTTP session, the
SessionCreateTime connection property contains an empty string.
Related Information
Creates a unique HG index on each IDENTITY/AUTOINCREMENT column, if the column is not already a primary
key.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When this option is set ON, HG indexes are created on future identity columns. The index can be deleted only
if the deleting user is the only one using the table and the table is not a local temporary table.
Related Information
Allowed Values
= '<tablename>'
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Note
If you set a user level option for the current option, the corresponding temporary option is also set. See
Scope and Duration of Database Options.
Remarks
When IDENTITY_INSERT is set, insert/update is enabled. A table name must be specified to identify the
column to insert or update. If you are not the table owner, qualify the table name with the owner name.
To drop a table with an IDENTITY column, IDENTITY_INSERT must not be set to that table.
Examples
Illustrates the effect of user level options on temporary options (see Note), if you are connected to the
database as DBA and enter:
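A statement of this shape (a sketch reconstructed from the description that follows):

```sql
-- Sets IDENTITY_INSERT to the Customers table for user DBA and,
-- because setting a user level option also sets the corresponding
-- temporary option, for the current connection as well.
SET OPTION DBA.IDENTITY_INSERT = 'Customers';
```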
The value for the option is set to Customers for the user DBA and temporary for the current connection. Other
users who subsequently connect to the database as DBA find their option value for IDENTITY_INSERT is
Customers also.
Related Information
Allowed Values
-3 to 3
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The IQ optimizer has a choice of several algorithms for processing IN subqueries. This option allows you to
override the optimizer's costing decision when choosing the algorithm to use. It does not override internal rules
that determine whether an algorithm is legal within the query engine.
IN_SUBQUERY_PREFERENCE is normally used for internal testing and for manually tuning queries that the
optimizer does not handle well. Only experienced DBAs should use it. The only reason to use this option is if the
optimizer seriously underestimates the number of rows produced by a subquery, and the hash object is
thrashing. Before setting this option, try to improve the mistaken estimate by looking for missing indexes and
dependent predicates.
Inform Technical Support if you need to set IN_SUBQUERY_PREFERENCE, as setting this option might mean
that a change to the optimizer is appropriate.
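A sketch of overriding the costing decision for the current connection only (the meaning of each value from -3 to 3 is an internal detail, so the value shown is illustrative; 0 is assumed to restore cost-based selection):

```sql
-- Override the optimizer's algorithm choice for IN subqueries,
-- for this connection only. Use under Technical Support guidance.
SET TEMPORARY OPTION IN_SUBQUERY_PREFERENCE = 0;
```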
Related Information
Generates messages suggesting additional column indexes that may improve performance of one or more
queries.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When set ON, the index advisor prints index recommendations as part of the query plan or as a separate
message in the message log file, if query plans are not enabled. These messages begin with the string “Index
Advisor:” and you can use that string to search and filter them from a message file. The output is in
OWNER.TABLE.COLUMN format.
Note
When INDEX_ADVISOR_MAX_ROWS is set ON, index advice will not be written to the message file as
separate messages. Advice will, however, continue to be displayed on query plans in the message file.
The index advisor produces advice for the following situations (the advice message is shown after each):
● Local predicates on a single column where an HG, HNG, DATE, TIME, or DATETIME index would be desirable: "Add an <index-type> index to column col."
● Single-column join keys where an HG index would be useful: "Add an HG index to join key col."
● Single-column candidate key indexes where an HG index exists, but could be changed to a unique HG index: "Change join key col to a unique HG index."
● Join keys with mismatched data types, where regenerating one column with a matched data type would be beneficial: "Make join keys col1 and col2 identical data types."
● Subquery predicate columns where an HG index would be useful: "Add an HG index to subquery column col."
● Grouping columns where an HG index would be useful: "Create an HG index on grouping column col."
● Single-table intercolumn comparisons where the two columns have identical data types and a CMP index is recommended: "Create a CMP index on col1, col2."
● Columns where an HG index exists, and the number of distinct values allows converting the FP to a 1- or 2-byte FP index: "Use the sp_iqrebuildindex stored procedure to rebuild col as Nbit."
● Very large tables joined with an expensive join algorithm: "Consider either hash partitioning table <tablename>, or tables <tablename1> and <tablename2>."
It is up to you to decide how many queries benefit from the additional index and whether it is worth the expense
to create and maintain the indexes. In some cases, you cannot determine how much, if any, performance
improvement results from adding the recommended index.
For example, consider columns used as a join key. SAP IQ uses metadata provided by HG indexes extensively to
generate better/faster query plans to execute the query. Putting an HG index on a join column without one
makes the IQ optimizer far more likely to choose a faster join plan, but without adding the index and running
the query again, it is very hard to determine whether query performance stays the same or improves with the
new index.
Examples
Note
This method accumulates index advisor information for multiple queries, so that advice for several queries
can be tracked over time in a central location.
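A minimal sketch of enabling the advisor for the current connection and capturing advice for one query; the table and column names are illustrative, not from the original example:

SET TEMPORARY OPTION INDEX_ADVISOR = 'ON';
SELECT SUM(Quantity) FROM SalesOrderItems GROUP BY ProductID;

Advice lines beginning with "Index Advisor:" then appear in the query plan or in the message log.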
Related Information
Sets the maximum number of unique advice messages stored by the index advisor to max_rows.
Allowed Values
0 to 4,294,967,295
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
Setting the option to 0 (the default) disables the collection of index advice.
INDEX_ADVISOR_MAX_ROWS limits the number of messages stored by the index advisor. Once the specified
limit has been reached, the INDEX_ADVISOR will not store new advice. It will, however, continue to update
counts and timestamps for existing advice messages.
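For example, to allow the advisor to store up to 500 unique advice messages database-wide (the limit shown is illustrative):

SET OPTION PUBLIC.INDEX_ADVISOR_MAX_ROWS = 500;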
Related Information
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The SAP IQ optimizer normally chooses the best index available to process local WHERE clause predicates and
other operations that can be done within an IQ index. INDEX_PREFERENCE is used to override the optimizer
choice for testing purposes; under most circumstances, it should not be changed.
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Default
0
Scope
Remarks
The point-in-time recovery archive increases in size with each update sequence. The default setting of 0 allows
the archive to increase without limit. Setting IQ_LOG_MAX_SIZE to a specific value limits the size of the
archive:
SET OPTION PUBLIC.IQ_LOG_MAX_SIZE = '1000' -- sets the max size of the archive to 1000 MB
If the archive exceeds the size limit, the server creates a new archive in the point-in-time recovery archive
directory. Size limits are expressed in megabytes.
Related Information
Default
● Databases smaller than 1 TB – the default threshold size is 10 percent of dbspace size or 2 GB,
whichever is greater.
● Larger databases – the threshold size is 1 percent of dbspace size.
Scope
Remarks
To turn off threshold processing, set threshold size to 1 MB (the minimum reserved dbspace size is 200 MB).
Related Information
Sets a time interval between automatic backups of the point in time recovery logs.
Allowed Values
0 to <num> minutes
Default
Scope
Remarks
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
Remarks
2. Use ALTER DBSPACE to identify the directory where you want to archive the recovery logs:
ALTER DBSPACE IQ_SYSTEM_LOG RENAME '<new-directory-specification>'
By default, SAP IQ saves the point-in-time recovery log in the same directory as the .db file. This command
saves the log in another directory; the new-directory-specification must point to an existing
directory. For multiplex servers, the IQ_SYSTEM_LOG directory must reside on a shared file system and be
writable by all multiplex nodes. For other constraints, see Redirecting Log Output.
3. Perform the backup:
Point-in-time recovery logging only becomes fully enabled when the data backup begins. On multiplex servers,
all writers must be shut down during first data backup to enable PITR logging on the coordinator. After the
backup is complete, the administrator must synchronize all writers before starting them up.
Setting this option to OFF disables point-in-time recovery immediately. To re-enable point-in-time recovery, you
must complete all steps in this procedure, including a FULL, INCREMENTAL, or INCREMENTAL SINCE FULL
backup.
Point-in-time recovery logging will be disabled during the following multiplex configuration changes:
● Multiplex failover
● Simplex to multiplex conversion
● Multiplex to simplex conversion
To re-enable point-in-time recovery after multiplex configuration changes, shut down all secondary nodes.
Allow some time to let the coordinator release all global free list allocations, then shut down the coordinator.
Restart the coordinator or new failover coordinator, then use the ALTER DBSPACE IQ_SYSTEM_LOG RENAME
command to rename the point-in-time recovery log. Perform a full data backup. Synchronize and restart all
secondary nodes.
Related Information
Controls the divisor for allocation of space from IQ_SYSTEM_MAIN dbspace for use by a multiplex writer.
Allowed Values
1 to 99
Default
16
Remarks
IQ_SYSTEM_MAIN_ALLOCATION_RATIO is a divisor for adjusting the requests from multiplex writers to the
multiplex coordinator when allocating IQ_SYSTEM_MAIN space for use by a writer. The default is 16, meaning
that if the requested amount of space from the writer is <B> blocks, then SAP IQ allocates <B>/16 blocks. This
behavior contrasts with requests for other main store dbspaces, where SAP IQ allocates the full <B> blocks.
The option lets you conserve IQ_SYSTEM_MAIN space, because the space is a critical resource and the default
allocation suitable for other dbspaces would be too large.
Related Information
Controls the amount of space reserved in IQ_SYSTEM_MAIN that is kept free for recovery operations.
Allowed Values
Default
1 percent
Remarks
Related Information
Allowed Values
1 to 3
Default
2
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Limits the allowed IQGOVERN_PRIORITY setting, which affects the order in which a user’s queries are queued
for execution. In the range of allowed values, 1 indicates high priority, 2 (the default) medium priority, and 3 low
priority. SAP IQ returns an error if a user sets IQGOVERN_PRIORITY higher than IQGOVERN_MAX_PRIORITY.
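For example, to cap a user at medium priority so that user cannot assign high priority to queries (the user name is illustrative):

SET OPTION userB.IQGOVERN_MAX_PRIORITY = 2;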
Related Information
Allowed Values
1 to 3
Default
2
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Assigns a value that determines the order in which a user’s queries are queued for execution. In the range of
allowed values, 1 indicates high priority, 2 (the default) medium priority, and 3 low priority. This switch can be
set temporary per user or public by any user. Queries with a lower priority will not run until all higher priority
queries have executed.
This option is limited by the per user or per group value of the option IQGOVERN_MAX_PRIORITY. It cannot be
set to a value higher than the current user's IQGOVERN_MAX_PRIORITY value. For example, if the
IQGOVERN_MAX_PRIORITY for the current user (userA) is 2, userA cannot set the IQGOVERN_PRIORITY value
for userB to 1, since that is higher than 2. UserA can only complete this task, if IQGOVERN_MAX_PRIORITY
value for userA is first increased to 1.
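The scenario above can be sketched as follows (user names are illustrative); the first statement, issued by a suitably privileged administrator, raises userA's own ceiling, after which userA can assign high priority to userB:

SET OPTION userA.IQGOVERN_MAX_PRIORITY = 1;
SET OPTION userB.IQGOVERN_PRIORITY = 1;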
Related Information
Limits the time a high priority query waits in the queue before starting.
Allowed Values
0 to 1,000,000 (seconds)
Default
0 (disabled)
Scope
Remarks
Limits the time a high priority (priority 1) query waits in the queue before starting. When the limit is reached,
the query is started even if it exceeds the number of queries allowed by the -iqgovern setting. The range is
from 1 to 1,000,000 seconds. The default (0) disables this feature. IQGOVERN_PRIORITY_TIME must be set
PUBLIC.
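For example, to start any priority-1 query that has waited five minutes in the queue (the interval shown is illustrative; the option must be set PUBLIC):

SET OPTION PUBLIC.IQGOVERN_PRIORITY_TIME = 300;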
Related Information
Allowed Values
Default
● 0
● 1 for Open Client and JDBC connections
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
ISOLATION_LEVEL determines the isolation level for tables in the catalog store. SAP IQ always enforces level 3
for tables in the IQ store. Level 3 is equivalent to ANSI level 4.
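For example, to request isolation level 3 for catalog-store tables for the current connection only:

SET TEMPORARY OPTION ISOLATION_LEVEL = 3;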
Related Information
Allowed Values
String
Empty string
Scope
Remarks
By default, this option contains an empty string. In this case, the database server searches the JAVA_HOME
environment variable, the path, and other locations for the Java VM.
Related Information
Specifies command line options that the database server uses when it launches the Java VM.
Allowed Values
String
Default
Empty string
Remarks
JAVA_VM_OPTIONS specifies options that the database server uses when launching the Java VM specified by
the JAVA_LOCATION option. These additional options can be used to set up the Java VM for debugging
purposes or to run as a service on UNIX platforms. In some cases, additional options are required to use the
Java VM in 64-bit mode instead of 32-bit mode.
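A sketch of passing a standard JVM flag when the database server launches the Java VM; the heap size shown is illustrative:

SET OPTION PUBLIC.JAVA_VM_OPTIONS = '-Xmx512m';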
Related Information
Controls how conservative the optimizer’s join result estimates are in unusually complex situations.
Allowed Values
0 to 100
Default
30
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option controls how conservative the join optimizer’s result size estimates are in situations where an input
to a specific join has already passed through at least one intermediate join that can result in multiple copies of
rows projected from the table being joined.
A level of zero indicates that the optimizer should use the same estimation method above intermediate
expanding joins as it would if there were no intermediate expanding joins.
This results in the most aggressive (small) join result size estimates.
A level of 100 indicates that the optimizer should be much more conservative in its estimates whenever there
are intermediate expanding joins, and this results in the most conservative (large) join result size estimates.
Normally, you should not need to change this value. If you do, set JOIN_EXPANSION_FACTOR as a temporary
or user option.
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When JOIN_OPTIMIZATION is ON, SAP IQ optimizes the join order to reduce the size of intermediate results
and sorts, and to balance the system load. When the option is OFF, the join order is determined by the order of
the tables in the FROM clause of the SELECT statement.
JOIN_OPTIMIZATION controls the order of the joins, but not the order of the tables. To show the distinction,
consider this example FROM clause with four tables:
FROM A, B, C, D
By default, this FROM clause creates a left deep plan of joins that could also be explicitly represented as:
FROM ((A JOIN B) JOIN C) JOIN D
If JOIN_OPTIMIZATION is turned OFF, then the order of these joins on the sets of tables is kept precisely as
specified in the FROM clause. Thus A and B must be joined first, then that result must be joined to table C, and
then finally joined to table D. This option does not control the left/right orientation at each join. Even with
JOIN_OPTIMIZATION turned OFF, the optimizer, when given the above FROM clause, can produce a join plan
that looks like one of the following:
FROM ((B JOIN A) JOIN C) JOIN D
FROM D JOIN (C JOIN (A JOIN B))
In all of these cases, A and B are joined first, then that result is joined to C, and finally that result is joined to
table D. The order of the joins remains the same, but the order of the tables appears different.
FROM (A JOIN B) JOIN (C JOIN D)
Note that the above FROM clause is a different join order than the original example FROM clause, even though
all the tables appear in the same order.
JOIN_OPTIMIZATION should be set to OFF only to diagnose obscure join performance issues or to manually
optimize a small number of predefined queries. With JOIN_OPTIMIZATION turned OFF, queries can join up to
128 tables, but might also suffer serious performance degradation.
Caution
If you turn off JOIN_OPTIMIZATION, SAP IQ has no way to ensure optimal performance for queries
containing joins. You assume full responsibility for performance aspects of your queries.
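A sketch of disabling join optimization for one connection while hand-tuning a predefined query; the table and column names are illustrative:

SET TEMPORARY OPTION JOIN_OPTIMIZATION = 'OFF';
SELECT *
FROM A, B, C, D
WHERE A.id = B.id AND B.id = C.id AND C.id = D.id;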
Related Information
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
For joins within a query, the SAP IQ optimizer has a choice of several algorithms for processing the join.
JOIN_PREFERENCE allows you to override the optimizer’s cost-based decision when choosing the algorithm to
use. It does not override internal rules that determine whether an algorithm is legal within the query engine.
This option is normally used for internal testing or tuning of report queries, and only experienced DBAs should
use it.
Simple equality join predicates can be tagged with a predicate hint that allows a join preference to be specified
for just that one join. If the same join has more than one join condition with a local join preference, and if those
hints are not the same value, then all local preferences are ignored for that join. Local join preferences do not
affect the join order chosen by the optimizer.
Related Information
Controls the minimum number of tables being joined together before any join optimizer simplifications are
applied.
Allowed Values
1 to 24
Default
12
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
Remarks
The query optimizer simplifies its optimization of join order by separate handling of both lookup tables (that is,
nonselective dimension tables) and tables that are effective Cartesian products. After simplification, it
optimizes the remaining tables for join order, up to the limit set by MAX_JOIN_ENUMERATION.
Setting this option to a value greater than the current value for MAX_JOIN_ENUMERATION has no effect.
Setting this value below the value for MAX_JOIN_ENUMERATION might improve the time required to optimize
queries containing many joins, but may also prevent the optimizer from finding the best possible join plan.
If you change this value, set the JOIN_SIMPLIFICATION_THRESHOLD as a temporary or user option, and to a
value of at least 9.
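Following the guidance above, a temporary setting at the recommended minimum:

SET TEMPORARY OPTION JOIN_SIMPLIFICATION_THRESHOLD = 9;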
Related Information
Allowed Values
1 to 8
Default
4
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
LF_BITMAP_CACHE_KB defines the amount of heap memory (in KB) per distinct value used during a load into
an LF index. The default allots 4KB. If the sum of the distinct counts for all LF indexes on a particular table is
relatively high (greater than 10,000), then heap memory use might increase to the point of impacting load
performance due to system page faulting. If this is the case, reduce the value of LF_BITMAP_CACHE_KB.
This formula shows how to calculate the heap memory used (in bytes) by a particular LF index during a load:
LF_BITMAP_CACHE_KB * 1024 * <number of distinct values in the LF index>
Using the default of 4 KB, an LF index with 1000 distinct values can use up to 4 MB of heap memory during a
load.
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option controls load behavior when both of the following conditions are met:
● a zero-length data value is inserted into a column of data type CHAR, VARCHAR, LONG VARCHAR, BINARY,
VARBINARY, or LONG BINARY, and
● a NULL column-spec (for example, NULL(ZEROS) or NULL(BLANKS)) is also given for that same column.
Set LOAD_ZEROLENGTH_ASNULL ON to load a zero-length value as NULL when the above conditions are met.
Set LOAD_ZEROLENGTH_ASNULL OFF to load a zero-length value as zero-length, subject to the setting of option
NON_ANSI_NULL_VARCHAR.
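A sketch of a load that meets both conditions; the table, column, and file names are illustrative, and a real LOAD TABLE statement typically carries additional format clauses:

SET TEMPORARY OPTION LOAD_ZEROLENGTH_ASNULL = 'ON';
LOAD TABLE Customers ( Name NULL(BLANKS) )
FROM '/tmp/customers.dat';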
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
Remarks
When this option is ON, a message appears in the IQ message log (.iqmsg file) every time a user connects to
or disconnects from the SAP IQ database.
Note
If this option is set OFF (connection logging disabled) when a user connects, and then turned on before the
user disconnects, the message log shows that user disconnecting but not connecting.
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When this option is ON, a message appears in the IQ message log every time you open or close a cursor.
Normally this option should be OFF, which is the default. Turn it ON only if you are having a problem and must
provide debugging data to Technical Support.
Related Information
Allowed values
ON, OFF
Default
OFF
Scope
Remarks
When this option is set to ON, the database server logs information about deadlocks in an internal buffer. The
size of the buffer is fixed at 10000 bytes. You can view the deadlock information using the sa_report_deadlocks
stored procedure. The contents of the buffer are retained when this option is set to OFF.
When deadlock occurs, information is reported for only those connections involved in the deadlock. The order
in which connections are reported is based on which connection is waiting for which row. For thread deadlocks,
information is reported about all connections.
When you have deadlock reporting turned on, you can also use the Deadlock system event to take action when
a deadlock occurs.
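For example, to enable deadlock logging and then inspect the buffer:

SET OPTION PUBLIC.LOG_DEADLOCKS = 'ON';
CALL sa_report_deadlocks();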
Related Information
Controls the use of standard, integrated, Kerberos, LDAP, and PAM logins for the database.
Allowed Values
● Standard – the default setting, which does not permit integrated logins. An error occurs if an integrated
login connection is attempted.
● Mixed – allows both integrated logins and standard logins.
● Integrated – all logins to the database must be made using integrated logins.
● Kerberos – all logins to the database must be made using Kerberos logins.
● LDAPUA – all logins to the database must be made using LDAP logins.
● PAMUA – all logins to the database must be made using PAM logins.
Default
Standard
Scope
Remarks
Values are case-insensitive. Specify values in a comma-separated list without white space.
Caution
● Restricting the LOGIN_MODE to a single mode in a mixed environment (for example, integrated only or
LDAPUA only) restricts connections to only those users who have been granted the corresponding
login mapping. Attempting to connect using other methods generates an error. The only exceptions to
this are users with full administrative rights (SYS_AUTH_DBA_ROLE or SYS_AUTH_SSO_ROLE).
● Restricting the LOGIN_MODE to LDAPUA only may result in a configuration where no users can connect
to the server if no user or login policy exists that permits LDAPUA. Use the command line switch -al
<user-id-list> with the start_iq utility to recover from this situation.
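For example, to allow both standard and LDAP logins (note the comma-separated list without white space):

SET OPTION PUBLIC.LOGIN_MODE = 'Standard,LDAPUA';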
Related Information
Allowed Values
String
Default
sp_login_environment
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SECURITY OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The initial connection compatibility options settings are controlled using the LOGIN_PROCEDURE option, which
is called after all the checks have been performed to verify that the connection is valid. The LOGIN_PROCEDURE
option names a stored procedure to run when users connect. The default setting is to use the
sp_login_environment system stored procedure. You can specify a different stored procedure. The
procedure specified by the LOGIN_PROCEDURE option is not executed for event connections.
The sp_login_environment procedure checks to see if the connection is being made over TDS. If the
connection is made over TDS, sp_login_environment calls the sp_tsql_environment procedure, which
sets several options to new default values for the current connection.
Related Information
Allowed Values
Default
200; SAP IQ actually reserves a maximum of 50 percent and a minimum of 1 percent of the last read-write file
in IQ_SYSTEM_MAIN
Scope
Remarks
MAIN_RESERVED_DBSPACE_MB controls the amount of space SAP IQ sets aside in the IQ main store for certain
small but critical data structures used during release savepoint, commit, and checkpoint operations. For a
production database, set this value between 200 MB and 1 GB, or at least 20 percent of IQ_SYSTEM_MAIN size.
The larger your IQ page size and number of concurrent connections, the more reserved space you need.
Reserved space size is calculated as a maximum of 50 percent and a minimum of 1 percent of the last read-
write file in IQ_SYSTEM_MAIN.
SAP IQ ignores the MAIN_RESERVED_DBSPACE_MB option if the actual dbspace size is less than twice the size
of the MAIN_RESERVED_DBSPACE_MB value. In dbspaces less than 100 MB (such as the demo database), half
the usable space may be reserved.
Related Information
Allowed Values
Integer
Default
100,000,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
MAX_CARTESIAN_RESULT limits the number of result rows from a query containing a Cartesian join (usually
the result of missing one or more join conditions when creating the query). If SAP IQ cannot find a query plan
for the Cartesian join with an estimated result under this limit, it rejects the query and returns an error. Setting
MAX_CARTESIAN_RESULT to 0 disables the check for the number of result rows of a Cartesian join.
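For example, to disable the Cartesian-result check for the current connection only:

SET TEMPORARY OPTION MAX_CARTESIAN_RESULT = 0;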
Related Information
Controls the maximum precision for numeric data sent to the client.
Allowed Values
0 to 126
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When SAP IQ performs its calculation, it promotes data types to an appropriate size that ensures accuracy. The
promoted data type might be larger in size than Open Client and some ODBC applications can handle correctly.
When MAX_CLIENT_NUMERIC_PRECISION is a nonzero value, SAP IQ checks that numeric result columns do
not exceed this value. If the result column is bigger than MAX_CLIENT_NUMERIC_PRECISION allows, and SAP
IQ cannot cast it to the specified precision, the query returns this error:
Data Exception - data type conversion is not possible %1
SQLCODE = -1001006
Note
In SAP SQL Anywhere, the maximum value supported for the numeric function is 255. If the precision of
the numeric function exceeds the maximum value supported, you see the error:
The result datatype for function '_funcname' exceeds the maximum
supported numeric precision of 255. Please set the proper value for precision
in numeric function, 'location'
Related Information
Controls the maximum scale for numeric data sent to the client.
Allowed Values
0 to 126
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When SAP IQ performs its calculation, it promotes data types to an appropriate scale and size that ensure
accuracy. The promoted data type might be larger than the original defined data size. You can set this option to
the scale you want for numeric results.
Multiplication, division, addition, subtraction, and aggregate functions can all have results that exceed the
maximum precision and scale.
For example, when a DECIMAL(88,2) is multiplied with a DECIMAL(59,2), the result could require a
DECIMAL(147,4). With MAX_CLIENT_NUMERIC_PRECISION of 126, only 126 digits are kept in the result. If
MAX_CLIENT_NUMERIC_SCALE is 4, the results are returned as a DECIMAL(126,4). If
MAX_CLIENT_NUMERIC_SCALE is 2, the results are returned as a DECIMAL(126,2). In both cases, there is a
possibility for overflow.
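Continuing the example above, a sketch of capping the returned scale at two digits for all users (adjust the value to your application's needs):

SET OPTION
PUBLIC.MAX_CLIENT_NUMERIC_SCALE = 2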
Sets the maximum number of rows that the IQ optimizer considers for a GROUP BY CUBE operation.
Allowed Values
0 to 4,294,967,295
Default
10,000,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When generating a query plan, the IQ optimizer estimates the total number of groups generated by the GROUP
BY CUBE hash operation. The IQ optimizer uses a hash algorithm for the GROUP BY CUBE operation. This
option sets an upper boundary for the number of estimated rows for which the hash algorithm can be run. If the estimated number of rows exceeds the MAX_CUBE_RESULT value, the optimizer stops and the query is rejected with an error.
Set MAX_CUBE_RESULT to zero to override the default value. When this option is set to zero, the IQ optimizer
does not check the row limit and allows the query to run. Setting MAX_CUBE_RESULT to zero is not
recommended, as the query might not succeed.
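As a sketch, a session that must run a larger GROUP BY CUBE could raise the limit temporarily rather than disabling the check entirely (the value shown is illustrative):

SET TEMPORARY OPTION MAX_CUBE_RESULT = 20000000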
Related Information
Specifies a resource governor to limit the maximum number of cursors that a connection can use at once.
Allowed Values
Integer
Default
50
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The specified resource governor allows a DBA to limit the number of cursors per connection that a user can
have. If an operation exceeds the limit for a connection, an error is generated indicating that the limit has been
exceeded.
If a connection executes a stored procedure, the procedure is executed under the permissions of the procedure
owner. However, the resources used by the procedure are assigned to the current connection.
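For example, to raise the per-connection cursor limit for a single user (a sketch; the user name and value are illustrative):

SET OPTION
wilson.MAX_CURSOR_COUNT = 200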
Related Information
Sets the maximum number of rows that the IQ optimizer considers for a hash algorithm.
Allowed Values
Default
2,500,000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When generating a query plan, the IQ optimizer might have several algorithms (hash, sort, indexed) to choose
from when processing a particular part of a query. These choices often depend on estimates of the number of
rows to process or generate from that part of the query. This option sets an upper boundary for how many
estimated rows are considered for a hash algorithm.
For example, if there is a join between two tables, and the estimated number of rows entering the join from both
tables exceeds the value of MAX_HASH_ROWS, the optimizer does not consider a hash join. On systems with
more than 50 MB per user of temporary buffer cache space, you might want to consider a higher value for this
option.
Use MAX_HASH_ROWS only as needed for joins; it may negatively affect parallelism. MAX_HASH_ROWS does not
apply to GROUP BY.
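On a system with ample temporary buffer cache per user, a temporary override for one session might look like this (a sketch; the value is illustrative):

SET TEMPORARY OPTION MAX_HASH_ROWS = 5000000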
Related Information
Allowed Values
3 to 10000
Default
144
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
Description
Allows you to constrain the number of threads (and thereby the amount of system resources) the commands
executed on a connection use. For most applications, use the default.
Related Information
Controls the number of threads allocated to perform a single operation (such as a LIKE predicate on a column)
executing within a connection.
Allowed Values
1 to 10000
Default
144
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
Remarks
Allows you to constrain the number of threads (and thereby the amount of system resources) allocated to a
single operation. The total for all simultaneously executing teams for this connection is limited by the related
option, MAX_IQ_THREADS_PER_CONNECTION. For most applications, use the default.
Related Information
Controls the maximum number of tables to be optimized for join order after optimizer simplifications have
been applied.
Allowed Values
1 to 32
Default
15
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Each FROM clause is limited to having at most 64 tables. In practice, however, the effective limit on the number
of tables in a FROM clause is usually much lower, and is based partially on the complexity of the join
relationships among those tables. That effective limit is constrained by the setting for
MAX_JOIN_ENUMERATION. The optimizer will attempt to simplify the set of join relationships within a FROM
clause. If those simplifications fail to reduce the set of the joins that must be simultaneously considered to no
more than the current setting for MAX_JOIN_ENUMERATION, then the query will return an error.
Caution
Setting MAX_JOIN_ENUMERATION above the default value of 15 should be done with caution, especially
for queries with bushy join relationships, which can cause the amount of time required by the
optimizer to increase dramatically. In queries that use only a linear chain of join relationships, a
MAX_JOIN_ENUMERATION setting of 32 can still provide reasonable optimization times.
The query optimizer simplifies its optimization of join order by separate handling of both lookup tables (that is,
nonselective dimension tables) and tables that are effective Cartesian products. After simplification, it
proceeds with optimizing the remaining tables for join order, up to the limit set by MAX_JOIN_ENUMERATION.
If this limit is exceeded, the query is rejected with an error. The user can then either simplify the query or try
increasing the limit.
Normally, you should not need to change this value. If you do, set MAX_JOIN_ENUMERATION as a temporary
or user option.
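Following the advice above to change this value only as a temporary or user option, a sketch for a session that must optimize a complex join (the value is illustrative):

SET TEMPORARY OPTION MAX_JOIN_ENUMERATION = 20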
Related Information
Sets an upper bound, expressed in megabytes, on the amount of temporary cache space that the optimizer
can assume will be available for hash-partitioned hash-based query operators.
Allowed Values
Default
Scope
Can be set temporary for an individual connection, for a user, or for the PUBLIC group. No system privilege
required to set this option. This option takes effect immediately.
Description
When generating a query plan, the IQ optimizer might choose from several algorithms when processing a
particular part of a query. These decisions often depend on estimates of the temp cache space that will be
required to process that part of the query and on the currently available temp cache. This option sets an upper
bound on estimated temporary space usage for operators that can consider a hash-partitioned hash-based
algorithm.
The default value of 0 indicates that there is no hard upper bound; the optimizer's choice is therefore
limited only by the current temp cache availability, the current number of active user connections, and the
HASH_PINNABLE_PERCENT option setting.
Note that this option affects only the optimizer's algorithm selection decisions; under some circumstances,
run-time usage may occasionally exceed this limit.
Related Information
Allowed Values
0 to 300
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Users must be licensed for the Unstructured Data Analytics Option to use TEXT indexes and perform full text
searches.
Related Information
Sets an upper bound for parallel execution of GROUP BY operations and for the arms of a UNION.
Allowed Values
Default
64
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This parameter sets an upper bound on the degree of parallelism the optimizer permits for query operators.
This can influence the CPU usage of join, GROUP BY, UNION, ORDER BY, and other query
operators.
Systems with more than 64 CPU cores often benefit from a larger value, up to the total number of CPU cores on
the system to a maximum of 512; you can experiment to find the best value for this parameter for your system
and queries.
Systems with 64 or fewer CPU cores should not need to reduce this value, unless excessive system time is
seen. In that case, you can try reducing this value to determine if that adjustment can lower the CPU system
time and improve query response times and overall system throughput.
Related Information
Sets a time limit so that the optimizer can disallow very long queries.
Allowed Values
0 to 2^32 – 1 (minutes)
Default
0 (disabled)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If the query runs longer than the MAX_QUERY_TIME setting, SAP IQ stops the query and sends a message to
the user and to the IQ message file.
MAX_QUERY_TIME applies only to queries and not to any SQL statement that is modifying the contents of the
database.
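For example, to stop any query from user wilson that runs longer than 60 minutes (a sketch; the user name and value are illustrative):

SET OPTION
wilson.MAX_QUERY_TIME = 60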
Limits the volume of RLV-enabled table data that can be transferred from the RLV store when a statement is
executed on another node.
Allowed Values
Default
1000
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege. Can be set temporary for an individual
connection or for the PUBLIC role. Takes effect immediately.
Remarks
In a Multiplex, all access to RLV-enabled tables is performed on the RLV store node. Queries running on other
nodes need to forward the fragment of the query involving RLV-enabled tables to the RLV store node to retrieve
data. The MAX_RLV_REMOTE_TRANSFER_MB option limits the volume of this transfer. When a query
plan estimates that the total volume of data to be transferred for all RLV-enabled tables within the query exceeds
the limit set by this option, the query is rejected with an error before execution begins.
Related Information
Specifies a resource governor to limit the maximum number of prepared statements that a connection can use
at once.
Allowed Values
Integer
Default
100
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The specified resource governor allows a DBA to limit the number of prepared statements per connection that
a user can have. If an operation exceeds the limit for a connection, an error is generated indicating that the limit
has been exceeded.
If a connection executes a stored procedure, the procedure is executed under the permissions of the procedure
owner. However, the resources used by the procedure are assigned to the current connection.
Related Information
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
By controlling space per connection, this option enables DBAs to manage the space for both loads and queries.
If the connection exceeds the run time quota specified by MAX_TEMP_SPACE_PER_CONNECTION, SAP IQ rolls
back the current statement and returns this message to the IQ message file or client user:
The current operation has been canceled: Max_Temp_Space_Per_Connection exceeded
Conditions that may fill the buffer cache include read or write errors, lack of main or temp space, or being out
of memory. SAP IQ may return the first error encountered in these situations and the DBA must determine the
appropriate solution.
In a distributed query processing transaction, SAP IQ uses the values set for the QUERY_TEMP_SPACE_LIMIT
and MAX_TEMP_SPACE_PER_CONNECTION options for the shared temporary store by limiting the total shared
and local temporary space used by all nodes participating in the distributed query. This means that any single
query cannot exceed the total temporary space limit (from IQ_SYSTEM_TEMP and IQ_SHARED_TEMP
dbspaces), no matter how many nodes participate.
For example, if the limit is 100 and four nodes use 25 units of temporary space each, the query is within limits.
If the sum of the total space used by any of the nodes exceeds 100, however, the query rolls back.
Examples
Example 1
Set a 500 GB limit for all connections:
SET OPTION
PUBLIC.MAX_TEMP_SPACE_PER_CONNECTION = 512000
Example 2
Set a 10 TB limit for all connections:
SET OPTION
PUBLIC.MAX_TEMP_SPACE_PER_CONNECTION = 10485760
Example 3
Set a 5000 MB limit for user wilson:
SET OPTION
wilson.MAX_TEMP_SPACE_PER_CONNECTION = 5000
Related Information
Controls the number of decimal places displayed for division operations on constant numeric values.
Allowed Values
0 to 125
Default
Scope
Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
By default, queries on IQ main store tables use a six-digit scale for division operations on constant numbers.
Since the IQ catalog store uses the same scale, there is minimal impact when comparing values between
stores, particularly when results are passed to numeric functions such as POWER() and EXP().
Allowed Values
Integer
Default
Scope
Remarks
This option allows the database administrator to impose a minimum length on all new passwords for greater
security. Existing passwords are not affected. Passwords have a maximum length of 255 bytes and are case
sensitive.
Example
Related Information
Allowed Values
1 to 10
Default
Scope
Remarks
This option sets the minimum number of required administrators for all roles. This value applies to the
minimum number of role administrators for each role, not the minimum number of role administrators for the
total number of roles. When dropping roles or users, this value ensures that you never create a scenario in
which no users or roles are left with sufficient system privilege to manage the remaining users and roles.
Related Information
Minimizes use of disk space for newly created columns in SAP IQ 15 databases.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Dependencies
Remarks
When the ratio of main memory to the number of columns is large, setting MINIMIZE_STORAGE to ON is
beneficial. Otherwise, storage of new columns generally benefits from leaving this option OFF.
Note
Avoid running a database when FP_NBIT_IQ15_COMPATIBILITY is set to ON. All SAP IQ 15 runtime
behavior is available with the SAP IQ 16.1 interface.
Related Information
Allowed Values
String
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
MONITOR_OUTPUT_DIRECTORY controls the directory in which the IQ monitor output files are created,
regardless of what is being monitored or what monitor mode is used. The dummy table used to start the
monitor can be either a temporary or a permanent table. The directory can be on any physical machine.
All monitor output files are used for the duration of the monitor runs, which cannot exceed the lifetime of the
connection. The output file still exists after the monitor run stops. A connection can run up to two performance
monitors simultaneously, one for main buffer cache and one for temp buffer cache. A connection can run a
monitor any number of times, successively.
The DBA can use the PUBLIC setting to place all monitor output in the same directory, or set different
directories for individual users.
Example
This example shows how you could declare a temporary table for monitor output, set its location, and then
have the monitor start sending files to that location for the main and temp buffer caches.
In this example, the output directory string is set to both "/tmp" and "/tmp/". The trailing slash ("/") is correct
and is supported by the interface. The example illustrates that the buffer cache monitor does not require a
permanent table; a temporary table can be used.
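The example statements themselves did not survive in this text; the following is a minimal sketch of the flow described, assuming the IQ UTILITIES ... START MONITOR statement form. Treat the exact monitor options (such as -interval) as assumptions to verify in the SAP IQ Utility Reference:

DECLARE LOCAL TEMPORARY TABLE dummy_monitor ( c1 INT )
SET TEMPORARY OPTION MONITOR_OUTPUT_DIRECTORY = '/tmp'
IQ UTILITIES MAIN INTO dummy_monitor START MONITOR '-interval 10'
IQ UTILITIES PRIVATE INTO dummy_monitor START MONITOR '-interval 10'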
Related Information
Timeout for autoexcluding a secondary node on the coordinator node. This option does not apply to the
designated failover node.
Allowed Values
Default
60 (minutes)
Scope
Remarks
0 indicates that the nodes are not autoexcluded. Values must be exactly divisible by the
MPX_HEARTBEAT_FREQUENCY setting in minutes. For example, if the MPX_HEARTBEAT_FREQUENCY setting is
120 (2 minutes), MPX_AUTOEXCLUDE_TIMEOUT must be divisible by 2.
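A consistent pair of settings that follows the divisibility rule above (a sketch; the values are illustrative):

SET OPTION PUBLIC.MPX_HEARTBEAT_FREQUENCY = 120
SET OPTION PUBLIC.MPX_AUTOEXCLUDE_TIMEOUT = 60

Here the heartbeat frequency of 120 seconds is 2 minutes, and the 60-minute timeout is divisible by 2.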
Related Information
Controls the state of initialization of the network interface during SAP IQ server startup. This option is set on an
MPX coordinator or simplex node.
Allowed Values
OFF
Default
OFF
Note
Scope
● To set this option, you require DBA permissions. It can be set only for the PUBLIC group.
● Once set, the option takes effect after a server reboot (secondary servers need to be synced and restarted
as well).
Remarks
Support for Shared-Nothing multiplex has been removed in this release. For details, see Shared-Nothing
Multiplex (Removed).
Enables global transaction resiliency functionality on the coordinator. Global transaction resiliency allows DML
read-write transactions on writers to survive temporary communication failures between coordinator and
writer and temporary failure of coordinator due to server failure, shutdown, or failover.
Allowed Values
ON, OFF
Default
ON
Scope
Related Information
Interval until the heartbeat thread wakes and performs periodic operations, such as checking for coordinator
connectivity and cleaning up the connection pool on the secondary node. The heartbeat thread maintains a
dedicated internal connection from secondary server to coordinator.
Allowed Values
2 to 3600 (seconds)
Default
60 (seconds)
Scope
Related Information
Time after which an unused connection in the connection pool on a secondary node will be closed.
Allowed Values
0 – no limit (seconds)
Default
600 (seconds)
Scope
Related Information
Time, in seconds, before a heartbeat on a secondary server declares the coordinator offline if the heartbeat
fails to reconnect to the coordinator after the first disconnect. This option also determines how long the
coordinator keeps a global transaction in a suspended state.
Allowed Values
Default
Scope
● This option affects all multiplex nodes and has no node-specific or connection-specific value. Option can
be set at the database (PUBLIC) level only.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. If you change the value of
MPX_LIVENESS_TIMEOUT on a running server, the new value takes effect immediately for connections that
might suspend in the future. The changed value also immediately affects the remaining timeout period for
all current suspended transactions.
Remarks
If a writer fails to resume a suspended transaction within the MPX_LIVENESS_TIMEOUT period, the transaction
can no longer commit, and the user should roll back the transaction. The coordinator keeps a global
transaction in a suspended state for a period of 2 * MPX_LIVENESS_TIMEOUT. If the corresponding writer fails
to resume the transaction before the 2 * MPX_LIVENESS_TIMEOUT period, the coordinator rolls back the
suspended transaction.
Related Information
Allowed Values
1 to 1000
Default
10
Scope
Remarks
INC connections are inter-server connections between secondary nodes and the coordinator node. An INC
connection is associated with each user connection on a secondary server doing a DDL or read-write
operation. The connection is active until that command commits or rolls back; it then returns to the pool. If
these transactions are short lived, then the default setting of MPX_MAX_CONNECTION_POOL_SIZE suffices for
many user connections running DDL or RW operations. If many concurrent connections run DDL or read-write
operations, or the transactions take a long time, increase the value of MPX_MAX_CONNECTION_POOL_SIZE. For
example, increase the value when many user connections do concurrent loads without committing.
To estimate the pool size required, consider the setting of the -gm server option. The -gm setting indicates how
many users can connect to the secondary server; the INC connections are not included, but will add to this
number. Use application requirements to assess how many read-write or DDL operations are likely to occur per
user, and increase the pool size accordingly.
Each connection (INC or user) carries a memory overhead that depends on the -gn setting and the number of cores.
The burden of memory and thread contention may affect SAP IQ server response times.
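For example, a coordinator serving many concurrent read-write connections might raise the pool size (a sketch; the value is illustrative):

SET OPTION
PUBLIC.MPX_MAX_CONNECTION_POOL_SIZE = 40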
Related Information
Allowed Values
Default
Scope
Related Information
Specifies the timeout, in seconds, for MIPC (multiplex interprocess communication) calls.
Allowed Values
0 to 4,294,967,296 (seconds)
Default
180 (seconds)
Scope
Option can be set at the database (PUBLIC) level only. Requires the SET ANY PUBLIC OPTION privilege to set
this option. Takes effect immediately.
Remarks
During a multiplex query, a node sends an MIPC request to another node and waits for results. The
MPX_MIPC_TIMEOUT option prevents indefinite waiting in the event of a network failure or target node deadlock.
MIPC requests that exceed the timeout threshold cancel the query with an error.
Related Information
Time, in seconds, before a multiplex DQP leader reassigns incomplete distributed work to another DQP worker
node.
Allowed Values
0 to 4,294,967,296 (seconds)
Default
600 (seconds)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Pending DQP work units are monitored by the leader node to prevent indefinite waiting in the event of a node
failure or network failure. In many cases, queries will silently reassign work units that exceed the timeout value
to the leader. However, some plans do not support this reassignment and will cancel the query with an error.
Typically you do not need to change this option from its default value. However, increase this option in rare
cases where a query has very large intermediate results that cause individual work units to time out.
Decrease this option if unreliable networks or servers cause distributed work to be lost and the timeout interval
is unacceptably long. Note that setting this option too low can cause unnecessary early timeouts.
Related Information
Allowed Values
0 to 100
Default
50
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
NEAREST_CENTURY controls the handling of two-digit years, when converting from strings to dates or
timestamps.
The NEAREST_CENTURY setting is a numeric value that acts as a rollover point. Two-digit years less than the
value are converted to 20<yy>, whereas years greater than or equal to the value are converted to 19<yy>.
SAP Adaptive Server Enterprise and SAP IQ behavior is to use the nearest century, so that if the year value
<yy> is less than 50, then the year is set to 20<yy>.
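For example, with the default setting of 50, the rollover behaves as follows (a sketch derived from the rule above):

SET OPTION PUBLIC.NEAREST_CENTURY = 50
-- A two-digit year of '49' converts to 2049; '50' converts to 1950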
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When determining how to process a query, the IQ optimizer generates a query plan to map how it plans to have
the query engine process the query. If this option is set ON, the optimizer sends the plan for the query to the IQ
message file rather than submitting it to the query engine. NOEXEC affects queries and commands that include
a query.
When the EARLY_PREDICATE_EXECUTION option is ON, SAP IQ executes the local predicates for all queries
before generating a query plan, even when the NOEXEC option is ON. The generated query plan is the same as
the runtime plan.
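A typical pattern is to enable the option temporarily, run the query to capture its plan in the IQ message file, then disable it (a sketch; the Employees table is illustrative):

SET TEMPORARY OPTION NOEXEC = 'ON'
SELECT * FROM Employees
SET TEMPORARY OPTION NOEXEC = 'OFF'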
Controls whether zero-length VARCHAR data is treated as NULLs for insert, load, and update operations.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
NON_ANSI_NULL_VARCHAR lets you revert to non-ANSI (Version 12.03.1) behavior for treating zero-length
VARCHAR data during load or update operations. When this option is set to OFF, zero-length VARCHAR data is
stored as zero-length during load, insert, or update. When this option is set to ON, zero-length VARCHAR data is
stored as NULLs on load, insert, or update.
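A minimal sketch of the behavioral difference, assuming a hypothetical Products table with a VARCHAR description column:

```sql
SET TEMPORARY OPTION NON_ANSI_NULL_VARCHAR = 'ON';
-- With the option ON, the zero-length string is stored as NULL;
-- with the option OFF (the default), it is stored as a zero-length value
INSERT INTO Products ( description ) VALUES ( '' );
```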
Allowed Values
String
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
NON_KEYWORDS turns off individual keywords. If you have an identifier in your database that is now a keyword,
you can either add double quotes around the identifier in all applications or scripts, or you can turn off the
keyword using the NON_KEYWORDS option.
This statement prevents TRUNCATE and SYNCHRONIZE from being recognized as keywords:
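The statement itself was not reproduced here; based on the option's comma-separated list format, it presumably takes this form:

```sql
SET OPTION NON_KEYWORDS = 'TRUNCATE,SYNCHRONIZE';
```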
A side effect of this option is that SQL statements using a turned-off keyword cannot be used; they produce a
syntax error.
Related Information
Allowed Values
Any integer
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
This option sets the default frequency of the notify messages that SAP IQ issues for certain commands that
produce them. The NOTIFY clause of some commands (such as CREATE INDEX, LOAD TABLE, and DELETE)
overrides this value. Other commands that do not support the NOTIFY clause always use this value. The default
does not restrict the number of messages you can receive.
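For example, a sketch setting the database-wide default (the value shown is illustrative):

```sql
-- Applies to new users; commands with an explicit NOTIFY clause
-- (CREATE INDEX, LOAD TABLE, DELETE) still override this value
SET OPTION PUBLIC.NOTIFY_MODULUS = 100000;
```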
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
When a connection is opened, the SAP IQ ODBC driver uses the setting of this option to determine how CHAR
columns are described. If ODBC_DISTINGUISH_CHAR_AND_VARCHAR is set to OFF (the default), then CHAR
columns are described as SQL_VARCHAR. If this option is set to ON, then CHAR columns are described as
SQL_CHAR. VARCHAR columns are always described as SQL_VARCHAR.
Related Information
Allowed Values
Default
IGNORE
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
Single-byte to single-byte converters cannot report substitutions or illegal characters; for these converters,
this option must be set to IGNORE.
Related Information
Controls the action taken if an error is encountered while executing statements in Interactive SQL.
Allowed Values
● STOP – Interactive SQL stops executing statements from the file and returns to the statement window for
input.
● PROMPT – Interactive SQL prompts the user whether to continue.
● CONTINUE – errors appear in the Messages pane, and Interactive SQL continues executing statements.
● EXIT – Interactive SQL terminates.
● NOTIFY_CONTINUE – the error is reported, and the user is prompted to continue.
● NOTIFY_STOP – the error is reported, and the user is prompted to stop executing statements.
● NOTIFY_EXIT – the error is reported, and the user is prompted to terminate Interactive SQL.
Default
PROMPT
When you are executing a .SQL file, the values STOP and EXIT are equivalent.
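For example, to keep a script running past errors in Interactive SQL (a hedged sketch):

```sql
-- Errors appear in the Messages pane; execution continues
SET TEMPORARY OPTION ON_ERROR = 'CONTINUE';
```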
Related Information
Allowed Values
Default
CONDITIONAL
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
Both CONDITIONAL and CONTINUE settings for ON_TSQL_ERROR are used for SAP ASE compatibility, with
CONTINUE most closely simulating SAP ASE behavior. The CONDITIONAL setting is recommended,
particularly when developing new Transact-SQL stored procedures, as CONDITIONAL allows errors to be
reported earlier.
When this option is set to STOP or CONTINUE, it supersedes the setting of the CONTINUE_AFTER_RAISERROR
option. However, when this option is set to CONDITIONAL (the default), behavior following a RAISERROR
statement is determined by the setting of the CONTINUE_AFTER_RAISERROR option.
Related Information
Specifies a login procedure whose result set contains messages that are displayed by the client application
immediately after a user successfully logs in.
Allowed Values
String
Default
dbo.sa_post_login_procedure
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SECURITY OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The default post login procedure, dbo.sa_post_login_procedure, executes immediately after a user
successfully logs in.
If you have the SET ANY SECURITY OPTION system privilege, you can customize the post login actions by
creating a new procedure and setting POST_LOGIN_PROCEDURE to call the new procedure. Do not edit
dbo.sa_post_login_procedure. The customized post login procedure must be created in every database
you use.
The post login procedure supports the client applications Interactive SQL and Interactive SQL Classic.
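A hedged sketch of a customized post login procedure; the procedure name and message text are illustrative, and dbo.sa_post_login_procedure itself is left unedited:

```sql
CREATE PROCEDURE DBA.custom_post_login()
BEGIN
    -- The result set is displayed by the client immediately after login
    SELECT 'Scheduled maintenance tonight at 22:00' AS login_message;
END;

-- Requires the SET ANY SECURITY OPTION system privilege
SET OPTION PUBLIC.POST_LOGIN_PROCEDURE = 'DBA.custom_post_login';
```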
Related Information
Specifies the maximum number of digits in the result of any decimal arithmetic, for queries on the catalog
store only.
Allowed Values
1 to 127
Default
126
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Takes effect immediately.
Remarks
Precision is the total number of digits to the left and right of the decimal point. The default PRECISION value is
fixed at 126. The SCALE option specifies the minimum number of digits after the decimal point, when an
arithmetic result is truncated to the maximum specified by PRECISION, for queries on the catalog store.
Note
For IQ catalog store tables, the maximum supported numeric precision is 255. If the precision of a numeric
function result exceeds this maximum, you see the following error message:
The result datatype for function '_funcname' exceeds the maximum
supported numeric precision of 255. Please set the proper value for
precision in numeric function, 'location'
Related Information
Allows you to turn prefetching on or off, or to use the ALWAYS value to prefetch cursor results even for
SENSITIVE cursor types and for cursors that involve a proxy table.
Allowed Values
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
For the catalog store only, PREFETCH controls whether rows are fetched to the client side before being made
available to the client application. Fetching a number of rows at a time, even when the client application
requests rows one at a time (for example, when looping over the rows of a cursor), minimizes response time
and improves overall throughput by limiting the number of requests to the database.
The setting of PREFETCH is ignored by Open Client and JDBC connections, and for the IQ store.
Allowed Values
Integer
Default
Scope
Remarks
PREFETCH_BUFFER_LIMIT defines the number of cache pages available to SAP IQ for use in prefetching (the
read-ahead of database pages).
Related Information
Allowed Values
0 to 100
Default
40
Scope
Remarks
Related Information
Specifies the percent of prefetch resources for column data in all DML operations (insert, update, delete,
query).
Allowed Values
0 – 100
Default
50
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Specifies the percent of prefetch resources designated for performing all DML operations (insert, update,
delete, query) on HG indexes.
Allowed Values
0 – 100
Default
60
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Allowed Values
0 – 100
Default
20
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Specifies the percent of prefetch resources in queries involving the character large object data type or the
binary large object data type.
Allowed Values
0 – 100
Default
50
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Allowed Values
0 to 100
Default
20
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
PREFETCH_SORT_PERCENT designates a percentage of prefetch resources for use by a single sort object.
Increasing this value can improve the single-user performance of inserts and deletes, but may have detrimental
effects on multiuser operations.
Related Information
Specifies the percent of prefetch resources in queries that use CONTAINS on columns with text indexes.
Allowed Values
0 – 100
Default
50
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Related Information
Controls whether the original source definition of procedures, views, and event handlers is saved in the system
tables. If saved, the formatted source is stored in the source column of SYSTABLE, SYSPROCEDURE, and
SYSEVENT.
Allowed Values
ON, OFF
Default
ON
Scope
Remarks
When PRESERVE_SOURCE_FORMAT is ON, the server saves the formatted source from CREATE and ALTER
statements on procedures, views, and events, and puts the original source definition in the source column of the
appropriate system table.
Unformatted source text is stored in the same system tables, in the columns proc_defn, and view_defn. The
formatted source column allows you to view the definitions with the spacing, comments, and case that you
want.
This option can be turned off to reduce space used to save object definitions in the database. The option can be
set only for the PUBLIC role.
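For example, to reclaim the space used by formatted source (a sketch; the option can be set only for the PUBLIC role):

```sql
-- Reduce space used to save object definitions in the database
SET OPTION PUBLIC.PRESERVE_SOURCE_FORMAT = 'OFF';
-- The source column holds the formatted source when it is saved;
-- 'my_proc' is a hypothetical procedure name
SELECT proc_name, source FROM SYS.SYSPROCEDURE WHERE proc_name = 'my_proc';
```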
Related Information
Controls whether progress messages are sent from the database server to the client.
Allowed Values
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Raw progress messages have six fields separated by semicolons, for example:
43;9728;22230;pages;5025;6138
Formatted progress messages are localized, and the time format is HH:MM:SS. Units less than 100 KB are
displayed in bytes, units less than 100 MB are displayed in KB, and units greater than 100 MB are displayed in
MB.
Progress messages are sent at intervals that are 5% of the total estimated duration of the statement. Typically,
the estimate is completed and the first progress message is sent within 10 seconds. Additional progress
messages are sent in intervals of 30 seconds to 5 minutes. If the percentage complete is identical to the value
sent in a previous message, an updated progress message is not sent until more than 5 minutes have elapsed
since the last message was sent. Progress messages are not sent for statements that take less than 30
seconds to execute.
Estimates are recalculated continually; the accuracy of the remaining time estimate increases as the operation
progresses. During events such as backups, the total number of pages may be adjusted during statement
execution, so the percent complete and remaining time estimates change. With statements such as
BACKUP...WITH CHECKPOINT COPY or UNLOAD SELECT the total number of affected pages or rows is
unknown and it is possible for the percentage complete value to be greater than 100%. As a result, the
estimated remaining time cannot be calculated and it is not included in the progress message.
The following statements and procedures support progress messages on the IQ catalog row-store portion:
You can set the PROGRESS_MESSAGES option when you are connected to the utility database using the SET
OPTION statement.
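A hedged sketch; the RAW and FORMATTED values are inferred from the Remarks above:

```sql
-- Send formatted (localized, HH:MM:SS) progress messages to this client
SET TEMPORARY OPTION PROGRESS_MESSAGES = 'FORMATTED';
```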
Related Information
Specifies whether or not to include additional query information in the Query Detail section of the query plan.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
When QUERY_DETAIL and QUERY_PLAN (or QUERY_PLAN_AS_HTML) are both turned on, SAP IQ displays
additional information about the query when producing its query plan. When QUERY_PLAN and
QUERY_PLAN_AS_HTML are OFF, this option is ignored.
When QUERY_PLAN is ON, especially if QUERY_DETAIL is also ON, you might want to enable message log
wrapping or message log archiving to avoid filling up your message log file.
Related Information
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
You can assign the QUERY_NAME option any quote-delimited string value, up to 80 characters. When this
option is set, query plans sent to the .iqmsg file or .html file include a line near the top of the plan that
identifies the query by this name.
If you set the option to a different value before each query in a script, it is much easier to identify the correct
query plan for a particular query. The query name is also added to the file name for HTML query plans. This
option has no other effect on the query.
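A sketch of labeling a query in a script (the query itself is illustrative):

```sql
SET TEMPORARY OPTION QUERY_NAME = 'Query_1123';
-- The plan written for this query is labeled Query_1123, and the name
-- also becomes part of the HTML plan's file name
SELECT COUNT(*) FROM Employees;
```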
Related Information
Specifies whether or not additional query plans are printed to the SAP IQ message file.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When this option is turned ON, SAP IQ produces textual query plans in the IQ message file. These query plans
display the query tree topography, as well as details about optimization and execution. When this option is
turned OFF, those messages are suppressed. The information is sent to the <dbname>.iqmsg file.
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When QUERY_PLAN_AFTER_RUN is turned ON, the query plan is printed after the query has finished running.
This allows the query plan to include additional information, such as the actual number of rows passed on from
each node of the query.
For this option to work, the QUERY_PLAN option must be set to ON. You can use this option in conjunction with
QUERY_DETAIL to generate additional information in the query plan report.
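For example, a sketch enabling the full combination described above:

```sql
SET TEMPORARY OPTION QUERY_PLAN = 'ON';            -- required for QUERY_PLAN_AFTER_RUN
SET TEMPORARY OPTION QUERY_PLAN_AFTER_RUN = 'ON';  -- print the plan after the query finishes
SET TEMPORARY OPTION QUERY_DETAIL = 'ON';          -- optional extra detail in the plan
```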
Related Information
Generates graphical query plans in HTML format for viewing in a Web browser.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When you set this option, also set the QUERY_NAME option for each query, so you know which query is
associated with the query plan.
SAP IQ writes the plans in the same directory as the .iqmsg file. Query plan file names follow these
conventions: <user-name>_<query-name>_<server-type>_<server-
number>_<YYYYMMDD_HHMMSS>_<query-number>_<fragment-number>.html
For example, if the user DBA sets the temporary option QUERY_NAME to 'Query_1123', a file created on
November 8, 2012, at exactly 8:30 a.m. is called DBA_QUERY_1123_L_0__20121108_083000_4.html. The
date, time, and unique <query-number> appended to the file name ensure that existing files are not
overwritten. The <server-type> parameter indicates whether the plan originates from a leader (L) or worker
(W) node. The <server-number> identifies the server where the plan originated when all html files are routed
to a single directory.
On multiplex servers, worker nodes generate an html file for each fragment executed by the worker, which can
result in multiple html files from a single query. These files are identified by <fragment-number>.
Note
If you use this feature, monitor your disk space usage so you leave enough room for your .iqmsg and log
files to grow. Enable IQ message log wrapping or message log archiving to avoid filling up your message log
file.
QUERY_PLAN_AS_HTML acts independently of the setting for the QUERY_PLAN option. In other words, if
QUERY_PLAN_AS_HTML is ON, you get an HTML format query plan whether or not QUERY_PLAN is ON.
This feature is supported with newer versions of many commonly used browsers. Some browsers might
experience problems with plans generated for very complicated queries.
Simplex servers always return a <server-type> parameter that indicates that the plan originated on a leader
(L) with a <query-number> equal to 0. Simplex output never includes a <fragment-number>. For example:
DBA_QUERY_Q1123_L_0__20121108_083000_4.html
On a multiplex, a single query can produce files from the leader and from each worker fragment, for example:
DBA_L_1_Q101_20121108_083000_94.html
DBA_W_2_Q101_20121108_083000_94_2.html
DBA_W_2_Q101_20121108_083000_94_1.html
DBA_W_3_Q101_20121113-054928_94_2.html
DBA_W_3_Q101_20121113-054933_94_1.html
Related Information
Specifies the directory into which SAP IQ writes the HTML query plans.
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the QUERY_PLAN_AS_HTML option is turned ON and a directory is specified with the
QUERY_PLAN_AS_HTML_DIRECTORY option, SAP IQ writes the HTML query plans in the directory specified.
This option provides additional security by allowing HTML query plans to be produced outside of the server
directory. When the QUERY_PLAN_AS_HTML_DIRECTORY option is not used, the query plans are sent to the
default directory (the .iqmsg file directory).
This example creates the example directory /system1/users/DBA/html_plans and sets the correct
permissions on the directory by setting the options and running the query:
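The statements themselves were not reproduced here; a hedged reconstruction (the directory must already exist with write permission for the server, and the query is illustrative):

```sql
SET TEMPORARY OPTION QUERY_PLAN_AS_HTML = 'ON';
SET TEMPORARY OPTION QUERY_PLAN_AS_HTML_DIRECTORY = '/system1/users/DBA/html_plans';
SELECT COUNT(*) FROM Employees;
```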
The HTML query plan is written to a file in the specified directory /system1/users/DBA/html_plans.
Related Information
Specifies a threshold for query execution. The post-query plan is generated only if query execution time
exceeds the threshold.
Allowed Values
Integer, in milliseconds.
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
Remarks
A query with a very short execution time (a micro query) executes faster if a query plan is not generated.
Setting this option avoids the generation of query plans, and the associated costs, for these queries. The
QUERY_PLAN_MIN_TIME option is ignored unless the following options are also set:
● QUERY_PLAN = ON or QUERY_PLAN_AS_HTML = ON
● QUERY_PLAN_AFTER_RUN = ON
● QUERY_TIMING = ON
When these options are set, setting a QUERY_PLAN_MIN_TIME query execution threshold prevents the
generation of query plans for queries with execution times that do not exceed the specified threshold.
If using the statement performance monitoring feature (that is, you set the COLLECT_IQ_PERFORMANCE
option to ON), QUERY_PLAN_MIN_TIME specifies the reporting threshold for query execution times. Only
those SQL statements with execution times exceeding this threshold will be reported.
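For example, to suppress plans for queries that finish in under five seconds (a sketch):

```sql
-- Value is in milliseconds
SET OPTION PUBLIC.QUERY_PLAN_MIN_TIME = 5000;
```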
Related Information
Enables or prevents users from accessing query plans from the Interactive SQL client or from using SQL
functions to get plans.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the QUERY_PLAN_TEXT_ACCESS option is ON, users can view, save, and print query plans from the
Interactive SQL client. When the option is OFF, query plans are not cached, and other query plan-related
database options have no effect on the query plan that is shown from the Interactive SQL client. This error
message appears:
No plan available. The database option QUERY_PLAN_TEXT_ACCESS is OFF.
Related Information
Allows you to specify whether or not SAP IQ generates and caches IQ plans for queries executed by the user.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
IQ query plans vary in size and can become very large for complex queries. Caching plans for display on the
Interactive SQL client can have high resource requirements. The QUERY_PLAN_TEXT_CACHING option gives
users a mechanism to control resources for caching plans. With this option turned OFF (the default), the query
plan is not cached for that user connection.
Note
If QUERY_PLAN_TEXT_ACCESS is turned OFF, the query plan is not cached for the connections from that
user, no matter how QUERY_PLAN_TEXT_CACHING is set.
Related Information
Sets the row threshold for rejecting queries based on estimated size of result set.
Allowed Values
Any integer
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If SAP IQ receives a query that has an estimated number of result rows greater than the value of
QUERY_ROWS_RETURNED_LIMIT, it rejects the query with this message:
Query rejected because it exceeds resource: Query_Rows_Returned_Limit
If you set this option to 0 (the default), there is no limit, and no queries are rejected based on the number of
rows in their output.
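For example, a sketch rejecting queries estimated to return more than one million rows:

```sql
SET OPTION PUBLIC.QUERY_ROWS_RETURNED_LIMIT = 1000000;
```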
Specifies the maximum estimated amount of temp space before a query is rejected.
Allowed Values
Any integer
Default
0 (no limit)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If SAP IQ receives a query that is estimated to require a temporary result space larger than the value of this
option, it rejects the query with this message:
it rejects the query with this message:
Query rejected because it exceeds total space resource limit
When set to 0 (the default), there is no limit on temporary store usage by queries.
Users may override this option in their own environments to run queries that can potentially fill up the entire
temporary store. To prevent runaway queries from filling up the temporary store, a user with the SET ANY
In a distributed query processing transaction, SAP IQ uses the values set for the QUERY_TEMP_SPACE_LIMIT
and MAX_TEMP_SPACE_PER_CONNECTION options for the shared temporary store by limiting the total shared
and local temporary space used by all nodes participating in the distributed query. This means that any single
query cannot exceed the total temp space limit (from IQ_SYSTEM_TEMP and IQ_SHARED_TEMP dbspaces), no
matter how many nodes participate.
For example, if the limit is 100 and four nodes use 25 units of temporary space each, the query is within limits.
If the sum of the space used by all of the nodes exceeds 100, however, the query rolls back.
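A database-wide limit can be set with a statement like the following (the value is only illustrative; its unit is as documented for this option):

SET OPTION PUBLIC.QUERY_TEMP_SPACE_LIMIT = 1000;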
Related Information
Determines whether or not to collect specific timing statistics and display them in the query plan.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
This option controls the collection of timing statistics on subqueries and some other repetitive functions in the
query engine.
Query timing is represented in the query plan detail as a series of timestamps. These timestamps correspond
to query operator phases (Conditions, Prepare, Fetch, Complete). HTML and Interactive SQL query plans
display query timing graphically as a timeline.
Related Information
Allowed Values
ON, OFF
Default
● ON
● OFF – for Open Client connections.
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
QUOTED_IDENTIFIER controls whether strings enclosed in double quotes are interpreted as identifiers (ON)
or as literal strings (OFF). This option is included for Transact-SQL compatibility.
SAP IQ Cockpit and Interactive SQL set QUOTED_IDENTIFIER temporarily to ON, if it is set to OFF. A message
is displayed informing you of this change. The change is in effect only for the SAP IQ Cockpit or Interactive SQL
connection. The JDBC driver also temporarily sets QUOTED_IDENTIFIER to ON.
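A minimal sketch of the two interpretations (the statement forms are illustrative):

SET TEMPORARY OPTION QUOTED_IDENTIFIER = 'ON';
SELECT * FROM "SELECT";   -- double quotes delimit the identifier SELECT
SET TEMPORARY OPTION QUOTED_IDENTIFIER = 'OFF';
SELECT "hello";           -- double quotes now delimit a literal string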
Sets the maximum length of time, in minutes, that the database server takes to recover from system failure.
Allowed Values
Integer, in minutes
Default
Scope
Use this option with the CHECKPOINT_TIME option to decide when checkpoints should be done.
A heuristic measures the recovery time based on the operations since the last checkpoint. Thus, the recovery
time is not exact.
Related Information
Controls the minimum number of concurrent connections that a database must reserve for standard
connections. This option is useful when your database server accepts HTTP/HTTPS connections.
Allowed values
size
This integer specifies the number of concurrent connections that the database must reserve for standard
connections.
For databases running on the network database server, specify a number that fulfills the following
requirements:
● Less than the maximum database server connection limit as allowed by your license.
● Less than the maximum database server connection limit as specified by the -gm database server
option.
● Less than the maximum database connection limit specified by the max_connections database option.
Remarks
Setting this option ensures that a database can accept standard connections even when its HTTP/HTTPS
connections are queued.
A database server accepts HTTP/HTTPS connections until it reaches its license limit, and then it queues
subsequent HTTP/HTTPS connections and processes them as connections are made available. There is no
opportunity for a standard connection to replace an HTTP/HTTPS connection while there are connection
attempts in the queue. Users wanting to make a standard connection could wait indefinitely for the HTTP/
HTTPS connection queue to complete. Use the reserved_connections option to specify a minimum number of
database connections that accept only standard connections.
You can view the current number of reserved connections for a database by querying the value of the
reserved_connections connection property:
Example
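For example, assuming the property name matches the option name:

SELECT DB_PROPERTY( 'reserved_connections' );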
Related Information
Allowed Values
String
Default
Scope
Remarks
This option turns on individual keywords that are disabled by default. Only the LIMIT keyword can be turned on.
Examples
You cannot turn on the keywords SET, OPTION, and OPTIONS. The following determine whether a word is
identified as a keyword (in order of precedence):
Each setting of this option replaces the previous setting. The following statement clears all previous settings:
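A sketch of both statements (assuming the option takes a string of keywords, with the empty string clearing all settings):

SET OPTION PUBLIC.RESERVED_KEYWORDS = '';        -- clear all previous settings
SET OPTION PUBLIC.RESERVED_KEYWORDS = 'LIMIT';   -- turn on the LIMIT keyword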
Controls how a date, time, or timestamp value is passed to the client application when queried.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set as a temporary option only, for the duration of the current connection or for the PUBLIC
role.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Takes effect immediately.
Remarks
RETURN_DATE_TIME_AS_STRING indicates whether date, time, and timestamp values are returned to
applications as a date or time data type or as a string.
When this option is set to ON, the server converts the date, time, or timestamp value to a string before it is sent
to the client in order to preserve the TIMESTAMP_FORMAT, DATE_FORMAT, or TIME_FORMAT option setting.
SAP IQ Cockpit and Interactive SQL automatically turn the RETURN_DATE_TIME_AS_STRING option ON.
Setting this option ON forces the query optimizer to mimic SAP IQ 15.x behavior.
Allowed Values
ON, OFF
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. If permitted, can be set for an arbitrary other user or role, or
for all users via the role. Takes effect immediately.
Caution
The REVERT_TO_V15_OPTIMIZER option is normally used for internal testing and manually tuning queries.
Only experienced DBAs should use it.
SAP IQ 16.1 supports several new join and grouping algorithms that leverage Hash and Hash-Range partitioned
tables, as well as a few other new algorithms. By default, all of these new algorithms are considered by the
optimizer and will be selected where valid and appropriate. Setting REVERT_TO_V15_OPTIMIZER to ON
disables all 16.1 changes to the optimizer cost models. It also disables all of these new join and grouping
algorithms, unless they are valid and are specifically requested via a positive value for either the
AGGREGATION_PREFERENCE option, the JOIN_PREFERENCE option, or a join condition hint string.
Note
An error will result if your query references an RLV-enabled table and REVERT_TO_V15_OPTIMIZER='ON'.
Related Information
Controls behavior of the SQL function ROUND when querying SAP IQ tables.
Allowed Values
ON, OFF
Default
OFF
Scope
Requires the SET ANY SYSTEM OPTION system privilege. Can be set for the PUBLIC role only. Takes effect
immediately.
Remarks
The ROUND function rounds the digits after the decimal to the nearest value using the specified number of
places. When the value of the last digit of the specified number of places is 5 (exactly half way between the two
nearest values), the ROUND_TO_EVEN option determines how the digit is rounded. When set to ON, the digit
rounds to the nearest even number. When set to OFF, the digit rounds to the value with the largest absolute
value.
Examples
Assume that table MyTable contains a column c1 with these values:
c1
-0.35
-0.25
0.25
0.35
Execute:
SELECT c1, round(MyTable.c1,1) FROM MyTable;
With ROUND_TO_EVEN set to OFF (the default), the digit rounds to the value with the largest absolute value:
c1 round(MyTable.c1,1)
-0.35 -0.4
-0.25 -0.3
0.25 0.3
0.35 0.4
With ROUND_TO_EVEN set to ON, the digit rounds to the nearest even number:
c1 round(MyTable.c1,1)
-0.35 -0.4
-0.25 -0.2
0.25 0.2
0.35 0.4
Related Information
Allowed Values
Integer
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
When this runtime option is set to a nonzero value, query processing stops after the specified number of rows.
This option affects only statements with the keyword SELECT and does not affect UPDATE and DELETE
statements.
The SELECT statement keywords FIRST and TOP also limit the number of rows returned from a query. Using
FIRST is the same as setting the ROW_COUNT database option to 1. Using TOP is the same as setting
ROW_COUNT to the same number of rows. If both TOP and ROW_COUNT are set, then the value of TOP takes
precedence.
The ROW_COUNT option can produce non-deterministic results when used in a query involving global
variables, system functions, or proxy tables. Such queries are partly executed using CIS (Component
Integration Services). In such cases, use SELECT TOP <n> instead of setting ROW_COUNT, or assign the global
variable to a local variable and use that local variable in the query.
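A minimal sketch for the current connection (the table name is illustrative; 0 restores the no-limit behavior described above):

SET TEMPORARY OPTION ROW_COUNT = 10;
SELECT * FROM Employees;   -- query processing stops after 10 rows
SET TEMPORARY OPTION ROW_COUNT = 0;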
Related Information
This option enables or disables automatic merges of the RLV store into the IQ main store for row-level,
versioning-enabled tables.
Allowed Values
ON, OFF
Default
ON
Remarks
As of SAP IQ 16.0 SP 11, the RV_AUTO_MERGE database option replaces the deprecated
sa_server_option('rlv_auto_merge') system procedure call for enabling and disabling auto merge. To
avoid potential conflicts, replace all calls to set the system procedure using the deprecated option with calls to
set the database option. In the event both options are called, the value used is set by the last option called.
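For example, to disable automatic merges database-wide:

SET OPTION PUBLIC.RV_AUTO_MERGE = 'OFF';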
Related Information
This option configures the evaluation period used to determine when an automated merge of the row-level
versioned (RLV) and IQ main stores should occur.
Allowed Values
1 to MAX_UINT (minutes)
Default
15 (minutes)
Remarks
This option configures the wait time, in minutes, between activations of the merge evaluator. The merge
evaluator examines the merge parameters of each row-level versioning (RLV) enabled table against configured
threshold values to determine whether a non-blocking (background) merge of the RLV store to the IQ main
store should occur.
If the interval ends while the evaluator is active, or while a merge is already in progress, the interval resets.
Any new value for the interval is used when the merge evaluator is next activated.
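For example, to have the merge evaluator activate every 30 minutes (the value is illustrative):

SET OPTION PUBLIC.RV_AUTO_MERGE_EVAL_INTERVAL = 30;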
Related Information
Defines the strategy to use for subsequent array allocation for fixed length datatype columns in the RLV in-
memory store.
Allowed values
1 to 4
Default
Remarks
The RLV store for a table is created using the allocation strategy in effect at that time. If the strategy is later
changed, the new strategy is not used by the table until its RLV store is destroyed and re-created as the result
of a DDL operation, an automerge, or an explicit call to the sp_iqmergerlvstore procedure.
ID Name Description
2 (default) Percent Increase Allocates the first block size as defined by RV_INITIAL_FIX_DATA_BLOCKSIZE.
Each subsequent block size grows by the percentage defined by
RV_PERCENT_INCREASE_IN_FIX_DATA_BLOCKSIZE
Example
This example sets the allocation strategy to 3 and creates RLV-enabled tables t1 and t2. Data is inserted into
table t1, creating the RLV store for table t1. The RLV store for table t2 has not yet been created:
SET public.RV_Block_Size_Allocation_Strategy=3;
CREATE TABLE t1 ( a int ) ENABLE RLV STORE;
CREATE TABLE t2 ( a int ) ENABLE RLV STORE;
INSERT INTO t1 values(1);
COMMIT;
The allocation strategy is changed to 4. Data is now inserted into table t2. The RLV store for table t2 is created
using the new allocation strategy. t1 continues to use the original allocation strategy:
SET public.RV_Block_Size_Allocation_Strategy=4;
INSERT INTO t2 values(2);
COMMIT;
Column b is added to table t1. This destroys the current RLV store for table t1 (assuming no old transactions
exist) and then re-recreates it using the new allocation strategy:
Related Information
10.6.198 RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE Option
Defines the increase in size (in bytes) for each subsequent array allocation for fixed length datatype columns
in the RLV in-memory store. The nth block size is the value of the (n-1)th block size +
RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE. This option is used by the Delta Increase allocation strategy.
Allowed values
Default
1024 (1 KB)
Scope
This example sets the allocation strategy to 3 (Delta Increase), creates RLV-enabled table t1, and creates the
RLV store by inserting data into the table. It uses the default values for RV_INITIAL_FIX_DATA_BLOCKSIZE and
RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE:
set public.RV_Block_Size_Allocation_Strategy=3;
create table t1 ( a int ) enable rlv store;
insert into t1 values(1);
commit;
set public.RV_Delta_Increase_In_Fix_Data_BlockSize=2048;
insert into t1 values(2);
Column b is added to table t1 and data inserted. This destroys the current RLV store for table t1 (assuming no
old transactions exist) and then re-creates it using the default value for RV_INITIAL_FIX_DATA_BLOCKSIZE
and the new value for RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE:
Related Information
Defines the size (in bytes) of every subsequent array allocation for fixed length datatype columns in the RLV in-
memory store. It is used by the Constant allocation strategy.
Allowed values
Scope
Example
This example sets the allocation strategy to 4, creates RLV-enabled table t1, and creates the RLV store by
inserting data into the table. It uses the default value for RV_FIX_DATA_BLOCKSIZE:
set public.RV_Block_Size_Allocation_Strategy=4;
create table t1 ( a int ) enable rlv store;
insert into t1 values(1);
commit;
The RV_FIX_DATA_BLOCKSIZE value is changed to 8388608 bytes and data is inserted into table t1:
set public.RV_FIX_DATA_BLOCKSIZE=8388608;
insert into t1 values(2);
Column b is added to table t1 and data inserted. This destroys the current RLV store for table t1 (assuming no
old transactions exist) and then re-creates it using the new value for the RV_FIX_DATA_BLOCKSIZE:
Related Information
Defines the size (in bytes) of the first array allocation for fixed length datatype columns in the RLV in-memory
store. It is used as a starting size by all fixed block allocation strategies.
Allowed values
Default
4096 (4 KB)
Scope
Example
This example sets the allocation strategy to 2, creates RLV-enabled table t1, and creates the RLV store by
inserting data into the table. It uses the default values for RV_INITIAL_FIX_DATA_BLOCKSIZE and
RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE:
set public.RV_Block_Size_Allocation_Strategy=2;
create table t1 ( a int ) enable rlv store;
insert into t1 values(1);
commit;
The RV_INITIAL_FIX_DATA_BLOCKSIZE value is changed to 2048 bytes and data is inserted into t1:
set public.RV_INITIAL_FIX_DATA_BLOCKSIZE=2048;
insert into t1 values(2);
Column b is added to table t1 and data inserted. This destroys the current RLV store for table t1 (assuming no
old transactions exist) and then re-creates it using the new value for RV_INITIAL_FIX_DATA_BLOCKSIZE and
the default value for RV_DELTA_INCREASE_IN_FIX_DATA_BLOCKSIZE:
Allowed Values
>=0
Default
Note
Use of any value other than the default is not recommended as it could negatively impact CPU utilization
and scalability of bulk loads.
Scope
Remarks
If the value is set to anything other than the default, the system uses the specified value or the total number of
cores on the machine, whichever is less.
This value limits the total in-memory dictionary size for implicit NBit FP columns.
Allowed Values
1 to 4,294,967,295
Default
64 (MB)
Scope
Remarks
RV_MAX_TOKEN and RV_MAX_LOOKUP_MB database options establish a ceiling for sizing in-memory implicit
NBit columns. While the number of distinct values is less than RV_MAX_TOKEN and the total dictionary size
(values and counts) is less than RV_MAX_LOOKUP_MB, the column loads with an in-memory NBit FP index.
When DML operations exceed the RV_MAX_TOKEN or RV_MAX_LOOKUP_MB limits, the in-memory NBit FP index
rolls over to a Flat FP index.
In this example, the dictionary of the in-memory store for table FOO can expand to a maximum of 10 MB in size:
The existing in-memory store for table FOO remains configured to a maximum dictionary size of 10 MB; the new
option value has no impact:
The ALTER TABLE command merges the existing in-memory data for table FOO to the IQ main store and then
creates a new in-memory store. The new in-memory store uses the current property value of 5, not the original
property value of 10:
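The statements for this example are missing above; a sketch consistent with the surrounding text (the table definition and inserted values are assumptions):

set public.RV_MAX_LOOKUP_MB=10;
create table FOO ( a varchar(20) ) enable rlv store;
insert into FOO values('x');
commit;
set public.RV_MAX_LOOKUP_MB=5;   -- no impact on the existing in-memory store
alter table FOO add b int;       -- merges, then re-creates the store using 5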
Related Information
This value provides an upper bound for the number of NBit tokens used in the in-memory dictionary.
Allowed Values
2 to 2,147,475,456
Default
2,147,475,456
Remarks
For each distinct data value, an NBit token is created. Once the number of tokens issued exceeds this
property value, the in-memory store rolls over to flat data storage.
Example
In this example, the in-memory store for table FOO can hold a maximum of 100 NBit tokens:
The existing in-memory store for table FOO remains configured to hold 100 NBit tokens; the new option value
has no impact:
The ALTER TABLE command merges the existing in-memory data for table FOO to the IQ main store and then
creates a new in-memory store; the new in-memory store uses the current property value of 50, not the original
property value of 100:
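The statements for this example are missing above; a sketch consistent with the surrounding text (the table definition and inserted values are assumptions):

set public.RV_MAX_TOKEN=100;
create table FOO ( a varchar(20) ) enable rlv store;
insert into FOO values('x');
commit;
set public.RV_MAX_TOKEN=50;   -- no impact on the existing in-memory store
alter table FOO add b int;    -- merges, then re-creates the store using 50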
Related Information
Sets the percentage of total RLV memory size as a merge threshold for the node.
Allowed Values
0 to 100 (percent)
Default
75 (percent)
Scope
Remarks
If the total RLV memory used surpasses the threshold when compared against the maximum configured RLV
memory size for the node, the merge condition evaluator determines which table(s) to merge.
Related Information
Triggers an RLV merge to occur based on a time interval since the last commit.
Allowed Values
Integer, in minutes
Default
10
Scope
Remarks
When the merge evaluator starts, it compares the last commit time with the value defined by the
RV_MERGE_TABLE_COMMIT_AGE option. If the elapsed time since the last commit is at least equal to the
value of RV_MERGE_TABLE_COMMIT_AGE, the table is set as a merge candidate.
If the server is shut down after a commit but before a merge, the last commit time is reset to the current time
when the server restarts.
Example
● RV_MERGE_TABLE_COMMIT_AGE – 10 mins
● RV_AUTO_MERGE_EVAL_INTERVAL – 6 mins
Scenario 1
RLV-enabled table T1 is created, data inserted, and a commit executed. Six minutes after the commit, the
merge evaluator starts. Since the elapsed time since the commit is less than the value of
RV_MERGE_TABLE_COMMIT_AGE (6<10), T1 is not considered a candidate for a merge.
Scenario 2
Data is inserted into T1 and a commit executed. Two minutes later, the server is shut down and restarted. Six
minutes after restart, the merge evaluator starts. The calculated time since last commit is 6 minutes (elapsed
time since restart). Since the calculated value is less than the value of RV_MERGE_TABLE_COMMIT_AGE, T1 is
not yet considered a candidate for merge.
Related Information
Defines the threshold, expressed as a percentage, of deleted and committed table rows that triggers a merge.
If the percentage of deleted rows surpasses the threshold, a merge occurs.
Allowed Values
0 to 100 (percent)
Default
40 (percent)
Scope
Remarks
RLV query performance can degrade when a large number of rows has been deleted from tables in the RLV
store, improving once the tables are merged. The automerge algorithm uses the
RV_MERGE_TABLE_DELPERCENT option as one of the factors to determine if a table needs merging. On a per
RLV-enabled table basis, the number of deleted rows is evaluated. If the number exceeds the threshold, the
table is flagged for possible merge.
A merge of a single table is deemed warranted if the system does not contain a large percentage of
uncommitted RLV rows preventing a merge, and the table exceeds any of these thresholds:
Related Information
Sets the percentage of memory used as a merge threshold for an RLV-enabled table. If the memory used
surpasses the threshold, a merge occurs.
Allowed Values
0 to 100 (percent)
Default
0 (percent)
Remarks
Note
The automerge algorithm uses the RV_MERGE_TABLE_MEMPERCENT option as one of the factors to determine
if a table needs merging. On a per RLV-enabled table basis, memory usage is evaluated. If the memory used
exceeds the threshold, the table is flagged for possible merge.
A merge of a single table is deemed warranted if the system does not contain a large percentage of
uncommitted RLV rows preventing a merge, and the table exceeds any of these thresholds:
Related Information
Sets the number of rows used as a merge threshold for an RLV-enabled table. If the number of rows
surpasses the threshold, a merge occurs.
Allowed Values
1000 to 100,000,000
Default
10,000,000
Scope
Remarks
The automerge algorithm uses the RV_MERGE_TABLE_NUMROWS option as one of the factors to determine if a
table needs merging. On a per RLV-enabled table basis, the number of rows used is evaluated. If usage exceeds
the threshold, the table is flagged for possible merge.
A merge of a single table is deemed warranted if the system does not contain a large percentage of
uncommitted RLV rows preventing a merge, and the table exceeds any of these thresholds:
Related Information
10.6.209 RV_PERCENT_INCREASE_IN_FIX_DATA_BLOCKSIZE Option
Defines the percentage size increase for subsequent array allocation for fixed length datatype columns in the
RLV in-memory store. The nth block size is the value of the (n-1)th block size + ((n-1)th block size *
RV_PERCENT_INCREASE_IN_FIX_DATA_BLOCKSIZE / 100).
Allowed values
10 to 100
Default
100
Scope
Related Information
A portion of the RLV store must be reserved for memory used by data structures during critical operations.
Allowed Values
Scope
Description
This option allows you to control the amount of space set aside in the RLV store for small but critical data
structures used during release savepoint, commit, and rollback operations.
Related Information
Defines the size (in bytes) of every array allocation for variable length datatype columns in the RLV in-memory
store.
Allowed values
Default
Example
This example creates RLV-enabled table t1 containing variable length column a, and creates the RLV store by
inserting data into the table. It uses the default value for RV_VAR_DATA_BLOCKSIZE:
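The statements for this step are missing above; a sketch consistent with the surrounding examples (the VARCHAR length and the inserted value are assumptions):

create table t1 ( a varchar(100) ) enable rlv store;
insert into t1 values('abc');
commit;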
The RV_VAR_DATA_BLOCKSIZE value is changed to 65536 bytes and data is inserted into t1:
set public.RV_VAR_DATA_BLOCKSIZE=65536;
insert into t1 values(2);
Column b is added to table t1 and data inserted. This destroys the current RLV store for table t1 (assuming no
old transactions exist) and then re-creates it using the new value for RV_VAR_DATA_BLOCKSIZE:
Related Information
Specifies the minimum number of digits after the decimal point when an arithmetic result is truncated to the
maximum PRECISION, for queries on the catalog store only.
Allowed Values
Default
38
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Takes effect immediately.
Remarks
This option specifies the minimum number of digits after the decimal point when an arithmetic result is
truncated to the maximum PRECISION, for queries on the catalog store.
Multiplication, division, addition, subtraction, and aggregate functions may all have results that exceed the
maximum precision.
Related Information
Specifies the number of significant digits to the right of the decimal in exponential notation that are used in
equality tests between two complex arithmetic expressions.
Allowed Values
0 to 15
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Doubles are stored in binary (base 2) instead of decimal (base 10); this means that this setting gives the
approximate number of significant decimal digits used. If set to 0, all digits are used.
For example, when SIGNIFICANTDIGITSFORDOUBLEEQUALITY is set to 12, these numbers compare as equal;
when set to 13, they do not:
● 1.23456789012345
● 1.23456789012389
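The comparison can be sketched as follows (the statement form is illustrative; the literals are the two values above):

SET TEMPORARY OPTION SIGNIFICANTDIGITSFORDOUBLEEQUALITY = 12;
SELECT IF 1.23456789012345e0 = 1.23456789012389e0
       THEN 'equal' ELSE 'not equal' ENDIF;   -- 'equal' at 12, 'not equal' at 13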
Related Information
Controls whether RLV-enabled tables are accessed using single-writer table-level versioning, or multiple writer
row-level versioning. Applies to RLV-enabled tables only.
Allowed Values
row-level, table-level
Default
table-level
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Takes effect immediately.
Remarks
Note
The allowed values may be restricted by the value defined by the ALLOW_SNAPSHOT_VERSIONING option.
row-level – Enables concurrent writer access and row-level versioning for RLV-enabled tables.
The first transaction to modify a table row establishes a row write lock that persists until the
end of the transaction.
Subsequent transactions attempting to modify a locked row either fail with a lock/future
version error, or block until the lock is released, based on the value of the BLOCKING option.
table-level – The first transaction to access the table establishes a table write lock, which persists until
the end of the transaction.
Subsequent transactions attempting to write to a locked table either fail with a lock/future
version error, or block until the lock is released, based on the value of the BLOCKING option.
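For example, to enable concurrent writers for the current connection:

SET TEMPORARY OPTION SNAPSHOT_VERSIONING = 'row-level';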
Related Information
Allowed Values
Default
Internal
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the value of SORT_COLLATION is Internal, the ORDER BY clause remains unchanged.
When the value of this option is set to a valid collation name or collation ID, any string expression in the ORDER
BY clause is treated as if the SORTKEY function has been invoked.
Example
If SORT_COLLATION is set to the collation 'binary', these two queries:
SELECT Name, ID
FROM Products
ORDER BY Name, ID;
SELECT Name, ID
FROM Products
ORDER BY 1, 2;
are executed as if they were written as:
SELECT Name, ID
FROM Products
ORDER BY SORTKEY(Name, 'binary'), ID;
Related Information
Specifies the maximum percentage of currently available buffers a sort object tries to pin.
Allowed Values
0 to 100
Default
20
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
For very large sorts, a larger value might help reduce the number of merge phases required by the sort. A larger
number, however, might impact the sorts and hashes of other users running on the system. If you change this
option, experiment to find the best value to increase performance, as choosing the wrong value might decrease
performance.
Tip
This option is primarily for use by Technical Support. If you change the value of
SORT_PINNABLE_CACHE_PERCENT, do so with extreme caution.
Related Information
Controls the behavior in response to any SQL code that is not part of the specified standard.
Allowed Values
● OFF
● SQL:1992/Entry
● SQL:1992/Intermediate
● SQL:1992/Full
● SQL:1999/Core
● SQL:1999/Package
● SQL:2003/Core
● SQL:2003/Package
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Flags as an error any SQL code that is not part of a specified standard. For example, specifying SQL:2003/
Package causes the database server to flag syntax that is not full SQL/2003 syntax.
For compatibility with previous SAP IQ versions, the following values are also accepted, and are mapped as
specified. Compatibility values for SQL_FLAGGER_ERROR_LEVEL are:
Related Information
Controls the response to any SQL that is not part of the specified standard.
Allowed Values
● OFF
● SQL:1992/Entry
● SQL:1992/Intermediate
● SQL:1992/Full
● SQL:1999/Core
● SQL:1999/Package
● SQL:2003/Core
● SQL:2003/Package
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
Remarks
Flags as a warning any SQL code that is not part of a specified standard. For example, specifying
SQL:2003/Package causes the database server to flag syntax that is not full SQL/2003 syntax.
For compatibility with previous SAP IQ versions, the following values are also accepted, and are mapped as
specified. Compatibility values for SQL_FLAGGER_WARNING_LEVEL are:
Related Information
Determines whether an error is raised when an INSERT or UPDATE truncates a CHAR or VARCHAR string.
Allowed Values
ON, OFF
Default
ON
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If the truncated characters consist only of spaces, no exception is raised. ON corresponds to SQL92 behavior.
When STRING_RTRUNCATION is OFF, the exception is not raised and the character string is silently truncated.
If the option is ON and an error is raised, a ROLLBACK occurs.
This option was OFF by default prior to SAP IQ 15. It can safely be set to OFF for backward compatibility.
However, the ON setting is preferable to identify statements where truncation may cause data loss.
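A minimal sketch of the two behaviors, using a hypothetical table:

```sql
-- Hypothetical three-character column
CREATE TABLE t1 ( c1 CHAR(3) );

SET TEMPORARY OPTION STRING_RTRUNCATION = 'ON';
INSERT INTO t1 VALUES ( 'abcdef' );   -- raises a right-truncation error; a ROLLBACK occurs

SET TEMPORARY OPTION STRING_RTRUNCATION = 'OFF';
INSERT INTO t1 VALUES ( 'abcdef' );   -- succeeds; 'abc' is stored silently
```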
Related Information
Allowed Values
● 1 – use sort-based processing for the first subquery predicate. Other subquery predicates that do not have
the same ordering key are processed using a hash table to cache subquery results.
● 2 – use the hash table to cache results for all subquery predicates when it is legal. If available temp cache
cannot accommodate all of the subquery results, performance may be poor.
● 3 – cache one previous subquery result. Does not use SORT and HASH.
● 0 – let the optimizer choose.
● -1 – avoid using SORT. The IQ optimizer chooses HASH if it is legal.
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
For correlated subquery predicates, the IQ optimizer offers a choice of caching outer references and subquery
results that reduces subquery execution costs. SUBQUERY_CACHING_PREFERENCE lets you override the
optimizer’s costing decision when choosing which algorithm to use. It does not override internal rules that
determine whether an algorithm is legal within the query engine.
A non-zero setting affects every subquery predicate in the query; it cannot be applied selectively to a single
subquery predicate in a query.
SUBQUERY_CACHING_PREFERENCE is normally used for internal testing by experienced DBAs only. It does not
apply to IN subqueries.
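For completeness, a hedged sketch of how a DBA might override the optimizer for a single test connection:

```sql
-- Prefer hash-based caching for all subquery predicates in subsequent queries
SET TEMPORARY OPTION SUBQUERY_CACHING_PREFERENCE = '2';

-- ... run the query under test ...

-- Return the decision to the optimizer
SET TEMPORARY OPTION SUBQUERY_CACHING_PREFERENCE = '0';
```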
Related Information
Allows the user to change the threshold at which the optimizer decides to transform scalar subqueries into
joins.
Allowed Values
Default
100
Scope
● This option only applies to correlated scalar subqueries. Option can be set at the database (PUBLIC) or
user level. At the database level, the value becomes the default for any new user, but has no impact on
existing users. At the user level, overrides the PUBLIC value for that user only. No system privilege is
required to set option for self. System privilege is required to set at database level or at user level for any
user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately. If you set
SUBQUERY_FLATTENING_PERCENT to a non-default value, every scalar subquery predicate in the query is
affected; this option cannot be used selectively for one scalar subquery predicate in a query.
Remarks
The SAP IQ query optimizer can convert a correlated scalar subquery into an equivalent join operation to
improve query performance. The SUBQUERY_FLATTENING_PERCENT option allows the user to adjust the
threshold at which this optimization occurs.
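A hedged sketch of adjusting the threshold for the current connection (the value 50 is an arbitrary test value, not a recommendation):

```sql
-- Lower the flattening threshold for this connection only
SET TEMPORARY OPTION SUBQUERY_FLATTENING_PERCENT = '50';
```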
Allows a user to override the decisions of the optimizer when transforming (flattening) scalar or EXISTS
subqueries into joins.
Allowed Values
Value Action
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately. If you set the option to a non-zero
value, every scalar or EXISTS subquery predicate in the query is affected; the option cannot be used
selectively for one subquery predicate in a query.
Remarks
The SAP IQ optimizer may convert a correlated scalar subquery or an EXISTS or NOT EXISTS subquery into
an equivalent join operation to improve query performance. This optimization is called subquery flattening.
SUBQUERY_FLATTENING_PREFERENCE allows you to override the costing decision of the optimizer when
choosing the algorithm to use.
Related Information
Controls the placement of correlated subquery predicate operators within a query plan.
Allowed Values
● -1 – prefer the lowest possible location in the query plan, thereby placing the execution of the subquery as
early as possible within the query.
● 0 – let the optimizer choose.
● 1 – prefer the highest possible location in the query plan, thereby delaying the execution of the subquery to
as late as possible within the query.
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
For correlated subquery operators within a query, the IQ optimizer may have a choice of several different valid
locations within that query’s plan. SUBQUERY_PLACEMENT_PREFERENCE allows you to override the optimizer’s
cost-based decision when choosing the placement location. It does not override internal rules that determine
whether a location is valid, and in some queries, there might be only one valid choice. If you set this option to a
nonzero value, it affects every correlated subquery predicate in a query; it cannot be used to selectively modify
the placement of one subquery out of several in a query.
This option is normally used for internal testing, and only experienced DBAs should use it.
The default setting of this option is almost always appropriate. Occasionally, Technical Support might ask you
to change this value.
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the server is started with the -z option, debugging information appears in the server window, including
debugging information about the TDS protocol.
SUPPRESS_TDS_DEBUGGING restricts the debugging information about TDS that appears in the server
window. When this option is set to OFF (the default), TDS debugging information appears in the server window.
Related Information
Specifies the percentage of SAP IQ threads used to sweep out buffer caches.
Allowed Values
1 to 40
Default
10
Scope
Remarks
SAP IQ uses a small percentage of its processing threads as sweeper threads. These sweeper threads clean out
dirty pages in the main and temp buffer caches.
In the IQ Monitor -cache report, the GDirty column shows the number of times the LRU buffer was grabbed
in a “dirty” (modified) state. If GDirty is greater than 0 for more than a brief time, you might need to increase
SWEEPER_THREADS_PERCENT or WASH_AREA_BUFFERS_PERCENT.
The default setting of this option is almost always appropriate. Occasionally, SAP Technical Support might ask
you to increase this value.
Related Information
Controls the size, in kilobytes, for server-allocated row blocks. Row blocks are used by Table UDFs and TPFs.
Allowed Values
0 to 4294967295
Default
128
Scope
Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the default
for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value for that
user only. No system privilege is required to set option for self. System privilege is required to set at database
level or at user level for any user other than self.
Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Description
Specifies the row block size, in kilobytes, to fetch from the server.
The server allocates row blocks when you use fetch_into to fetch rows from a table UDF, and when you use
fetch_block to fetch rows from a TPF input table.
The row block contains as many rows as will fit into the specified size. If you specify a row block size smaller
than the size required for a single row, the server allocates the size of one row.
Controls whether empty strings are returned as NULL or a string containing one blank character for TDS
connections.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
TDS_EMPTY_STRING_IS_NULL is set to OFF by default and empty strings are returned as a string containing
one blank character for TDS connections. When this option is set to ON, empty strings are returned as NULL
strings for TDS connections. Non-TDS connections distinguish empty strings from NULL strings.
Related Information
Specifies that any rows extracted by the data extraction facility are added to the end of an output file.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option specifies that any rows extracted by the data extraction facility are added to the end of an output
file. Create the output file in a directory where you have WRITE/EXECUTE permissions, and set WRITE
permission on the directory and on the output file for the user name used to start SAP IQ. You can grant
permissions on the output file to other users as appropriate. The name of the output file is specified in the
TEMP_EXTRACT_NAME1 option. The data extraction facility creates the output file if it does not already
exist.
Related Information
In combination with the TEMP_EXTRACT_SWAP option, specifies the type of extraction performed by the data
extraction facility.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Use this option with the TEMP_EXTRACT_SWAP option to specify the type of extraction performed by the data
extraction facility.
Extraction type    TEMP_EXTRACT_BINARY    TEMP_EXTRACT_SWAP
binary             ON                     OFF
binary/swap        ON                     ON
Related Information
Specifies the delimiter between columns in the output of the data extraction facility for an ASCII extraction.
Allowed Values
String
Default
','
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Use TEMP_EXTRACT_COLUMN_DELIMITER to specify the delimiter between columns in the output of the data
extraction facility. In the case of an ASCII extraction, the default is to separate column values with commas.
Strings are unquoted by default.
The delimiter must occupy 1 – 4 bytes, and must be valid in the collation order you are using, if you are using a
multibyte collation order. Choose a delimiter that does not occur in any of the data output strings themselves.
If you set this option to the empty string '' for ASCII extractions, the extracted data is written in fixed-width
ASCII with no column delimiter. Numeric and binary data types are right-justified on a field of <n> blanks,
where <n> is the maximum number of bytes needed for any value of that type. Character data types are left-
justified on a field of <n> blanks.
Note
The minimum column width in a fixed-width ASCII extraction is 4 bytes to allow the string “NULL” for a
NULL value. For example, if the extracted column is CHAR(2) and TEMP_EXTRACT_COLUMN_DELIMITER is
set to the empty string '', there are two spaces after the extracted data.
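A short sketch of changing the delimiter for the current connection (the sample row values are hypothetical):

```sql
-- Use a pipe instead of the default comma for subsequent ASCII extractions
SET TEMPORARY OPTION TEMP_EXTRACT_COLUMN_DELIMITER = '|';
-- A row might then be written as:  101|Smith|2500
```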
Writes the output file for exports in gzip format. This results in significant savings of disk space when exporting
tables.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
In parallel mode, the TEMP_EXTRACT_FILE_EXTENSION option must be set to gz or '' (empty string). In
serial mode, the TEMP_EXTRACT_NAME<n> option must end with .gz.
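A hedged sketch of a serial-mode compressed extraction (table and path names are hypothetical):

```sql
-- Serial mode: the output file name itself must end in .gz
SET TEMPORARY OPTION TEMP_EXTRACT_COMPRESS = 'ON';
SET TEMPORARY OPTION TEMP_EXTRACT_NAME1 = '/tmp/orders.csv.gz';

SELECT * FROM Orders;   -- rows are written, gzip-compressed, to the file

SET TEMPORARY OPTION TEMP_EXTRACT_NAME1 = '';   -- disable extraction again
```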
Related Information
Controls whether a user is allowed to use the data extraction facility. Also controls the directory into which
temp extract files are placed and overrides a directory path specified in the TEMP_EXTRACT_NAMEn options.
Allowed Values
String
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SYSTEM OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
● Set to the string FORBIDDEN (case insensitive) for a user – that user is not allowed to perform data
extracts. An attempt by this user to use the data extraction facility results in the error: You do not have
permission to perform Extracts.
● Set to FORBIDDEN for the PUBLIC role – no one can run data extraction.
● Set to a valid directory path, temp extract files are placed in that directory – overriding a path specified in
the TEMP_EXTRACT_NAMEn options.
● Set to an invalid directory path – an error occurs: Files does not exist File: <invalid path>
● Blank – temporary extract files are placed in directories according to their specification in
TEMP_EXTRACT_NAMEn. If no path is specified as part of TEMP_EXTRACT_NAMEn, the extract files are by
default placed in the server startup directory.
This option provides increased security and helps control disk management by restricting the creation of large
data extraction files to the directories for which a user has write access.
For details on the data extraction facility and using the extraction options, see SAP IQ Administration: Load
Management.
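A sketch of the two common settings (the user name and path are hypothetical; the SET ANY SYSTEM OPTION privilege is required):

```sql
-- Block one user from running extracts, and confine everyone else to one directory
SET OPTION "report_user".TEMP_EXTRACT_DIRECTORY = 'FORBIDDEN';
SET OPTION PUBLIC.TEMP_EXTRACT_DIRECTORY = '/extracts';
```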
Related Information
Specifies whether all quotes in fields containing quotes are escaped in the output of the data extraction facility
for an ASCII extraction.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option is ignored unless TEMP_EXTRACT_QUOTE is the default or set to the value of '"' (double quotes), and
TEMP_EXTRACT_BINARY is OFF, and either TEMP_EXTRACT_QUOTES or TEMP_EXTRACT_QUOTES_ALL is ON.
Related Information
Sets the file name extension for the generated output file of the parallel data extraction facility. When you
specify the TEMP_EXTRACT_FILE_EXTENSION option, each generated file name becomes <prefix>
<thread_ID>_<filecount>.<file extension>.
Allowed Values
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This filename extension is used in parallel temporary extract operations. To enable parallel extract, set
TEMP_EXTRACT_FILE_PREFIX but not TEMP_EXTRACT_NAME<n>. If you set TEMP_EXTRACT_COMPRESS, the
TEMP_EXTRACT_FILE_EXTENSION option must either be set to '' (the default value) or to 'gz'. Other file
extensions report an error.
Related Information
Sets the prefix of the file name for the generated output file of the parallel data extraction facility. <thread_ID>
starts from 1. <filecount> starts from 1 for each thread ID. The <filecount> part increments when the size
of the output file reaches the file size limit specified by the TEMP_EXTRACT_SIZE option.
Allowed Values
Default
<prefix><thread_ID>_<filecount>
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The TEMP_EXTRACT_APPEND option is not compatible with the parallel extraction facility. If the output files do
not already exist, the parallel extraction facility creates them. If the output files already exist, their contents
are overwritten.
Use the parallel data extraction facility when the data set to extract is large and you need better performance.
When the parallel extract is enabled, multiple output files could be generated depending on the number of
threads used to extract in parallel.
Example
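The following is a hedged sketch (table and path names are hypothetical) showing the file names the parallel extraction facility generates from a prefix:

```sql
-- Enable parallel extraction by setting a prefix but leaving TEMP_EXTRACT_NAME1 unset
SET TEMPORARY OPTION TEMP_EXTRACT_FILE_PREFIX = '/tmp/sales_';
SET TEMPORARY OPTION TEMP_EXTRACT_FILE_EXTENSION = 'csv';

SELECT * FROM Sales;

-- With three extraction threads, files such as the following are generated:
--   /tmp/sales_1_1.csv, /tmp/sales_2_1.csv, /tmp/sales_3_1.csv
-- A thread that reaches the TEMP_EXTRACT_SIZE limit starts /tmp/sales_1_2.csv, and so on.
```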
Related Information
The compression level balances compression with speed when the TEMP_EXTRACT_COMPRESS option is set to
ON.
Allowed Values
1 to 9
Default
6
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
A value of 1 results in the best speed, while a value of 9 results in the best compression. The default value
provides a reasonable compromise between speed and compression.
In parallel mode, the TEMP_EXTRACT_FILE_EXTENSION option must be set to gz or '' (empty string). In
serial mode, the TEMP_EXTRACT_NAME<n> option must end with .gz.
Related Information
Adds a prefix field of the specified length (in bytes) for a varchar or varbinary column in the generated output file.
This prefix field in the extract file holds the length of the column data.
Allowed Values
0, 1, 2, 4
Default
0 (zero)
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If you do not specify TEMP_EXTRACT_LENGTH_PREFIX, or you specify 0 (the default), the data extraction
facility does not generate a prefix length field.
When you specify any other valid value for TEMP_EXTRACT_LENGTH_PREFIX, the data extraction facility uses
that value as the length (in bytes) of the prefix field that holds the actual data length, adding it before the
actual data for a varchar or varbinary column, including the length of trailing spaces and zeros in the column. If
the TEMP_EXTRACT_VARYING option is not set, however, the total length of the actual column data in the
extracted file is its declared length, in a fixed-length format. For example, the data extraction facility always
generates 10 bytes for a varchar(10) column, as necessary to keep the file format fixed length.
Related Information
Sets the maximum parallel degree for the data extraction facility. The
TEMP_EXTRACT_MAX_PARALLEL_DEGREE option limits the maximum number of threads that run in parallel
to extract data.
Allowed Values
Default
64
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Specifies the names of the output files or named pipes used by the data extraction facility. There are eight
options: TEMP_EXTRACT_NAME1 through TEMP_EXTRACT_NAME8.
Allowed Values
String
Default
Scope
Requires the SET ANY PUBLIC OPTION system privilege to set this option for PUBLIC or for other user or role.
Description
TEMP_EXTRACT_NAME1 through TEMP_EXTRACT_NAME8 specify the names of the output files used by the data
extraction facility. You must use these options sequentially. For example, TEMP_EXTRACT_NAME3 has no effect
unless both the options TEMP_EXTRACT_NAME1 and TEMP_EXTRACT_NAME2 are already set.
The most important of these options is TEMP_EXTRACT_NAME1. If TEMP_EXTRACT_NAME1 is set to its default
setting (the empty string ''), extraction is disabled and no output is redirected. To enable extraction, set
TEMP_EXTRACT_NAME1 to a path name; extraction then writes to a file with that name. Choose a path
name for a file that is not otherwise in use.
You can also use TEMP_EXTRACT_NAME1 to specify the name of the output file, when the
TEMP_EXTRACT_APPEND option is set ON. In this case, before you execute the SELECT statement, set WRITE
permission for the user name used to start SAP IQ (for example, sybase) on the directory or folder containing
the named file and on the named file. In append mode, the data extraction facility adds extracted rows to the
end of the file and does not overwrite the data that is already in the file. If the output file does not already exist,
the data extraction facility creates the file.
Caution
If you choose the path name of an existing file and the TEMP_EXTRACT_APPEND option is set OFF (the
default), the file contents are overwritten. This might be what you require if the file is for a weekly report, for
example, but not if the file is one of your database files.
If you are extracting to a single disk file or a single named pipe, leave the options TEMP_EXTRACT_NAME2
through TEMP_EXTRACT_NAME8 and TEMP_EXTRACT_SIZE1 through TEMP_EXTRACT_SIZE8 at their default
values.
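The basic workflow can be sketched as follows (the table and path names are hypothetical):

```sql
-- Redirect the next SELECT's result set into a file instead of returning it
SET TEMPORARY OPTION TEMP_EXTRACT_NAME1 = '/tmp/emp_extract.txt';

SELECT EmployeeID, Surname FROM Employees;

-- Turn extraction off again so later queries return rows normally
SET TEMPORARY OPTION TEMP_EXTRACT_NAME1 = '';
```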
● LOAD, DELETE, INSERT, or INSERT...LOCATION to a table that is the top table in a join
● INSERT...SELECT
The directory path specified using the TEMP_EXTRACT_NAMEn options can be overridden with the
TEMP_EXTRACT_DIRECTORY option.
Related Information
Controls the representation of null values in the output of the data extraction facility for an ASCII extraction.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
TEMP_EXTRACT_NULL_AS_EMPTY controls the representation of null values in the output of the data extraction
facility for ASCII extractions. When the TEMP_EXTRACT_NULL_AS_EMPTY option is set to ON, a null value is
represented as '' (the empty string) for all data types.
The quotes shown above are not present in the extract output file. When the TEMP_EXTRACT_NULL_AS_EMPTY
option is set to OFF, the string 'NULL' is used in all cases to represent a NULL value. OFF is the default value.
Controls the representation of null values in the output of the data extraction facility for an ASCII extraction.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
TEMP_EXTRACT_NULL_AS_ZERO controls the representation of null values in the output of the data extraction
facility for ASCII extractions. When TEMP_EXTRACT_NULL_AS_ZERO is set to ON, a null value is represented as
follows:
The quotes shown above are not present in the extract output file. When the TEMP_EXTRACT_NULL_AS_ZERO
option is set to OFF, the string 'NULL' is used in all cases to represent a NULL value. OFF is the default value.
Note
In SAP IQ 12.5, an ASCII extract from a CHAR or VARCHAR column in a table always returns at least four
characters to the output file. This is required if TEMP_EXTRACT_NULL_AS_ZERO is set to OFF, because SAP
IQ needs to write out the word NULL for any row in a column that has a null value. Reserving four spaces is
not required if TEMP_EXTRACT_NULL_AS_ZERO is set to ON.
In SAP IQ 12.6, if TEMP_EXTRACT_NULL_AS_ZERO is set to ON, the number of characters that an ASCII
extract writes to a file for a CHAR or VARCHAR column equals the number of characters in the column, even
if that number is less than four.
Related Information
Specifies the string to be used as the quote to enclose fields in the output of the data extraction facility for an
ASCII extraction, when either the TEMP_EXTRACT_QUOTES option or the TEMP_EXTRACT_QUOTES_ALL option
is set ON.
Allowed Values
String
Default
''
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option specifies the string to be used as the quote to enclose fields in the output of the data extraction
facility for an ASCII extraction, if the default value is not suitable. TEMP_EXTRACT_QUOTE is used with the
TEMP_EXTRACT_QUOTES and TEMP_EXTRACT_QUOTES_ALL options. The quote string specified in the
TEMP_EXTRACT_QUOTE option has the same restrictions as the row and column delimiters. The default for this
option is the empty string, which SAP IQ converts to the single quote mark.
The string specified in the TEMP_EXTRACT_QUOTE option must occupy from 1 to a maximum of 4 bytes and
must be valid in the collation order you are using, if you are using a multibyte collation order. Be sure to choose
a string that does not occur in any of the data output strings themselves.
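A minimal usage sketch follows; the quote character, output file name, and table name are illustrative, and TEMP_EXTRACT_NAME1 is the related option that names the extract output file:

```sql
-- Enclose string fields in double quotes during an ASCII extraction.
SET TEMPORARY OPTION TEMP_EXTRACT_QUOTES = 'ON';
SET TEMPORARY OPTION TEMP_EXTRACT_QUOTE  = '"';
-- Direct extract output to a file (file and table names are illustrative).
SET TEMPORARY OPTION TEMP_EXTRACT_NAME1  = 'employees.out';
SELECT * FROM Employees;
-- Turn extraction off again by clearing the output file name.
SET TEMPORARY OPTION TEMP_EXTRACT_NAME1  = '';
```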
Related Information
Specifies that string fields are enclosed in quotes in the output of the data extraction facility for an ASCII
extraction.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
This option specifies that string fields are enclosed in quotes in the output of the data extraction facility for an
ASCII extraction. The string used as the quote is specified in the TEMP_EXTRACT_QUOTE option, if the default is
not suitable.
Related Information
Specifies that all fields are enclosed in quotes in the output of the data extraction facility for an ASCII
extraction.
Allowed Values
ON, OFF
Default
OFF
Scope
Requires the SET ANY PUBLIC OPTION system privilege to set this option for PUBLIC or for other user or role.
Remarks
TEMP_EXTRACT_QUOTES_ALL specifies that all fields are enclosed in quotes in the output of the data
extraction facility for an ASCII extraction. The string used as the quote is specified in TEMP_EXTRACT_QUOTE, if
the default is not suitable.
Related Information
Specifies the delimiter between rows in the output of the data extraction facility for an ASCII extraction.
Allowed Values
String
Default
Empty string
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
TEMP_EXTRACT_ROW_DELIMITER specifies the delimiter between rows in the output of the data extraction
facility. In the case of an ASCII extraction, the default is to end the row with a newline on UNIX platforms and
with a carriage return/newline pair on Windows platforms.
The delimiter must occupy 1 to 4 bytes and must be valid in the collation order you are using, if you are using a
multibyte collation order. Choose a delimiter that does not occur in any of the data output strings. The default
for the TEMP_EXTRACT_ROW_DELIMITER option is the empty string. SAP IQ converts the empty string default
for this option to the newline on UNIX platforms and to the carriage return/newline pair on Windows platforms.
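As a sketch, a custom row delimiter can be set temporarily for one connection before running the extraction; the delimiter value shown is illustrative:

```sql
-- End each extracted row with '|' instead of the platform line terminator.
SET TEMPORARY OPTION TEMP_EXTRACT_ROW_DELIMITER = '|';
-- Restore the platform default by setting the option back to the empty string.
SET TEMPORARY OPTION TEMP_EXTRACT_ROW_DELIMITER = '';
```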
Related Information
Sets the maximum size (KB) of the corresponding output files generated by the parallel data extraction facility.
Allowed Values
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
The default value 0 uses a platform-dependent maximum file size for one disk file.
This option is different from TEMP_EXTRACT_SIZE1 through TEMP_EXTRACT_SIZE8, which are used for the
serial data extraction facility.
Related Information
Specifies the maximum sizes of the corresponding output files used by the data extraction facility.
Allowed Values
Windows: 0 – 128 GB
Note
Tape devices are not currently supported.
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
TEMP_EXTRACT_SIZE1 through TEMP_EXTRACT_SIZE8 are used to specify the maximum sizes of the
corresponding output files used by the data extraction facility. TEMP_EXTRACT_SIZE1 specifies the maximum
size of the output file specified by TEMP_EXTRACT_NAME1, TEMP_EXTRACT_SIZE2 specifies the maximum size
of the output file specified by TEMP_EXTRACT_NAME2, and so on.
When large file systems, such as JFS2, support file sizes larger than the default value, set
TEMP_EXTRACT_SIZEn to the value that the file system allows. For example, to support 1 TB, set the option:
TEMP_EXTRACT_SIZE1 = 1073741824 KB
If you are extracting to a single disk file or a single named pipe, leave the options TEMP_EXTRACT_NAME2
through TEMP_EXTRACT_NAME8 and TEMP_EXTRACT_SIZE1 through TEMP_EXTRACT_SIZE8 at their default
values.
Related Information
In combination with the TEMP_EXTRACT_BINARY option, specifies the type of extraction performed by the data
extraction facility.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Use this option with the TEMP_EXTRACT_BINARY option to specify the type of extraction performed by the
data extraction facility.
Extraction type    TEMP_EXTRACT_BINARY    TEMP_EXTRACT_SWAP
binary             ON                     OFF
binary/swap        ON                     ON
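Assuming this section describes the TEMP_EXTRACT_SWAP option (the section heading is not reproduced here), a byte-swapped binary extraction would be requested as follows:

```sql
-- Request a binary extraction with byte order swapped (binary/swap).
SET TEMPORARY OPTION TEMP_EXTRACT_BINARY = 'ON';
SET TEMPORARY OPTION TEMP_EXTRACT_SWAP   = 'ON';
```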
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
You can only use TEMP_EXTRACT_VARYING for varchar and varbinary columns in a binary mode extraction.
When you set TEMP_EXTRACT_VARYING to ON, the data field in the extracted file becomes variable length
(with a prefix field). The data field occupies only the data length in the extracted file, instead of the declared
length of the varchar or varbinary column, so that there is no trailing padding.
Use this option with TEMP_EXTRACT_LENGTH_PREFIX to indicate the data length in the extracted file; there is
no column delimiter in binary mode extractions.
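A sketch of a binary extraction with variable-length fields; the option names are taken from this and the related sections, and no values beyond ON/OFF are assumed:

```sql
-- Emit VARCHAR/VARBINARY fields at their data length, prefixed with a
-- length field, instead of padding to the declared column width.
SET TEMPORARY OPTION TEMP_EXTRACT_BINARY  = 'ON';
SET TEMPORARY OPTION TEMP_EXTRACT_VARYING = 'ON';
-- TEMP_EXTRACT_LENGTH_PREFIX controls the size of the length prefix.
```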
Related Information
Allowed Values
Default
200. SAP IQ actually reserves a maximum of 50 percent and a minimum of 1 percent of the last read-write file in
IQ_SYSTEM_TEMP.
Scope
Remarks
TEMP_RESERVED_DBSPACE_MB lets you control the amount of space SAP IQ sets aside in your temporary IQ
store for certain small but critical data structures used during release savepoint, commit, and checkpoint
operations. For a production database, set this value between 200 MB and 1 GB. The larger your IQ page size
and number of concurrent connections, the more reserved space you need.
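For example, a production database could reserve 500 MB, inside the recommended range:

```sql
-- Reserve 500 MB of IQ_SYSTEM_TEMP for the critical structures noted above.
SET OPTION PUBLIC.TEMP_RESERVED_DBSPACE_MB = 500;
```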
Related Information
Allowed Values
ON, OFF
Default
ON
Scope
Remarks
When TEMP_SPACE_LIMIT_CHECK is ON, the database server checks the amount of catalog store temporary
file space that a connection uses. If a connection requests more than its quota of temporary file space when
this option is set to OFF, a fatal error can occur. When this option is set to ON, if a connection requests more
than its quota of temporary file space, the request fails and the error “Temporary space limit exceeded”
is returned.
Two factors are used to determine the temporary file quota for a connection: the maximum size of the
temporary file, and the number of active database connections. The maximum size of the temporary file is the
sum of the current size of the file and the amount of disk space available on the partition containing the file.
When limit checking is turned on, the server checks a connection for exceeding its quota when the temporary
file has grown to 80% or more of its maximum size, and the connection requests more temporary file space.
Once this happens, any connection fails that uses more than the maximum temporary file space divided by the
number of active connections.
Note
This option is unrelated to IQ temporary store space. To constrain the growth of IQ temporary space, use
the QUERY_TEMP_SPACE_LIMIT option and MAX_TEMP_SPACE_PER_CONNECTION option.
You can obtain information about the space available for the temporary file using the sa_disk_free_space
system procedure.
Example
A database is started with the temporary file on a drive with 100 MB free and no other active files on the same
drive. The available temporary file space is 100 MB. The DBA enters:
SET OPTION PUBLIC.TEMP_SPACE_LIMIT_CHECK = 'ON'
As long as the temporary file stays below 80 MB, the server behaves as it did before. Once the file reaches 80
MB, the new behavior might occur. Assume that with 10 queries running, the temporary file needs to grow.
When the server finds that one query is using more than 8 MB of temporary file space, that query fails.
Allowed Values
0 to 2
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Users must be licensed for the Unstructured Data Analytics Option to use TEXT indexes.
Sets the format used for times retrieved from the database.
Allowed Values
A string composed of the symbols HH, NN, MM, SS, separated by colons.
Default
● 'HH:NN:SS.SSS'
● For Open Client and JDBC connections, the default is also set to HH:NN:SS.SSS.
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
Each symbol is substituted with the appropriate data for the date being formatted. Any format symbol that
represents character rather than digit output can be in uppercase, which causes the substituted characters
also to be in uppercase. For numbers, using mixed case in the format string suppresses leading zeros.
Multibyte characters are not supported in format strings. Only single-byte characters are allowed, even when
the collation order of the database is a multibyte collation order like 932JPN.
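Assuming this section describes the TIME_FORMAT option (the section heading is not reproduced here), a format without fractional seconds could be set as:

```sql
-- Return times as hours:minutes:seconds, dropping fractional seconds.
SET OPTION PUBLIC.TIME_FORMAT = 'HH:NN:SS';
```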
Related Information
Sets the format used for timestamps retrieved from the database.
Allowed Values
Default
'YYYY-MM-DD HH:NN:SS.SSS'
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Symbol Description
yy 2-digit year.
mmmm[m...] Character long form for month name—as many characters as there are m's, until the number of
m’s specified exceeds the number of characters in the month’s name.
dddd[d...] Character long form for day name—as many characters as there are d's, until the number of d’s
specified exceeds the number of characters in the day’s name.
hh 2-digit hours.
nn 2-digit minutes.
ss.SSS Seconds (ss) and fractions of a second (SSS), up to six decimal places. Not all platforms support
timestamps to a precision of six places.
Each symbol is substituted with the appropriate data for the date being formatted. Any format symbol that
represents character rather than digit output can be in uppercase, which causes the substituted characters
also to be in uppercase. For numbers, using mixed case in the format string suppresses leading zeros.
Multibyte characters are not supported in format strings. Only single-byte characters are allowed, even when
the collation order of the database is a multibyte collation order like 932JPN.
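Assuming this section describes the TIMESTAMP_FORMAT option, a variant of the default format could be set temporarily for one connection:

```sql
-- Return timestamps without fractional seconds.
SET TEMPORARY OPTION TIMESTAMP_FORMAT = 'YYYY-MM-DD HH:NN:SS';
```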
Related Information
Allowed Values
1 to 1000
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
TOP_NSORT_CUTOFF_PAGES sets the threshold, measured in pages, where evaluation of a query that contains
both a TOP clause and ORDER BY clause switches algorithms from ordered list-based processing to sort-based
processing. Ordered list processing performs better in cases where the TOP N value is smaller than the number
of result rows. Sort-based processing performs better for large TOP N values.
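For example, to raise the threshold for one connection before running a query that the option affects (the table and column names are illustrative):

```sql
-- Raise the cutoff to 500 pages for this connection only.
SET TEMPORARY OPTION TOP_NSORT_CUTOFF_PAGES = 500;
-- A query whose evaluation the option governs: it has both TOP and ORDER BY.
SELECT TOP 10 Surname FROM Employees ORDER BY Surname;
```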
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
Remarks
Provides consistent loading of data for collations that contain both single-byte and multibyte characters. When
TRIM_PARTIAL_MBC is ON:
● A partial multibyte character is replaced with a blank when loading into a CHAR column.
● A partial multibyte character is truncated when loading into a VARCHAR column.
Related Information
Specifies the trust relationship for outbound Transport Layer Security (TLS) connections made by LDAP User
Authentication, INC, DAS INC, and MIPC connections.
Allowed Values
A valid network path to the location of a TXT file containing the list of trusted certificate authorities that sign
server certificates.
Default
NULL, meaning that no outbound TLS connection can be started because there are no trusted certificate
authorities.
Scope
Remarks
This option identifies the path to the location of the list of trusted certificate authorities. The list must be stored
in a TXT file. In a Windows environment, the file may be placed in a shared location on the local drive so that all
SAP applications on that machine can use it.
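Assuming this section describes the TRUSTED_CERTIFICATES_FILE option (the section heading is not reproduced here, and the path shown is illustrative):

```sql
-- Point outbound TLS connections at a shared list of trusted CAs.
SET OPTION PUBLIC.TRUSTED_CERTIFICATES_FILE = 'C:\certs\trusted.txt';
```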
Related Information
Controls whether the @ sign can be used as a prefix for Embedded SQL host variable names.
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When TSQL_VARIABLES is set to ON, you can use the @ sign instead of the colon as a prefix for host variable
names in Embedded SQL. This is implemented primarily for the Open Server Gateway.
Related Information
Allowed Values
Integer
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
SAP IQ tracks the number of open cursors and allocates memory accordingly. In certain circumstances, you
can use this option to adjust the minimum number of concurrent cursors that SAP IQ assumes are in use,
causing it to allocate memory from the temporary cache more sparingly.
Set this option only after careful analysis shows it is actually required. If you need to set this parameter, contact
Technical Support with details.
Related Information
Specifies a user-supplied authentication function that can be used to implement password rules.
Allowed Values
String
Default
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY SECURITY OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
When the VERIFY_PASSWORD_FUNCTION option value is set to a valid string, the statement GRANT CONNECT
TO <user_id> IDENTIFIED BY <password> calls the function specified by the option value.
The option value requires the form <owner.function_name> to prevent users from overriding the function. The
function takes two parameters:
● <user_name> VARCHAR(128)
● <new_pwd> VARCHAR(255)
If VERIFY_PASSWORD_FUNCTION is set, you cannot specify more than one user_id and password with the
GRANT CONNECT statement.
The following sample code defines a table and a function and sets some login policy options. Together they
implement advanced password rules that include requiring certain types of characters in the password,
disallowing password reuse, and expiring passwords. The function is called by the database server with the
VERIFY_PASSWORD_FUNCTION option when a user ID is created or a password is changed. The application can
call the procedure specified by the POST_LOGIN_PROCEDURE option to report that the password should be
changed before it expires.
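The sample code itself is not reproduced here; the following is a minimal sketch of the pattern described, with illustrative names and rules. It assumes the convention that the function returns NULL to accept the new password and an error-message string to reject it.

```sql
-- Illustrative password-verification function (names and rules are examples).
CREATE FUNCTION DBA.f_verify_pwd(
    uid     VARCHAR(128),
    new_pwd VARCHAR(255) )
RETURNS VARCHAR(255)
BEGIN
    -- Rule: minimum length of 8 characters.
    IF LENGTH( new_pwd ) < 8 THEN
        RETURN 'Password must be at least 8 characters long';
    END IF;
    -- Rule: at least one digit.
    IF new_pwd NOT REGEXP '.*[0-9].*' THEN
        RETURN 'Password must contain at least one digit';
    END IF;
    RETURN NULL;   -- NULL accepts the password
END;

-- Register the function in owner-qualified form, as required.
SET OPTION PUBLIC.VERIFY_PASSWORD_FUNCTION = 'DBA.f_verify_pwd';
```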
Related Information
Allowed Values
ON, OFF
Default
OFF
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
If this option is set to ON, the database does not check foreign key integrity until the next COMMIT statement.
Otherwise, all foreign keys not created with the CHECK ON COMMIT option are checked as they are inserted,
updated, or deleted.
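Assuming this section describes the WAIT_FOR_COMMIT option (the section heading is not reproduced here), deferred checking lets parent and child rows be changed in any order within a transaction; the table names are illustrative:

```sql
SET TEMPORARY OPTION WAIT_FOR_COMMIT = 'ON';
-- Delete a parent row before its child rows; no check happens yet.
DELETE FROM Departments WHERE DepartmentID = 600;
DELETE FROM Employees   WHERE DepartmentID = 600;
COMMIT;   -- foreign key integrity is verified here
```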
Related Information
Specifies the percentage of the buffer caches above the wash marker.
Allowed Values
1 to 100
Default
20
Scope
Remarks
SAP IQ buffer caches are organized as a long MRU/LRU chain. The area above the wash marker is used to
sweep out (that is, write) dirty pages to disk.
In the IQ Monitor -cache report, the GDirty column shows the number of times the LRU buffer was grabbed
in a “dirty” (modified) state. If GDirty is greater than 0 for more than a brief time, you might need to increase
SWEEPER_THREADS_PERCENT or WASH_AREA_BUFFERS_PERCENT.
Note
Before changing this option, check the value of the CACHE_AFFINITY_PERCENT option.
WASH_AREA_BUFFERS_PERCENT affects the LRU side of the buffer cache and CACHE_AFFINITY_PERCENT
affects the MRU side. The total of these two values cannot exceed 100 percent.
The default setting of this option is almost always appropriate. Occasionally, SAP Technical Support might ask
you to increase this value.
Related Information
Allowed Values
● 0 – the delete method is selected by the cost model. The cost model only selects either the mid or large
method for deletion.
● 1 – forces the small method for deletion. Small method is useful when the number of rows being deleted is
a very small percentage of the total number of rows in the table. Small delete can randomly access the
index, causing cache thrashing with large data sets.
● 2 – forces the large method for deletion. This algorithm scans the entire index searching for rows to delete.
The large method is useful when the number of rows being deleted is a high percentage of the total number
of rows in the table.
● 3 – forces the mid method for deletion. Mid method is a variation of the small method that accesses the
index in order and is generally faster than the small method.
Default
0
Scope
● Option can be set at the database (PUBLIC) or user level. At the database level, the value becomes the
default for any new user, but has no impact on existing users. At the user level, overrides the PUBLIC value
for that user only. No system privilege is required to set option for self. System privilege is required to set at
database level or at user level for any user other than self.
● Requires the SET ANY PUBLIC OPTION system privilege to set this option. Can be set temporary for an
individual connection or for the PUBLIC role. Takes effect immediately.
Remarks
WD_DELETE_METHOD specifies the algorithm used during a delete operation in a WD index. When this option is
not set or is set to 0, the delete method is selected by the cost model. The cost model considers the CPU
related costs as well as I/O related costs in selecting the appropriate delete algorithm. The cost model takes
into account:
● Rows deleted
● Index size
Example
This example forces the large method for deletion from a WD index:
SET OPTION PUBLIC.WD_DELETE_METHOD = 2
Related Information