
Adaptive Server Anywhere

User’s Guide

Last modified: November 2000


Part Number: MC0057
Copyright © 2001 Sybase, Inc. All rights reserved.
Information in this manual may change without notice and does not represent a commitment on the part of
Sybase, Inc. and its subsidiaries.
Sybase, Inc. provides the software described in this manual under a Sybase License Agreement. The software may be
used only in accordance with the terms of the agreement.
No part of this publication may be reproduced, transmitted, or translated in any form or by any means, electronic,
mechanical, manual, optical, or otherwise, without the prior written permission of Sybase, Inc.
Sybase, SYBASE (logo), ADA Workbench, Adaptable Windowing Environment, Adaptive Component Architecture,
Adaptive Server, Adaptive Server Anywhere, Adaptive Server Enterprise, Adaptive Server Enterprise Monitor, Adaptive
Server Enterprise Replication, Adaptive Server Everywhere, Adaptive Server IQ, Adaptive Warehouse, AnswerBase,
Anywhere Studio, Application Manager, AppModeler, APT-Build, APT-Edit, APT-Execute, APT-FORMS, APT-Library,
APT-Translator, APT Workbench, ASEP, Backup Server, BayCam, Bit-Wise, Certified PowerBuilder Developer, Certified
SYBASE Professional, Certified SYBASE Professional (logo), ClearConnect, Client Services, Client-Library, CodeBank,
Cohesion, Column Design, ComponentPack, Connection Manager, CSP, Data Pipeline, Data Workbench, DataArchitect,
Database Analyzer, DataExpress, DataServer, DataWindow, DB-Library, dbQueue, Developers Workbench, Direct
Connect Anywhere, DirectConnect, Distribution Director, Dynamo, E-Anywhere, E-Whatever, Electronic Case
Management, Embedded SQL, EMS, Enterprise Application Server, Enterprise Application Studio, Enterprise
Client/Server, Enterprise Connect, Enterprise Data Studio, Enterprise Manager, Enterprise SQL Server Manager, Enterprise
Work Architecture, Enterprise Work Designer, Enterprise Work Modeler, EWA, Financial Fusion, First Impression,
Formula One, Gateway Manager, GeoPoint, ImpactNow, InfoMaker, Information Anywhere, Information Everywhere,
InformationConnect, InstaHelp, Intellidex, InternetBuilder, iScript, Jaguar CTS, jConnect for JDBC, KnowledgeBase,
Logical Memory Manager, MainframeConnect, Maintenance Express, MAP, MDI Access Server, MDI Database Gateway,
media.splash, MetaWorks, MethodSet, MobiCATS, MySupport, Net-Gateway, Net-Library, NetImpact, Next Generation
Learning, Next Generation Learning Studio, O DEVICE, OASiS, OASiS (logo), ObjectConnect, ObjectCycle,
OmniConnect, OmniSQL Access Module, OmniSQL Toolkit, Open Client, Open Client/Server, Open Client/Server
Interfaces, Open ClientConnect, Open Gateway, Open Server, Open ServerConnect, Open Solutions, Optima++,
Partnerships that Work, PB-Gen, PC APT Execute, PC DB-Net, PC Net Library, PhysicalArchitect, Power Through
Knowledge, Power++, power.stop, PowerAMC, PowerBuilder, PowerBuilder Foundation Class Library, PowerDesigner,
PowerDimensions, PowerDynamo, PowerJ, PowerScript, PowerSite, PowerSocket, Powersoft, Powersoft Portfolio,
Powersoft Professional, PowerStage, PowerStudio, PowerTips, PowerWare Desktop, PowerWare Enterprise,
ProcessAnalyst, Relational Beans, Replication Agent, Replication Driver, Replication Server, Replication Server Manager,
Replication Toolkit, Report Workbench, Report-Execute, Resource Manager, RW-DisplayLib, RW-Library, S-Designor,
S Designor, SAFE, SAFE/PRO, SDF, Secure SQL Server, Secure SQL Toolset, Security Guardian, SKILS, smart.partners,
smart.parts, smart.script, SQL Advantage, SQL Anywhere, SQL Anywhere Studio, SQL Code Checker, SQL Debug,
SQL Edit, SQL Edit/TPU, SQL Everywhere, SQL Modeler, SQL Remote, SQL Server, SQL Server Manager, SQL Server
SNMP SubAgent, SQL Server/CFT, SQL Server/DBM, SQL SMART, SQL Station, SQL Toolset, SQLJ, Startup.Com,
STEP, SupportNow, Sybase Central, Sybase Client/Server Interfaces, Sybase Development Framework, Sybase Financial
Server, Sybase Gateways, Sybase Learning Connection, Sybase SQL Desktop, Sybase SQL Lifecycle, Sybase SQL
Workgroup, Sybase Synergy Program, Sybase User Workbench, Sybase Virtual Server Architecture, Sybase MPP,
SybaseWare, Syber Financial, SyberAssist, SyBooks, System XI (logo), System 10, System 11, SystemTools, Tabular Data
Stream, The Enterprise Client/Server Company, The Extensible Software Platform, The Future Is Wide Open,
The Learning Connection, The Model For Client/Server Solutions, The Online Information Center, Transact-SQL,
Translation Toolkit, Turning Imagination Into Reality, UltraLite, UNIBOM, Unilib, Uninull, Unisep, Unistring,
URK Runtime Kit for UniCode, Viewer, Visual Components, VisualSpeller, VisualWriter, VQL, Warehouse Control
Center, Warehouse Studio, Warehouse WORKS, WarehouseArchitect, Watcom, Watcom SQL Server, Watcom SQL,
Web.PB, Web.SQL, Web Deployment Kit, WebSights, WebViewer, WorkGroup SQL Server, XA-Library, XA-Server,
and XP Server are trademarks of Sybase, Inc. or its subsidiaries.
All other trademarks are property of their respective owners.
Contents

About This Manual.......................................................... xiii


Related documentation ...........................................................xiv
Documentation conventions.................................................... xv
The sample database............................................................ xviii

PART ONE
Starting and Connecting to Your Database ..................... 1

1 Running the Database Server ........................................... 3


Introduction ............................................................................... 4
Starting the server..................................................................... 7
Some common command-line switches ................................... 9
Stopping the database server ................................................. 15
Starting and stopping databases ............................................ 17
Running the server outside the current session ..................... 18
Troubleshooting server startup ............................................... 30

2 Connecting to a Database............................................... 33
Introduction to connections ..................................................... 34
Connecting from Sybase Central or Interactive SQL ................... 38
Simple connection examples .................................................. 41
Working with ODBC data sources .......................................... 49
Connecting from desktop applications to a Windows CE database ....... 59
Connecting to a database using OLE DB ............................... 62
Connection parameters........................................................... 64
Troubleshooting connections .................................................. 67
Using integrated logins ........................................................... 77

3 Client/Server Communications....................................... 85
Network communication concepts .......................................... 86
Real world protocol stacks ...................................................... 91
Supported network protocols .................................................. 94
Using the TCP/IP protocol ...................................................... 95
Using the SPX protocol........................................................... 98
Using the NetBIOS protocol.................................................. 100
Using Named Pipes .............................................................. 101
Troubleshooting network communications ........................... 102

PART TWO
Working with Databases ............................................... 109

4 Working with Database Objects ................................... 111


Introduction ........................................................................... 112
Working with databases........................................................ 115
Working with tables............................................................... 124
Working with views ............................................................... 138
Working with indexes ............................................................ 145

5 Queries: Selecting Data from a Table .......................... 149


Query overview ..................................................................... 150
The SELECT clause: specifying columns............................. 153
The FROM clause: specifying tables .................................... 161
The WHERE clause: specifying rows ................................... 162

6 Summarizing, Grouping, and Sorting Query Results.. 173


Summarizing query results using aggregate functions ................ 174
The GROUP BY clause: organizing query results into groups .......... 178
Understanding GROUP BY................................................... 180
The HAVING clause: selecting groups of data ..................... 184
The ORDER BY clause: sorting query results ...................... 187
The UNION operation: combining queries............................ 190
Standards and compatibility.................................................. 192

7 Joins: Retrieving Data from Several Tables ................ 195


How joins work ...................................................................... 196
How joins are structured ....................................................... 198
Key joins................................................................................ 200
Natural joins .......................................................................... 202
Joins using comparisons....................................................... 203
Inner, left-outer, and right-outer joins.................................... 205
Self-joins and correlation names .......................................... 209
Cross joins ............................................................................ 211

How joins are processed....................................................... 214
Joining more than two tables ................................................ 216
Joins involving derived tables ............................................... 219
Transact-SQL outer joins ...................................................... 220

8 Using Subqueries .......................................................... 223


What is a subquery? ............................................................. 224
Using subqueries in the WHERE clause .............................. 225
Subqueries in the HAVING clause........................................ 226
Subquery comparison test .................................................... 228
Quantified comparison tests with ANY and ALL ................... 230
Testing set membership with IN conditions .......................... 233
Existence test........................................................................ 235
Outer references ................................................................... 237
Subqueries and joins ............................................................ 238
Nested subqueries ................................................................ 241
How subqueries work............................................................ 243

9 Adding, Changing, and Deleting Data .......................... 253


Data modification statements................................................ 254
Adding data using INSERT ................................................... 255
Changing data using UPDATE ............................................. 259
Deleting data using DELETE ................................................ 261

10 Using SQL in Applications ............................................ 263


Executing SQL statements in applications ........................... 264
Preparing statements ............................................................ 266
Introduction to cursors .......................................................... 269
Types of cursor ..................................................................... 272
Working with cursors............................................................. 275
Describing result sets............................................................ 281
Controlling transactions in applications ................................ 283

11 International Languages and Character Sets .............. 287


Introduction to international languages and character sets ......... 288
Understanding character sets in software ............................ 291
Understanding locales .......................................................... 297
Understanding collations....................................................... 303
Understanding character set translation ............................... 311
Collation internals.................................................................. 314
International language and character set tasks .................... 319

PART THREE
Relational Database Concepts ..................................... 331

12 Designing Your Database ............................................. 333


Introduction ........................................................................... 334
Database design concepts.................................................... 335
The design process............................................................... 341
Designing the database table properties .............................. 355

13 Ensuring Data Integrity ................................................. 357


Data integrity overview.......................................................... 358
Using column defaults........................................................... 362
Using table and column constraints ...................................... 367
Using domains ...................................................................... 371
Enforcing entity and referential integrity ............................... 374
Integrity rules in the system tables ....................................... 379

14 Using Transactions and Isolation Levels..................... 381


Introduction to transactions................................................... 382
Isolation levels and consistency............................................ 386
Transaction blocking and deadlock ...................................... 392
Choosing isolation levels ...................................................... 394
Isolation level tutorials........................................................... 398
How locking works ................................................................ 413
Particular concurrency issues ............................................... 426
Replication and concurrency................................................. 428
Summary............................................................................... 430

PART FOUR
Adding Logic to the Database ...................................... 433

15 Using Procedures, Triggers, and Batches................... 435


Procedure and trigger overview............................................ 437
Benefits of procedures and triggers...................................... 438
Introduction to procedures .................................................... 439
Introduction to user-defined functions................................... 446
Introduction to triggers .......................................................... 450
Introduction to batches.......................................................... 457
Control statements................................................................ 459
The structure of procedures and triggers.............................. 462
Returning results from procedures ....................................... 466
Using cursors in procedures and triggers ............................. 471

Errors and warnings in procedures and triggers................... 474
Using the EXECUTE IMMEDIATE statement in procedures ................ 483
Transactions and savepoints in procedures and triggers ............. 484
Some tips for writing procedures .......................................... 485
Statements allowed in batches ............................................. 487
Calling external libraries from procedures ............................ 488

16 Automating Tasks Using Schedules and Events ........ 495


Introduction ........................................................................... 496
Understanding schedules ..................................................... 498
Understanding events ........................................................... 500
Understanding event handlers .............................................. 504
Schedule and event internals................................................ 506
Scheduling and event handling tasks ................................... 508

17 Welcome to Java in the Database................................. 513


Introduction to Java in the database ..................................... 514
Java in the database Q & A .................................................. 517
A Java seminar ..................................................................... 523
The runtime environment for Java in the database ................... 533
A Java in the database exercise ........................................... 541

18 Using Java in the Database........................................... 549


Overview of using Java......................................................... 550
Java-enabling a database..................................................... 553
Installing Java classes into a database................................. 558
Creating columns to hold Java objects ................................. 563
Inserting, updating, and deleting Java objects...................... 565
Querying Java objects .......................................................... 570
Comparing Java fields and objects ....................................... 572
Special features of Java classes in the database................. 575
How Java objects are stored................................................. 580
Java database design ........................................................... 583
Using computed columns with Java classes ........................ 586
Configuring memory for Java................................................ 589

19 Data Access Using JDBC .............................................. 591


JDBC overview...................................................................... 592
Establishing JDBC connections ............................................ 597

Using JDBC to access data .................................................. 604
Using the Sybase jConnect JDBC driver .............................. 612
Creating distributed applications........................................... 616

20 Debugging Logic in the Database ................................ 621


Introduction to debugging in the database............................ 622
Tutorial 1: Connecting to a database.................................... 624
Tutorial 2: Debugging a stored procedure ............................ 627
Tutorial 3: Debugging a Java class....................................... 630
Common debugger tasks...................................................... 635
Writing debugger scripts ....................................................... 637

PART FIVE
Database Administration and Advanced Use .............. 643

21 Backup and Data Recovery........................................... 645


Introduction to backup and recovery..................................... 646
Understanding backups ........................................................ 651
Designing backup procedures .............................................. 654
Configuring your database for data protection...................... 663
Backup and recovery internals.............................................. 667
Backup and recovery tasks................................................... 674

22 Importing and Exporting Data ...................................... 693


Introduction to import and export .......................................... 694
Understanding importing and exporting................................ 696
Designing import procedures ................................................ 701
Designing export procedures ................................................ 705
Designing rebuild and extract procedures ............................ 709
Import and export internals ................................................... 713
Import tasks .......................................................................... 715
Export tasks .......................................................................... 721
Rebuild tasks ........................................................................ 729
Extract tasks ...................................................... 734

23 Managing User IDs and Permissions ........................... 735


Database permissions overview ........................................... 736
Setting user and group options ............................................. 740
Managing individual user IDs and permissions .................... 741
Managing connected users................................................... 752
Managing groups .................................................................. 753
Database object names and prefixes ................................... 760

Using views and procedures for extra security ..................... 762
Changing ownership on nested objects ............................... 765
How user permissions are assessed .................................... 767
Managing the resources connections use ............................ 768
Users and permissions in the system tables ........................ 769

24 Keeping Your Data Secure ............................................ 771


Security features overview.................................................... 772
Security tips........................................................................... 773
Controlling database access................................................. 775
Controlling the tasks users can perform ............................... 777
Auditing database activity ..................................................... 778
Running the database server in a secure fashion ................ 782

25 Working with Database Files ........................................ 785


Overview of database files.................................................... 786
Using additional dbspaces .................................................... 788
Working with write files ......................................................... 792
Using the utility database...................................................... 794

26 Monitoring and Improving Performance ...................... 799


Top performance tips ............................................................ 800
Using the cache to improve performance ............................. 807
Using keys to improve query performance ........................... 811
Using indexes to improve query performance ...................... 816
Search strategies for queries from more than one table ............. 820
Sorting query results ............................................................. 823
Temporary tables used in query processing......................... 824
How the optimizer works....................................................... 826
Monitoring database performance ........................................ 829

27 Query Optimization........................................................ 835


The role of the optimizer ....................................................... 836
Steps in optimization ............................................................. 838
Reading access plans ........................................................... 839
Underlying assumptions........................................................ 841
Physical data organization and access................................. 845
Indexes.................................................................................. 848
Predicate analysis ................................................................. 850
Semantic query transformations ........................................... 852
Selectivity estimation............................................................. 856

Join enumeration and index selection .................................. 862
Cost estimation ..................................................................... 864
Subquery caching ................................................................. 865

28 Deploying Databases and Applications ....................... 867


Deployment overview............................................................ 868
Understanding installation directories and file names .............. 870
Using InstallShield templates for deployment....................... 873
Using a silent installation for deployment ............................. 875
Deploying client applications................................................. 878
Deploying database servers ................................................. 887
Deploying embedded database applications ........................ 889

29 Accessing Remote Data................................................ 893


Introduction ........................................................................... 894
Basic concepts ...................................................................... 896
Working with remote servers ................................................ 898
Working with external logins ................................................. 903
Working with proxy tables ..................................................... 905
Example: a join between two remote tables ......................... 910
Accessing multiple local databases ...................................... 912
Sending native statements to remote servers ...................... 913
Using remote procedure calls (RPCs) .................................. 914
Transaction management and remote data.......................... 917
Internal operations ................................................................ 919
Troubleshooting remote data access.................................... 923

30 Server Classes for Remote Data Access ..................... 925


Overview ............................................................................... 926
JDBC-based server classes.................................................. 927
ODBC-based server classes................................................. 930

31 Three-tier Computing and Distributed Transactions .. 943


Introduction ........................................................................... 944
Three-tier computing architecture ......................................... 945
Using distributed transactions............................................... 949
Using Enterprise Application Server with Adaptive Server Anywhere .. 951

PART SIX
The Adaptive Server Family .......................................... 955

32 Transact-SQL Compatibility.......................................... 957


An overview of Transact-SQL support .................................. 958
Adaptive Server architectures............................................... 961
Configuring databases for Transact-SQL compatibility ............... 967
Writing compatible SQL statements...................................... 975
Transact-SQL procedure language overview ....................... 980
Automatic translation of stored procedures .......................... 983
Returning result sets from Transact-SQL procedures ................. 984
Variables in Transact-SQL procedures................................. 985
Error handling in Transact-SQL procedures ......................... 986

33 Adaptive Server Anywhere as an Open Server............ 989


Open Clients, Open Servers, and TDS................................. 990
Setting up Adaptive Server Anywhere as an Open Server .............. 992
Configuring Open Servers .................................................... 994
Characteristics of Open Client and jConnect connections ........... 1000

34 Replicating Data with Replication Server................... 1003


Introduction to replication.................................................... 1004
A replication tutorial............................................................. 1007
Configuring databases for Replication Server .................... 1017
Using the LTM..................................................................... 1020

PART SEVEN
Appendixes .................................................................. 1031

A Dialog Box Descriptions ............................................. 1033


Introduction to dialog boxes ................................................ 1034
Dialogs accessed through the File menu............................ 1035
Dialogs accessed through the Tools menu......................... 1045
Debugger windows.............................................................. 1053

B Property Sheet Descriptions........................................1061
Introduction to property sheets ........................................... 1063
Service properties ............................................................... 1064
Server properties ................................................................ 1067
Statistics properties............................................................. 1069
Database properties............................................................ 1070
Table properties .................................................................. 1072
Column properties............................................................... 1075
Foreign Key properties........................................................ 1078
Index properties .................................................................. 1081
Trigger properties................................................................ 1082
View properties ................................................................... 1083
Procedures and Functions properties................................. 1084
Users and Groups properties.............................................. 1085
Integrated Logins properties ............................................... 1088
Java Objects properties ...................................................... 1089
Domains properties............................................................. 1090
Events properties ................................................................ 1091
Publications properties........................................................ 1092
Articles properties ............................................................... 1093
Remote Users properties .................................................... 1095
Message Types properties.................................................. 1099
Connected Users properties ............................................... 1100
Database Space properties ................................................ 1101
Remote Servers properties ................................................. 1102
MobiLink Synchronization Templates properties ................ 1103

Glossary........................................................................1107

Index..............................................................................1125

About This Manual

Subject
This manual describes how to use Adaptive Server Anywhere. It includes
material needed to develop applications that work with Adaptive Server
Anywhere and material for designing, building, and administering Adaptive
Server Anywhere databases.

Audience
This manual is for all users of Adaptive Server Anywhere.

Before you begin


This manual assumes that you have an elementary familiarity with database
management systems and Adaptive Server Anywhere in particular. If you do
not have such a familiarity, you should consider reading Adaptive Server
Anywhere Getting Started before reading this manual.

Online documentation is more current


The printed version of this book may not be updated with each
maintenance release of Adaptive Server Anywhere. The online Help
version is updated with each maintenance release and so is more current.

Contents
Topic Page
Related documentation xiv
Documentation conventions xv
The sample database xviii

Related documentation
Adaptive Server Anywhere is a part of SQL Anywhere Studio. For an
overview of the different components of SQL Anywhere Studio, see
Introducing SQL Anywhere Studio.
The Adaptive Server Anywhere documentation consists of the following
books:
♦ Getting Started Intended for all users of Adaptive Server Anywhere,
this book describes the following:
♦ New features in Adaptive Server Anywhere
♦ Behavior changes from previous releases
♦ Upgrade procedures
♦ Introductory material for beginning users.
♦ Programming Interfaces Guide Intended for application developers
writing programs that directly access the ODBC, Embedded SQL, or
Open Client interfaces, this book describes how to develop applications
for Adaptive Server Anywhere.
This book is not required for users of Application Development tools
with built-in ODBC support, such as Sybase PowerBuilder.
♦ Reference A full reference to Adaptive Server Anywhere. This book
describes the database server, the administration utilities, the SQL
language, and error messages.
♦ Quick Reference A handy printed booklet with complete SQL syntax
and other key reference material in a concise format.
♦ Read Me First (UNIX only) A separate booklet is provided with UNIX
versions of Adaptive Server Anywhere, describing installation and
providing some UNIX-specific notes.
The format of these books (printed or online) may depend on the product in
which you obtained Adaptive Server Anywhere. Depending on which
package you have purchased, you may have additional books describing
other components of your product.

Documentation conventions
This section lists the typographic and graphical conventions used in this
documentation.

Syntax conventions
The following conventions are used in the SQL syntax descriptions:
♦ Keywords All SQL keywords are shown in UPPER CASE. However,
SQL keywords are case insensitive, so you can enter keywords in any
case you wish; SELECT is the same as Select which is the same as
select.
♦ Placeholders Items that must be replaced with appropriate identifiers
or expressions are shown in italics.
♦ Continuation Lines beginning with ... are a continuation of the
statements from the previous line.
♦ Repeating items Lists of repeating items are shown with an element
of the list followed by an ellipsis (three dots). One or more list elements
are allowed. If more than one is specified, they must be separated by
commas.
♦ Optional portions Optional portions of a statement are enclosed by
square brackets. For example,
RELEASE SAVEPOINT [ savepoint-name ]
indicates that the savepoint-name is optional. The square brackets should
not be typed.
♦ Options When none or only one of a list of items must be chosen, the
items are separated by vertical bars and the list enclosed in square
brackets. For example,
[ ASC | DESC ]
indicates that you can choose one of ASC, DESC, or neither. The square
brackets should not be typed.
♦ Alternatives When precisely one of the options must be chosen, the
alternatives are enclosed in curly braces. For example,
QUOTES { ON | OFF }
indicates that exactly one of ON or OFF must be provided. The braces
should not be typed.

Graphic icons
The following icons are used in this documentation:

Icon Meaning

A client application.

A database server, such as Sybase Adaptive Server Anywhere or
Adaptive Server Enterprise.

An UltraLite application and database server. In UltraLite, the database
server and the application are part of the same process.

A database.
In some high-level diagrams, the icon may be used to
represent both the database and the database server
that manages it.

Replication or synchronization middleware. These pieces of software
assist in sharing data among databases. Examples are the MobiLink
synchronization server, the SQL Remote Message Agent, and the
Replication Agent for use with Replication Server.
A Sybase Replication Server.

A programming interface.
API

Installed files
The following terms are used throughout the manual:
♦ Installation directory The directory into which you install Adaptive
Server Anywhere.
♦ Executable directory The executables and other files for each
operating system are held in an executable subdirectory of the
installation directory. This subdirectory has the following name:
♦ Windows NT and Windows 95/98 win32
♦ UNIX bin
♦ NetWare and Windows CE The executables are held in the
Adaptive Server Anywhere installation directory itself on these
platforms.

The sample database
There is a sample database included with Adaptive Server Anywhere. Many
of the examples throughout the documentation use this sample database.
The sample database represents a small company. It contains internal
information about the company (employees, departments, and financial data)
as well as product information (products), sales information (sales orders,
customers, and contacts), and financial information (fin_code, fin_data).
The tables in the sample database, and how they relate to each other, are
listed below. Primary key columns are marked <pk>, and foreign key
columns <fk>.

asademo.db

product
id <pk> integer, name char(15), description char(30), size char(18),
color char(6), quantity integer, unit_price numeric(15,2)

employee
emp_id <pk> integer, manager_id integer, emp_fname char(20),
emp_lname char(20), dept_id <fk> integer, street char(40), city char(20),
state char(4), zip_code char(9), phone char(10), status char(1),
ss_number char(11), salary numeric(20,3), start_date date,
termination_date date, birth_date date, bene_health_ins char(1),
bene_life_ins char(1), bene_day_care char(1), sex char(1)

sales_order_items
id <pk,fk> integer, line_id <pk> smallint, prod_id <fk> integer,
quantity integer, ship_date date

customer
id <pk> integer, fname char(15), lname char(20), address char(35),
city char(20), state char(2), zip char(10), phone char(12),
company_name char(35)

sales_order
id <pk> integer, cust_id <fk> integer, order_date date,
fin_code_id <fk> char(2), region char(7), sales_rep <fk> integer

fin_code
code <pk> char(2), type char(10), description char(50)

fin_data
year <pk> char(4), quarter <pk> char(2), code <pk,fk> char(2),
amount numeric(9)

contact
id <pk> integer, last_name char(15), first_name char(15), title char(2),
street char(30), city char(20), state char(2), zip char(5), phone char(10),
fax char(10)

department
dept_id <pk> integer, dept_name char(40), dept_head_id <fk> integer

The foreign key relationships are:
♦ sales_order_items.prod_id references product.id
♦ sales_order_items.id references sales_order.id
♦ sales_order.cust_id references customer.id
♦ sales_order.sales_rep references employee.emp_id
♦ sales_order.fin_code_id references fin_code.code
♦ fin_data.code references fin_code.code
♦ employee.dept_id references department.dept_id
♦ department.dept_head_id references employee.emp_id
The sample database is held in a file named asademo.db, and is located in
your installation directory.

P A R T O N E

Starting and Connecting to Your Database

This part describes how to start the Adaptive Server Anywhere database
server, and how to connect to your database from a client application.

C H A P T E R 1

Running the Database Server

About this chapter This chapter describes how to start and stop the Adaptive Server Anywhere
database server, and the options open to you on startup under different
operating systems.
Contents
Topic Page
Introduction 4
Starting the server 7
Some common command-line switches 9
Stopping the database server 15
Starting and stopping databases 17
Running the server outside the current session 18
Troubleshooting server startup 30


Introduction
Adaptive Server Anywhere provides two versions of the database server:
♦ The personal database server This executable does not support
client/server communications across a network. Although provided for
single-user, same-machine use (for example, as an embedded database
engine), it is also useful for development work.
On Windows 95/98 and Windows NT the name of the personal server
executable is dbeng7.exe. On UNIX operating systems it is dbeng7.
♦ The network database server Intended for multi-user use, this
executable supports client/server communications across a network.
On Windows 95/98 and Windows NT the name of the network server
executable is dbsrv7.exe. On Novell NetWare the name is dbsrv7.nlm,
and on UNIX operating systems it is dbsrv7.

Server differences The request-processing engine is identical in the two servers. Each supports
exactly the same SQL, and exactly the same database features. The main
differences include:
♦ Network protocol support Only the network server supports
communications across a network.
♦ Number of connections The personal server has a limit of ten
simultaneous connections. The limit for the network server depends on
your license.
♦ Number of CPUs The personal database server uses a maximum of
two CPUs for request processing. The network database server has no
set limit.
♦ Default number of internal threads You can configure the number of
requests the server can process at one time using the -gn command-line
switch. The network database server has a default of 20 threads and no
set limit, while the personal database server has a default and limit of 10
threads.
$ For information on database server command-line switches, see
"The database server" on page 14 of the book ASA Reference.
♦ Startup defaults To reflect their use as a personal server and a server
for many users, the startup defaults are slightly different for each.
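As an illustration of the -gn switch mentioned above, the request-processing
limit could be raised on the network server like this (the value 40 and the
database name are placeholders for this sketch, not recommendations from
this manual):

```
dbsrv7 -gn 40 asademo.db
```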


First steps
You can start a personal server running on a single database very simply. For
example, you can start both a personal server and a database called test.db by
typing the following command in the directory where test.db is located:
dbeng7 test

Where to enter commands  You can enter commands in several ways,
depending on your operating system. For example, you can:
♦ type the command at a system command prompt.
♦ place the command in a shortcut or desktop icon.
♦ run the command in a batch file.
♦ include the command as a StartLine parameter in a connection string.
$ For more information, see "StartLine connection parameter" on
page 63 of the book ASA Reference.
There are slight variations in the basic command from platform to platform,
described in the following section.
$ You can also start a personal server using a database file name in a
connection string. For more information, see "Connecting to an embedded
database" on page 43.
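For example, the batch-file approach listed above might look like the
following minimal sketch (the file name, directory, and database name are
assumptions for illustration only):

```
rem startdb.bat -- start the personal server on test.db
cd \mydata
start dbeng7 test
```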

Start the database server


The way you start the database server varies slightly depending on the
operating system you use. This section describes how to enter command
lines for the simple case of running a single database with default settings, on
each supported operating system.
Notes ♦ These commands start the personal server (dbeng7). To start a network
server, simply replace dbeng7 with dbsrv7.
♦ If the database file is in the starting directory for the command, you do
not need to specify the path.
♦ If you do not specify a file extension in database-file, the extension .db
is assumed.

v To start the database server using default options:


♦ Windows 95/98/NT Open a command prompt, and enter the following
command:
start dbeng7 path\database-file


If you omit the database file, a window is displayed, allowing you to
locate a database file using a Browse button.
♦ UNIX Open a command prompt, and enter the following command:
dbeng7 path/database-file
♦ NetWare The database server for NetWare is a NetWare Loadable
Module (dbsrv7.nlm). An NLM is a program that you can run on your
NetWare server. Load a database server on your NetWare server as
follows:
load dbsrv7.nlm path\database-file
The database file must be on a NetWare volume. A typical filename is of
the form DB:\database\sales.db.
You can load the server from a client machine using the Novell remote
console utility. See your Novell documentation for details.
You can put the command line into your Novell autoexec.ncf file so
Adaptive Server Anywhere loads automatically each time you start the
NetWare server.
There is no personal server for Novell NetWare, just a network server.

What else is there to it?


Although you can start a personal server in the simple way described above,
there are many other aspects to running a database server in a production
environment. For example:
♦ You can choose from many command-line options or switches to
specify such features as how much memory to use as cache, how many
CPUs to use (on multi-processor machines), and the network protocols
to use (network server only). The command-line switches are one of the
major ways of tuning Adaptive Server Anywhere behavior and
performance.
♦ You can run the server as a service under Windows NT. This allows it
to keep running even when you log off the machine.
♦ You can start the personal server from an application, and shut it down
when the application has finished with it. This is typical when using the
database server as an embedded database.
The remainder of this chapter describes these options in more detail.


Starting the server


The general form for the server command line is as follows:
executable [ server-switches ] [ database-file [ database-switches ], ...]
If you supply no switches and no database file, then on Windows CE/95/98
and Windows NT operating systems a dialog box is displayed, allowing you
to use a Browse button to locate your database file.
The elements of the database server command line include the following:
♦ Executable This can be either the personal server or the network
server. For the file names on different operating systems, see
"Introduction" on page 4.
In this chapter, unless discussing network-specific options, we use the
personal server in sample command lines. The network server takes a
very similar set of command-line options.
♦ Server switches These options control the behavior of the database
server, for all running databases.
♦ Database file You can enter zero, one, or more database file names on
the command line. Each of these databases starts and remains available
for applications.

Caution
The database file and the transaction log file must be located on
the same physical machine as the database server. Database files
and transaction log files located on a network drive can lead to
poor performance and data corruption.

♦ Database switches For each database file you start, you can provide
database switches that control certain aspects of its behavior.
$ In this section, we look at some of the more important and commonly
used options. For full reference information on each of these switches, see
"The database server" on page 14 of the book ASA Reference.
In examples throughout this chapter where there are several command-line
options, we show them for clarity on separate lines, as they could be written
in a configuration file. If you enter them directly on a command line, you
must enter them all on one line.
Case sensitivity Command-line parameters are generally case sensitive. You should enter all
parameters in lower case.


Listing available command-line switches
v To list the database server command-line switches:
♦ Open a command prompt, and enter the following command:
dbeng7 -?


Some common command-line switches


This section describes some of the most common command-line switches,
and points out when you may wish to use them. They are:
♦ Using configuration files
♦ Naming the server and the databases
♦ Performance
♦ Permissions
♦ Maximum page size
♦ Special modes
♦ Network communications (network server only)

Using configuration files


If you use an extensive set of command-line options, you can store them in a
configuration file, and invoke that file on a server command line. The
configuration file can contain switches on several lines. For example, the
following configuration file starts the personal database server and the
sample database. It sets a cache of 10 Mb, and names this instance of the
personal server Elora.
-n Elora
-c 10M
path\asademo.db
where path is the name of your Adaptive Server Anywhere installation
directory. On UNIX, you would use a forward slash instead of the backslash
in the file path.
If you name the file sample.cfg, you could use these command-line options
as follows:
dbeng7 @sample.cfg

Naming the server and the databases


You can use the -n command-line option as a database switch (to name the
database) or as a server switch (to name the server).


The server and database names are among the connection parameters that
client applications may use when connecting to a database. The server name
appears on the desktop icon and on the title bar of the server window.
Naming databases You may want to provide a database name to provide a more meaningful
name than the file name for users of client applications. The database will be
identified by that name until it is stopped.
If you don’t provide a database name, the default name is the root of the
database file name (the file name without the .db extension). For example, in
the following command line the first database is named asademo, and the
second sample.
dbeng7 asademo.db sample.db

You can name databases by supplying a -n switch following the database
file. For example, the following command line starts the sample database and
names it MyDB:
dbeng7 asademo.db -n MyDB

Naming the server You may want to provide a database server name to avoid conflicts with
other server names on your network, or to provide a meaningful name for
users of client applications. The server keeps its name for its lifetime (until it
is shut down). If you don’t provide a server name, the server is given the
name of the first database started.
You can name the server by supplying a -n switch before the first database
file. For example, the following command line starts a server on the
asademo database, and gives it the name Cambridge:
dbeng7 -n Cambridge asademo.db
If you supply a server name, you can start a database server with no database
started. The following command starts a server named Galt with no database
started:
dbeng7 -n Galt

$ For information about starting databases on a running server, see
"Starting and stopping databases" on page 17.
Case sensitivity Server names and database names are case insensitive as long as the
character set is single-byte. For more information, see "Connection strings
and character sets" on page 312.

Controlling performance and memory from the command line


Several command-line options can have a major impact on database server
performance, including:


♦ Cache size The -c switch controls the amount of memory that
Adaptive Server Anywhere uses as a cache. This can be a major factor
affecting performance.
Generally speaking, the more memory made available to the database
server, the faster it performs. The cache holds information that may be
required more than once. Accessing information in cache is many times
faster than accessing it from disk. The default initial cache size is
computed based on the amount of physical memory, the operating
system, and the size of the database files. On Windows NT, Windows
95/98, and UNIX the database server takes additional cache when the
available cache is exhausted.
$ For a detailed description of performance tuning, see "Monitoring
and Improving Performance" on page 799. For information on
controlling cache size, see "Cache size" on page 17 of the book ASA
Reference.
♦ Number of processors If you are running on a multi-processor
machine, you can set the number of processors with the -gt option.
$ For more information, see "-gt command-line option" on page 30
of the book ASA Reference.
♦ Other performance-related switches There are several switches
available for tuning network performance, including -gb (database
process priority), and -u (buffered disk I/O).
$ For a full list of startup options, see "The database server" on
page 14 of the book ASA Reference.

Controlling permissions from the command line


Some command-line options control the permissions required to carry out
certain global operations, including permissions to start and stop databases,
load and unload data, and create and delete database files.
$ For more information, see "Running the database server in a secure
fashion" on page 782.

Setting a maximum page size


The database server cache is arranged in pages—fixed-size areas of memory.
Since the server uses a single cache for its lifetime (until it is shut down), all
pages must have the same size.


A database file is also arranged in pages, with a size that is specified on the
command line. Every database page must fit into a cache page. By default,
the server page size is the same as the largest page size of the databases on
the command line. Once the server starts, you cannot start a database with a
larger page size than the server.
To allow databases with larger page sizes to be started after startup, you can
force the server to start with a specified page size using the -gp option. If
you use larger page sizes, remember to increase your cache size. A cache of
the same size will accommodate only a fraction of the number of the larger
pages, leaving less flexibility in arranging the space.
The following command starts a server that reserves an 8 Mb cache and can
accommodate databases of page sizes up to 4096 bytes.
dbsrv7 -gp 4096 -c 8M -n myserver
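As a quick arithmetic check of the trade-off described above (simple
division, not from the manual), an 8 Mb cache divided into 4096-byte pages
holds 2048 cache pages:

```shell
# cache pages = cache size in bytes / page size in bytes
echo $(( 8 * 1024 * 1024 / 4096 ))
```

Doubling the page size to 8192 bytes would halve that to 1024 pages, which
is why a larger page size calls for a correspondingly larger cache.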

Running in special modes


You can run Adaptive Server Anywhere in special modes for particular
purposes.
♦ Read-only You can run databases in read-only mode by supplying the
-r command-line switch.
$ For more information, see "-r command-line option" on page 33 of
the book ASA Reference.
♦ Bulk load This is useful when loading large quantities of data into a
database through the Interactive SQL INPUT command. Do not use the
-b option if you are using LOAD TABLE to bulk load data.
$ For more information, see "-b command-line option" on page 20
of the book ASA Reference, and "Importing and Exporting Data" on
page 693.
♦ Starting without a transaction log Use the -f database option for
recovery, either to force the database server to start after the transaction
log has been lost, or to force the database server to start using a
transaction log it would otherwise not find. Note that -f is a database
option, not a server option.
Once the recovery is complete, you should stop your server and restart
without the -f option.
$ For more information, see "The database server" on page 14 of the
book ASA Reference.
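Sketches of the special-mode command lines described above (the database
file name is a placeholder; note that -f follows the database file because it is
a database option, not a server option):

```
dbeng7 -r asademo.db
dbeng7 asademo.db -f
```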


Selecting communications protocols


Any communication between a client application and a database server
requires a communications protocol. Adaptive Server Anywhere supports a
set of communications protocols for communications across networks and
for same-machine communications.
By default, the database server starts up all available protocols. You can limit
the protocols available to a database server by using the -x command-line
switch. On the client side, many of the same options can be controlled using
the CommLinks connection parameter.
Available protocols for the personal server  The personal database server
(dbeng7.exe) supports the following protocols:
♦ Shared memory This protocol is for same-machine communications,
and always remains available. It is available on all platforms.
♦ TCP/IP This protocol is for same-machine communications only, from
TDS clients, Open Client or the jConnect JDBC driver. You must run
TCP/IP if you wish to connect from Open Client or jConnect.
$ For more information on TDS clients, see "Adaptive Server
Anywhere as an Open Server" on page 989.
♦ Named Pipes Provided on Windows NT only, named pipes is for
same machine communications for applications that wish to run under a
certified security environment.

Available protocols for the network server  The network database server
(dbsrv7.exe) supports the following protocols:
♦ Shared memory This protocol is for same-machine communications,
and always remains available. It is available on all platforms.
♦ SPX This protocol is supported on all platforms except for UNIX.
♦ TCP/IP This protocol is supported on all platforms.
♦ IPX This protocol is supported on all platforms except for UNIX.
Although IPX is still available, it is recommended that you now use SPX
instead of IPX.
♦ NetBIOS This protocol is supported on all platforms except for
NetWare and UNIX.
♦ Named Pipes Provided on Windows NT only, named pipes is for
same machine communications for applications that wish to run under a
certified security environment.

$ For more information on running the server using these options, see
"Supported network protocols" on page 94.


Specifying protocols  You can instruct a server to use only some of the
available network protocols when starting up, by using the -x command-line
switch. The following command starts a server using the TCP/IP and SPX
protocols:
dbsrv7 -x "tcpip,spx"
Although not strictly required in this example, the quotes are necessary if
there are spaces in any of the arguments to -x.
You can add additional parameters to tune the behavior of the server for each
protocol. For example, the following command line (entered all on one line)
instructs the server to use two network cards, one with a specified port
number.
dbsrv7 -x
"tcpip{MyIP=192.75.209.12:2367,192.75.209.32}"
path\asademo.db
$ For detailed descriptions of the available network communications
parameters that can serve as part of the -x switch, see "Network
communications parameters" on page 65 of the book ASA Reference.
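On the client side, the CommLinks parameter mentioned above plays the
corresponding role in a connection string. A minimal sketch (the server
name and credentials are placeholders):

```
eng=myserver;uid=dba;pwd=sql;CommLinks=tcpip
```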


Stopping the database server


You can stop the database server by:
♦ Clicking SHUTDOWN on the database server window.
♦ Using the dbstop command-line utility.
The dbstop utility is particularly useful in batch files, or for stopping a
server on another machine. It requires a connection string on its
command line.
♦ Letting it shut down automatically by default when the application
disconnects. (This only works if the server is a personal server started by
an application connection string.)
♦ Pressing q when the server display window on UNIX or NetWare
machines has the focus.
Examples 1 Start a server. For example, the following command executed from the
Adaptive Server Anywhere installation directory starts a server named
Ottawa using the sample database:
dbsrv7 -n Ottawa asademo.db
2 Stop the server using dbstop:
dbstop -c "eng=Ottawa;uid=dba;pwd=sql"

$ For information on dbstop command-line switches, see "The dbstop
command-line utility" on page 130 of the book ASA Reference.

Who can stop the server?


When you start a server, you can use the -gk option to set the level of
permissions required for users to stop the server with dbstop. The default
level of permissions required is dba, but you can also set the value to all or
none. (Interactively, of course, anybody at the machine can click Shutdown
on the server window.)

Shutting down operating system sessions


If you close an operating system session where a database server is running,
or if you use an operating system command to stop the database server, the
server shuts down, but not cleanly. Next time the database loads, recovery
will be required, and happens automatically (see "Backup and Data
Recovery" on page 645).


It is better to stop the database server explicitly before closing the operating
system session. On NetWare, however, shutting down the NetWare server
machine properly does stop the database server cleanly.
Examples of commands that will not stop a server cleanly include:
♦ Stopping the process in Windows NT Task Manager.
♦ Using a UNIX slay or kill command.


Starting and stopping databases


A database server can have more than one database loaded at a time. You can
start databases and start the server at the same time, as follows:
dbeng7 asademo sample

Starting a database on a running server  You can also start databases after
starting a server, in one of the following ways:
♦ While connected to a server, connect to a database using a DBF
parameter. This parameter specifies a database file for a new connection.
The database file is started on the current server.
$ For more information, see "Connecting to an embedded database"
on page 43.
♦ Use the START DATABASE statement, or select Start Database from
the File menu in Sybase Central when you have a server selected.
$ For a description, see "START DATABASE statement" on
page 620 of the book ASA Reference.

Limitations ♦ The server holds database information in memory using pages of a fixed
size. Once a server has been started, you cannot start a database that has
a larger page size than the server.
♦ The -gd server command-line option determines the permissions
required to start databases.

Stopping a database  You can stop a database by:
♦ Disconnecting from a database started by a connection string. Unless
you explicitly set the AUTOSTOP connection parameter to NO, this
happens automatically.
$ For information, see "AutoStop connection parameter" on page 50
of the book ASA Reference.
♦ Using the STOP DATABASE statement from Interactive SQL or
Embedded SQL.
$ For a description, see "STOP DATABASE statement" on page 625
of the book ASA Reference.
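For example, from Interactive SQL you might start and later stop a second
database on the current server along these lines (the file path and database
name are placeholders for this sketch):

```sql
START DATABASE 'c:\asa\sample.db' AS sample2;
-- ... work with the database ...
STOP DATABASE sample2;
```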


Running the server outside the current session


When you log on to a computer using a user ID and a password, you
establish a session. When you start a database server, or any other
application, it runs within that session. When you log off the computer, all
applications associated with the session terminate.
It is common to require database servers to be available all the time. To make
this easier, you can run Adaptive Server Anywhere for Windows NT and for
UNIX in such a way that, when you log off the computer, the database server
remains running. The way you do this depends on your operating system.
♦ UNIX daemon You can run the UNIX database server as a daemon by
using the -ud command-line option, enabling the database server to run
in the background, and to continue running after you log off.
♦ Windows NT service You can run the Windows NT database server
as a service. This has many convenient properties for running high
availability servers.

Running the UNIX database server as a daemon


To run the UNIX database server in the background, and to enable it to run
independently of the current session, you run it as a daemon.

Do not use ’&’ to run the database server in the background


If you use the UNIX & (ampersand) command to run the database server
in the background, it will not work. You must instead run the database
server as a daemon.

v To run the UNIX database server as a daemon:


♦ Use the -ud command-line option when starting the database server. For
example:
dbsrv7 -ud asademo

Understanding Windows NT services


Although you can run the database server like any other Windows NT
program rather than as a service, there are limitations to running it as a
standard program, particularly in multi-user environments.

Chapter 1 Running the Database Server

Limitations of running as a standard executable When you start a
program, it runs under your Windows NT login session, which means that if
you log off the computer, the program terminates. Only one person logs
onto Windows NT (on any one computer) at one time. This restricts the use
of the computer if you wish to keep a program running much of the time, as
is commonly the case with database servers. You must stay logged onto the
computer running the database server for the database server to keep
running. This can also present a security risk, as the Windows NT computer
must be left in a logged-on state.
Advantages of services Installing an application as a Windows NT service
enables it to run even when you log off.
When you start a service, it logs on using a special system account called
LocalSystem (or another account you specify). Since the service is not
tied to the user ID of the person starting it, it keeps running even when
that person logs off. You can also configure a service to start
automatically when the Windows NT computer starts, before any user logs on.
Managing services Sybase Central provides a more convenient and comprehensive way of
managing Adaptive Server Anywhere services than the Windows NT
services manager.

Programs that can be run as Windows NT services


You can run the following programs as services:
♦ Network Database Server (dbsrv7.exe)
♦ Personal Database Server (dbeng7.exe)
♦ SQL Remote Message Agent (dbremote.exe)
♦ The MobiLink Synchronization server (dbmlsrv7.exe)
♦ A sample application
Not all these applications are supplied in all editions of Adaptive Server
Anywhere.

Managing services
You can carry out the following service management tasks from the
command-line, or in the Services folder in Sybase Central:
♦ Add, edit, and remove services.
♦ Start, stop, and pause services.


♦ Modify the parameters governing a service.


♦ Add databases to a service, so you can run several databases at one time.

The service icons in Sybase Central display the current state of each service
using a traffic light icon (running, paused, or stopped).

Adding a service
This section describes how to set up services using Sybase Central and the
Service Creation command-line utility.

v To add a new service (Sybase Central):


1 In Sybase Central, open the Services folder.
2 Double-click Add Service.
3 Follow the instructions in the wizard.

v To add a new service (command-line):


1 Choose Start➤Programs➤Command Prompt to open the command
prompt.
2 Execute the Service Creation utility using the -w switch.
For example, the following command creates a personal server service
called myserv, which starts the specified engine with the specified
parameters and runs it as the LocalSystem user:

dbsvc -as -w myserv E:\asa70\win32\dbeng7 -n william -c 8m e:\asa70\sample.db

$ For more information about the service creation utility and switches,
see "The Service Creation utility" on page 126 of the book ASA Reference.
Notes ♦ Service names must be unique within the first eight characters.
♦ If you choose to start a service automatically, it starts whenever the
computer starts Windows NT. If you choose to start manually, you need
to start the service from Sybase Central each time. You may want to
select Disabled if you are setting up a service for future use.
♦ Enter command-line switches for the executable, without the executable
name itself, in the window. For example, if you want a network server to
run using the sample database with a cache size of 20Mb and a name of
myserver, you would enter the following in the Parameters box:
-c 20M
-n myserver c:\asa7\asademo.db
Line breaks are optional. For information on valid command-line
switches, see the description of each program in "Database
Administration Utilities" on page 75 of the book ASA Reference.
♦ Choose the account under which the service will run: the special
LocalSystem account or another user ID. For more information about
this choice, see "Setting the account options" on page 24.
♦ If you want the service to be accessible from the Windows NT desktop,
check Allow Service to Interact with Desktop. If this option is
unchecked, no icon or window appears on the desktop.
$ For more information on the configuration options, see "Configuring
services" on page 22.
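The first note above, that service names must be unique within their first
eight characters, can be illustrated with a small sketch. This is
illustrative Python only, not part of Adaptive Server Anywhere, and the
case-insensitive comparison is an assumption:

```python
def conflicts(existing_names, new_name):
    """Return the existing service names whose first eight characters
    match the first eight characters of new_name (comparison assumed to
    be case-insensitive)."""
    key = new_name[:8].lower()
    return [name for name in existing_names if name[:8].lower() == key]

# "myserver_sales" and "myserver_hr" share the first eight characters
# ("myserver"), so only one of them could be created as a service.
print(conflicts(["myserver_sales", "payroll"], "myserver_hr"))
# → ['myserver_sales']
```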

Removing a service
Removing a service removes the server name from the list of services.
Removing a service does not remove any software from your hard disk.
If you wish to re-install a service you previously removed, you need to
re-enter the command-line switches.

v To remove a service (Sybase Central):


1 In Sybase Central, open the Services folder.
2 In the right pane, right-click the icon of the service you want to remove
and choose Delete from the popup menu.


v To remove a service (command-line):


1 Choose Start➤Programs➤Command Prompt to open the command
prompt.
2 Execute the Service Creation utility using the -d switch.
For example, to delete the service called myserv, without prompting for
confirmation, enter the following command:
dbsvc -y -d myserv

$ For more information about the service creation utility and switches,
see "The Service Creation utility" on page 126 of the book ASA Reference.

Configuring services
A service runs a database server or other application with a set of command-
line switches. For a full description of the command-line switches for each of
the administration utilities, see "Database Administration Utilities" on
page 75 of the book ASA Reference.
In addition to the command-line switches, services accept other parameters
that specify the account under which the service runs and the conditions
under which it starts.

v To change the parameters for a service:


1 In Sybase Central, open the Services folder.
2 In the right pane, right-click the service you want to change and choose
Properties from the popup menu.
3 Alter the parameters as needed on the pages of the Properties dialog.
4 Click OK when finished.
Changes to a service configuration take effect next time someone starts the
service. The Startup option is applied the next time Windows NT is started.
$ For a full description of the service property sheet, see "Service
properties" on page 1064.

Setting the startup option


The following options govern startup behavior for Adaptive Server
Anywhere services. You can set them on the General tab of the service
property sheet.


♦ Automatic If you choose the Automatic setting, the service starts
whenever the Windows NT operating system starts. This setting is
appropriate for database servers and other applications that run all the
time.
♦ Manual If you choose the Manual setting, the service starts only when
a user with Administrator permissions starts it. For information about
Administrator permissions, see your Windows NT documentation.
♦ Disabled If you choose the Disabled setting, the service will not start.
$ For a full description of the service property sheet, see "Service
properties" on page 1064.

Entering command-line switches


The Configuration tab of the service property sheet provides a text box for
entering command-line switches for a service. Do not enter the name of the
program executable in this box.
Examples ♦ To start a network server service running two databases, with a cache
size of 20 Mb, and with a name of my_server, you would enter the
following in the Parameters box:
-c 20M
-n my_server
c:\asa7\db_1.db
c:\asa7\db_2.db
♦ To start a SQL Remote Message Agent service, connecting to the
sample database as user ID DBA, you would enter the following:
-c "uid=dba;pwd=sql;dbn=asademo"

The following figure illustrates a sample property sheet. For a full
description of this property sheet, see "Service properties" on page 1064.

$ The command-line switches for a service are the same as those for the
executable. For a full description of the command-line switches for each
program, see "The Database Server" on page 13 of the book ASA Reference.

Setting the account options


You can choose under which account the service runs. Most services run
under the special LocalSystem account, which is the default option for
services. You can set the service to log on under another account by opening
the Account tab on the service property sheet, and entering the account
information.
If you choose to run the service under an account other than LocalSystem,
that account must have the "log on as a service" privilege. This can be
granted from the Windows NT User Manager application, under Advanced
Privileges.
When an icon appears on the taskbar Whether or not an icon for the
service appears on the taskbar or desktop depends on the account you
select, and whether Allow Service to Interact with Desktop is checked, as
follows:


♦ If a service runs under LocalSystem, and Allow Service to Interact with
Desktop is checked in the service property sheet, an icon appears on the
desktop of every user logged in to NT on the computer running the
service. Consequently, any user can open the application window and
stop the program running as a service.
♦ If a service runs under LocalSystem, and Allow Service to Interact with
Desktop is unchecked in the service property sheet, no icon appears on
the desktop for any user. Only users with permissions to change the state
of services can stop the service.
♦ If a service runs under another account, no icon appears on the desktop.
Only users with permissions to change the state of services can stop the
service.

Changing the executable file


To change the program executable file associated with a service, click the
Configuration tab on the service property sheet and enter the new path and
file name in the Path of Executable box.
If you move an executable file to a new directory, you must modify this
entry.
$ For a full description of the service property sheet, see "Service
properties" on page 1064.

Adding new databases to a service


Each network server or personal server can run more than one database. If
you wish to run more than one database at a time, we recommend that you do
so by attaching new databases to your existing service, rather than by
creating new services.
$ For a full description of the service property sheet, see "Service
properties" on page 1064.

v To add a new database to a service:


1 Open the Services folder.
2 Right-click the service and choose Properties from the popup menu.
3 Click the Configuration tab.
4 Add the path and filename of the new database to the end of the list of
parameters.
5 Click OK to save the changes.


The new database is started the next time the service starts.
Databases can be started on running servers by client applications, such as
Interactive SQL.
$ For a description of how to start a database on a server from Interactive
SQL, see "START DATABASE statement" on page 620 of the book ASA
Reference.
$ For a description of how to implement this function in an Embedded
SQL application, see the db_start_database function in "The Embedded
SQL Interface" on page 7 of the book ASA Programming Interfaces Guide.
Starting a database from an application does not attach it to the service. If the
service is stopped and restarted, the additional database will not be started
automatically.

Setting the service polling frequency


Sybase Central can poll at specified intervals to check the state (started,
stopped, paused, removed) of each service, and update the icons to display
the current state. The default setting is that polling is off. If you leave it off,
you must click Refresh to see changes to the state.

v To set the Sybase Central polling frequency:


1 Open the Services folder.
2 Right-click the service and choose Properties from the popup menu.
3 Click the Polling tab.
4 Set the polling frequency.
The frequency applies to all services, not just the one selected. The value
you set in this window remains in effect for subsequent sessions, until
you change it.
$ For a full description of the service property sheet, see "Service
properties" on page 1064.

Starting, stopping, and pausing services

v To start, stop, or pause a service:


1 Open the Services folder.
2 Right-click the service and choose Start, Stop, or Pause from the popup
menu.

To resume a paused service, right-click the service and choose Continue
in the popup menu.
If you start a service, it keeps running until you stop it. Closing Sybase
Central or logging off does not stop the service.
Stopping a service closes all connections to the database and stops the
database server. For other applications, the program closes down.
Pausing a service prevents any further action being taken by the application.
It does not shut the application down or (in the case of server services) close
any client connections to the database. Most users do not need to pause their
services.

The Windows NT Service Manager


You can use Sybase Central to carry out all the service management for
Adaptive Server Anywhere. Although you can use the Windows NT Service
Manager in the Control Panel for some tasks, you cannot install or configure
an Adaptive Server Anywhere service from the Windows NT Service
Manager.
If you open the Windows NT Service Manager (from the Windows NT
Control Panel), a list of services appears. The names of the Adaptive Server
Anywhere services are formed from the Service Name you provided when
installing the service, prefixed by Adaptive Server Anywhere. All the
installed services appear together in the list.

Running more than one service


This section describes some topics specific to running more than one service
at a time.

Service dependencies
In some circumstances you may wish to run more than one executable as a
service, and these executables may depend on each other. For example, you
may wish to run a server and a SQL Remote Message Agent or Log Transfer
Manager to assist in replication.
In cases such as these, the services must start in the proper order. If a SQL
Remote Message Agent service starts up before the server has started, it fails
because it cannot find the server.
You can prevent these problems using service groups, which you manage
from Sybase Central.


Service groups overview


You can assign each service on your system to be a member of a service
group. By default, each service belongs to a group, as listed in the following
table.

Service                           Default group
Network server                    ASANYServer
Personal server                   ASANYEngine
SQL Remote Message Agent          ASANYRemote
MobiLink Synchronization Server   ASANYMobiLink
Replication Agent                 ASANYLTM

Before you can configure your services to ensure they start in the correct
order, you must check that your service is a member of an appropriate group.
You can check which group a service belongs to, and change this group,
from Sybase Central.

v To check and change which group a service belongs to:


1 Open the Services folder.
2 Right-click the service and choose Properties from the popup menu.
3 Click the Dependencies tab. The top text box displays the name of the
group the service belongs to.
4 Click Look Up to display a list of available groups on your system.
5 Select one of the groups, or type a name for a new group.
6 Click OK to assign the service to that group.
$ For a full description of the service property sheet, see "Service
properties" on page 1064.

Managing service dependencies


With Sybase Central you can specify dependencies for a service. For
example:
♦ You can ensure that at least one member of each of a list of service
groups has started before the current service.


♦ You can ensure that any number of services start before the current
service. For example, you may want to ensure that a particular network
server has started before a SQL Remote Message Agent that is to run
against that server starts.

v To add a service or group to a list of dependencies:


1 Open the Services folder.
2 Right-click the service and choose Properties from the popup menu.
3 Click the Dependencies tab.
4 Click Add Service or Add Group to add a service or group to the list of
dependencies.
5 Select one of the services or groups from the list.
6 Click OK to add the service or group to the list of dependencies.

$ For a full description of the service property sheet, see "Service
properties" on page 1064.


Troubleshooting server startup


This section describes some common problems when starting the database
server.

Ensure that your transaction log file is valid


The server won’t start if the existing transaction log is invalid. For
example, during development you may replace a database file with a new
version without deleting the transaction log at the same time. The
transaction log then no longer matches the database file, and the server
treats it as invalid.

Ensure that you have sufficient disk space for your temporary file
Adaptive Server Anywhere uses a temporary file to store information while
running. This file is stored in the directory pointed to by the TMP or TEMP
environment variable, typically c:\temp.
If you do not have sufficient disk space available to the temporary directory,
you will have problems starting the server.
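As a quick way to see how much space the temporary directory has, you can
check it programmatically. This is an illustrative Python sketch; the
directory lookup via the standard library mirrors, but is not identical to,
the server's own TMP/TEMP resolution:

```python
import shutil
import tempfile

# tempfile.gettempdir() consults environment variables such as TMPDIR,
# TEMP, and TMP, similar to the lookup described above.
temp_dir = tempfile.gettempdir()
usage = shutil.disk_usage(temp_dir)
print(f"Temporary directory: {temp_dir}")
print(f"Free space: {usage.free // (1024 * 1024)} MB")
```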

Ensure that network communication software is running


Appropriate network communication software must be installed and running
before you run the database server. If you are running reliable network
software with just one network installed, this should be straightforward. If
you experience problems, if you are running non-standard software, or if you
are running multiple networks, you may want to read the full discussion of
network communication issues in "Client/Server Communications" on
page 85.
You should confirm that other software requiring network communications is
working properly before running the database server.
For example, if you are using NetBIOS under Windows 95/98 or Windows
NT you may want to confirm that the chat or winpopup application is
working properly between machines running client and database server
software.
If you are running under the TCP/IP protocol, you may want to confirm that
ping and telnet are working properly. The ping and telnet applications are
provided with many TCP/IP protocol stacks.
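In the same spirit as the ping and telnet checks above, a basic TCP
reachability probe can be written in a few lines. This is an illustrative
Python sketch, not a supplied utility; the host and port below are
placeholders for your own server's address (2638 is the usual Adaptive
Server Anywhere TCP/IP port):

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder example: probe the machine where the database server
# is expected to be listening.
print(can_reach("localhost", 2638))
```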


Debugging network communications startup problems


If you are having problems establishing a connection across a network, you
can use debugging options at both client and server to diagnose problems. On
the server, you use the -z command-line option. The startup information
appears on the server window: you can use the -o option to log the results to
an output file.

Chapter 2

Connecting to a Database

About this chapter This chapter describes how client applications connect to databases. It
contains information about connecting to databases from ODBC, OLE DB,
and embedded SQL applications. It also describes connecting from Sybase
Central and Interactive SQL.
$ For information on connecting to a database from Sybase Open Client
applications, see "Adaptive Server Anywhere as an Open Server" on
page 989.
$ For information on connecting via JDBC (if you are not working in
Sybase Central or Interactive SQL), see "Data Access Using JDBC" on
page 591.
Contents
Topic Page
Introduction to connections 34
Connecting from Sybase Central or Interactive SQL 38
Simple connection examples 41
Working with ODBC data sources 49
Connecting from desktop applications to a Windows CE database 59
Connecting to a database using OLE DB 62
Connection parameters 64
Troubleshooting connections 67
Using integrated logins 77


Introduction to connections
Any client application that uses a database must establish a connection to
that database before any work can be done. The connection forms a channel
through which all activity from the client application takes place. For
example, your user ID determines permissions to carry out actions on the
database—and the database server has your user ID because it is part of the
request to establish a connection.
How connections are established To establish a connection, the client
application calls functions in one of the Adaptive Server Anywhere
interfaces. Adaptive Server Anywhere provides the following interfaces:
♦ ODBC ODBC connections are discussed in this chapter.
♦ OLE DB OLE DB connections are discussed in this chapter.
♦ Embedded SQL Embedded SQL connections are discussed in this
chapter.
♦ Sybase Open Client Open Client connections are not discussed in this
chapter. For information on connecting from Open Client applications,
see "Adaptive Server Anywhere as an Open Server" on page 989.
♦ JDBC Sybase Central and Interactive SQL have the connection logic
described in this chapter built into them. Other JDBC applications
cannot use the connection logic discussed in this chapter.
$ For general information on connecting via JDBC, see "Data Access
Using JDBC" on page 591.
The interface uses connection information included in the call from the client
application, perhaps together with information held on disk in a file data
source, to locate and connect to a server running the required database. The
following figure is a simplified representation of the pieces involved.
[Figure: the client application connects through the interface library to
the database server.]


What to read
♦ For an overview of connecting from Sybase Central or Interactive SQL
(including a description of the drivers involved), see "Connecting from
Sybase Central or Interactive SQL" on page 38.
♦ For examples to get started quickly, including Sybase Central and
Interactive SQL scenarios, see "Simple connection examples" on page 41.
♦ To learn about data sources, see "Working with ODBC data sources" on
page 49.
♦ To learn what connection parameters are available, see "Connection
parameters" on page 64.
♦ For an in-depth description of how connections are established, see
"Troubleshooting connections" on page 67.
♦ To learn about network-specific connection issues, see "Client/Server
Communications" on page 85.
♦ To learn about character set issues affecting connections, see
"Connection strings and character sets" on page 312.

How connection parameters work


When an application connects to a database, it uses a set of connection
parameters to define the connection. Connection parameters include
information such as the server name, the database name, and a user ID.
A keyword-value pair (of the form parameter=value) specifies each
connection parameter. For example, you specify the password connection
parameter for the default password as follows:
Password=sql
Connection parameters are assembled into connection strings. In a
connection string, a semicolon separates each connection parameter, as
follows:
ServerName=asademo;DatabaseName=asademo
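The assembly of keyword-value pairs into a connection string can be
sketched as follows. This is illustrative Python, not a Sybase API:

```python
def build_connection_string(params):
    """Join parameter=value pairs with semicolons, producing a string
    such as ServerName=asademo;DatabaseName=asademo."""
    return ";".join(f"{key}={value}" for key, value in params.items())

print(build_connection_string(
    {"ServerName": "asademo", "DatabaseName": "asademo"}))
# → ServerName=asademo;DatabaseName=asademo
```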

Representing connection strings This chapter has many examples of
connection strings represented in the following form:
parameter1=value1
parameter2=value2
...
This is equivalent to the following connection string:


parameter1=value1;parameter2=value2
You must enter a connection string on a single line, with the parameter
settings separated by semicolons.

Connection parameters passed as connection strings


Connection parameters are passed to the interface library as a connection
string. This string consists of a set of parameters, separated by semicolons:
parameter1=value1;parameter2=value2;...
In general, the connection string built up by an application and passed to the
interface library does not correspond directly to the way a user enters the
information. Instead, a user may fill in a dialog box, or the application may
read connection information from an initialization file.
Many of the Adaptive Server Anywhere utilities accept a connection string
as the -c command-line option and pass the connection string on to the
interface library without change. For example, the following is a typical
Collation utility (dbcollat) command line (which should be entered all on one
line):
dbcollat –c "uid=dba;pwd=sql;dbn=asademo"
c:\temp\asademo.col
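Splitting a string such as the -c value above back into its individual
parameters is equally mechanical. Again, this is an illustrative Python
sketch, not part of the interface library:

```python
def parse_connection_string(conn_str):
    """Split a semicolon-separated connection string into a dict of
    keyword-value pairs."""
    params = {}
    for part in conn_str.split(";"):
        if part:
            key, _, value = part.partition("=")
            params[key] = value
    return params

print(parse_connection_string("uid=dba;pwd=sql;dbn=asademo"))
# → {'uid': 'dba', 'pwd': 'sql', 'dbn': 'asademo'}
```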

Interactive SQL connection strings


Interactive SQL processes the connection string internally; it does not
simply pass the connection parameters on to the interface library. Do not
use Interactive SQL to test connection strings from a command prompt.

Saving connection parameters in ODBC data sources


Many client applications, including application development systems, use the
ODBC interface to access Adaptive Server Anywhere. When connecting to
the database, ODBC applications typically use ODBC data sources. An
ODBC data source is a set of connection parameters, stored in the registry or
in a file.
For Adaptive Server Anywhere, ODBC data sources can be used not only by
ODBC applications on Windows, but also by other applications:
♦ Adaptive Server Anywhere client applications on UNIX can use ODBC
data sources, as well as those on Windows operating systems. On UNIX,
the data source is stored as a file.


♦ Adaptive Server Anywhere client applications using the OLE DB or
embedded SQL interfaces can use ODBC data sources, as well as ODBC
applications.
applications.
♦ Interactive SQL and Sybase Central can use ODBC data sources.
$ For more information on ODBC data sources, see "Working with
ODBC data sources" on page 49.


Connecting from Sybase Central or Interactive SQL
To use Sybase Central or Interactive SQL for managing your database, you
must first connect to it. In the Connect dialog, you tell Sybase Central or
Interactive SQL what database you want to connect to, where it is located,
and how you want to connect to it.
The connecting process depends on your situation. For example, if you have
a server already running on your machine and this server contains only one
database, all you have to do in the Connect dialog is provide a user ID and a
password. Sybase Central or Interactive SQL then knows to connect
immediately to the database on the running server.
If this running server has more than one database loaded on it, if it is not yet
running, or if it is running on another machine, you need to provide more
detailed information in the Connect dialog so that Sybase Central or
Interactive SQL knows what database to connect to.
This section describes how to access the Connect dialog in Sybase Central
and Interactive SQL. It also provides a general description of this dialog; for
a more detailed description, see "Connect dialog" on page 1045.
$ For connection examples, including examples for Sybase Central and
Interactive SQL, see "Simple connection examples" on page 41.

Opening the Connect dialog


A common Connect dialog is available in both Sybase Central and
Interactive SQL to let you connect to a database.
When you start Sybase Central, you need to manually display this dialog.
When you start Interactive SQL, the dialog automatically appears; you can
also make it appear for a new connection by choosing File➤New Window.

v To open the Connect dialog (Sybase Central):


♦ In Sybase Central, choose Tools➤Connect.
If you have more than one Sybase Central plugin installed, choose
Adaptive Server Anywhere from the displayed list.
or
Click the Connect button on the main toolbar.
or
Press F11.

Tip
You can make subsequent connections to a given database easier and
faster by using a connection profile.

v To display the Connect dialog (Interactive SQL):


1 In Interactive SQL, choose File➤New.
or
Click SQL➤Connect.
Once the Connect dialog is displayed, you must specify the connection
parameters you need to connect. For example, you can connect to the
Adaptive Server Anywhere sample database by choosing ASA 7.0 Sample
from the ODBC Data Source Name list and clicking OK.

Specifying a driver for your connection


When you are working with a database, all your requests and commands go
through a driver to the database itself. Sybase Central and Interactive SQL
support two main types of drivers: a JDBC driver (called jConnect) and an
ODBC driver (called JDBC-ODBC Bridge). Both are included with
Adaptive Server Anywhere.
Sybase jConnect is a fully supported, fully featured JDBC driver. This driver
is platform-independent and offers better performance than JDBC-ODBC
Bridge. It is enabled by default.
The JDBC-ODBC Bridge driver is a Sun product and is available solely as
an alternative method of connecting. The jConnect driver is recommended
over JDBC-ODBC Bridge.
As you connect to a database in the Connect dialog, you can choose which
driver you want to use for the connection. This is an optional configuration;
the jConnect driver is the preferred driver and is automatically used for all
connections unless you specify otherwise.
Data sources and the jConnect driver As a general rule, the jConnect
driver cannot use ODBC data sources. However, Sybase Central and
Interactive SQL are special cases. When you
use the jConnect driver in either of them, you can specify an ODBC data
source to establish a connection. For example, you can connect to the sample
database using the ASA 7.0 Sample data source, even if you are using the
jConnect driver.
This customized functionality is only available while you are working in
Sybase Central or Interactive SQL. If you are constructing a JDBC
application, do not try to use a data source to connect to a database.

v To specify a driver for the connection:


1 In Sybase Central, choose Tools➤Connect to open the Connect dialog.
2 Configure the necessary settings on the Identification and Database tabs
of the dialog.
3 On the Advanced tab of the dialog, select either jConnect5 or JDBC-
ODBC Bridge.
$ For more information on ODBC and JDBC drivers, see "Using the
Sybase jConnect JDBC driver" on page 612, and "Working with ODBC data
sources" on page 49.

Working with the Connect dialog


The Connect dialog lets you define parameters for connecting to a server or
database. The same dialog is used in both Sybase Central and Interactive
SQL.
The Connect dialog has the following tabs:
♦ The Identification tab lets you identify yourself to the database and
specify a data source.
♦ The Database tab lets you identify a server and/or database to connect
to.
♦ The Advanced tab lets you add additional connection parameters and
specify a driver for the connection.
In Sybase Central, after you connect successfully, the database name appears
in the left pane of the main window, under the server that it is running on.
The user ID for the connection is shown in brackets after the database name.
In Interactive SQL, the connection information (including the database name,
your user ID, and the database server) appears on a title bar above the SQL
Statements pane.
$ For a detailed description of the individual fields in the Connect dialog,
see "Connect dialog" on page 1045.

Chapter 2 Connecting to a Database

Simple connection examples


Although the connection model for Adaptive Server Anywhere is
configurable, and can become complex, in many cases connecting to a
database is very simple.
Who should read this section?  This section describes some simple cases of applications connecting to an Adaptive Server Anywhere database. This section may be all you need to get started.
$ For more detailed information on available connection parameters and
their use, see "Connection parameters" on page 64.

Connecting to the sample database from Sybase Central or Interactive SQL
Many examples and exercises throughout the documentation start by
connecting to the sample database from Sybase Central or Interactive SQL.

v To connect to the sample database (Sybase Central):


1 To start Sybase Central: from the Start menu, choose Programs➤Sybase
SQL Anywhere 7➤Sybase Central 4.0.
2 To open the Connect dialog: from the Tools menu, choose Connect.
3 Select the ODBC Data Source Name option and click Browse.
4 Select ASA 7.0 Sample and click OK.

v To connect to the sample database (Interactive SQL):


1 To start Interactive SQL: from the Start menu, choose
Programs➤Sybase SQL Anywhere 7➤Adaptive Server Anywhere
7➤Interactive SQL.
2 To open the Connect dialog: from the SQL menu, choose Connect.
3 Select the ODBC Data Source Name option and click Browse.
4 Select ASA 7.0 Sample and click OK.

Note You do not need to enter a user ID and a password for this connection
because the data source already contains this information.


Connecting to a database on your own machine from Sybase Central or Interactive SQL
The simplest connection scenario is when the database you want to connect
to resides on your own machine. If this is the case for you, ask yourself the
following questions:
♦ Is the database already running on a server? If so, you can specify fewer
parameters in the Connect dialog. If not, you need to identify the
database file so that Sybase Central or Interactive SQL can start it for
you.
♦ Are there multiple databases running on your machine? If so, you need
to tell Sybase Central or Interactive SQL which database in particular to
connect to. If there is only one, Sybase Central or Interactive SQL
assumes that it is the one you want to connect to, and you don’t need to
specify it in the Connect dialog.
The procedures below depend on your answers to these questions.

v To connect to a database on an already-running local server:


1 Start Sybase Central or Interactive SQL and open the Connect dialog (if
it doesn’t appear automatically).
2 On the Identification tab of the dialog, enter a user ID and a password.
3 Do one of the following:
♦ If the server contains only one database, click OK to connect to it.
♦ If the server contains multiple databases, click the Database tab of
the dialog and specify a database name. This is usually the database
file name, without the path or extension.

v To connect to a database that is not yet running:


1 Start Sybase Central or Interactive SQL and open the Connect dialog (if
it doesn’t appear automatically).
2 On the Identification tab of the dialog, enter a user ID and a password.
3 Click the Database tab of the dialog.
4 Specify a file in the Database File field (including the full path, name,
and extension). You can search for a file by clicking Browse.
5 If you want the database name for subsequent connections to be
different from the file name, enter a name in the Database Name field
(without including a path or extension).


Tips
If the database is already loaded (started) on the server, you only need to
provide a database name for a successful connection. The database file is
not necessary.
You can connect using a data source (a stored set of connection
parameters) for either of the above scenarios by selecting the appropriate
data source option at the bottom of the Identification tab of the Connect
dialog. For information about using data sources in conjunction with the
JDBC driver (jConnect), see "Specifying a driver for your connection" on
page 39.

$ See also
♦ "Opening the Connect dialog" on page 38
♦ "Simple connection examples" on page 41

Connecting to an embedded database


An embedded database, designed for use by a single application, runs on
the same machine as the application and is largely hidden from the
application user.
When an application uses an embedded database, the personal server is
generally not running when the application connects. In this case, you can
start the database using the connection string, and by specifying the database
file in the DatabaseFile (DBF) parameter of the connection string.
Using the DBF parameter  The DBF parameter specifies which database file to use. The database file automatically loads onto the default server, or starts a server if none is running.
The database unloads when there are no more connections to the database
(generally when the application that started the connection disconnects). If
the connection started the server, it stops once the database unloads.
The following connection parameters show how to load the sample database
as an embedded database:
dbf=path\asademo.db
uid=dba
pwd=sql
where path is the name of your Adaptive Server Anywhere installation
directory.
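For illustration only, the following Python sketch shows how such a connection string might be assembled from individual parameters. The helper is hypothetical and is not part of Adaptive Server Anywhere; the installation path shown is an assumption.

```python
# Illustrative helper (not part of Adaptive Server Anywhere) that joins
# key=value pairs into a semicolon-delimited connection string.

def make_connection_string(**params):
    """Join key=value pairs with semicolons, as in DBF/UID/PWD strings."""
    return ";".join(f"{key}={value}" for key, value in params.items())

conn_str = make_connection_string(
    dbf=r"c:\asa7\asademo.db",  # assumed installation path
    uid="dba",
    pwd="sql",
)
print(conn_str)
```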


Using the Start parameter  The following connection parameters show how you can customize the startup of the sample database as an embedded database. This is useful if you wish to use command-line options, such as the cache size:
Start=dbeng7 -c 8M
dbf=path\asademo.db
uid=dba
pwd=sql
$ See also
♦ "Opening the Connect dialog" on page 38
♦ "Simple connection examples" on page 41

Connecting using a data source


You can save sets of connection parameters in a data source. ODBC and
Embedded SQL applications use data sources. You can create data sources
from the ODBC Administrator.
If you are constructing an application, you should only use data sources for
ODBC applications. It is possible to specify data sources when you are using
the JDBC driver (jConnect), but only within Sybase Central or Interactive
SQL. For more information, see "Specifying a driver for your connection" on
page 39.

v To connect from Sybase Central or Interactive SQL using a data source:
1 Start Sybase Central or Interactive SQL and open the Connect dialog (if
it doesn’t appear automatically).
2 On the Identification tab, enter a user ID and password.
3 On the lower half of the Identification tab, do one of the following:
♦ Select the ODBC Data Source Name option and specify a data
source name (equivalent to the DSN connection parameter, which
references a data source in the registry). You can view a list of data
sources by clicking Browse.
♦ Select the ODBC Data Source File option and specify a data source
file (equivalent to the FileDSN connection parameter, which
references a data source held in a file). You can search for a file by
clicking Browse.

The ASA 7.0 Sample data source holds a set of connection parameters,
including the database file and a Start parameter to start the database.


$ See also
♦ "Opening the Connect dialog" on page 38
♦ "Simple connection examples" on page 41

Connecting to a server on a network


To connect to a database running on a network server somewhere on a local
or wide area network, the client software must locate the database server.
Adaptive Server Anywhere provides a network library to handle this task.
Network connections occur over a network protocol. Several protocols are
supported, including TCP/IP, SPX, and NetBIOS.
$ For a full description of client/server communications over a network,
see "Client/Server Communications" on page 85.

(Diagram: the client application's interface library communicates over the network with the database server.)

Specifying the server  Adaptive Server Anywhere server names must be unique on a local domain for a given network protocol. The following connection parameters provide a simple example for connecting to a server running elsewhere on a network:
eng=svr_name
dbn=db_name
uid=user_id
pwd=password
CommLinks=all
The client library first looks for a personal server of the given name, and then
looks on the network for a server of the given name.
$ The above example finds any server started using the default port
number. However, you can start servers using other port numbers by
providing more information in the CommLinks parameter. For information,
see "CommLinks connection parameter" on page 52 of the book ASA
Reference.
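For illustration, a connection string like the one above can be decomposed into its individual parameters. The following Python helper is purely illustrative and is not part of the Adaptive Server Anywhere client library:

```python
# Illustrative helper that splits a semicolon-delimited connection string
# into a dictionary; parameter names are treated as case-insensitive.

def parse_connection_string(conn_str):
    params = {}
    for part in conn_str.split(";"):
        if part.strip():
            key, _, value = part.partition("=")
            params[key.strip().lower()] = value.strip()
    return params

params = parse_connection_string(
    "eng=svr_name;dbn=db_name;uid=user_id;pwd=password;CommLinks=all"
)
print(params["commlinks"])
```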


Specifying the protocol  If several protocols are available, you can instruct the network library which ones to use to improve performance. The following parameters use only the TCP/IP protocol:
eng=svr_name
dbn=db_name
uid=user_id
pwd=password
CommLinks=tcpip
The network library searches for a server by broadcasting over the network,
which can be a time-consuming process. Once the network library locates a
server, the client library stores its name and network address in a file, and
reuses this entry for subsequent connection attempts to that server using the
specified protocol. Subsequent connections can be many times faster than a
connection achieved by broadcast.
$ Many other connection parameters are available to assist Adaptive
Server Anywhere in locating a server efficiently over a network. For more
information see "Network communications parameters" on page 65 of the
book ASA Reference.
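The broadcast-then-cache behavior described above can be pictured with a small sketch. This is purely illustrative; the real client library persists its entries to a file in its own format:

```python
# Illustrative sketch of broadcast-then-cache server lookup.
_address_cache = {}  # (server_name, protocol) -> cached network address

def locate_server(name, protocol, broadcast):
    """Return a cached address when available; otherwise broadcast and cache."""
    key = (name, protocol)
    if key not in _address_cache:
        _address_cache[key] = broadcast(name, protocol)  # slow network search
    return _address_cache[key]

def fake_broadcast(name, protocol):
    # Stand-in for the real network broadcast; returns a made-up address.
    return "10.0.0.5"

first = locate_server("svr_name", "tcpip", fake_broadcast)   # broadcasts
second = locate_server("svr_name", "tcpip", fake_broadcast)  # served from cache
```

Subsequent lookups return the cached entry, which is why repeat connections can be much faster than the first.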

v To connect to a database on a network server from Sybase Central or Interactive SQL:
1 Start Sybase Central or Interactive SQL and open the Connect dialog (if
it does not appear automatically).
2 On the Identification tab of the dialog, enter a user ID and a password.
3 On the Database tab of the dialog, enter the Server Name. You can
search for a server by clicking Find.
4 Identify the database by specifying a database name.

Tips
You can connect using a data source (a stored set of connection
parameters) by selecting the appropriate data source option at the bottom
of the Identification tab of the Connect dialog. For information about
using data sources in conjunction with the JDBC driver (jConnect), see
"Specifying a driver for your connection" on page 39.
By default, all network connections in Sybase Central and Interactive
SQL use the TCP/IP network protocol. For more information about
network protocol options, see "Network communication concepts" on
page 86.

$ See also


♦ "Opening the Connect dialog" on page 38


♦ "Simple connection examples" on page 41

Using default connection parameters


You can leave many connection parameters unspecified, and instead use the
default behavior to make a connection. Be cautious about relying on default
behavior in production environments, especially if you distribute your
application to customers who may install other Adaptive Server Anywhere
applications on their machine.
Default database server and database  If a single personal server is running, with a single loaded database, you can connect using entirely default parameters:
uid=user_id
pwd=password

Default database server  If more than one database is loaded on a single personal server, you can leave the server as a default, but you need to specify the database you wish to connect to:
dbn=db_name
uid=user_id
pwd=password

Default database  If more than one server is running, you need to specify which server you
wish to connect to. If only one database is loaded on that server, you do not
need to specify the database name. The following connection string connects
to a named server, using the default database:
eng=server_name
uid=user_id
pwd=password

No defaults  The following connection string connects to a named server, using a named
database:
eng=server_name
dbn=db_name
uid=user_id
pwd=password
$ For more information about default behavior, see "Troubleshooting
connections" on page 67.


Connecting from Adaptive Server Anywhere utilities


All Adaptive Server Anywhere database utilities that communicate with the
server (rather than acting directly on database files) do so using Embedded
SQL. They follow the procedure outlined in "Troubleshooting connections"
on page 67 when connecting to a database.
How database tools obtain connection parameter values  Many of the administration utilities obtain the connection parameter values by:
1 Using values specified on the command line (if there are any). For example, the following command starts a backup of the default database on the default server using the user ID DBA and the password SQL:
dbbackup -c "uid=dba;pwd=sql" c:\backup
2 Using the SQLCONNECT environment variable settings if any
command line values are missing. Adaptive Server Anywhere does not
set this variable automatically.
$ For a description of the SQLCONNECT environment variable, see
"Environment variables" on page 6 of the book ASA Reference.
3 Prompting you for a user ID and password to connect to the default
database on the default server, if parameters are not set in the command
line, or the SQLCONNECT environment variable.
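The three-step search order above can be sketched as follows. This is an illustrative Python outline, not the utilities' actual code:

```python
import os

# Illustrative sketch of the three-step search order used by the utilities:
# command line first, then the SQLCONNECT environment variable, then a prompt.

def resolve_connection_string(command_line_value=None, prompt=input):
    if command_line_value:                    # 1. value from the command line
        return command_line_value
    env_value = os.environ.get("SQLCONNECT")  # 2. SQLCONNECT environment variable
    if env_value:
        return env_value
    uid = prompt("User ID: ")                 # 3. fall back to prompting the user
    pwd = prompt("Password: ")
    return f"uid={uid};pwd={pwd}"

print(resolve_connection_string("uid=dba;pwd=sql"))
```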

$ For a description of command line switches for each database tool, see
the chapter "Database Administration Utilities" on page 75 of the book ASA
Reference.


Working with ODBC data sources


Microsoft Corporation defines the Open Database Connectivity (ODBC)
interface, which is a standard interface for connecting client applications to
database management systems in the Windows 95/98 and Windows NT
environments. Many client applications, including application development
systems, use the ODBC interface to access a wide range of database systems.
Where data sources are held  You connect to an ODBC database using an ODBC data source. You need an ODBC data source on the client computer for each database you want to connect to.
The ODBC data source contains a set of connection parameters. You can
store sets of Adaptive Server Anywhere connection parameters as an ODBC
data source, in either the system registry or as files.
If you have a data source, your connection string can simply name the data
source to use:
♦ Data source Use the DSN connection parameter to reference a data
source in the registry:
DSN=my data source
♦ File data source Use the FileDSN connection parameter to reference
a data source held in a file:
FileDSN=mysource.dsn

For Adaptive Server Anywhere, the use of ODBC data sources goes beyond
Windows applications using the ODBC interface:
♦ Adaptive Server Anywhere client applications on UNIX can use ODBC
data sources, as well as those on Windows operating systems.
♦ Adaptive Server Anywhere client applications using the OLE DB or
embedded SQL interfaces can use ODBC data sources, as well as ODBC
applications.
♦ Interactive SQL and Sybase Central can use ODBC data sources.

Creating an ODBC data source


You can create ODBC data sources on Windows 95/98 and Windows NT
operating systems using the ODBC Administrator, which provides a central
place for creating and managing ODBC data sources.
Adaptive Server Anywhere also includes a cross-platform command-line
utility named dbdsn to create data sources.


Before you begin  This section describes how to create an ODBC data source. Before you
create a data source, you need to know which connection parameters you
want to include in it.
$ For more information, see "Simple connection examples" on page 41,
and "Connection parameters" on page 64.
ODBC Administrator  On Windows 95/98 and Windows NT, you can use the Microsoft ODBC Administrator to create and edit data sources. You can work with User Data Sources, File Data Sources, and System Data Sources in this utility.

v To create an ODBC data source (ODBC Administrator):


1 Start the ODBC Administrator:
In Sybase Central, choose Tools➤Adaptive Server
Anywhere➤ODBC Administrator.
or
From the Windows Start menu, choose
Programs➤Sybase SQL Anywhere 7➤Adaptive Server Anywhere
7➤ODBC Administrator.
The ODBC Data Source Administrator appears.
2 Click Add.
The Create New Data Source wizard appears.


3 From the list of drivers, choose Adaptive Server Anywhere 7.0, and
click Finish. The ODBC Configuration for Adaptive Server Anywhere
window appears.

Most of the fields in this window are optional. Click the question mark
at the top right of the window and click a dialog field to find more
information about that field.
$ For more information about the fields in the dialog, see
"Configuring ODBC data sources using the ODBC Administrator" on
page 52.
4 When you have specified the parameters you need, click OK to close the
window and create the data source.
To edit a data source, find and select one in the ODBC Administrator main
window and click Configure.


Creating an ODBC data source from the command line  You can create User Data Sources using the dbdsn command-line utility. You cannot create File Data Sources or System Data Sources. File and System Data Sources are limited to Windows operating systems only, and you can use the ODBC Administrator to create them.

v To create an ODBC data source (Command line):


1 Open a command prompt.
2 Enter a dbdsn command, specifying the connection parameters you wish
to use. For example, the following command creates a data source for
the Adaptive Server Anywhere sample database. The command must be
entered on one line:
dbdsn -w "My DSN"
"uid=DBA;pwd=SQL;dbf=c:\Program Files\Sybase\SQL Anywhere 7\asademo.db"

$ For more information on the dbdsn utility, see "The Data Source
utility" on page 89 of the book ASA Reference.

Configuring ODBC data sources using the ODBC Administrator


This section describes the meaning of each of the options on the ODBC
configuration dialog.

ODBC tab
Data source name The Data Source Name is used to identify the ODBC
data source. You can use any descriptive name for the data source (spaces
are allowed) but it is recommended that you keep the name short, as you may
need to enter it in connection strings.
$ For more information, see "DataSourceName connection parameter" on
page 56 of the book ASA Reference.

Description You may enter an optional longer description of the Data


Source.

Translator Choose Adaptive Server Anywhere 7.0 Translator if your


database uses an OEM code page. If your database uses an ANSI code page,
which is the default, leave this empty.

Isolation level Select the desired isolation level for this data source:
♦ 0 Dirty reads, non-repeatable reads and phantom rows may occur. This
is the default.


♦ 1 Non-repeatable reads and phantom rows may occur. Dirty reads are prevented.
♦ 2 Phantom rows may occur. Dirty reads and non-repeatable reads are prevented.
♦ 3 Dirty reads, non-repeatable reads and phantom rows are prevented.
$ For more information, see "Choosing isolation levels" on page 394.
Microsoft applications (keys in SQLStatistics) Check this box if you
wish foreign keys to be returned by the SQLStatistics function. The ODBC
specification states that primary and foreign keys should not be returned by
SQLStatistics. However, some Microsoft applications (such as Visual Basic
and Access) assume that primary and foreign keys are returned by
SQLStatistics.

Delphi applications Check this box to improve performance for Borland


Delphi applications. When this option is checked, one bookmark value is
assigned to each row, instead of the two that are otherwise assigned (one for
fetching forwards and a different one for fetching backwards).
Delphi cannot handle multiple bookmark values for a row. If the option is
unchecked, scrollable cursor performance can suffer since scrolling must
always take place from the beginning of the cursor to the row requested in
order to get the correct bookmark value.

Prevent driver not capable errors The Adaptive Server Anywhere ODBC
driver returns a Driver not capable error code because it does not support
qualifiers. Some ODBC applications do not handle this error properly. Check
this box to disable this error code, allowing such applications to work.

Delay AutoCommit until statement close Check this box if you wish the
Adaptive Server Anywhere ODBC driver to delay the commit operation until
a statement has been closed.

Describe cursor behavior Select how often you wish a cursor to be re-described when a procedure is executed or resumed.

Test Connection Tests whether the information provided will result in a proper connection. In order for the test to work, a user ID and password must have been specified.


Login tab
Use integrated login Connects using an integrated login. The User ID and
password do not need to be specified. To use this type of login, users must
have been granted integrated login permission. The database being connected
to must also be set up to accept integrated logins. Only users with DBA
access may administer integrated login permissions.
$ For more information, see "Using integrated logins" on page 77.
User ID Provides a place for you to enter the User ID for the connection.
$ For more information, see "Userid connection parameter" on page 64 of
the book ASA Reference.

Password Provides a place for you to enter the password for the
connection.
$ For more information, see "Password connection parameter" on
page 61 of the book ASA Reference.

Encrypt password Check this box if you wish the password to be stored in
encrypted form in the profile.
$ For more information, see "EncryptedPassword connection parameter"
on page 58 of the book ASA Reference.

Database tab
Server name Provides a place for you to enter the name of the Adaptive
Server Anywhere personal or network server.
$ For more information, see "EngineName connection parameter" on
page 57 of the book ASA Reference.

Start line Enter the command that starts the server. Only provide a Start Line parameter if you are connecting to a database server that is not currently running. For example:
C:\Program Files\Sybase\SQL Anywhere 7\win32\dbeng7.exe -c 8m

$ For more information, see "StartLine connection parameter" on page 63


of the book ASA Reference.

Database name Provides a place for you to enter the name of the Adaptive
Server Anywhere database that you wish to connect to.
$ For more information, see "DatabaseName connection parameter" on
page 55 of the book ASA Reference.


Database file Provides a place for you to enter the full path and name of
the Adaptive Server Anywhere database file on the server PC. You may also
click Browse to locate the file. For example:
C:\Program Files\Sybase\SQL Anywhere 7\asademo.db

$ For more information, see "DatabaseFile connection parameter" on


page 54 of the book ASA Reference.

Automatically start the database if it isn’t running Causes the database


to start automatically (if it is not already running) when you start a new
session.

Automatically shut down database after last disconnect Causes the


automatic shutdown of the server after the last user has disconnected.
$ For more information, see "AutoStop connection parameter" on
page 50 of the book ASA Reference.

Network tab
Select the network protocols and specify any protocol-specific options
where necessary These check boxes specify what protocol or protocols
the ODBC DSN uses to access a network database server. In the adjacent
boxes, you may enter communication parameters that establish and tune
connections from your client application to a database.
$ For more information see "CommLinks connection parameter" on
page 52 of the book ASA Reference, and "Network communications
parameters" on page 65 of the book ASA Reference.

Encrypt all network packets Enables encryption of packets transmitted


from the client machine over the network. By default, encryption of network
packets is set to OFF.
$ For more information, see "Encryption connection parameter" on
page 58 of the book ASA Reference.

Liveness timeout A liveness packet is sent across a client/server connection to


confirm that a connection is intact. If the client runs for the liveness timeout
period without detecting a liveness packet, the communication will be
severed. This parameter works only with the network server and the TCP/IP or IPX
communications protocols. The default is 120 seconds.
$ For more information, see "LivenessTimeout connection parameter" on
page 60 of the book ASA Reference.

Buffer size Sets the maximum size of communication packets, in bytes.


$ For more information, see "CommBufferSize connection parameter" on


page 51 of the book ASA Reference.

Buffer space Indicates the amount of space to allocate on startup for


network buffers, in kilobytes.
$ For more information, see "CommBufferSpace connection parameter"
on page 52 of the book ASA Reference.

Advanced tab
Connection name The name of the connection that is being created.

Character set Lets you specify a character set (a set of 256 letters,
numbers, and symbols specific to a country or language). The ANSI
character set is used by Microsoft Windows. An OEM character set is any
character set except the ANSI character set.

Allow multiple record fetching Enables multiple records to be retrieved at


one time instead of individually. By default, multiple record fetching is
allowed.

Display debugging information in a log file The name of the file in which
the debugging information is to be saved.

Additional connection parameters Enter any additional switches here.


Parameters set throughout the remainder of this dialog take precedence over
parameters typed here.
$ For a complete listing, see "Connection parameters" on page 46 of the
book ASA Reference.

Using file data sources on Windows


On Windows operating systems, ODBC data sources are typically stored in
the system registry. File data sources, which are stored as files, are an
alternative. In Windows, file data sources typically have the extension .dsn. They
consist of sections, each section starting with a name enclosed in square
brackets. DSN files are very similar in layout to initialization files.
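Because DSN files follow an initialization-file layout, a generic INI parser can read them. The following Python sketch is illustrative only; the section name and keys shown are sample contents, not a definitive .dsn schema:

```python
import configparser

# Sample file-data-source contents in initialization-file layout:
# a [section] followed by key=value connection parameters.
dsn_text = """\
[ODBC]
DRIVER=Adaptive Server Anywhere 7.0
UID=dba
PWD=sql
DatabaseFile=asademo.db
"""

parser = configparser.ConfigParser()
parser.read_string(dsn_text)
params = dict(parser["ODBC"])   # configparser lowercases option names
print(params["uid"], params["databasefile"])
```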
To connect using a File Data Source, use the FileDSN connection parameter.
You cannot use both DSN and FileDSN in the same connection.
File data sources can be distributed  One benefit of file data sources is that you can distribute the file to users. If the file is placed in the default location for file data sources, it is picked up automatically by ODBC. In this way, managing connections for many users can be made simpler.


Embedded SQL applications can also use ODBC file data sources.

v To create an ODBC file data source (ODBC Administrator):


1 Start the ODBC Administrator, click the File DSN tab and click Add.
2 Select Adaptive Server Anywhere 7.0 from the list of drivers, and click
Next.
3 Follow the instructions to create the data source.

Using ODBC data sources on UNIX


On UNIX operating systems, ODBC data sources are held in a file named
.odbc.ini. A sample file looks like this:
[My Data Source]
ENG=myserver
CommLinks=tcpip(Host=hostname)
UID=dba
PWD=sql
You can enter any connection parameter in the .odbc.ini file. For a complete
list, see "Connection parameters" on page 46 of the book ASA Reference.
Network communications parameters are added as part of the CommLinks
parameter. For a complete list, see "Network communications parameters"
on page 65 of the book ASA Reference.
You can create and manage ODBC data sources on UNIX using the dbdsn
command-line utility.
$ For more information, see "Creating an ODBC data source" on
page 49, and "The Data Source utility" on page 89 of the book ASA
Reference.
File location The database server looks for the .odbc.ini file in the following locations:
1 ODBCINI environment variable
2 ODBCHOME and HOME environment variables
3 The user’s home directory
4 The path
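The lookup order above can be expressed as a short sketch. This is illustrative only; it treats the ODBCHOME and HOME variables as separate steps, and the server's actual path handling may differ in detail:

```python
# Illustrative sketch of the .odbc.ini search order described above.

def candidate_odbc_ini_paths(environ):
    paths = []
    if "ODBCINI" in environ:        # 1. ODBCINI names the file directly
        paths.append(environ["ODBCINI"])
    if "ODBCHOME" in environ:       # 2. ODBCHOME directory
        paths.append(environ["ODBCHOME"] + "/.odbc.ini")
    if "HOME" in environ:           # 3. the user's home directory
        paths.append(environ["HOME"] + "/.odbc.ini")
    paths.append(".odbc.ini")       # 4. fall back to the search path (simplified)
    return paths

print(candidate_odbc_ini_paths({"HOME": "/home/user"}))
```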


Using ODBC data sources on Windows CE


Windows CE does not provide an ODBC driver manager or an ODBC
Administrator. On this platform, Adaptive Server Anywhere uses ODBC
data sources stored in files. You can specify either the DSN or the FileDSN
keyword to use these data source definitions—on Windows CE (only), DSN
and FileDSN are synonyms.
Data source location  Windows CE searches for the data source files in the following locations:
1 The directory from which the ODBC driver (dbodbc7.dll) was loaded.
This is usually the Windows directory.
2 The directory specified in the Location key of the Adaptive Server
Anywhere section of the registry. This is usually the same as the
Adaptive Server Anywhere installation directory. The default
installation directory is:
\Program Files\SQL Anywhere 7\Windows

Each data source itself is held in a file. The file has the same name as the
data source, with an extension of .dsn.
$ For more information about file data sources, see "Using file data
sources on Windows" on page 56.


Connecting from desktop applications to a Windows CE database
You can connect from applications running on a desktop PC, such as Sybase
Central or Interactive SQL, to a database server running on a Windows CE
device. The connection uses TCP/IP over the ActiveSync link between the
desktop machine and the Windows CE device.
If you are using an Ethernet connection between your desktop machine and
the Windows CE device, or if you are using Windows CE Services 2.2, the
following procedure works. If you are using a serial cable connection and
ActiveSync 3.0, see the following section.

v To connect from a desktop application to a database server running on Windows CE:
1 Determine the IP address of the server. Start the server on the Windows
CE device with the -z option (output extra debug information).
For example:
dbsrv7 -z -x tcpip -n TestServer asademo.db

With the -z switch, the server writes out its IP address during startup.
The address may change if you disconnect your HPC from the network
and then re-connect it.
To change between static and dynamic IP assignment for the HPC,
configure the settings in the Windows Control Panel. Open Network,
and choose the Services tab. Select Remote Access Service and click
Properties➤Network➤TCP/IP Configuration.
2 Create an ODBC profile on your desktop machine.
Open the ODBC Administrator, and click Add. Choose Adaptive Server
Anywhere 7.0 from the list of drivers and click Finish. The ODBC
Configuration for Adaptive Server Anywhere dialog appears.
♦ On the Login tab, type a user ID and password.
♦ On the Database tab, type the server name.
♦ On the Network tab, check TCP/IP, and type the following in the
adjacent field:
dobroadcast=direct;host=XXX.XXX.XXX.XXX
where XXX.XXX.XXX.XXX is the server IP address.


♦ On the ODBC tab, click Test Connection to confirm that your


ODBC data source is properly configured.
3 Exit the ODBC administrator.
4 Ensure the database server is running on your Windows CE machine.
5 On your desktop machine, start an application such as Interactive SQL
and select the ODBC data source you have created. The application
connects to the Windows CE database.

Using ActiveSync 3.0 and a serial cable


To connect to a Windows CE device from your desktop over a serial cable,
Adaptive Server Anywhere uses the TCP/IP protocol. For TCP/IP to work in
this context, your ActiveSync installation must be set up to provide a Remote
Access Service (RAS) link between desktop machine and Windows CE
device.
ActiveSync 2.2 automatically installs and configures RAS, and you can
connect in a straightforward manner. For information, see "Connecting from
desktop applications to a Windows CE database" on page 59.
ActiveSync 3.0 does not install and configure RAS. You must install RAS
yourself to obtain TCP/IP connectivity to your device over a serial
connection.
Instructions for installing RAS are provided by Microsoft. They are available
at http://support.microsoft.com/support/kb/articles/Q241/2/16.ASP
(Microsoft Knowledge Base article Q241216). You must follow these
instructions exactly, including re-installing any Windows NT service packs,
and granting your user account dial-in access using User Manager.
As you follow the instructions, where it says to install your modem, choose
Dial-up Networking Serial Cable between 2 PCs instead.

Using RAS with ActiveSync The following list includes suggestions for enabling ActiveSync 3.0 connections over a serial connection using RAS:
1 In the ActiveSync Connection Settings, select the checkbox Allow
network (Ethernet) and Remote Access Service (RAS) server connection
with desktop computer. You may need to turn off the checkbox Allow
serial cable or infrared connection to this COM port.
2 On the desktop, using Remote Access Administrator (under
Administrative Tools on Windows NT), start RAS on COM1.
3 On the Windows CE device, run the ActiveSync client (repllog.exe on a
Windows CE PC). Choose serial connection.


4 Wait for up to one minute for a connection to be established.


5 As a test, run the ipconfig utility on Windows NT and confirm the device's static IP address of 192.168.55.100. This is the IP address you would use when connecting to an Adaptive Server Anywhere database server (for example) running on the CE device.
6 If you switch devices, stop and restart the RAS service (or reboot).
7 If everything is set up as above, but you still fail to get a connection
from the device to the desktop, you should make sure your Port settings
match the baud rates in the Modems Control Panel applet.


Connecting to a database using OLE DB


OLE DB uses the Component Object Model (COM) to make data from a
variety of sources available to applications. Relational databases are among
the classes of data sources that you can access through OLE DB.
This section describes how to connect to an Adaptive Server Anywhere
database using OLE DB from the following environments:
♦ Sybase PowerBuilder can access OLE DB data sources, and you can use
Adaptive Server Anywhere as a PowerBuilder OLE DB database profile.
♦ Microsoft ActiveX Data Objects (ADO) provides a programming
interface for OLE DB data sources. You can access Adaptive Server
Anywhere from programming tools such as Microsoft Visual Basic.
This section is an introduction to how to use OLE DB from Sybase
PowerBuilder and Microsoft ADO environments such as Visual Basic. It is
not complete documentation on how to program using ADO or OLE DB.
The primary source of information on development topics is your
development tool documentation.
$ For more information about OLE DB, see "Introduction to OLE DB"
on page 152 of the book ASA Programming Interfaces Guide.

OLE DB providers
You need an OLE DB provider for each type of data source you wish to
access. Each provider is a dynamic-link library. There are two OLE DB
providers you can use to access Adaptive Server Anywhere:
♦ Sybase ASA OLE DB provider The Adaptive Server Anywhere OLE
DB provider provides access to Adaptive Server Anywhere as an OLE
DB data source without the need for ODBC components. The short
name for this provider is ASAProv.
When the ASAProv provider is installed, it registers itself. This
registration process includes making registry entries in the COM section
of the registry, so that ADO can locate the DLL when the ASAProv
provider is called. If you change the location of your DLL, you must
reregister it.
$ For more information about OLE DB providers, see "Introduction
to OLE DB" on page 152 of the book ASA Programming Interfaces
Guide.
♦ Microsoft OLE DB provider for ODBC Microsoft provides an
OLE DB provider with a short name of MSDASQL.

The MSDASQL provider makes ODBC data sources appear as OLE DB data sources. It requires the Adaptive Server Anywhere ODBC driver.

Connecting from ADO


ADO is an object-oriented programming interface. In ADO, the Connection
object represents a unique session with a data source.
You can use the following Connection object features to initiate a
connection:
♦ The Provider property holds the name of the provider. If you do not
supply a Provider name, ADO uses the MSDASQL provider.
♦ The ConnectionString property holds an Adaptive Server Anywhere
connection string, which is used in just the same way as with the ODBC
driver. You can supply ODBC data source names, or explicit UserID,
Password, DatabaseName, and other parameters, just as in other
connection strings.
$ For a list of connection parameters, see "Connection parameters"
on page 64.
♦ The Open method initiates a connection.
$ For more information about ADO, see "ADO programming with
Adaptive Server Anywhere" on page 154 of the book ASA Programming
Interfaces Guide.
Example The following Visual Basic code initiates an OLE DB connection to
Adaptive Server Anywhere:
' Declare the connection object
Dim myConn As New ADODB.Connection
myConn.Provider = "ASAProv"
myConn.ConnectionString = "Data Source=ASA 7.0 Sample"
myConn.Open


Connection parameters
The following table lists the Adaptive Server Anywhere connection
parameters.
$ For a full description of each of these connection parameters, see
"Connection and Communication Parameters" on page 45 of the book ASA
Reference. For character set issues in connection strings, see "Connection
strings and character sets" on page 312.

Parameter  Short form  Argument
Agent Agent String (Any or Server)
AppInfo APP String
AutoStart AStart Boolean
AutoStop AStop Boolean
Charset CS String
CommBufferSize CBSize Integer
CommBufferSpace CBSpace Integer
CommLinks Links String
ConnectionName CON String
DatabaseFile DBF String
DatabaseName DBN String
DatabaseSwitches DBS String
DataSourceName DSN String
Debug DBG Boolean
DisableMultiRowFetch DMRF Boolean
EncryptedPassword ENP Encrypted string
Encryption ENC Boolean
EngineName / ServerName ENG String
FileDataSourceName FileDSN String
ForceStart FORCE Boolean
Integrated INT Boolean
LivenessTimeout LTO Integer


Logfile LOG String
Password ** PWD String
StartLine Start String
Unconditional UNC Boolean
Userid ** UID String
** Verbose form of keyword not used by ODBC connection parameters

Notes ♦ Boolean values Boolean (true or false) arguments are either YES,
ON, 1, or TRUE if true, or NO, OFF, 0, or FALSE if false.
♦ Case sensitivity Connection parameters are case insensitive.
♦ The connection parameters used by the interface library can be obtained
from the following places (in order of precedence):
♦ Connection string You can pass parameters explicitly in the
connection string.
♦ SQLCONNECT environment variable The SQLCONNECT
environment variable can store connection parameters.
♦ Data sources ODBC data sources can store parameters.
♦ Character set restrictions The server name must be composed of the
ASCII character set in the range 1 to 127. There is no such limitation on
other parameters.
$ For more information on the character set issues, see "Connection
strings and character sets" on page 312.
♦ Priority The following rules govern the priority of parameters:
♦ The entries in a connect string are read left to right. If the same
parameter is specified more than once, the last one in the string
applies.
♦ If a string contains a data source or file data source entry, the profile
is read from the configuration file, and the entries from the file are
used if they are not already set. For example, if a connection string
contains a data source name and sets some of the parameters
contained in the data source explicitly, then in case of conflict the
explicit parameters are used.
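To make the left-to-right, last-one-wins rule concrete, here is a small Python sketch (an illustration of the rule, not ASA code) that parses a connection string the way the priority rules above describe:

```python
# Illustrative sketch, not ASA code: parse a "key=value;key=value"
# connection string. Keys are case insensitive, and when the same
# parameter appears more than once the last occurrence wins.

def parse_connection_string(conn_str):
    params = {}
    for entry in conn_str.split(";"):
        if not entry.strip():
            continue
        key, _, value = entry.partition("=")
        # later entries overwrite earlier ones: last one in the string applies
        params[key.strip().lower()] = value.strip()
    return params

params = parse_connection_string("UID=dba;PWD=sql;uid=updated")
```

With the sample string above, params["uid"] is "updated", showing that the second UID entry overrides the first.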


Connection parameter priorities


Connection parameters often provide more than one way of accomplishing a
given task. This is particularly the case with embedded databases, where the
connection string starts a database server. For example, if your connection
starts a database, you can specify the database name using the DBN
connection parameter or using the DBS parameter.
Here are some recommendations and notes for situations where connection
parameters conflict:
♦ Specify database files using DBF You can specify a database file on
the Start parameter or using the DBF parameter (recommended).
♦ Specify database names using DBN You can specify a database
name on the Start parameter, the DBS parameter, or using the DBN
parameter (recommended).
♦ Use the Start parameter to specify cache size Even though you use
the DBF connection parameter to specify a database file, you may still
want to tune the way in which it starts. You can use the Start parameter
to do this.
For example, if you are using the Java features of Adaptive Server
Anywhere, you should provide additional cache memory on the Start
parameter. The following sample set of embedded database connection
parameters describes a connection that may use Java features:
DBF=path\asademo.db
DBN=Sample
ENG=Sample Server
UID=dba
PWD=sql
Start=dbeng7 -c 8M
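The parameter list above can equally be written as a single connection string. As an illustrative sketch (not ASA code), this Python helper joins keyword/value pairs with semicolons:

```python
# Illustrative sketch: join connection parameters into the single
# "key=value;key=value" string form that the interface library accepts.

def build_connection_string(pairs):
    return ";".join("%s=%s" % (key, value) for key, value in pairs)

conn = build_connection_string([
    ("DBF", r"path\asademo.db"),
    ("DBN", "Sample"),
    ("ENG", "Sample Server"),
    ("UID", "dba"),
    ("PWD", "sql"),
    ("Start", "dbeng7 -c 8M"),   # -c 8M provides extra cache for Java features
])
print(conn)
```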


Troubleshooting connections
Who needs to read this section? In many cases, establishing a connection to a database is straightforward using the information presented in the first part of this chapter.
However, if you are having problems establishing connections to a server,
you may need to understand the process by which Adaptive Server
Anywhere establishes connections in order to resolve your problems. This
section describes how Adaptive Server Anywhere connections work.
The software follows exactly the same procedure for each of the following
types of client application:
♦ ODBC Any ODBC application using the SQLDriverConnect
function, which is the common method of connection for ODBC
applications. Many application development systems, such as Sybase
PowerBuilder and Power++, belong to this class of application.
♦ Embedded SQL Any client application using Embedded SQL and
using the recommended function for connecting to a database
(db_string_connect).
The SQL CONNECT statement is available for Embedded SQL
applications and in Interactive SQL. It has two forms: CONNECT AS...
and CONNECT USING. All the database administration tools, including
Interactive SQL, use db_string_connect.

$ For information on network-specific issues, including connections across firewalls, see "Client/Server Communications" on page 85.

The steps in establishing a connection


To establish a connection, Adaptive Server Anywhere carries out the
following steps:
1 Locate the interface library The client application must locate the ODBC driver or Embedded SQL interface library.
2 Assemble a list of connection parameters Since connection
parameters may appear in several places (such as data sources, a
connection string assembled by the application, and an environment
variable) Adaptive Server Anywhere assembles the parameters into a
single list.
3 Locate a server Using the connection parameters, Adaptive Server
Anywhere locates a database server on your machine or over a network.


4 Locate the database Adaptive Server Anywhere locates the database you want to connect to.
5 Start a personal server If Adaptive Server Anywhere fails to locate a
server, it attempts to start a personal database server and load the
database.
The following sections describe in detail each of these steps.

Locating the interface library


The client application makes a call to one of the Adaptive Server Anywhere
interface libraries. In general, the location of this DLL or shared library is
transparent to the user. Here we describe how to locate the library, in case of
problems.
ODBC driver location For ODBC, the interface library is also called an ODBC driver. An ODBC client application calls the ODBC driver manager, and the driver manager locates the Adaptive Server Anywhere driver.
The ODBC driver manager looks in the supplied data source in the odbc.ini
file or registry to locate the driver. When you create a data source using the
ODBC Administrator, Adaptive Server Anywhere fills in the current location
for your ODBC driver.
Embedded SQL interface library location Embedded SQL applications call the interface library by name. The name of the Adaptive Server Anywhere Embedded SQL interface library is as follows:
♦ Windows NT and Windows 95/98 dblib7.dll
♦ UNIX dblib7 with an operating-system-specific extension.
♦ NetWare dblib7.nlm

The locations that are searched depend on the operating system:


♦ PC operating systems On PC operating systems such as Windows
and Windows NT, files are looked for in the current directory, in the
system path, and in the Windows and Windows\system directories.
♦ UNIX operating systems On UNIX, files are looked for in the system
path and the user library path.
♦ NetWare On NetWare, files are looked for in the search path, and in
the sys:system directory.


When the library is located Once the client application locates the interface library, it passes a connection string to it. The interface library uses the connection string to assemble a list of connection parameters, which it uses to establish a connection to a server.

Assembling a list of connection parameters


The following figure illustrates how the interface libraries assemble the list
of connection parameters they use to establish a connection.

[Figure: assembling the connection parameter list. Parameters are read from the connection string, then parameters not already specified are read from SQLCONNECT. If the resulting parameter list names a data source, that data source (or a compatibility data source) must exist; if neither exists, the connection fails. Parameters not already specified are then read from the data source, and the connection parameter list is complete.]

Notes Key points from the figure include:


♦ Precedence Parameters held in more than one place are subject to the
following order of precedence:
Connection string > SQLCONNECT > data source
That is, if a parameter is supplied both in a data source and in a
connection string, the connection string value overrides the data source
value.
♦ Failure Failure at this stage occurs only if you specify in the
connection string or in SQLCONNECT a data source that does not exist
in the client connection file.


♦ Common parameters Depending on other connections already in use, some connection parameters may be ignored, including:
♦ Autostop Ignored if the database is already loaded.
♦ CommLinks The specifications for a network protocol are
ignored if another connection has already set parameters for that
protocol.
♦ CommBufferSize Ignored if another connection has already set
this parameter.
♦ CommBufferSpace Ignored if another connection has already set
this parameter.
♦ Unconditional Ignored if the database is already loaded or if the
server is already running.
The interface library uses the completed list of connection parameters to
attempt to connect.
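The precedence order described above (connection string > SQLCONNECT > data source) can be sketched in Python. This is an illustration of the rule only, not ASA code:

```python
# Illustrative sketch of the precedence rule: parameters from a
# higher-precedence source are never overwritten by a lower one.

def assemble_parameters(conn_string, sqlconnect, data_source):
    params = dict(conn_string)              # highest precedence
    for source in (sqlconnect, data_source):
        for key, value in source.items():
            params.setdefault(key, value)   # fill only what is still missing
    return params

merged = assemble_parameters(
    {"uid": "dba"},                         # connection string
    {"uid": "ignored", "pwd": "sql"},       # SQLCONNECT environment variable
    {"eng": "Sample Server"},               # ODBC data source
)
```

In the sample call, the UID supplied in the connection string overrides the one stored in SQLCONNECT, while the missing PWD and ENG values are filled in from the lower-precedence sources.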

Locating a server
In the next step toward establishing a connection, Adaptive Server Anywhere
attempts to locate a server. If the connection parameter list includes a server
name (ENG parameter), it carries out a search first for a local server
(personal server or network server running on the same machine) of that
name, followed by a search over a network. If no ENG parameter is supplied,
Adaptive Server Anywhere looks for a default server.


[Figure: locating a server. If ENG is supplied, Adaptive Server Anywhere first looks for a local server named ENG. If none is found, it starts up any network protocol ports specified in CommLinks that are not already available, and attempts to locate a server named ENG using the available ports. If ENG is not supplied, it looks for a default personal server. If a server can be located, the database is located on that server; otherwise, Adaptive Server Anywhere attempts to start a personal server.]

$ If Adaptive Server Anywhere locates a server, it tries to locate or load the required database on that server. For information, see "Locating the database" on page 72.
$ If Adaptive Server Anywhere cannot locate a server, it attempts to start
a personal server. For information, see "Starting a personal server" on
page 73.
Notes ♦ For local connections, locating a server is simple. For connections over a
network, you can use the CommLinks parameter to tune the search in
many ways by supplying network communication parameters.


♦ The network search involves a search over one or more of the protocols
supported by Adaptive Server Anywhere. For each protocol, the network
library starts a single port. All connections over that protocol at any one
time use a single port.
♦ You can specify a set of network communication parameters for each
network port in the argument to the CommLinks parameter. Since these
parameters are necessary only when the port first starts, the interface
library ignores any connection parameters specified in CommLinks for
a port already started.
♦ Each attempt to locate a server (the local attempt and the attempt for
each network port) involves two steps. First, Adaptive Server Anywhere
looks in the server name cache to see if a server of that name is
available. Second, it uses the available connection parameters to attempt
a connection.

Locating the database


If Adaptive Server Anywhere successfully locates a server, it then tries to
locate the database. For example:


[Figure: locating the database. If DBN is specified, Adaptive Server Anywhere attempts to connect to the database of that name. If DBN is not specified but DBF is, it attempts to connect to a running database whose name is the root of the DBF file name; if there is no such database, it locates the DBF file, loads it, and attempts to connect, failing if the file cannot be located. If neither DBN nor DBF is specified, it attempts to connect to the default database, failing if no default database is running.]
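The database-location decision just described can be sketched in Python (an illustration of the logic, not ASA code):

```python
# Illustrative sketch of the database-location decision: DBN wins,
# then the root of the DBF file name, then the default database.
import os

def locate_database(dbn, dbf, running, file_exists):
    if dbn is not None:
        return "attempt connect"            # connect by database name
    if dbf is None:
        # no DBN or DBF: use the default database if one is running
        return "attempt connect" if running else "failure"
    root = os.path.splitext(os.path.basename(dbf))[0]
    if root in running:
        return "attempt connect"            # database already running
    if file_exists(dbf):
        return "load and attempt connect"   # load the file, then connect
    return "failure"
```

For example, locate_database(None, "asademo.db", ["asademo"], lambda p: False) connects to the running asademo database because "asademo" is the root of the DBF file name.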

Starting a personal server


If no server can be located, the interface libraries attempt to start a personal
server using other parameters. The Start and DBF parameters can be used to
start a personal server.


[Figure: starting a personal server. If a START parameter is supplied, it is used to start the server. Otherwise, if a DBF parameter is supplied, Adaptive Server Anywhere attempts to start a personal server on that file. If neither parameter is supplied, the connection fails.]

The START parameter takes a personal database server command line. If a START parameter is unavailable, Adaptive Server Anywhere attempts to start a personal server on the file indicated by the DBF. If both an ENG parameter and a DBF parameter appear, the ENG parameter becomes the name of the server.
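That fallback can be sketched as follows (illustrative Python, not ASA code; the dbeng7 executable and the -n server-naming switch appear elsewhere in this chapter, but the exact command assembly here is an assumption):

```python
# Illustrative sketch: choose how to start a personal server.
# START supplies the whole command line; otherwise a server is
# started on the DBF file, with ENG (if present) naming the server.

def personal_server_command(params):
    if "start" in params:
        return params["start"]                  # use the START parameter as given
    if "dbf" in params:
        cmd = "dbeng7"                          # personal server executable
        if "eng" in params:
            cmd += ' -n "%s"' % params["eng"]   # -n names the server (assumed here)
        return '%s "%s"' % (cmd, params["dbf"])
    return None                                 # neither parameter: failure
```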

Server name caching for faster connections


The network library looks for a database server on a network by broadcasting
over the network using the CommLinks connection parameter.
Tuning the broadcast The CommLinks parameter takes as argument a string listing the protocols to use and, optionally for each protocol, a variety of network communication parameters that tune the broadcast.
$ For a complete listing of network communications parameters, see
"Network communications parameters" on page 65 of the book ASA
Reference.
Caching server information Broadcasting over large networks searching for a server of a specific name can be time-consuming. To speed up network connections (except for the
first connection to a server), when a server is located, the protocol it was
found on and its address are saved to a file.


The server information is saved in a file named asasrv.ini, in your Adaptive Server Anywhere executable directory. The file contains a set of sections, each of the following form:
[Server name]
Link=protocol_name
Address=address_string

How the cache is used When a connection specifies a server name, and a server with that name is not found, the network library looks first in the server name cache to see if
the server is known. If there is an entry for the server name, an attempt is
made to connect using the link and address in the cache. If the server is
located using this method, the connection is much faster, as no broadcast is
involved.
If the server is not located using cached information, the connection string
information and CommLinks parameter are used to search for the server
using a broadcast. If the broadcast is successful, the server name entry in the
named cache is overwritten.
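Since asasrv.ini uses ordinary [section]/key=value syntax, the cache lookup can be sketched with Python's standard configparser module (illustrative only; the sample section below is invented):

```python
# Illustrative sketch: look up a server in an asasrv.ini-style cache.
# Sections have the [Server name] / Link= / Address= form shown above.
import configparser

def cached_server_address(ini_text, server_name):
    cache = configparser.ConfigParser()
    cache.read_string(ini_text)
    if cache.has_section(server_name):
        section = cache[server_name]
        return section.get("Link"), section.get("Address")
    return None    # not cached: fall back to a CommLinks broadcast

SAMPLE = """[Waterloo]
Link=tcpip
Address=10.0.0.5
"""
```

cached_server_address(SAMPLE, "Waterloo") returns the cached link and address; an unknown name returns None, which corresponds to falling back to the broadcast search.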

Cache precedes CommLinks


If a server name is held in the cache, the cache entry is used before the
CommLinks string.

Interactive SQL connections


The Interactive SQL utility has a different behavior from the default
Embedded SQL behavior when a CONNECT statement is issued while
already connected to a database. If no database or server is specified in the
CONNECT statement, Interactive SQL connects to the current database,
rather than to the default database. This behavior is required for database
reloading operations.
$ For an example, see "CONNECT statement" on page 423 of the book
ASA Reference.

Testing that a server can be found


The dbping command-line utility is provided to help in troubleshooting
connections. In particular, you can use it to test if a server with a particular
name is available on your network.
The dbping utility takes a connection string as a command-line option, but by
default only those pieces required to locate a server are used. It does not
attempt to start a server.


Examples The following command line tests to see if a server named Waterloo is
available over a TCP/IP connection:
dbping -c "eng=Waterloo;CommLinks=tcpip"
The following command tests to see if a default server is available on the
current machine.
dbping
$ For more information on dbping options, see "The Ping utility" on
page 122 of the book ASA Reference.
$ For more information about printing more output during connection
attempts, see "Debug connection parameter" on page 56 of the book ASA
Reference. This feature is especially useful when used in conjunction with
the "Logfile connection parameter" on page 60 of the book ASA Reference.


Using integrated logins


The integrated login feature allows you to maintain a single user ID and
password for both database connections and operating system and/or network
logins. This section describes the integrated login feature.
Operating systems supported Integrated login capabilities are available for the Windows NT server only. It is possible for Windows 95/98 clients as well as Windows NT clients to use
integrated logins to connect to a network server running on Windows NT.
Benefits of an integrated login An integrated login is a mapping from one or more Windows NT user profiles to an existing user in a database. A user who has successfully
navigated the security for that user profile and logged in to their machine can
connect to a database without providing an additional user ID or password.
To accomplish this, the database must be enabled to use integrated logins and
a mapping must have been granted between the user profile used to log in to
the machine and/or network, and a database user.
Using an integrated login is more convenient for the user and permits a
single security system for database and network security. Its advantages
include:
♦ When connecting to a database using an integrated login, the user does
not need to enter a user ID or password.
♦ If you use an integrated login, the user authentication is done by the
operating system, not the database: a single system is used for database
security and machine or network security.
♦ Multiple user profiles can be mapped to a single database user ID.
♦ The name and password used to login to the Windows NT machine do
not have to match the database user ID and password.

Caution
Integrated logins offer the convenience of a single security system but
there are important security implications which database administrators
should be familiar with.

$ For more information about security and integrated logins, see "Security concerns: unrestricted database access" on page 81.


Using integrated logins


Several steps must be implemented in order to connect successfully via an
integrated login.

v To use an integrated login:


1 Enable the integrated login feature in a database by setting the value of
the LOGIN_MODE database option to either Mixed or Integrated (the
option value is case insensitive), in place of the default value of Standard.
(This step requires DBA authority.)
2 Create an integrated login mapping between a user profile and an
existing database user. This can be done using a SQL statement or a
wizard in Sybase Central. In Sybase Central, all users with integrated
login permission are shown in the Integrated Logins folder.
3 Connect from a client application in such a way that the integrated login
facility is triggered.
Each of these steps is described in the sections below.

Enabling the integrated login feature


The LOGIN_MODE database option determines whether the integrated login
feature is enabled. As database options apply only to the database in which
they are found, different databases can have a different integrated login
setting even if they are loaded and running within the same server.
The LOGIN_MODE database option accepts one of the following three values
(which are case insensitive).
♦ Standard This is the default setting, which does not permit integrated
logins. An error occurs if an integrated login connection is attempted.
♦ Mixed With this setting, both integrated logins and standard logins are
allowed.
♦ Integrated With this setting, all logins to the database must be made
using integrated logins.

Caution
Setting the LOGIN_MODE database option to Integrated restricts
connections to only those users who have been granted an
integrated login mapping. Attempting to connect using a user ID
and password generates an error. The only exceptions are users
with DBA authority (full administrative rights).


Example The following SQL statement sets the value of the LOGIN_MODE database
option to Mixed, allowing both standard and integrated login connections:
SET OPTION Public.LOGIN_MODE = Mixed

Creating an integrated login


User profiles can only be mapped to an existing database user ID. When that
database user ID is removed from the database, all integrated login mappings
based on that database user ID are automatically removed.
A user profile does not have to exist for it to be mapped to a database user
ID. More than one user profile can be mapped to the same user ID.
Only users with DBA authority are able to create or remove an integrated
login mapping.
An integrated login mapping is made either using a wizard in Sybase Central
or a SQL statement.

v To map an integrated login (Sybase Central):


1 Connect to a database as a user with DBA authority.
2 Open the Integrated Logins folder for the database and double-click Add
Integrated Login.
3 On the first page of the wizard, specify the name of the system
(computer) user for whom the integrated login is to be created.
Also, select the database user ID this user maps to. The wizard displays
the available database users. You must select one of these. You cannot
add a new database user ID.
4 Follow the remaining instructions in the wizard.

v To map an integrated login (SQL):


1 Connect to a database with DBA authority.
2 Execute a GRANT INTEGRATED LOGIN TO statement.

Example The following SQL statement allows Windows NT users fran_whitney and
matthew_cobb to log in to the database as the user DBA, without having to
know or provide the DBA user ID or password.
GRANT INTEGRATED LOGIN
TO fran_whitney, matthew_cobb
AS USER dba


$ For more information, see "GRANT statement" on page 540 of the book ASA Reference.

Revoking integrated login permission


You can remove an integrated login mapping using either Sybase Central or
Interactive SQL.

v To revoke an integrated login permission (Sybase Central):


1 Connect to a database with DBA authority.
2 Open the Integrated Logins folder.
3 In the right pane, right-click the user/group you would like to remove
the integrated login permission from and choose Delete from the popup
menu.

v To revoke an integrated login permission (SQL):


1 Connect to a database with DBA authority.
2 Execute a REVOKE INTEGRATED LOGIN FROM statement.

Example The following SQL statement removes integrated login permission from the
Windows NT user dmelanso.
REVOKE INTEGRATED LOGIN
FROM dmelanso

$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference

Connecting from a client application


A client application can connect to a database using an integrated login in
one of the following ways:
♦ Set the INTEGRATED parameter in the list of connection parameters to
yes.
♦ Specify neither a user ID nor a password in the connection string or
connection dialog. This method is available only for Embedded SQL
applications, including the Adaptive Server Anywhere administration
utilities.
If INTEGRATED=yes is specified in the connection string, an integrated
login is attempted. If the connection attempt fails and the LOGIN_MODE
database option is set to Mixed, the server attempts a standard login.


If an attempt to connect to a database is made without providing a user ID or password, an integrated login is attempted. The attempt succeeds or fails depending on whether the current user profile name matches an integrated login mapping in the database.
Interactive SQL examples For example, a connection attempt using the following Interactive SQL statement will succeed, providing the user has logged on with a user profile name that matches an integrated login mapping in a default database of a
server:
CONNECT USING 'INTEGRATED=yes'
The following Interactive SQL statement...
CONNECT
...can connect to a database if all the following are true:
♦ A server is currently running.
♦ The default database on the current server is enabled to accept integrated
login connections.
♦ An integrated login mapping has been created that matches the current
user’s user profile name.
♦ If the user is prompted with a dialog box by the server for more
connection information (such as occurs when using the Interactive SQL
utility), the user clicks OK without providing more information.

Integrated logins via ODBC A client application connecting to a database via ODBC can use an integrated login by including the Integrated parameter among other attributes in its Data Source configuration.
Setting the attribute Integrated=yes in an ODBC data source causes
database connection attempts using that DSN to attempt an integrated login.
If the LOGIN_MODE database option is set to Standard, the ODBC driver
prompts the user for a database user ID and password.

Security concerns: unrestricted database access


The integrated login feature works by using the login control system of
Windows NT in place of the Adaptive Server Anywhere security system.
Essentially, the user passes through the database security if they can log in to
the machine hosting the database, and if other conditions, outlined in "Using
integrated logins" on page 77, are met.


If the user successfully logs in to the Windows NT server as "dsmith", they can connect to the database without further proof of identification provided there is either an integrated login mapping or a default integrated login user ID.
When using integrated logins, database administrators should give special
consideration to the way Windows NT enforces login security in order to
prevent unwanted access to the database.
In particular, be aware that by default a "Guest" user profile is created and
enabled when Windows NT Workstation or Server is installed.

Caution
Leaving the user profile Guest enabled can permit unrestricted access to a
database that is hosted by that server.

If the Guest user profile is enabled and has a blank password, any attempt to
log in to the server will be successful. It is not required that a user profile
exist on the server, or that the login ID provided have domain login
permissions. Literally any user can log in to the server using any login ID
and any password: they are logged in by default to the Guest user profile.
This has important implications for connecting to a database with the
integrated login feature enabled.
Consider the following scenario, which assumes the Windows NT server
hosting a database has a "Guest" user profile that is enabled with a blank
password.
♦ An integrated login mapping exists between the user fran-whitney and
the database user ID DBA. When the user fran-whitney connects to the
server with her correct login ID and password, she connects to the
database as DBA, a user with full administrative rights.
But anyone else attempting to connect to the server as fran-whitney will
successfully log in to the server regardless of the password they provide
because Windows NT will default that connection attempt to the "Guest"
user profile. Having successfully logged in to the server using the
fran-whitney login ID, the unauthorized user successfully connects to
the database as DBA using the integrated login mapping.
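The mapping described in this scenario would have been created with a GRANT INTEGRATED LOGIN statement along the following lines (a sketch of the syntax, not a recommendation, since mapping a user profile to DBA is exactly what makes this scenario dangerous):

```sql
GRANT INTEGRATED LOGIN TO "fran-whitney" AS USER DBA
```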

Disable the Guest user profile for security


The safest integrated login policy is to disable the "Guest" user profile on
any Windows NT machine hosting an Adaptive Server Anywhere
database. This can be done using the Windows NT User Manager utility.


Setting temporary public options for added security


Setting the value of the LOGIN_MODE option for a given database to
Mixed or Integrated using the following SQL statement permanently
enables integrated logins for that database.
SET OPTION Public.LOGIN_MODE = Mixed
If the database is shut down and restarted, the option value remains the same
and integrated logins are still enabled.
Changing the LOGIN_MODE option temporarily will still allow user access
via integrated logins. The following statement will change the option value
temporarily:
SET TEMPORARY OPTION Public.LOGIN_MODE = Mixed
If the permanent option value is Standard, the database will revert to that
value when it is shut down.
Setting temporary public options can be considered an additional security
measure for database access since enabling integrated logins means that the
database is relying on the security of the operating system on which it is
running. If the database is shut down and copied to another machine (such as
a user’s machine) access to the database reverts to the Adaptive Server
Anywhere security model and not the security model of the operating system
of the machine where the database has been copied.
$ For more information on using the SET OPTION statement see "SET
OPTION statement" on page 612 of the book ASA Reference.

Network aspects of integrated logins


If the database is located on a network server, then one of two conditions
must be met for integrated logins to be used:
♦ The user profile used for the integrated login connection attempt must
exist on both the local machine and the server. As well as having
identical user profile names on both machines, the passwords for both
user profiles must also be identical.
For example, when the user jsmith attempts to connect using an
integrated login to a database loaded on a network server, identical user
profile names and passwords must exist on both the local machine and
application server hosting the database. jsmith must be permitted to
login to both the local machine and the server hosting the network
server.


♦ If network access is controlled by a Microsoft Domain, the user attempting an integrated login must have domain permissions with the Domain Controller server and be logged in to the network. A user profile on the network server matching the user profile on the local machine is not required.

Creating a default integrated login user


A default integrated login user ID can be created so that connecting via an
integrated login will be successful even if no integrated login mapping exists
for the user profile currently in use.
For example, if no integrated login mapping exists for the user profile name
JSMITH, an integrated login connection attempt will normally fail when
JSMITH is the user profile in use.
However, if you create a user ID named Guest in a database, an integrated
login will successfully map to the Guest user ID if no integrated login
mapping explicitly identifies the user profile JSMITH.
The default integrated login user permits anyone attempting an integrated
login to successfully connect to a database if the database contains a user ID
named Guest. The permissions and authorities granted to the newly-
connected user are determined by the authorities granted to the Guest user
ID.
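The default integrated login user is an ordinary database user ID. A minimal sketch of creating it (the password shown is illustrative):

```sql
GRANT CONNECT TO Guest IDENTIFIED BY welcome
```

Since any integrated login with no explicit mapping connects as this user, grant the Guest user ID only the permissions you are prepared to give every user who can log in to the machine hosting the database.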

C H A P T E R 3

Client/Server Communications

About this chapter Each network environment has its own peculiarities. This chapter describes
those aspects of network communication that are relevant to the proper
functioning of your database server, and provides some tips for diagnosing
network communication problems. It describes how networks operate, and
provides hints on running the network database server under each protocol.

Network database server only


The material in this chapter applies only to the network server. You do not
need to read this chapter if you are using the personal database server.

Contents
Topic Page
Network communication concepts 86
Real world protocol stacks 91
Supported network protocols 94
Using the TCP/IP protocol 95
Using the SPX protocol 98
Using the NetBIOS protocol 100
Using Named Pipes 101
Troubleshooting network communications 102


Network communication concepts


Applications in a local area network communicate using a set of rules and
conventions called an application protocol. Each application is isolated from
the lower-level details of how information gets transported across the
network by lower-level protocols, which form a protocol stack.
This section provides a brief description of how protocol stacks work. Hints
regarding specific network issues later in this chapter assume a basic
understanding of how protocol stacks operate.

The protocol stack

[Figure: two five-layer protocol stacks, one on Computer A and one on Computer B. The layers, from top to bottom, are Application, Transport, Network, Data Link, and Physical. Corresponding layers on the two computers communicate peer-to-peer using an application protocol, a transport protocol, a network protocol, and a data link protocol; physical transmission takes place at the bottom of the stacks.]
The figure shows the layers of a protocol stack, based on a simplification of the OSI Reference Model of network communications.


The OSI (Open Systems Interconnection) Reference Model, developed by the International Organization for Standardization (ISO), is a step towards
international standardization of network protocols. Most current networking
software and hardware conforms to some extent, but not exactly, to this
model. Even though conformance is incomplete, the OSI model is a valuable
aid in understanding how network communications work.

How information is passed across a network


When one network application sends information to another network
application (such as the database server), the information passes down one
protocol stack, across the network, and up the other protocol stack to the
other application.
Protocol stacks The protocol stack isolates the different functions needed for reliable data
have layers transfer. Each layer of the protocol stack is connected to the layers above and
below it by an interface.
Each layer of a protocol stack treats information passed to it by the layer
above it merely as data, labeling that data in such a way as to be identified
and deciphered by the equivalent layer on the other computer. Only the
physical layer is responsible for actually placing data onto the wire—all
other layers provide some well-defined level of functionality, such as error
detection, correction, encryption and so on.
Each layer has a Although actual data transmission is vertical (down one stack, up the other),
protocol each layer is programmed as if it were in direct communication with the
corresponding layer on the other stack (peer-to-peer communication). The
rules and conventions that govern each level of peer-to-peer communication
are called a protocol for that level. There exist transport protocols, network
protocols, data link protocols, and application level protocols, among others.
Protocol stack The protocol stacks on each side of the communication must be compatible
compatibility at each level for network communications to work properly. If they are not
compatible, the layer on the receiving stack does not understand the
information being passed to it by its corresponding layer on the sending
stack.
Software that manages lower levels in a protocol stack is often called a
driver. The different layers of a protocol stack have the following functions:


The physical layer


Your network adapter, or network card, and the network wiring form the
physical layer for network communication. The higher layers in the protocol
stacks ensure that software does not have to be aware of the details of
network wiring or network adapter design.

The data link layer


The job of a data link layer is to handle the safe passing of information
across the network at the level of individual sets of bits. At higher levels of
the protocol stack, data is not described in terms of individual bits. It is at the
data link layer that information is broken down to this elementary level.
The data link layer’s interface with the network layer above it in the protocol
stack commonly conforms to one of two specifications: the Network Driver
Interface Specification (NDIS) developed jointly by IBM and Microsoft, and
the Open Device Interface (ODI) developed by Novell.
ODI and NDIS data link layers can be made compatible with each other
using a translation driver.
$ For more information, see "Working with multiple protocol stacks" on
page 92.

The network layer


The network layer takes a packet of information from the transport layer
above it and gets it to the corresponding network layer on the receiving
protocol stack. Issues of routing are handled by the network layer.
Information at a higher level is broken down into packets of a specified size
(in numbers of bytes) for transmission across a network.

The transport layer


The principal task of the transport layer is to guarantee the transmission of
information between applications. It accepts information directly from a
network application, such as a database server, splits it up if necessary,
passes the packets to the network layer and ensures that the pieces all arrive
correctly at the other end, where it assembles the packets before passing
them up the stack to the application layer.


Examples of Novell’s SPX, Microsoft and IBM’s NetBEUI, and Named Pipes are widely-
transport protocols used transport protocols. The TCP/IP suite of protocols includes more than
one transport layer. NetBIOS is an interface specification to the transport
layer from IBM and Microsoft that is commonly (but not necessarily) paired
with the NetBEUI protocol.
Adaptive Server Anywhere supports the NetBIOS interface to the transport
layer. In addition, Adaptive Server Anywhere has an interface to Named
Pipes for same-machine communications only.
Adaptive Server Anywhere applies its own checks to the data passed
between client application and server, to further ensure the integrity of data
transfer.

The application layer


Database servers and client applications are typical application layers in a
protocol stack, from a networking point of view. They communicate using an
application-defined protocol. This protocol is internal to Adaptive Server
Anywhere programs.
Typical data for transmission includes a SQL query or other statement (from
client application to database server) or the results of a SQL query (from
database server to client application).
Passing The client library and database server have an interface at the transport level
information down (in the case of NetBIOS or Named Pipes), or at the network level, passing
the protocol stack the information to the network communications software. The lower levels of
the protocol stack are then responsible, independent of the application, for
transmitting the data to the equivalent layer on the other computer. The
receiving layer hands the information to Adaptive Server Anywhere on the
receiving machine.
The database server and the client library perform a set of checks and
functions to ensure that data passed across the network arrives correctly, in a
proper form.

Compatible protocol stacks


For two protocol stacks to be compatible, they must be operating the same
transport layer (say NetBIOS) on each stack, and the same network protocol
on each stack (say IP). If one stack employs a NetBIOS interface to the
transport layer, so must the other. A client application running on a protocol
stack employing NetBIOS cannot communicate with a database server using
a TCP/IP protocol stack.


At the data link layer, ODI-based protocol stacks can be made compatible
with NDIS-based protocol stacks using translation drivers, as discussed in
"Working with multiple protocol stacks" on page 92.


Real world protocol stacks


The OSI model is close enough to current networking software and hardware
to be a useful model for thinking about networks. There are inevitable
complications because of the different ways in which software companies
have designed their own systems. Also, there are complications when an
individual computer is running more than one protocol stack, as is
increasingly common.

Common protocol stacks


The TCP/IP protocol is available on all major operating systems. In addition,
some widely used network software packages implement their own protocol
stacks. Here are some networking software vendors together with their most
common proprietary protocol stacks.
♦ Novell Novell NetWare typically operates using an SPX transport
level atop an IPX network protocol (often written as SPX/IPX). Novell’s
IPX requires an ODI data link layer. The NetWare client installation
installs this protocol stack on Novell NetWare client computers.
Adaptive Server Anywhere has an interface directly to the IPX protocol,
and does not rely on the services of the SPX layer.
NetWare, IPX, and ODI are often used interchangeably when discussing
network protocols.
♦ Microsoft Microsoft Windows NT 3.5 and later comes with
networking software for NetBIOS, SPX/IPX, and TCP/IP.
Windows 95/98 installs SPX/IPX as default networking software, and
also comes with NetBEUI software.
♦ IBM IBM LAN Server and other IBM networking software use a
NetBIOS interface to a NetBEUI transport layer on top of an NDIS data
link layer. The NDIS data link layer protocol was jointly developed by
Microsoft and IBM, and is employed by all IBM’s higher-level
protocols.
In each case, the installed data link layer may be changed from ODI to NDIS
(or vice versa) by other networking software if more than one network
protocol is installed. In these cases a translation driver ensures that both ODI
and NDIS data link layer interfaces are available to the higher levels of the
protocol stack. Working with more than one protocol stack is discussed in
more detail in the next section.


Working with multiple protocol stacks


For two network applications (such as a client application and a database
server) to communicate, the client computer and the server computer must
have compatible protocol stacks at each level. When each computer has a
single network protocol installed, it is fairly straightforward to ensure that
each is running the same stack. However, when each computer is running
multiple protocols, the situation becomes more complex.
At the transport and network layers, there is no problem with more than one
protocol running at once; as long as there is a compatible path through the
protocol stacks, the two applications can communicate.
ODI and NDIS There are two widely-used interfaces between the data link layer and the
interfaces network layer above it: ODI and NDIS. The data link layer communicates
directly with the network adapter, so only one data link driver can be
installed at a time. However, because most data link layers do not modify the
information placed on the network wire, an ODI data link driver on one
computer is compatible with an NDIS driver on another computer.
Each network layer is written to work with one, and only one, of ODI or
NDIS. However, translation drivers are available that enable network level
protocols that require an NDIS (or ODI) data link interface to work with ODI
(or NDIS) data link layers. Consequently, a protocol stack built on an ODI
data link layer can be compatible with a protocol stack based on an NDIS
data link layer as long as the upper layers are compatible.
ODI and NDIS For example, Novell provides a driver named odinsup, which enables
translation drivers support for NDIS network protocols on an ODI data link layer, as well as a
driver named odi2ndi, which translates in the other direction. Microsoft
provides an ODI to NDIS mapper called odihlp.exe with Windows 95/98.
The translation drivers enable NDIS-based protocol stacks to be compatible
with ODI-based protocol stacks on other computers, and to coexist on the
same computer. You can think of the network adapter driver, the NDIS or
ODI interface, and (if present) the translation driver as together forming the
data link layer. For instance, you can configure a computer with an NDIS
driver to communicate (via odi2ndi) with a Novell NetWare server using
SPX/IPX on ODI drivers and, at the same time, communicate (via the NDIS
driver) with a Windows NT computer using TCP/IP on an NDIS driver.


Troubleshooting tip
Not all translation drivers achieve complete compatibility. Be sure to get
the latest available version of the driver you need. Although we provide
some tips concerning network troubleshooting, the primary source of
assistance in troubleshooting a particular protocol stack is the
documentation for the network communications software that you install.


Supported network protocols


Properly configured Adaptive Server Anywhere servers run under the
following networks and protocols:
♦ Windows NT and Windows 95/98 NetBIOS, TCP/IP, or SPX
protocols.
♦ Windows CE TCP/IP protocol.
♦ NetWare All Novell networks using the SPX or TCP/IP protocols.
♦ UNIX TCP/IP protocol.
Where SPX is listed, IPX can also be used, but is deprecated. Support for
IPX is to be dropped in the next major release of Adaptive Server Anywhere.
In addition, two protocols are provided for communication from a client
application running on the same machine but built for a different operating
system.
♦ Windows NT Named Pipes is used to communicate with Windows 3.x
client applications from previous versions of the software, running on
the same machine as the server. Named Pipes can also be used for same-
machine communication between any application and the server, but is
generally not recommended for this purpose. Named Pipes are not used
for network communications.
♦ Windows 95/98 TCP/IP can be used for communications between a
Windows 3.x client from a previous version of the software, and a
Windows 95/98 database server running on the local machine or a
different machine.
The client library for each platform supports the same protocols as the
corresponding server. In order for Adaptive Server Anywhere to run
properly, the protocol stack on the client and server computers must be
compatible at each layer.


Using the TCP/IP protocol


TCP/IP is a suite of protocols originally implemented by the University of
California at Berkeley for BSD UNIX. TCP/IP has gained widespread use
with the expansion of the Internet and the World-Wide Web. Different
TCP/IP implementations rely on particular data link drivers in order to work
correctly. For example, TCP/IP for NetWare relies on ODI drivers.
Unlike NetBEUI and SPX/IPX, the TCP/IP protocol is not associated with
any specific software vendor. There are many implementations of TCP/IP
available from different vendors, and many different programming interfaces
to TCP. Consequently, Adaptive Server Anywhere supports only certain
TCP/IP implementations on each platform. For details, see the subsections
below for each platform. Because all TCP/IP implementations do implement
the same protocol suite, they are all compatible.
UDP is a transport layer protocol that sits on top of IP. Adaptive Server
Anywhere uses UDP on top of IP to do initial server name resolution and
TCP for connection and communication after that.

Using TCP/IP with Windows 95/98 and Windows NT


The TCP/IP implementation for Windows NT is written to the Winsock 2.0
standard, and that for Windows 95/98 uses the Winsock 1.1 standard. Your
TCP/IP software must support the appropriate standard for your platform.
Windows 95/98 and Windows NT ship with TCP/IP software that uses NDIS
network drivers. If you do not have TCP/IP installed, choose TCP/IP
Protocol from the Windows NT Control Panel, Network Settings.
This software allows a database server for Windows NT or a client
application to use Windows NT TCP/IP.

Using TCP/IP with UNIX


The UNIX database server supports TCP/IP. This enables non-UNIX clients
to communicate with a UNIX database server.


Tuning TCP/IP performance


Increasing the packet size may improve query response time, especially for
queries transferring a large amount of data between a client and a server
process. You can set the packet size using the -p parameter on the database
server command line, or by setting the CommBufferSize connection
parameter in your connection profile.
$ For more information, see "–p command-line option" on page 33 of
the book ASA Reference, or "CommBufferSize connection parameter" on
page 51 of the book ASA Reference.
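For example, both of the following set a packet size of 1460 bytes; the server executable name, server name, and database file shown are illustrative:

```
dbsrv6 -p 1460 sales.db
Eng=myeng;CommBufferSize=1460
```

The first line sets the packet size on the database server command line; the second sets it in a client connection string.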

Connecting across a firewall


There are restrictions on connections when the client application is on one
side of a firewall, and the server is on the other. Firewall software filters
network packets according to network port. Also, it is common to disallow
UDP packets from crossing the firewall.
When connecting across a firewall, you must use a set of communication
parameters in the CommLinks connection parameter of your application's
connection string.
♦ ClientPort Set this parameter to a range of allowed values for the
client application to use. You can then configure your firewall to allow
these packets across. You can use the short form CPort.
♦ HOST Set this parameter to the host name on which the database
server is running. You can use the short form IP.
♦ ServerPort If your database server is not using the default port of
2638, you must specify the port it is using. You can use the short form
Port.

Example The following connection string fragment restricts the client application to
ports 5050 through 5060, and connects to a server named myeng running on
the machine at address myhost using the server port 2020. No UDP
broadcast is carried out, because DoBroadcast is set to NONE.
Eng=myeng; Links=tcpip(ClientPort=5050-5060;Host=myhost;Port=2020;DoBroadcast=NONE)

$ For more information, see the following:


♦ "CommLinks connection parameter" on page 52 of the book ASA
Reference
♦ "ClientPort parameter" on page 65 of the book ASA Reference
♦ "ServerPort parameter" on page 70 of the book ASA Reference


♦ "Host parameter" on page 68 of the book ASA Reference


♦ "DoBroadcast parameter" on page 66 of the book ASA Reference.

Connecting on a dialup network connection


You can use connection and communications parameters to assist with
connecting to a database across a dialup link.
On the client side, you should specify the following communications
parameters:
♦ Host parameter You should specify the host name or IP address of the
database server using the Host communication parameter.
$ For more information, see "Host parameter" on page 68 of the
book ASA Reference.
♦ DoBroadcast parameter If you specify the Host parameter, there is
no need to do a broadcast search for the database server. For this reason,
turn off broadcasting.
$ For more information, see "DoBroadcast parameter" on page 66 of
the book ASA Reference.
♦ MyIP parameter You should set MyIP=NONE on the client side.
$ For more information, see "MyIP parameter" on page 69 of the
book ASA Reference.
A typical CommLinks parameter may look as follows:
Links=tcpip(MyIP=NONE;DoBroadcast=NO;Host=server_ip)


Using the SPX protocol


SPX is a protocol from Novell. Adaptive Server Anywhere for NetWare,
Windows NT, and Windows 95/98 can all employ the SPX protocol. This
section provides some tips for using SPX under different operating systems.
Connecting via Some machines can use the NetWare bindery. These machines are NetWare
SPX servers, or Windows NT or Windows 95/98 machines on which the Client
Service for NetWare is installed. A client application on one of these machines
does not need to use broadcasts to connect to the server, if the server to which
it is connecting is also using the bindery.
Applications and servers running on machines not using the bindery must
connect using either of the following:
♦ An explicit address You can specify an explicit address using the
HOST communication parameter.
$ For more information, see "Host parameter" on page 68 of the
book ASA Reference.
♦ Broadcast You can broadcast over a network by specifying the
DOBROADCAST communication parameter.
$ For more information, see "DoBroadcast parameter" on page 66 of
the book ASA Reference.
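For example, a client on a machine without bindery access might supply an explicit address in its CommLinks parameter; the address shown is a placeholder:

```
Links=spx(HOST=server_address)
```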

Using IPX with Windows NT


Windows NT 3.5 ships with IPX network software that uses NDIS network
drivers. This software can be installed from the Windows NT Control Panel,
Network Settings. This software allows a Windows NT database server or a
client application to use Windows NT IPX running on NDIS, while also
allowing access to a NetWare file server, even though the NetWare file
server will generally be using an ODI driver.
You must install both NWLink IPX/SPX Compatible Transport and Client
Service for NetWare to use this feature. In some environments, problems
have been found when the default setting (Auto-detect) has been chosen for
the frame type. In this case, the frame type settings for the client and the
server should be the same.


Using SPX with Windows 95/98


The IPX/SPX network protocol is the default network installed by
Windows 95/98. If you installed Windows 95/98 without network support,
you can install it later from the Control Panel, Network Settings.


Using the NetBIOS protocol


NetBIOS is an interface for interprocess communications, not a protocol
specification. Programs written to the NetBIOS interface should operate over
a variety of protocol stacks.
Protocol stacks that implement the NetBIOS interface must be compatible
between client and server machines. For instance, a NetBIOS interface using
SPX/IPX as the communication mechanism is incompatible with another
NetBIOS implementation using NetBEUI.
As Windows 95/98 and IBM networking software both use NetBIOS with
NetBEUI as a standard protocol, configuration of the NetBIOS protocol
stack for these environments is carried out by the network software
installation.
The number of connections permitted per server is limited by the setting of
the Sessions parameter, and by the number of NetBIOS commands available
on your machine. The number of NetBIOS commands is a machine-
dependent setting independent of Adaptive Server Anywhere.
$ For more information, see "Sessions parameter" on page 71 of the book
ASA Reference.
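A sketch of how the Sessions parameter might be supplied, assuming it is given as a NetBIOS protocol option in a client's CommLinks parameter; the value shown is illustrative:

```
Links=netbios(Sessions=10)
```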

Using NetBIOS with Windows NT


Windows NT 3.5 and later ships with NetBIOS interfaces to a number of
different protocol stacks (NetBEUI, NetWare IPX/SPX transport, TCP/IP)
that use NDIS network drivers. To use NetBIOS, install NetBIOS Interface
from the Windows NT Control Panel, Network Settings.
Adaptive Server Anywhere works with any of these protocol stacks, but you
must be sure to use compatible protocol stacks on the client and server sides
of the communication.
You can have more than one NetBIOS interface active at one time. Each
interface appears as a different LAN adapter number. Adaptive Server
Anywhere can simultaneously use different LAN adapter numbers, and so
can simultaneously communicate on multiple NetBIOS interface protocol
stacks.


Using Named Pipes


Named pipes are a facility for interprocess communication. Named pipes can
be used for communication between processes on the same computer or on
different computers.
Adaptive Server Anywhere for Windows NT uses local named pipes. This
allows Windows 3.x applications from previous versions of the software
running on Windows NT to communicate with a database server on the same
machine.
Adaptive Server Anywhere does not use named pipes for client/server
communications between different machines. You do not need to install
remote named pipe support for local named pipes to work.


Troubleshooting network communications


Network software involves several different components, increasing the
likelihood of problems. Although we provide some tips concerning network
troubleshooting here, the primary source of assistance in network
troubleshooting should be the documentation and technical support for your
network communications software, as provided by your network
communications software vendor.

Ensure that you are using compatible protocols


If you have more than one protocol stack installed on the client or server
computer, you should ensure that the client and the database server are using
the same protocol. The -x command line switch for the server selects a list of
protocols for the server to use, and the CommLinks connection parameter
does the same for the client application.
You can use these options to ensure that each application is using the same
protocol.
By default, both the database server and client library use all available
protocol stacks. The server supports client requests on any active protocol,
and the client searches for a server on all active protocols.
$ For more information about the -x switch, see "The database server" on
page 14 of the book ASA Reference. For information about the CommLinks
connection parameter, see "CommLinks connection parameter" on page 52
of the book ASA Reference.
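For example, the following restricts both sides of the connection to TCP/IP; the server executable name, server name, and database file are illustrative:

```
dbsrv6 -x tcpip sales.db
Eng=myeng;Links=tcpip
```

The first line starts the database server with only the TCP/IP protocol; the second is a client connection string fragment that searches for the server over TCP/IP only.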

Ensure that you have current drivers


Old network adapter drivers are a common source of communication
problems. You should ensure that you have the latest version of the NDIS or
ODI driver for your network adapter, as appropriate. You should be able to
obtain current network adapter drivers from the manufacturer or supplier of
the card.
Network adapter manufacturers and suppliers make the latest versions of
drivers for their cards available. Most card manufacturers have a Web site
from which you can download the latest versions of NDIS and ODI drivers.
You may also be able to obtain a current network adapter driver from the
provider of your networking software.


The Novell client software download includes ODI drivers for some network
adapters, in addition to the Novell software that is used with all network
adapters.

Switch off your computer between attempts


Some network adapter boards do not reset cleanly when you reboot the
computer. When you are troubleshooting, it is better to turn the computer off,
wait a few seconds, and then turn it back on between attempts.

Diagnose your protocol stack layer by layer


If you are having problems getting your client application to communicate
with a database server, you need to ensure that the client and the database
server are using compatible protocol stacks.
A helpful method of isolating network communication problems is to work
up the protocol stack, testing whether each level of communication is
working properly.
If you can connect to the server computer in any way, then the data link layer
is working, regardless of whether the connection is made using the same
higher-layer protocols you will be using for Adaptive Server Anywhere.
For example, you may want to try to connect to a disk drive on the computer
running the database server from the computer running the client application.
Having verified that the data link layer is working, the next step is to verify
that other applications using the same network and transport layers as
Adaptive Server Anywhere are working properly.

Testing a NetBIOS protocol stack


If you are using Windows 95/98 or Windows NT with the native protocol,
try using the Chat or WinPopup application. This tests whether applications
on the client and server computers can communicate with each other.
You should ensure that the applications that come with your networking
software are running properly before testing Adaptive Server Anywhere.


Testing a TCP/IP protocol stack


If you are running under TCP/IP, there are several applications you can use
to test the compatibility of the TCP/IP protocol stacks on the client and
server computers. The ping utility provided with many TCP/IP packages is
useful for testing the IP network layer.
Using ping to test the IP layer
Each IP layer has an associated address: a four-part, period-separated
number (such as 191.72.109.12). Ping takes an IP address as an argument
and attempts to send a single packet to the named IP protocol stack.
First, determine if your own protocol stack is operating correctly by
"pinging" yourself. If your IP address is 191.72.109.12, you would type:
ping 191.72.109.12
at the command line prompt and wait to see if the packets are routed at all. If
they are, the output will appear similar to the following:
c:> ping 191.72.109.12
Pinging 191.72.109.12 with 32 bytes of data:
Reply from 191.72.109.12: bytes=32 time<10ms TTL=32
Reply from 191.72.109.12: bytes=32 time<10ms TTL=32
Reply from 191.72.109.12: bytes=32 time<10ms TTL=32
...
If this works, it means that the computer is able to route packets to itself.
This is reasonable assurance that the IP layer is set up correctly. You could
also ask someone else running TCP/IP for their IP address and try pinging
them.
You should ensure that you can ping the computer running the database
server from the client computer before proceeding.
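If ping is unavailable, a similar self-test can be improvised at the UDP level. The following Python sketch is a generic illustration, not part of Adaptive Server Anywhere: it sends a datagram through the local IP stack and reads it back, which gives the same assurance as the ping self-test above (the machine can route packets to itself).

```python
# A rough analog of "pinging yourself" without ICMP privileges: send a UDP
# datagram through the local IP stack and read it back. If this works, the
# machine can route packets to itself, as in the ping self-test above.
# (Generic Python sketch; not an Adaptive Server Anywhere tool.)
import socket

def loopback_self_test(payload: bytes = b"ping") -> bool:
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))          # any free port on the loopback address
    recv.settimeout(2.0)
    port = recv.getsockname()[1]
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        send.sendto(payload, ("127.0.0.1", port))
        data, _ = recv.recvfrom(1024)     # fails if the local IP layer is broken
        return data == payload
    finally:
        send.close()
        recv.close()

print(loopback_self_test())  # True when the local IP stack is working
```

Like the ping self-test, this only exercises the local stack; it does not prove that packets can reach another machine.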
Using telnet to test the TCP/IP stack
To further test the TCP/IP stack, you can start a server application on one
computer and a client program on the other computer, and test whether they
can communicate properly.
There are several applications commonly provided with TCP/IP
implementations that can be used for this purpose: here we show how to use
telnet to test the TCP/IP stack.
1 Start a telnet server process (or daemon) on one machine. Check with
your TCP/IP software for how to do this. To start a typical command-line
telnet daemon, you would type the following instruction at the
command prompt:
telnetd


2 Start the telnet client process on the other machine, and see if you get a
connection. Again, check with your TCP/IP software for how to do this.
For command line programs, you would typically type the following
instruction:
telnet server_name
where server_name is the name or IP address of the computer running
the telnet server process.
If a telnet connection is established between these two machines, the
protocol stack is stable and the client and server should be able to
communicate using the TCP/IP link between the two computers. If a telnet
connection cannot be established, there is a problem. You should ensure that
your TCP/IP protocol stack is working correctly before proceeding.
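The telnet procedure above can also be mimicked programmatically. This Python sketch (an illustration only, not an ASA utility) plays both roles on one machine: a listening socket stands in for the telnet daemon and a connecting socket for the telnet client. A successful connection indicates a working TCP connection between the two endpoints.

```python
# A rough equivalent of the telnet test in Python: one side listens on a
# TCP port (the "telnetd" role), the other connects to it (the "telnet"
# role). If the connection succeeds, TCP/IP between the endpoints works.
# (Generic Python sketch; not an Adaptive Server Anywhere tool.)
import socket
import threading

def tcp_connect_test(host: str = "127.0.0.1") -> bool:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, 0))                # pick any free port
    server.listen(1)
    port = server.getsockname()[1]

    def accept_one():
        conn, _ = server.accept()         # the "server process" side
        conn.close()

    t = threading.Thread(target=accept_one)
    t.start()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        client.settimeout(2.0)
        client.connect((host, port))      # the "client process" side
        ok = True
    except OSError:
        ok = False
    finally:
        client.close()
        t.join()
        server.close()
    return ok

print(tcp_connect_test())
```

To test between two computers, the listening side would run on the server machine and the connecting side would be given that machine's host name or IP address instead of the loopback address.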

Diagnosing wiring problems


Faulty network wiring or connectors can cause problems that are difficult to
track down. Try recreating problems on a similar machine with the same
configuration. If a problem occurs on only one machine, it may be a wiring
problem or a hardware problem.
For information on detecting wiring problems under NetWare, see your
Novell NetWare manuals. The Novell LANalyzer program is useful for
tracking down wiring problems with Ethernet or TokenRing networks. Your
NetWare authorized reseller can also supply you with the name of a Certified
NetWare Engineer who can help diagnose and solve wiring problems.

A checklist of common problems


$ For more information about network communication parameters, see
"Network communications parameters" on page 65 of the book ASA
Reference.
The following list presents some common problems and their solutions.
If you receive the message Unable to start — server not found when trying to
start the client, the client cannot find the database server on the network.
Check for the following problems:
♦ The network configuration parameters of your network driver on the
client machine are different from those on the server machine. For
example, two Ethernet adapter cards should be using a common frame
type. For Novell NetWare, the frame type is set in the net.cfg file. Under
Windows 95/98 and Windows NT the settings can be accessed through
the Control Panel Network Settings.


♦ Under the TCP/IP protocol, clients search for database servers by
broadcasting a request. Such broadcasts will typically not pass through
gateways, so any database server on a machine in another (sub)network
will not be found. If this is the case, you must supply the host name of
the machine on which the server is running using the -x command-line
option. This is required to connect to NetWare servers over TCP.
♦ Your network drivers are not installed properly or the network wiring is
not installed properly.
♦ The network configuration parameters of your network driver are not
compatible with Adaptive Server Anywhere multi-user support. See
"Configuring your network adapter board" on page 106 for a description
of configuration parameters that affect performance and may affect
operation of the client and server.
♦ If your network communications are being carried out using TCP/IP,
and you are operating under Windows NT, check that your TCP/IP
software conforms to the Winsock 1.1 standard.
♦ If you receive the message Unable to initialize any communication links,
no link can be established. The probable cause is that your network
drivers have not been installed. The server and the client try to start
communication links using all available protocols unless you have
specified otherwise using the -x option. Check your network
documentation to find out how to install the driver you wish to use.

Configuring your network adapter board


Network adapter boards have configuration settings that allow them to use
different interrupt request (IRQ) levels. The network adapter board may
work only when it is using the same IRQ level as another adapter board in
your computer (for example, your parallel card). Performance may suffer
when the network adapter board shares an IRQ level.
The drivers for some adapter boards have parameters. The most important
type of parameter to look out for is one that controls buffer size or number of
buffers. For example, some versions of the NDIS driver for an SMC PLUS
Ethernet adapter have these configuration parameters:
ReceiveBuffers = 12
ReceiveBufSize = 768


The default packet size for Adaptive Server Anywhere is 1024 bytes. The
maximum buffer size used by the adapter board should be greater than this to
allow for protocol information in the packet. The computer running the
database server might need more than the default number of buffers used by
a driver.
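The sizing rule above can be made concrete with a small check. In this Python sketch, the 1024-byte packet size is taken from the text, but the protocol-overhead allowance is a hypothetical figure chosen for illustration, not a documented ASA value. By this rule, the ReceiveBufSize of 768 shown in the SMC example is too small, since it cannot even hold a full 1024-byte packet.

```python
# Check whether an adapter driver's receive buffer can hold a full ASA
# packet plus protocol headers. ASA_PACKET_SIZE (1024 bytes) comes from
# the text above; PROTOCOL_OVERHEAD is a made-up illustrative allowance,
# not a documented value.
ASA_PACKET_SIZE = 1024
PROTOCOL_OVERHEAD = 128   # hypothetical allowance for protocol headers

def buffer_is_adequate(receive_buf_size: int) -> bool:
    return receive_buf_size >= ASA_PACKET_SIZE + PROTOCOL_OVERHEAD

print(buffer_is_adequate(768))    # the SMC default shown above: too small
print(buffer_is_adequate(1536))
```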


PART TWO

Working with Databases

This part of the manual describes the mechanics of carrying out common tasks
with Adaptive Server Anywhere.

CHAPTER 4

Working with Database Objects

About this chapter This chapter describes the mechanics of creating, altering, and deleting
database objects such as tables, views, and indexes.
Contents
Topic Page
Introduction 112
Working with databases 115
Working with tables 124
Working with views 138
Working with indexes 145


Introduction
With the Adaptive Server Anywhere tools, you can create a database file to
hold your data. Once this file is created, you can begin managing the
database. For example, you can add database objects, such as tables or users,
and you can set overall database properties.
This chapter describes how to create a database and the objects within it. It
includes procedures for Sybase Central, Interactive SQL, and command-line
utilities. If you want more conceptual information before you begin, see the
following chapters:
♦ "Designing Your Database" on page 333
♦ "Ensuring Data Integrity" on page 357
♦ "About Sybase Central" on page 36 of the book Introducing SQL
Anywhere Studio
♦ "Using Interactive SQL" on page 161 of the book Getting Started with
ASA
The SQL statements for carrying out the tasks in this chapter are called the
data definition language (DDL). The definitions of the database objects
form the database schema: you can think of the schema as an empty
database.
$ Procedures and triggers are also database objects, but they are
discussed in "Using Procedures, Triggers, and Batches" on page 435.
Chapter contents This chapter contains the following material:
♦ An introduction to working with database objects (this section)
♦ A description of how to create and work with the database itself
♦ A description of how to create and alter tables, views, and indexes
Questions and answers
♦ How do I create or erase a database? See "Creating a database" on page 115 and "Erasing a database" on page 118.
♦ How do I disconnect from a database? See "Disconnecting from a database" on page 119.
♦ How do I set the properties for any database object? See "Setting properties for database objects" on page 120.
♦ How do I set database options? See "Setting database options" on page 120.
♦ How do I set the consolidated database? See "Setting a consolidated database" on page 121.
♦ How do I show system objects in Sybase Central? See "Showing system objects in a database" on page 121.
♦ How do I keep track of the SQL statements that Sybase Central automatically generates as I work with it? See "Logging SQL statements as you work with a database" on page 122.
♦ How do I add jConnect meta-data support for an existing database? See "Installing the jConnect meta-data support to an existing database" on page 123.
♦ What is the Table Editor, and what can I do with it? See "Using the Sybase Central Table Editor" on page 124.
♦ What are system tables, and how can I work with them? See "Showing system tables" on page 136.
♦ How do I create or delete tables? See "Creating tables" on page 126 and "Deleting tables" on page 130.
♦ How do I alter existing tables? See "Altering tables" on page 128. This section does not describe how to change properties or permissions, but it does provide cross-references to the appropriate locations.
♦ If I’m working in Sybase Central, how do I browse the information in a table or view? See "Browsing the information in tables" on page 131 and "Browsing the information in views" on page 144.
♦ What are primary keys, and what can I do with them? See "Managing primary keys" on page 131.
♦ What are foreign keys, and what can I do with them? See "Managing foreign keys" on page 133.
♦ How do I copy tables or columns? See "Copying tables or columns within/between databases" on page 136.
♦ How do I create or delete views? See "Creating views" on page 138 and "Deleting views" on page 143.
♦ How can I use views? See "Using views" on page 140 and "Using the WITH CHECK OPTION clause" on page 140.
♦ How can I work with views in the system tables? See "Views in the system tables" on page 144.
♦ How do I modify existing views? See "Modifying views" on page 142.
♦ How do I create or delete indexes? See "Creating indexes" on page 146 and "Deleting indexes" on page 147.
♦ How do I validate an existing index? See "Validating indexes" on page 146.


Working with databases


This section describes how to create and work with a database. As you read
this section, keep in mind the following simple concepts:
♦ The databases that you can create (called relational databases) are a
collection of tables, related by primary and foreign keys. These tables
hold the information in a database, and the tables and keys together
define the structure of the database. A database may be stored in one or
more database files, on one or more devices.
♦ A database file also contains the system tables, which hold the schema
definition as you build your database.

Creating a database
Adaptive Server Anywhere provides a number of ways to create a database:
in Sybase Central, in Interactive SQL, and with the command line. Creating a
database is also called initializing it. Once the database is created, you can
connect to it and build the tables and other objects that you need in the
database.
Transaction log When you create a database, you must decide where to place the transaction
log. This log stores all changes made to a database, in the order in which they
are made. In the event of a media failure on a database file, the transaction
log is essential for database recovery. It also makes your work more
efficient. By default, it is placed in the same directory as the database file,
but this is not recommended for production use.
$ For information on placing the transaction log, see "Configuring your
database for data protection" on page 663.
Database file compatibility
An Adaptive Server Anywhere database is an operating system file. It can be
copied to other locations just like any other file.
Database files are compatible among all operating systems, except where file
system file size limitations or Adaptive Server Anywhere support for large
files apply. A database created on one operating system can be used on
another operating system by copying the database file(s). Similarly, a
database created with a personal server can be used with a network server.
Adaptive Server Anywhere servers can manage databases created with
earlier versions of the software, but old servers cannot manage newer
databases.
$ For more information about limitations, see "Size and number
limitations" on page 958 of the book ASA Reference.


Using other applications to create databases
Some application design systems, such as Sybase PowerBuilder, contain
facilities for creating database objects. These tools construct SQL statements
that are submitted to the server, typically through its ODBC interface. If you
are using one of these tools, you do not need to construct SQL statements to
create tables, assign permissions, and so on.
This chapter describes the SQL statements for defining database objects. You
can use these statements directly if you are building your database from an
interactive SQL tool, such as Interactive SQL. Even if you are using an
application design tool, you may want to use SQL statements to add features
to the database if they are not supported by the design tool.
For more advanced use, database design tools such as Sybase PowerDesigner
provide a more thorough and reliable approach to developing well-designed
databases.
$ For more information about database design, see "Designing Your
Database" on page 333.

Creating databases (Sybase Central)


You can create a database in Sybase Central using the Create Database
utility. After you have created a database, it appears under its server in the
Sybase Central object tree.
Creating databases for Windows CE
Sybase Central has features to make database creation easy for Windows CE
databases. If you have Windows CE services installed on your Windows 95
or Windows NT desktop, you have the option to create a Windows CE
database when you create a database from Sybase Central. Sybase Central
enforces the requirements for Windows CE databases, and optionally copies
the resulting database file to your Windows CE machine.

v To create a new database (Sybase Central):


1 Choose Tools➤Adaptive Server Anywhere➤Create Database.
2 Follow the instructions of the wizard.

v To create a new database based on a current connection:


1 Connect to a database.
2 Open the Utilities folder (located within the server folder).
3 In the right pane, double-click Create Database.
4 Follow the instructions of the wizard.

$ For more information, see "Creating databases (SQL)" on page 117,
and "Creating databases (command line)" on page 117.


Creating databases (SQL)


In Interactive SQL, you can use the CREATE DATABASE statement to
create databases. You need to connect to an existing database before you can
use this statement.

v To create a new database (SQL):


1 Connect to an existing database.
2 Execute a CREATE DATABASE statement.
$ For more information, see "CREATE DATABASE statement" on
page 427 of the book ASA Reference.
Example Create a database file in the c:\temp directory with the database name
temp.db.
CREATE DATABASE ’c:\\temp\\temp.db’
The directory path is relative to the database server. You set the permissions
required to execute this statement on the server command line, using the -gu
command-line option. The default setting requires DBA authority.
The backslash is an escape character in SQL, and must be doubled. For more
information, see "Strings" on page 224 of the book ASA Reference.

Creating databases (command line)


You can create a database from a command line with the dbinit utility. With
this utility, you can include command-line parameters to specify different
options for the database.

v To create a new database (command line):


1 Open a command prompt.
2 From a command line, run the dbinit utility. Include any necessary
parameters.

Example Create a database called company.db with a 4 KB page size:
dbinit -p 4096 company.db

$ For more information, see "The Initialization utility" on page 98 of the


book ASA Reference.


Erasing a database
Erasing a database deletes all tables and data from disk, including the
transaction log that records alterations to the database. All database files are
read-only to prevent accidental modification or deletion of the database files.
In Sybase Central, you can erase a database using the Erase Database utility.
You need to connect to a database to access this utility, but the Erase
Database wizard lets you specify any database for erasing. In order to erase a
non-running database, the database server must be running.
In Interactive SQL, you can erase a database using the DROP DATABASE
statement. Required permissions can be set using the database server -gu
command-line option. The default setting is to require DBA authority.
You can also erase a database from a command line with the dberase utility.
The database to be erased must not be running when this utility is used.

v To erase a database (Sybase Central):


1 Open the Utilities folder.
2 In the right pane, double-click Erase Database.
3 Follow the instructions of the wizard.

v To erase a database (SQL):


♦ Execute a DROP DATABASE statement.

Example Erase the database temp.db, in the C:\temp directory.


DROP DATABASE ’c:\temp\temp.db’

v To erase a database (command line):


♦ From a command line, run the dberase utility.

Example Erase the database company.db and its transaction log:


dberase company.db
A message box asks you to confirm that you really want to erase the files. To
erase the files, type y and press ENTER.
$ For more information, see "DROP DATABASE statement" on
page 507 of the book ASA Reference, and "The Erase utility" on page 94 of
the book ASA Reference.


Disconnecting from a database


When you are finished working with a database, you can disconnect from it.
Adaptive Server Anywhere also gives you the ability to disconnect other
users from a given database; for more information about doing this in Sybase
Central, see "Managing connected users" on page 752.
You can obtain the connection-id for a user by using the
connection_property function to request the connection number. The
following statement returns the connection ID of the current connection:
SELECT connection_property( ’number’ )

v To disconnect yourself from a database (Sybase Central):


1 Open the desired server.
2 Select the desired database.
3 On the toolbar, click the Disconnect button.

v To disconnect yourself from a database (SQL):


♦ Execute a DISCONNECT statement.

Example 1 The following statement shows how to use DISCONNECT from Interactive
SQL to disconnect all connections:
DISCONNECT ALL

Example 2 The following statement shows how to use DISCONNECT in Embedded


SQL:
EXEC SQL DISCONNECT :conn_name

v To disconnect other users from a database (SQL):


1 Connect to an existing database with DBA authority.
2 Execute a DROP CONNECTION statement.

Example The following statement drops the connection with ID number 4.


DROP CONNECTION 4
$ For more information, see "DISCONNECT statement" on page 504 of
the book ASA Reference, and "DROP CONNECTION statement" on
page 508 of the book ASA Reference.


Setting properties for database objects


Most database objects (including the database itself) have properties that you
can either view or set. Some properties are non-configurable and reflect the
settings chosen when you created the database or object. Other properties are
configurable.
The best way to view and set properties is to use the property sheets in
Sybase Central.
If you are not using Sybase Central, properties can be specified when you
create the object with a CREATE statement. If the object already exists, you
can modify options with a SET OPTION statement.

v To view and edit the properties of a database object (Sybase


Central):
1 In the Sybase Central object tree, open the folder or container in which
the object resides.
2 Right-click the object and choose Properties from the popup menu.
3 Edit the desired properties.
$ For more information, see "Property Sheet Descriptions" on page 1061.

Setting database options


Database options are configurable settings that change the way the database
behaves or performs. In Sybase Central, all of these options are grouped
together in the Set Options dialog. In Interactive SQL, you can specify an
option in a SET OPTION statement.

v To set options for a database (Sybase Central):


1 Open the desired server.
2 Right-click the desired database and choose Set Options from the popup
menu.
3 Edit the desired values.

v To set the options for a database (SQL):


♦ Specify the desired properties within a SET OPTION statement.


Tips
With the Set Options dialog, you can also set database options for specific
users and groups.
When you set options for the database itself, you are actually setting
options for the PUBLIC group in that database, because all users and
groups inherit option settings from PUBLIC.

$ For more information, see "Set Options dialog" on page 1036, and
"Database Options" on page 155 of the book ASA Reference.

Setting a consolidated database


In Sybase Central, you can set the consolidated database. For SQL Remote
replication, the consolidated database is the one that serves as the "master"
database in the replication setup. The consolidated database contains all of
the data to be replicated, while its remote databases may only contain their
own subsets of the data. In case of conflict or discrepancy, the consolidated
database is considered to have the primary copy of all data.

v To set a consolidated database (Sybase Central):


1 Open the desired server.
2 Right-click the desired database and choose Properties from the popup
menu.
3 Click the SQL Remote tab.
4 Click the Change button beside the Consolidated Database text box.
5 Configure the desired settings.
$ For more information, see "Consolidated and remote databases" on
page 5 of the book Replication and Synchronization Guide.

Showing system objects in a database


In a database, a table, view, stored procedure, or domain may be a system
object. System tables store information about the database itself, while
system views, procedures, and domains largely support Sybase Transact-SQL
compatibility.


In Interactive SQL, you cannot show a list of all system objects, but you can
browse the contents of a system table; for more information, see "Showing
system tables" on page 136.

v To show system objects in a database (Sybase Central):


1 Open the desired server.
2 Right-click the desired connected database and choose Filter Objects by
Owner.
3 Enable SYS and dbo, and click OK.
The system tables, system views, system procedures, and system
domains appear in their respective folders (for example, system tables
appear alongside normal tables in the Tables folder).
$ For more information, see "System Tables" on page 991 of the book
ASA Reference.

Logging SQL statements as you work with a database


As you work with a database in Sybase Central, the application automatically
generates SQL statements depending on your actions. You can keep track of
these statements either in a separate window or in a specified file.
When you work with Interactive SQL, you can also log statements that you
execute; for more information, see "Logging commands" on page 172 of the
book Getting Started with ASA.

v To log SQL statements generated by Sybase Central:


1 Right-click the database and choose Log SQL Statements from the
popup menu.
2 In the resulting dialog, specify the desired settings.
$ For more information, see "Log SQL Statements dialog" on page 1037.

Starting a database without connecting


With both Sybase Central and Interactive SQL, you can start a database
without connecting to it.


v To start a database on a server without connecting (Sybase


Central):
1 Right-click the desired server and choose Start Database from the popup
menu.
2 In the Start a Database dialog, enter the desired values.
The database appears under the server as a disconnected database.

v To start a database on a server without connecting (SQL):


1 Start Interactive SQL.
2 Execute a START DATABASE statement.

Example Start the database file c:\asa7\sample_2.db as sam2 on the server named
sample.
START DATABASE ’c:\asa7\sample_2.db’
AS sam2
ON sample
$ For more information, see "START DATABASE statement" on
page 620 of the book ASA Reference.

Installing the jConnect meta-data support to an existing database


If a database was created without the jConnect meta-data support, you can
use Sybase Central to install it at a later date.

v To add jConnect meta-data support to an existing database (Sybase


Central):
1 Open the server for the database.
2 Right-click the database and choose Re-install jConnect Meta-data
Support from the popup menu.
3 Follow the instructions of the wizard.


Working with tables


When the database is initialized, the only tables in the database are the
system tables, which hold the database schema.
This section describes how to create, alter, and delete tables from a database.
You can execute the examples in Interactive SQL, but the SQL statements
are independent of the administration tool you use.
To make it easier for you to re-create the database schema when necessary,
create command files to define the tables in your database. The command
files should contain the CREATE TABLE and ALTER TABLE statements.

Using the Sybase Central Table Editor


The Table Editor is a tool that you can access through Sybase Central. It
provides a quick way of creating and editing tables and their columns.
With the Table Editor, you can create, edit, and delete columns. You can also
add and remove columns in the primary key.

v To access the Table Editor for creating a new table:


1 Open the Tables folder.
2 In the right pane, double-click Add Table.

v To access the Table Editor for editing an existing table:


1 Open the Tables folder.
2 Right-click the table and choose Edit Columns from the popup menu.

Table Editor Once you have opened the Table Editor, a toolbar (shown below) provides
toolbar you with the fields and buttons for common commands.


The first half of this toolbar shows the name of the current table and its
owner (or creator). For a new table, you can specify new settings in both of
these fields. For an existing table, you can type a new name but you cannot
change the owner.
With the buttons on the second half of the toolbar, you can:
♦ Add a new column. It appears at the bottom of the list of existing
columns.
♦ Delete selected columns
♦ View the Column Properties dialog for the selected column
♦ View the Advanced Table Properties for the entire table
♦ View and change the Table Editor options
♦ Add Java classes to the selected column’s data types
♦ Save the table but keep the Table Editor open
♦ Save the table and close the Table Editor in a single step
As an easy reminder of what these buttons do, you can hold your cursor over
each button to see a popup description.
$ For more information, see "Column properties" on page 1075.

Using the Advanced Table Properties dialog


In the Table Editor, you can use the Advanced Table Properties dialog to set
or inspect a table’s type or to type a comment for the table.

v To view or set advanced table properties:


1 Open the Table Editor.
2 On the toolbar, click the Advanced Table Properties button.


3 Edit the desired values.

Dialog components ♦ Base table Designates this table as a base table (one that permanently
holds the data until you delete it). You cannot change this setting for
existing tables; you can only set the table type when you create a new
table.
♦ on DB space Lets you select the database space used by the table.
This option is only available if you create a new table as a base
table.
♦ Global temporary table Designates this table as a global temporary
table (one that holds data for a single connection only). You cannot
change this setting for existing tables; you can only set the table type
when you create a new table.
♦ ON COMMIT Delete rows Sets the global temporary table to
delete its rows when a COMMIT is executed.
♦ ON COMMIT Preserve rows Sets the global temporary table to
preserve its rows when a COMMIT is executed.
♦ Comment Provides a place for you to type a comment (text
description) of this object. For example, you could use this area to
describe the object’s purpose in the system.

Tip
The table type and comment are also shown in the table's property sheet.

$ For more information, see "Property Sheet Descriptions" on page 1061,
and "Designing Your Database" on page 333.

Creating tables
When a database is first created, the only tables in the database are the
system tables, which hold the database schema. You can then create new
tables to hold your actual data, either with SQL statements in Interactive
SQL or with Sybase Central.
There are two types of tables that you can create:
♦ Base table A table that holds persistent data. The table and its data
continue to exist until you explicitly delete the data or drop the table. It
is called a base table to distinguish it from temporary tables and from
views.

126
Chapter 4 Working with Database Objects

♦ Temporary table Data in a temporary table is held for a single


connection only. Global temporary table definitions (but not data) are
kept in the database until dropped. Local temporary table definitions and
data exist for the duration of a single connection only.
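A local temporary table, for example, can be declared for the current
connection only. The following is a sketch (the table name is illustrative):
DECLARE LOCAL TEMPORARY TABLE temp_names (
name CHAR( 20 )
) ON COMMIT PRESERVE ROWS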
Tables consist of rows and columns. Each column carries a particular kind of
information, such as a phone number or a name, while each row specifies a
particular entry.

v To create a table (Sybase Central):


1 Connect to the database.
2 Open the Tables folder.
3 In the right pane, double-click Add Table.
4 In the Table Editor:
♦ Enter a name for the new table.
♦ Select an owner for the table.
♦ Create columns using the Add Column button on the toolbar.
♦ Specify column properties.
5 Save the table and close the Table Editor when finished.

v To create a table (SQL):


1 Connect to the database with DBA authority.
2 Execute a CREATE TABLE statement.

Example The following statement creates a new table to describe qualifications of


employees within a company. The table has columns to hold an identifying
number, a name, and a type (technical or administrative) for each skill.
CREATE TABLE skill (
skill_id INTEGER NOT NULL,
skill_name CHAR( 20 ) NOT NULL,
skill_type CHAR( 20 ) NOT NULL
)

$ For more information, see "CREATE TABLE statement" on page 466


of the book ASA Reference, and"Using the Sybase Central Table Editor" on
page 124.


Altering tables
This section describes how to change the structure or column definitions of a
table. For example, you can add columns, change various column attributes,
or delete columns entirely.
In Sybase Central, you can perform these tasks using the buttons on the
Table Editor toolbar. In Interactive SQL, you can perform these tasks with
the ALTER TABLE statement.
If you are working with Sybase Central, you can also manage columns (add
or remove them from the primary key, change their properties, or delete
them) by working with menu commands when you have a column selected in
the Columns folder.
$ For information on altering database object properties, see "Setting
properties for database objects" on page 120.
$ For information on granting and revoking table permissions, see
"Granting permissions on tables" on page 743 and "Revoking user
permissions" on page 750.

Altering tables (Sybase Central)


You can alter tables in Sybase Central using the Table Editor. For example,
you can add or delete columns, change column definitions, or change table or
column properties.

v To alter an existing table (Sybase Central):


1 Connect to the database.
2 Open the Tables folder.
3 Right-click the table you want to alter and choose Edit Columns from
the popup menu.
4 In the Table Editor, make the necessary changes.

Tips
You can also add columns by opening a table’s Columns folder and
double-clicking Add Column.
You can also delete columns by opening a table’s Columns folder, right-
clicking the column, and choosing Delete from the popup menu.


$ For more information, see "Using the Sybase Central Table Editor" on
page 124, and "ALTER TABLE statement" on page 392 of the book
ASA Reference.

Altering tables (SQL)


You can alter tables in Interactive SQL using the ALTER TABLE statement.

v To alter an existing table (SQL):


1 Connect to the database with DBA authority.
2 Execute an ALTER TABLE statement.

Examples The following command adds a column to the skill table to allow space for an
optional description of the skill:
ALTER TABLE skill
ADD skill_description CHAR( 254 )
This statement adds a column called skill_description that holds up to a few
sentences describing the skill.
You can also modify column attributes with the ALTER TABLE statement.
The following statement shortens the skill_description column of the sample
database from a maximum of 254 characters to a maximum of 80:
ALTER TABLE skill
MODIFY skill_description CHAR( 80 )
Any current entries that are longer than 80 characters are trimmed to
conform to the 80-character limit, and a warning appears.
The following statement changes the name of the skill_type column to
classification:
ALTER TABLE skill
RENAME skill_type TO classification
The following statement deletes the classification column.
ALTER TABLE skill
DROP classification
The following statement changes the name of the entire table:
ALTER TABLE skill
RENAME qualification


These examples show how to change the structure of the database. The
ALTER TABLE statement can change just about anything pertaining to a
table—you can use it to add or delete foreign keys, change columns from one
type to another, and so on. In all these cases, once you make the change,
stored procedures, views, and any other items referring to the altered
column or table will no longer work.
$ For more information, see "ALTER TABLE statement" on page 392 of
the book ASA Reference, and "Ensuring Data Integrity" on page 357.

Deleting tables
This section describes how to delete tables from a database. You can use
either Sybase Central or Interactive SQL to perform this task. In Interactive
SQL, deleting a table is also called dropping it.
You cannot delete a table that is being used as an article in a SQL Remote
publication. If you try to do this in Sybase Central, an error appears.

v To delete a table (Sybase Central):


1 Connect to the database.
2 Open the Tables folder for that database.
3 Right-click the table and choose Delete from the popup menu.

v To delete a table (SQL):


1 Connect to the database with DBA authority.
2 Execute a DROP TABLE statement.

Example The following DROP TABLE command deletes all the records in the skill
table and then removes the definition of the skill table from the database:
DROP TABLE skill
Like the CREATE statement, the DROP statement automatically executes a
COMMIT statement before and after dropping the table. This makes
permanent all changes to the database since the last COMMIT or
ROLLBACK. The DROP statement also drops all indexes on the table.
$ For more information, see "DROP statement" on page 505 of the book
ASA Reference.


Browsing the information in tables


To browse the data held within the tables of a database, you can use the
Interactive SQL utility. This utility lets you execute queries to identify the
exact data you want to view. For more information about using these queries,
see "Queries: Selecting Data from a Table" on page 149.
If you are working in Sybase Central, you can right-click a table and choose
View Data from the popup menu. This command opens Interactive SQL with
the table contents showing in the Results pane.
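For example, a simple query in Interactive SQL against the skill table
created earlier in this chapter (a sketch):
SELECT skill_name, skill_type
FROM skill
ORDER BY skill_name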

Managing primary keys


This section describes how to create and edit primary keys in your database.
You can use either Sybase Central or Interactive SQL to perform these tasks.
The primary key is a unique identifier composed of a column or
combination of columns with values that do not change over the life of the
data in the row. Because uniqueness is essential to good database design, it is
best to specify a primary key when you define the table.

Column order in multi-column primary keys


Primary key column order is determined by the order of the columns
during table creation. It is not based on the order of the columns as
specified in the primary key declaration.
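For example, in the following sketch (the table and column names are
illustrative), the primary key index is ordered by col_b and then col_a,
following the order of the column definitions rather than the order in the
PRIMARY KEY clause:
CREATE TABLE pk_demo (
col_b INTEGER NOT NULL,
col_a INTEGER NOT NULL,
PRIMARY KEY ( col_a, col_b )
)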

Managing primary keys (Sybase Central)


In Sybase Central, the primary key of a table is shown in several places:
♦ On the Columns tab of the table’s property sheet.
♦ In the Columns folder of the table.
♦ In the Table Editor.
The primary key columns have special icons to distinguish them from non-
key columns. The lists in both the table property sheet and the Columns
folder show the primary-key columns (along with the non-key columns) in
the order that they were created in the database. This may differ from the
actual ordering of columns in the primary key.

v To create and edit the primary key using a property sheet:


1 Open the Tables folder.
2 Right-click the table and choose Properties from the popup menu.

3 Click the Columns tab of the property sheet.


4 Use the buttons to change the primary key.

v To create and edit the primary key using the Columns folder:
1 Open the Tables folder and double-click a table.
2 Open the Columns folder for that table and right-click a column.
3 From the popup menu, do one of the following:
♦ Choose Add to Primary Key if the column is not yet part of the
primary key and you want to add it.
♦ Choose Remove From Primary Key if the column is part of the
primary key and you want to remove it.

v To create and edit the primary key using the Table Editor:
1 Open the Tables folder.
2 Right-click a table and choose Edit Columns from the popup menu.
3 In the Table Editor, click the icons in the Key fields (at the far left of the
Table Editor) to add a column to the primary key or remove it from the
key.
$ For more information, see "Managing foreign keys" on page 133.

Managing primary keys (SQL)


You can create and modify the primary key in Interactive SQL using the
CREATE TABLE and ALTER TABLE statements. These statements let you
set many table attributes, including column constraints and checks.
Columns in the primary key cannot contain NULL values. You must specify
NOT NULL on each column in the primary key.

v To modify the primary key of an existing table (SQL):


1 Connect to the database with DBA authority.
2 Execute an ALTER TABLE statement.

Example 1 The following statement creates the same skill table as before, except that it
adds a primary key:


CREATE TABLE skill (


skill_id INTEGER NOT NULL,
skill_name CHAR( 20 ) NOT NULL,
skill_type CHAR( 20 ) NOT NULL,
primary key( skill_id )
)
The primary key values must be unique for each row in the table, which in
this case means that you cannot have more than one row with a given
skill_id. Each row in a table is uniquely identified by its primary key.

Example 2 The following statement adds the columns skill_id and skill_type to the
primary key for the skill table:
ALTER TABLE skill
ADD PRIMARY KEY ( "skill_id", "skill_type" )
If a PRIMARY KEY clause is specified in an ALTER TABLE statement, the
table must not already have a primary key that was created by the CREATE
TABLE statement or another ALTER TABLE statement.
Example 3 The following statement removes all columns from the primary key for the
skill table. Before you delete a primary key, make sure you are aware of the
consequences in your database.
ALTER TABLE skill
DELETE PRIMARY KEY

$ For more information, see "ALTER TABLE statement" on page 392 of


the book ASA Reference, and "Managing primary keys (Sybase Central)" on
page 131.

Managing foreign keys


This section describes how to create and edit foreign keys in your database.
You can use either Sybase Central or Interactive SQL to perform these tasks.
Foreign keys are used to relate values in a child table (or foreign table) to
those in a parent table (or primary table). A table may have multiple foreign
keys that refer to multiple parent tables linking various types of information.

Managing foreign keys (Sybase Central)


In Sybase Central, the foreign key of a table is shown in the Foreign Keys
folder (located within the table container).
You cannot create a foreign key in a table if the table contains values in the
foreign key columns that cannot be matched to values in the primary table’s
primary key.

After you have created foreign keys, you can keep track of them in each
table’s Referenced By folder; this folder shows any foreign tables that
reference the currently selected table.

v To create a new foreign key in a given table (Sybase Central):


1 In the Tables folder, open a table.
2 Open the Foreign Keys folder for that table.
3 In the right panel, double-click Add Foreign Key.
4 Follow the instructions of the wizard.

v To delete a foreign key (Sybase Central):


1 In the Tables folder, open a table.
2 Open the Foreign Keys folder for that table.
3 Right-click the foreign key you want to delete and choose Delete from
the popup menu.

v To show which tables have foreign keys that reference a given table
(Sybase Central):
1 Open the desired table.
2 Open the Referenced By folder.

Tips
When you create a foreign key using the wizard, you can set properties for
the foreign key. To set or change properties after the foreign key is
created, right-click the foreign key and choose Properties from the popup
menu.
You can view the properties of a referencing table by right-clicking the
table and choosing Properties from the popup menu.

$ For more information, see "Table properties" on page 1072.

Managing foreign keys (SQL)


You can create and modify the foreign key in Interactive SQL using the
CREATE TABLE and ALTER TABLE statements. These statements let you
set many table attributes, including column constraints and checks.
A table can only have one primary key defined, but it may have as many
foreign keys as necessary.


v To modify the foreign key of an existing table (SQL):


1 Connect to the database with DBA authority.
2 Execute an ALTER TABLE statement.

Example 1 You can create a table named emp_skill, which holds a description of each
employee’s skill level for each skill in which they are qualified, as follows:

CREATE TABLE emp_skill(


emp_id INTEGER NOT NULL,
skill_id INTEGER NOT NULL,
"skill level" INTEGER NOT NULL,
PRIMARY KEY( emp_id, skill_id ),
FOREIGN KEY REFERENCES employee,
FOREIGN KEY REFERENCES skill
)
The emp_skill table definition has a primary key that consists of two
columns: the emp_id column and the skill_id column. An employee may have
more than one skill, and so appear in several rows, and several employees
may possess a given skill, so that the skill_id may appear several times.
However, there may be no more than one entry for a given employee and
skill combination.
The emp_skill table also has two foreign keys. The foreign key entries
indicate that the emp_id column must contain a valid employee number from
the employee table, and that the skill_id must contain a valid entry from the
skill table.

Example 2 You can add a foreign key called "foreignkey" to the existing table skill and
reference this foreign key to the primary key in the table contact, as follows:
ALTER TABLE skill
ADD FOREIGN KEY "foreignkey" ("skill_id")
REFERENCES "DBA"."contact" ("id")
This example creates a relationship between the skill_id column of the table
skill (the foreign table) and the id column of the table contact (the primary
table). The "DBA" signifies the owner of the table contact.
Example 3 You can specify properties for the foreign key as you create it. For example,
the following statement creates the same foreign key as in Example 2, but it
defines the foreign key as NOT NULL, along with restrictions on update and
delete operations:
ALTER TABLE skill
ADD NOT NULL FOREIGN KEY "foreignkey" ("skill_id")
REFERENCES "DBA"."contact" ("id")
ON UPDATE RESTRICT
ON DELETE RESTRICT


In Sybase Central, you can also specify properties in the foreign key creation
wizard or on the foreign key’s property sheet.
$ For more information, see "ALTER TABLE statement" on page 392 of
the book ASA Reference, and "Managing foreign keys (Sybase Central)" on
page 133.

Copying tables or columns within/between databases


With Sybase Central, you can copy existing tables or columns and insert
them into another location in the same database or into a completely different
database.
$ If you are not using Sybase Central, see one of the following locations:
♦ To insert SELECT statement results into a given location, see "SELECT
statement" on page 601 of the book ASA Reference.
♦ To insert a row or selection of rows from elsewhere in the database into
a table, see "INSERT statement" on page 554 of the book ASA
Reference.
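For example, a sketch of copying rows between tables with an INSERT from a
SELECT (skill_backup is a hypothetical table with the same columns as skill):
INSERT INTO skill_backup
SELECT skill_id, skill_name, skill_type
FROM skill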

Showing system tables


A database contains several kinds of system objects: system tables, system
views, system procedures, and system domains. System tables store
information about the database itself, while system views, procedures, and
domains largely support Sybase Transact-SQL compatibility.
All the information about tables in a database appears in the system tables.
The information is distributed among several tables.

v To show system tables (Sybase Central):


1 Open the desired server.
2 Right-click the desired connected database and choose Filter Objects by
Owner from the popup menu.
3 Select the check box beside SYS and click OK.
The system tables, system views, system procedures, and system
domains appear in their respective folders (for example, system tables
appear alongside normal tables in the Tables folder).

v To browse system tables (SQL):


1 Connect to a database.

2 Execute a SELECT statement, specifying the system table you want to


browse. The system tables are owned by the SYS user ID.

Example Show the contents of the table sys.systable in the Results pane.
SELECT *
FROM SYS.SYSTABLE

$ For more information, see "System Tables" on page 991 of the book
ASA Reference.

137
Working with views

Working with views


Views are computed tables. You can use views to show database users
exactly the information you want to present, in a format you can control.
Similarities Views are similar to the permanent tables of the database (a permanent table
between views and is also called a base table) in many ways:
base tables
♦ You can assign access permissions to views just as to base tables.
♦ You can perform SELECT queries on views.
♦ You can perform UPDATE, INSERT, and DELETE operations on some
views.
♦ You can create views based on other views.

Differences There are some differences between views and permanent tables:
between views and
permanent tables ♦ You cannot create indexes on views.
♦ You cannot perform UPDATE, INSERT, and DELETE operations on all
views.
♦ You cannot assign integrity constraints and keys to views.
♦ Views refer to the information in base tables, but do not hold copies of
that information. Views are recomputed each time you invoke them.

Benefits of tailoring Views let you tailor access to data in the database. Tailoring access serves
access several purposes:
♦ Improved security By allowing access to only the information that is
relevant.
♦ Improved usability By presenting users and application developers
with data in a more easily understood form than in the base tables.
♦ Improved consistency By centralizing in the database the definition
of common queries.

Creating views
When you browse data, a SELECT statement operates on one or more tables
and produces a result set that is also a table. Just like a base table, a result set
from a SELECT query has columns and rows. A view gives a name to a
particular query, and holds the definition in the database system tables.


Suppose you frequently need to list the number of employees in each


department. You can get this list with the following statement:
SELECT dept_ID, count(*)
FROM employee
GROUP BY dept_ID
You can create a view containing the results of this statement using either
Sybase Central or Interactive SQL.

v To create a new view (Sybase Central):


1 Connect to a database.
2 Open the Views folder for that database.
3 In the right pane, double-click Add View (Wizard).
4 Follow the instructions in the wizard. When you’re finished, the Code
Editor automatically opens.
5 Complete the code by entering the table and the columns you want to
use. For the example above, enter employee and dept_ID.
6 From the File menu of the Code Editor, choose Save/Execute in
Database.
New views appear in the Views folder.

v To create a new view (SQL):


1 Connect to a database.
2 Execute a CREATE VIEW statement.

Example Create a view called DepartmentSize that contains the results of the SELECT
statement given at the beginning of this section:
CREATE VIEW DepartmentSize AS
SELECT dept_ID, count(*)
FROM employee
GROUP BY dept_ID
Since the information in a view is not stored separately in the database,
referring to the view executes the associated SELECT statement to retrieve
the appropriate data.
On one hand, this is good because it means that if someone modifies the
employee table, the information in the DepartmentSize view is automatically
brought up to date. On the other hand, complicated SELECT statements may
increase the amount of time the database server requires to find the correct
information every time you use the view.


$ For more information, see "CREATE VIEW statement" on page 482 of


the book ASA Reference.

Using views
Restrictions on There are some restrictions on the SELECT statements you can use as views.
SELECT In particular, you cannot use an ORDER BY clause in the SELECT query. A
statements characteristic of relational tables is that there is no significance to the
ordering of the rows or columns, and using an ORDER BY clause would
impose an order on the rows of the view. You can use the GROUP BY
clause, subqueries, and joins in view definitions.
To develop a view, tune the SELECT query by itself until it provides exactly
the results you need in the format you want. Once you have the SELECT
query just right, you can add a phrase in front of the query to create the view.
For example,
CREATE VIEW viewname AS

Updating views UPDATE, INSERT, and DELETE statements are allowed on some views,
but not on others, depending on the associated SELECT statement.
You cannot update views containing aggregate functions, such as
COUNT(*). Nor can you update views containing a GROUP BY clause in
the SELECT statement, or views containing a UNION operation. In all these
cases, there is no way to translate the UPDATE into an action on the
underlying tables.
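For example, the DepartmentSize view defined earlier in this chapter uses a
GROUP BY clause and the aggregate function COUNT(*), so an update such as
the following sketch is rejected:
UPDATE DepartmentSize
SET dept_ID = 100
WHERE dept_ID = 200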
Copying views In Sybase Central, you can copy views between databases. To do so, select
the view in the right pane of Sybase Central and drag it to the Views folder
of another connected database. A new view is then created, and the original
view’s code is copied to it.
Note that only the view code is copied to the new view. The other view
properties, such as permissions, are not copied.

Using the WITH CHECK OPTION clause


Even when INSERT and UPDATE statements are allowed against a view, it
is possible that the inserted or updated rows in the underlying tables may not
meet the requirements for the view itself. In that case, the new rows do not
appear in the view, even though the INSERT or UPDATE modifies the
underlying tables.
Examples using The following set of examples illustrates the meaning and usefulness of the
the WITH CHECK WITH CHECK OPTION clause. This optional clause is the final clause in
OPTION clause the CREATE VIEW statement.

v To create a view displaying the employees in the sales department


(SQL):
♦ Type the following statements:
CREATE VIEW sales_employee
AS SELECT emp_id,
emp_fname,
emp_lname,
dept_id
FROM employee
WHERE dept_id = 200
You can list the contents of this view with the following query:
SELECT *
FROM sales_employee
The results appear in Interactive SQL as follows:

Emp_id Emp_fname Emp_lname Dept_id


129 Philip Chin 200
195 Marc Dill 200
299 Rollin Overbey 200
467 James Klobucher 200
641 Thomas Powell 200

♦ Transfer Philip Chin to the marketing department This view update


causes the entry to vanish from the view, as it no longer meets the view
selection criterion.
UPDATE sales_employee
SET dept_id = 400
WHERE emp_id = 129
♦ List all employees in the sales department Inspect the view.
SELECT *
FROM sales_employee


Emp_id Emp_fname Emp_lname Dept_id


195 Marc Dill 200
299 Rollin Overbey 200
467 James Klobucher 200
641 Thomas Powell 200
667 Mary Garcia 200

When you create a view using the WITH CHECK OPTION, any UPDATE
or INSERT statement on the view is checked to ensure that the new row
matches the view condition. If it does not, the operation causes an error and
is rejected.
The following modified sales_employee view rejects the update statement,
generating the following error message:
Invalid value for column ’dept_id’ in table ’employee’
♦ Create a view displaying the employees in the sales department
(second attempt) Use WITH CHECK OPTION this time.
CREATE VIEW sales_employee
AS SELECT emp_id, emp_fname, emp_lname, dept_id
FROM employee
WHERE dept_id = 200
WITH CHECK OPTION

The check option is If a view (say V2) is defined on the sales_employee view, any updates or
inherited inserts on V2 that cause the WITH CHECK OPTION criterion on
sales_employee to fail are rejected, even if V2 is defined without a check
option.
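For example, in the following sketch, V2 is defined without a check option of
its own, yet the update is rejected because it violates the WITH CHECK
OPTION on the underlying sales_employee view:
CREATE VIEW V2
AS SELECT emp_id, emp_fname, emp_lname, dept_id
FROM sales_employee

UPDATE V2
SET dept_id = 400
WHERE emp_id = 195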

Modifying views
You can modify a view using both Sybase Central and Interactive SQL.
When doing so, you cannot rename an existing view directly. Instead, you
must create a new view with the new name, copy the previous code to it, and
then delete the old view.
In Sybase Central, a Code Editor lets you edit the code of views, procedures,
and functions. In Interactive SQL, you can use the ALTER VIEW statement
to modify a view. The ALTER VIEW statement replaces a view definition
with a new definition, but it maintains the permissions on the view.


$ For information on altering database object properties, see "Setting


properties for database objects" on page 120.
$ For information on setting permissions, see "Granting permissions on
tables" on page 743 and "Granting permissions on views" on page 745. For
information about revoking permissions, see "Revoking user permissions" on
page 750.

v To edit a view definition (Sybase Central):


1 Open the Views folder.
2 Right-click the desired view and choose Open from the popup menu.
3 In the Code Editor, edit the view’s code.
4 To execute the code in the database, choose File➤Save View.

v To edit a view definition (SQL):


1 Connect to a database with DBA authority or as the owner of the view.
2 Execute an ALTER VIEW statement.

Example Rename the column names of the DepartmentSize view (described in the
"Creating views" on page 138 section) so that they have more informative
names.
ALTER VIEW DepartmentSize
(Dept_ID, NumEmployees)
AS
SELECT dept_ID, count(*)
FROM Employee
GROUP BY dept_ID

$ For more information, see "ALTER VIEW statement" on page 399 of


the book ASA Reference.

Deleting views
You can delete a view in both Sybase Central and Interactive SQL.

v To delete a view (Sybase Central):


1 Open the Views folder.
2 Right-click the desired view and choose Delete from the popup menu.


v To delete a view (SQL):


1 Connect to a database with DBA authority or as the owner of the view.
2 Execute a DROP VIEW statement.

Examples Remove a view called DepartmentSize.


DROP VIEW DepartmentSize

$ For more information, see "DROP statement" on page 505 of the book
ASA Reference.

Browsing the information in views


To browse the data held within the views, you can use the Interactive SQL
utility. This utility lets you execute queries to identify the data you want to
view. For more information about using these queries, see "Queries:
Selecting Data from a Table" on page 149.
If you are working in Sybase Central, you can right-click a view on which
you have permission and choose View Data from the popup menu. This
command opens Interactive SQL with the view contents showing in the
Results pane. To browse the view, Interactive SQL executes a select *
from <owner>.<view> statement.

Views in the system tables


All the information about views in a database is held in the system table
SYS.SYSTABLE. The information is presented in a more readable format in
the system view SYS.SYSVIEWS. For more information about these, see
"SYSTABLE system table" on page 1039 of the book ASA Reference, and
"SYSVIEWS system view" on page 1076 of the book ASA Reference.
You can use Interactive SQL to browse the information in these tables. Type
the following statement in the SQL Statements pane to see all the columns in
the SYS.SYSVIEWS view:
SELECT *
FROM SYS.SYSVIEWS
To extract a text file containing the definition of a specific view, use a
statement such as the following:
SELECT viewtext FROM SYS.SYSVIEWS
WHERE viewname = 'DepartmentSize';
OUTPUT TO viewtext.sql
FORMAT ASCII


Working with indexes


Performance is an important consideration when designing and creating your
database. Indexes can dramatically improve the performance of statements
that search for a specific row or a specific subset of the rows.
When to use An index provides an ordering on the columns of a table. An index is like a
indexes telephone book that initially sorts people by surname, and then sorts identical
surnames by first names. This ordering speeds up searches for phone
numbers for a particular surname, but it does not provide help in finding the
phone number at a particular address. In the same way, a database index is
useful only for searches on a specific column or columns.
The database server automatically tries to use an index when a suitable index
exists and when using one will improve performance.
Indexes get more useful as the size of the table increases. The average time
to find a phone number at a given address increases with the size of the
phone book, while it does not take much longer to find the phone number of,
say, K. Kaminski, in a large phone book than in a small phone book.
Use indexes for Indexes require extra space and may slightly reduce the performance of
frequently- statements that modify the data in the table, such as INSERT, UPDATE, and
searched columns DELETE statements. However, they can improve search performance
dramatically and are highly recommended whenever you search data
frequently.
$ For more information about performance, see "Using indexes to
improve query performance" on page 816.
If a column is already a primary key or foreign key, searches are fast on it
because Adaptive Server Anywhere automatically indexes key columns.
Thus, manually creating an index on a key column is not necessary and
generally not recommended. If a column is only part of a key, an index may
help.

When indexes on primary keys may be useful


An index on a key column may assist performance when a large number
of foreign keys reference a primary key.

Adaptive Server Anywhere automatically uses indexes to improve the


performance of any database statement whenever it can. There is no need to
refer to indexes once they are created. Also, the index is updated
automatically when rows are deleted, updated or inserted.
$ For information on altering database object properties, see "Setting
properties for database objects" on page 120.

145
Working with indexes

Creating indexes
Indexes are created on a specified table. You cannot create an index on a
view. To create an index, you can use either Sybase Central or Interactive
SQL.

v To create a new index for a given table (Sybase Central):


1 Open the Tables folder.
2 Double-click a table.
3 Open the Indexes folder for that table.
4 In the right pane, double-click Add Index.
5 Follow the instructions of the wizard.
All indexes appear in the Indexes folder of the associated table.

v To create a new index for a given table (SQL):


1 Connect to a database with DBA authority or as the owner of the table
on which the index is created.
2 Execute a CREATE INDEX statement.

Example
To speed up a search on employee surnames in the sample database, you
could create an index called EmpNames with the following statement:
CREATE INDEX EmpNames
ON employee (emp_lname, emp_fname)

$ For more information, see "CREATE INDEX statement" on page 448
of the book ASA Reference, and "Monitoring and Improving Performance"
on page 799.

Validating indexes
You can validate an index to ensure that every row referenced in the index
actually exists in the table. For foreign key indexes, a validation check also
ensures that the corresponding row exists in the primary table, and that their
hash values match. This check complements the validity checking carried out
by the VALIDATE TABLE statement.

v To validate an index (Sybase Central):


1 Connect to a database with DBA authority or as the owner of the table
on which the index is created.

Chapter 4 Working with Database Objects

2 Open the Tables folder.


3 Double-click a table.
4 Open the Indexes folder for that table.
5 Right-click the index and choose Validate from the popup menu.

v To validate an index (SQL):


1 Connect to a database with DBA authority or as the owner of the table
on which the index is created.
2 Execute a VALIDATE INDEX statement.

v To validate an index (command line):


1 Open a command prompt.
2 From a command line, run the dbvalid utility.

Examples
Validate an index called EmployeeIndex. If you supply a table name instead
of an index name, the primary key index is validated.
VALIDATE INDEX EmployeeIndex
Validate an index called EmployeeIndex. The -i switch specifies that each
object name given is an index.
dbvalid -i EmployeeIndex

$ For more information, see "VALIDATE INDEX statement" on
page 643 of the book ASA Reference, and "The Validation utility" on
page 148 of the book ASA Reference.

Deleting indexes
If an index is no longer required, you can remove it from the database in
Sybase Central or in Interactive SQL.

v To delete an index (Sybase Central):


1 For the desired table, open the Indexes folder.
2 Right-click the desired index and choose Delete from the popup menu.

v To delete an index (SQL):


1 Connect to a database with DBA authority or as the owner of the table
associated with the index.


2 Execute a DROP INDEX statement.

Example
The following statement removes the EmpNames index from the database:
DROP INDEX EmpNames
$ For more information, see "DROP statement" on page 505 of the book
ASA Reference.

Indexes in the system tables


All the information about indexes in a database is held in the system tables
SYS.SYSINDEX and SYS.SYSIXCOL. The information is presented in a
more readable format in the system view SYS.SYSINDEXES. You can use
Sybase Central or Interactive SQL to browse the information in these tables.
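For example, a query such as the following sketch lists the indexes in a database from Interactive SQL. The column names (tname, iname, colnames) follow the documented layout of the SYS.SYSINDEXES view; verify them against your version of the catalog:

```sql
-- List each index with its table and the columns it covers,
-- using the SYS.SYSINDEXES system view.
SELECT tname, iname, colnames
FROM SYS.SYSINDEXES
ORDER BY tname, iname
```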

C H A P T E R 5

Queries: Selecting Data from a Table

About this chapter The SELECT statement retrieves data from the database. You can use it to
retrieve a subset of the rows in one or more tables and to retrieve a subset of
the columns in one or more tables.
This chapter focuses on the basics of single-table SELECT statements.
Advanced uses of SELECT are described later in this manual.
Contents
Topic Page
Query overview 150
The SELECT clause: specifying columns 153
The FROM clause: specifying tables 161
The WHERE clause: specifying rows 162


Query overview
A query requests data from the database and receives the results. This
process is also known as data retrieval. All SQL queries are expressed using
the SELECT statement.

Queries are made up of clauses


You construct SELECT statements from clauses. In the following SELECT
syntax, each new line is a separate clause. Only the more common clauses
are listed here.
SELECT select-list
[ FROM table-expression ]
[ ON search-condition ]
[ WHERE search-condition ]
[ GROUP BY column-name ]
[ HAVING search-condition ]
[ ORDER BY { expression | integer } ]
The clauses in the SELECT statement are as follows:
♦ The SELECT clause specifies the columns you want to retrieve. It is the
only required clause in the SELECT statement.
♦ The FROM clause specifies the tables from which columns are pulled. It
is required in all queries that retrieve data from tables. In the current
chapter, the table-expression is a single table name. SELECT statements
without FROM clauses have a different meaning, and we ignore them in
this chapter.
♦ The ON clause specifies how tables in the FROM clause are to be
joined. It is used only for multi-table queries and is not discussed in this
chapter.
♦ The WHERE clause specifies the rows in the tables you want to see.
♦ The GROUP BY clause allows you to collect aggregate data.
♦ The HAVING clause specifies rows on which aggregate data is to be
collected.
♦ By default, rows are returned from relational databases in an order that
has no meaning. You can use the ORDER BY clause to sort the rows in
the result set.
Most of the clauses are optional, but if they are included then they must
appear in the correct order.
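As an illustration of clause order only, the following sketch uses several of these clauses together (GROUP BY, ORDER BY, and the COUNT aggregate function are described in later chapters):

```sql
-- Clauses appear in their required order: SELECT, FROM,
-- WHERE, GROUP BY, ORDER BY.
SELECT city, COUNT(*)
FROM contact
WHERE state = 'CA'
GROUP BY city
ORDER BY city
```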


$ For a complete listing of the SELECT statement syntax,
see "SELECT statement" on page 601 of the book ASA Reference.
This chapter discusses only the following set of queries:
♦ Queries with only a single table in the FROM clause. For information on
multi-table queries, see "Joins: Retrieving Data from Several Tables" on
page 195.
♦ Queries with no GROUP BY, HAVING, or ORDER BY clauses. For
information on these, see "Summarizing, Grouping, and Sorting Query
Results" on page 173.

Entering queries
In this manual, SELECT statements and other SQL statements are displayed
with each clause on a separate row, and with the SQL keywords in upper
case. This is not a requirement. You can enter SQL keywords in any case,
and you can break lines at any point.
Keywords and line breaks
For example, the following SELECT statement finds the first and last names
of contacts living in California from the Contact table.
SELECT first_name, last_name
FROM Contact
WHERE state = ’CA’
It is equally valid, though not as readable, to enter this statement as follows:
SELECT first_name,
last_name from contact
wHere state
= ’CA’

Case sensitivity of strings and identifiers
Identifiers (that is, table names, column names, and so on) are case
insensitive in Adaptive Server Anywhere databases.
Strings are case insensitive by default, so that ’CA’, ’ca’, ’cA’, and ’Ca’ are
equivalent, but if you create a database as case-sensitive then the case of
strings is significant. The sample database is case insensitive.
Qualifying identifiers
You can qualify the names of database identifiers if there is ambiguity about
which object is being referred to. For example, the sample database contains
several tables with a column called city, so you may have to qualify
references to city with the name of the table. In a larger database you may
also have to use the name of the owner of the table to identify the table.
SELECT dba.contact.city
FROM contact
WHERE state = ’CA’


Since the examples in this chapter involve single-table queries, column
names in syntax models and examples are usually not qualified with the
names of the tables or owners to which they belong.
These elements are left out for readability; it is never wrong to include
qualifiers.
The remaining sections in this chapter analyze the syntax of the SELECT
statement in more detail.


The SELECT clause: specifying columns


The select list
The select list commonly consists of a series of column names separated by
commas, or an asterisk as shorthand to represent all columns.
More generally, the select list can include one or more expressions, separated
by commas. The general syntax for the select list looks like this:
SELECT expression [, expression ]...
If any table or column name in the list does not conform to the rules for valid
identifiers, you must enclose the identifier in double quotes.
The select list expressions can include * (all columns), a list of column
names, character strings, column headings, and expressions including
arithmetic operators. You can also include aggregate functions, which are
discussed in "Summarizing, Grouping, and Sorting Query Results" on
page 173.
$ For a complete listing of what expressions can consist of, see
"Expressions" on page 230 of the book ASA Reference.
The following sections provide examples of the kinds of expressions you can
use in a select list.

Selecting all columns from a table


The asterisk (*) has a special meaning in SELECT statements. It stands for
all the column names in all the tables specified by the FROM clause. You
can use it to save typing time and errors when you want to see all the
columns in a table.
When you use SELECT *, the columns are returned in the order in which
they were defined when the table was created.
The syntax for selecting all the columns in a table is:
SELECT *
FROM table-expression
SELECT * finds all the columns currently in a table, so that changes in the
structure of a table such as adding, removing, or renaming columns
automatically modify the results of SELECT *. Listing the columns
individually gives you more precise control over the results.
Example
The following statement retrieves all columns in the department table. No
WHERE clause is included, so this statement retrieves every row in the
table:


SELECT *
FROM department
The results look like this:

dept_id dept_name dept_head_id


100 R&D 501
200 Sales 904
300 Finance 1293
400 Marketing 1576
500 Shipping 703

You get exactly the same results by listing all the column names in the table
in order after the SELECT keyword:
SELECT dept_id, dept_name, dept_head_id
FROM department
Like a column name, "*" can be qualified with a table name, as in the
following query:
SELECT department.*
FROM department

Selecting specific columns from a table


To SELECT only specific columns in a table, use this syntax:
SELECT column_name [, column_name ]...
FROM table-name
You must separate each column name from the column name that follows it
with a comma, for example:
SELECT emp_lname, emp_fname
FROM employee

Rearranging the order of columns
The order in which you list the column names determines the order in which
the columns are displayed. The two following examples show how to specify
column order in a display. Both of them find and display the department
names and identification numbers from all five of the rows in the department
table, but in a different order.
SELECT dept_id, dept_name
FROM department


dept_id dept_name
100 R&D
200 Sales
300 Finance
400 Marketing
500 Shipping

SELECT dept_name, dept_id
FROM department

dept_name dept_id
R&D 100
Sales 200
Finance 300
Marketing 400
Shipping 500

Renaming columns using aliases in query results


Query results consist of a set of columns. By default, the heading for each
column is the expression supplied in the select list.
When query results are displayed, each column’s default heading is the name
given to it when it was created. You can specify a different column heading,
or alias, in one of the following ways:
SELECT column-name AS alias
SELECT column-name alias
SELECT alias = column-name
Providing an alias can produce more readable results. For example, you can
change dept_name to Department in a listing of departments as follows:
SELECT dept_name AS Department,
dept_id AS "Identifying Number"
FROM department


Department Identifying Number


R&D 100
Sales 200
Finance 300
Marketing 400
Shipping 500

Using spaces and keywords in aliases
The Identifying Number alias for dept_id is enclosed in double quotes
because it is an identifier. You also use double quotes if you wish to use
keywords in aliases. For example, the following query is invalid without the
quotation marks:
SELECT dept_name AS Department,
dept_id AS "integer"
FROM department
If you wish to ensure compatibility with Adaptive Server Enterprise, you
should use quoted aliases of 30 bytes or less.

Character strings in query results


The SELECT statements you have seen so far produce results that consist
solely of data from the tables in the FROM clause. Strings of characters can
also be displayed in query results by enclosing them in single quotation
marks and separating them from other elements in the select list with
commas. To enclose a quotation mark in a string, precede it with another
quotation mark.
For example:
SELECT ’The department’’s name is’ AS " ",
Department = dept_name
FROM department

Department
The department’s name is R&D
The department’s name is Sales
The department’s name is Finance
The department’s name is Marketing
The department’s name is Shipping


Computing values in the select list


The expressions in the select list can be more complicated than just column
names or strings. For example, you can perform computations with data from
numeric columns in a select list.
Arithmetic operations
To illustrate the numeric operations you can carry out in the select list, we
start with a listing of the names, quantity in stock, and unit price of products
in the sample database. The number of zeroes in the unit_price column is
truncated for readability.
SELECT name, quantity, unit_price
FROM product

name quantity unit_price


Tee Shirt 28 9.00
Tee Shirt 54 14.00
Tee Shirt 75 14.00
Baseball Cap 112 9.00
Baseball Cap 12 10.00
Visor 36 7.00
Visor 28 7.00
Sweatshirt 39 24.00
Sweatshirt 32 24.00
Shorts 80 15.00

Suppose the practice is to replenish the stock of a product when there are ten
items left in stock. The following query lists the number of each product that
must be sold before re-ordering:
SELECT name, quantity - 10
AS "Sell before reorder"
FROM product

name Sell before reorder


Tee Shirt 18
Tee Shirt 44
Tee Shirt 65
Baseball Cap 102
Baseball Cap 2

157
The SELECT clause: specifying columns

name Sell before reorder


Visor 26
Visor 18
Sweatshirt 29
Sweatshirt 22
Shorts 70

You can also combine the values in columns. The following query lists the
total value of each product in stock:
SELECT name,
quantity * unit_price AS "Inventory value"
FROM product

name Inventory value


Tee Shirt 252.00
Tee Shirt 756.00
Tee Shirt 1050.00
Baseball Cap 1008.00
Baseball Cap 120.00
Visor 252.00
Visor 196.00
Sweatshirt 936.00
Sweatshirt 768.00
Shorts 1200.00

Arithmetic operator precedence
When there is more than one arithmetic operator in an expression,
multiplication, division, and modulo are calculated first, followed by
subtraction and addition. When all arithmetic operators in an expression have
the same level of precedence, the order of execution is left to right.
Expressions within parentheses take precedence over all other operations.
For example, the following SELECT statement calculates the total value of
each product in inventory, and then subtracts five dollars from that value.
SELECT name, quantity * unit_price - 5
FROM product
To avoid misunderstandings, it is recommended that you use parentheses.
The following query has the same meaning and gives the same results as the
previous one, but some may find it easier to understand:


SELECT name, ( quantity * unit_price ) - 5
FROM product
$ For more information on operator precedence, see "Operator
precedence" on page 228 of the book ASA Reference.
String operations
You can concatenate strings using a string concatenation operator. You can
use either || (SQL/92 compliant) or + (supported by Adaptive Server
Enterprise) as the concatenation operator.
The following example illustrates the use of the string concatenation operator
in the select list:
SELECT emp_id, emp_fname || ’ ’ || emp_lname AS Name
FROM employee

emp_id Name
102 Fran Whitney
105 Matthew Cobb
129 Philip Chin
148 Julie Jordan
... ...
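Because + is also supported for Adaptive Server Enterprise compatibility, the same result can be produced with this variant:

```sql
-- Same query as above, using the Transact-SQL + operator
-- instead of the SQL/92 || operator.
SELECT emp_id, emp_fname + ' ' + emp_lname AS Name
FROM employee
```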

Date and time operations
Although you can use operators on date and time columns, this typically
involves the use of functions. For information on SQL functions, see "SQL
Functions" on page 303 of the book ASA Reference.

Eliminating duplicate query results


The optional DISTINCT keyword eliminates duplicate rows from the results
of a SELECT statement.
If you do not specify DISTINCT, you get all rows, including duplicates.
Optionally, you can specify ALL before the select list to get all rows. For
compatibility with other implementations of SQL, Adaptive Server syntax
allows the use of ALL to explicitly ask for all rows. ALL is the default.
For example, if you search for all the cities in the contact table without
DISTINCT, you get 60 rows:
SELECT city
FROM contact
You can eliminate the duplicate entries using DISTINCT. The following
query returns only 16 rows:
SELECT DISTINCT city


FROM contact

NULL values are not distinct
The DISTINCT keyword treats NULL values as duplicates of each other. In
other words, when DISTINCT is included in a SELECT statement, only one
NULL is returned in the results, no matter how many NULL values are
encountered.
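For example, assuming several rows of the department table have a NULL dept_head_id (as in the departments-without-managers example later in this chapter), the following sketch returns a single NULL row for all of them:

```sql
-- However many departments have no head, only one
-- (NULL) row appears in the result.
SELECT DISTINCT dept_head_id
FROM department
```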


The FROM clause: specifying tables


The FROM clause is required in every SELECT statement involving data
from tables or views.
$ The FROM clause can include JOIN conditions linking two or more
tables, and can include joins to other queries (derived tables). For
information on these features, see "Joins: Retrieving Data from Several
Tables" on page 195.
Qualifying table names
In the FROM clause, the full naming syntax for tables and views is always
permitted, such as:
SELECT select-list
FROM owner.table_name
Qualifying table and view names is necessary only when there might be
some confusion about the name.
Using correlation names
You can give table names correlation names to save typing. You assign the
correlation name in the FROM clause by entering it after the table name, like
this:
SELECT d.dept_id, d.dept_name
FROM Department d
All other references to the Department table, for example in a WHERE
clause, must use the correlation name. Correlation names must conform to
the rules for valid identifiers.
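For example, once the correlation name d is assigned, a WHERE clause must use it as well:

```sql
-- The correlation name d replaces the table name Department
-- everywhere else in the statement.
SELECT d.dept_id, d.dept_name
FROM Department d
WHERE d.dept_id = 100
```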


The WHERE clause: specifying rows


The WHERE clause in a SELECT statement specifies the search conditions
for exactly which rows are retrieved. The general format is:
SELECT select_list
FROM table_list
WHERE search-condition
Search conditions (also called qualifications or predicates) in the WHERE
clause include the following:
♦ Comparison operators (=, <, >, and so on) For example, you can list
all employees earning more than $50,000:
SELECT emp_lname
FROM employee
WHERE salary > 50000
♦ Ranges (BETWEEN and NOT BETWEEN) For example, you can list
all employees earning between $40,000 and $60,000:
SELECT emp_lname
FROM employee
WHERE salary BETWEEN 40000 AND 60000
♦ Lists (IN, NOT IN) For example, you can list all customers in Ontario,
Quebec, or Manitoba:
SELECT company_name , state
FROM customer
WHERE state IN( ’ON’, ’PQ’, ’MB’)
♦ Character matches (LIKE and NOT LIKE) For example, you can list
all customers whose phone numbers start with 415. (The phone number
is stored as a string in the database):
SELECT company_name , phone
FROM customer
WHERE phone LIKE ’415%’
♦ Unknown values (IS NULL and IS NOT NULL) For example, you
can list all departments with managers:
SELECT dept_name
FROM Department
WHERE dept_head_id IS NOT NULL
♦ Combinations (AND, OR) For example, you can list all employees
earning over $50,000 whose first name begins with the letter A.
SELECT emp_fname, emp_lname
FROM employee
WHERE salary > 50000


AND emp_fname like ’A%’

In addition, the WHERE keyword can introduce the following:


♦ Transact-SQL join conditions Joins are discussed in "Joins:
Retrieving Data from Several Tables" on page 195.
$ The following sections describe how to use WHERE clauses. For a
complete listing of search conditions, see "Search conditions" on page 239 of
the book ASA Reference.

Using comparison operators in the WHERE clause


You can use comparison operators in the WHERE clause. The operators
follow the syntax:
WHERE expression comparison-operator expression
$ For a listing of comparison operators, see "Comparison operators" on
page 225 of the book ASA Reference. For a description of what an expression
can be, see "Expressions" on page 230 of the book ASA Reference.
Notes on comparisons
♦ Sort orders In comparing character data, < means earlier in the sort
order and > means later in the sort order. The sort order is determined by
the collation chosen when the database is created. You can find out the
collation by running the dbinfo command-line utility against the
database:
dbinfo -c "uid=dba;pwd=sql"
You can also find the collation from Sybase Central. It is on the
Extended Information tab of the database property sheet.
♦ Trailing blanks When you create a database, you indicate whether
trailing blanks are to be ignored or not for the purposes of comparison.
By default, databases are created with trailing blanks not ignored. For
example, ’Dirk’ is not the same as ’Dirk ’. You can create databases with
blank padding, so that trailing blanks are ignored. Trailing blanks are
ignored by default in Adaptive Server Enterprise databases.
♦ Comparing dates In comparing dates, < means earlier and > means
later.
♦ Case sensitivity When you create a database, you indicate whether
string comparisons are case sensitive or not.
By default, databases are created case insensitive. For example, ’Dirk’ is
the same as ’DIRK’. You can create databases to be case sensitive, which
is the default behavior for Adaptive Server Enterprise databases.


Here are some SELECT statements using comparison operators:


SELECT *
FROM product
WHERE quantity < 20
SELECT E.emp_lname, E.emp_fname
FROM employee E
WHERE emp_lname > ’McBadden’
SELECT id, phone
FROM contact
WHERE state != ’CA’

The NOT operator
The NOT operator negates an expression. Either of the following two queries
will find all Tee shirts and baseball caps that cost $10 or less. However, note
the difference in position between the negative logical operator (NOT) and
the negative comparison operator (!>).
SELECT id, name, quantity
FROM product
WHERE (name = ’Tee Shirt’ OR name = ’BaseBall Cap’)
AND NOT unit_price > 10
SELECT id, name, quantity
FROM product
WHERE (name = ’Tee Shirt’ OR name = ’BaseBall Cap’)
AND unit_price !> 10

Using ranges (between and not between) in the WHERE clause


The BETWEEN keyword specifies an inclusive range: rows matching the
lower value, the upper value, or any value between them are returned.

v To list all the products with prices between $10 and $15, inclusive:
♦ Enter the following query:
SELECT name, unit_price
FROM product
WHERE unit_price BETWEEN 10 AND 15

name unit_price
Tee Shirt 14.00
Tee Shirt 14.00
Baseball Cap 10.00
Shorts 15.00


You can use NOT BETWEEN to find all the rows that are not inside the
range.

v To list all the products cheaper than $10 or more expensive than
$15:
♦ Enter the following query:
SELECT name, unit_price
FROM product
WHERE unit_price NOT BETWEEN 10 AND 15

name unit_price
Tee Shirt 9.00
Tee Shirt 9.00
Visor 7.00
Visor 7.00
Sweatshirt 24.00
Sweatshirt 24.00

Using lists in the WHERE clause


The IN keyword allows you to select values that match any one of a list of
values. The expression can be a constant or a column name, and the list can
be a set of constants or, more commonly, a subquery.
For example, without IN, if you want a list of the names and states of all the
customers located in Ontario, Manitoba, or Quebec, you can type this query:
SELECT company_name , state
FROM customer
WHERE state = ’ON’ OR state = ’MB’ OR state = ’PQ’
However, you get the same results if you use IN. The items following the IN
keyword must be separated by commas and enclosed in parentheses. Put
single quotes around character, date, or time values. For example:
SELECT company_name , state
FROM customer
WHERE state IN( ’ON’, ’MB’, ’PQ’)
Perhaps the most important use for the IN keyword is in nested queries, also
called subqueries.
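As a preview, a subquery can supply the IN list. The following sketch assumes, as in the sample database, that the sales_order table records a cust_id for each order; it lists customers with at least one order:

```sql
-- A sketch of IN with a subquery: the inner SELECT produces
-- the list of values that the outer WHERE clause tests against.
SELECT company_name
FROM customer
WHERE id IN ( SELECT cust_id FROM sales_order )
```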


Matching character strings in the WHERE clause


The LIKE keyword indicates that the following character string is a matching
pattern. LIKE is used with character, binary, or date and time data.
The syntax for like is:
{ WHERE | HAVING } expression [ NOT ] LIKE match-expression
The expression to be matched is compared to a match-expression that can
include these special symbols:

Symbols Meaning
% Matches any string of 0 or more characters
_ Matches any one character
[specifier] The specifier in the brackets may take the following forms:
♦ Range A range is of the form rangespec1-rangespec2, where
rangespec1 indicates the start of a range of characters, the hyphen
indicates a range, and rangespec2 indicates the end of a range of
characters
♦ Set A set can be composed of any discrete set of values, in any order.
For example, [a2bR].
Note that the range [a-f], and the sets [abcdef] and [fcbdae] return the same
set of values.
[^specifier] The caret symbol (^) preceding a specifier indicates non-inclusion. [^a-f]
means not in the range a-f; [^a2bR] means not a, 2, b, or R.

You can match the column data to constants, variables, or other columns that
contain the wildcard characters shown in the table. When using constants,
you should enclose the match strings and character strings in single quotes.
Examples
All the following examples use LIKE with the last_name column in the
Contact table. Queries are of the form:
SELECT last_name
FROM contact
WHERE last_name LIKE match-expression
The first example would be entered as
SELECT last_name
FROM contact
WHERE last_name LIKE ’Mc%’

166
Chapter 5 Queries: Selecting Data from a Table

Match expression    Description                                     Returns
’Mc%’               Search for every name that begins with          McEvoy
                    the letters Mc
’%er’               Search for every name that ends with er         Brier, Miller,
                                                                    Weaver, Rayner
’%en%’              Search for every name containing the            Pettengill,
                    letters en                                      Lencki, Cohen
’_ish’              Search for every four-letter name ending        Fish
                    in ish
’Br[iy][ae]r’       Search for Brier, Bryer, Briar, or Bryar        Brier
’[M-Z]owell’        Search for all names ending with owell          Powell
                    that begin with a single letter in the
                    range M to Z
’M[^c]%’            Search for all names beginning with M           Moore, Mulley,
                    that do not have c as the second letter         Miller, Masalsky

Wildcards require LIKE
Wildcard characters used without LIKE are interpreted as literals rather than
as a pattern: they represent exactly their own values. The following query
attempts to find any phone numbers that consist of the four characters 415%
only. It does not find phone numbers that start with 415.
SELECT phone
FROM Contact
WHERE phone = ’415%’

Using LIKE with date and time values
You can use LIKE on date and time fields as well as on character data. When
you use LIKE with date and time values, the dates are converted to the
standard DATETIME format, and then to VARCHAR.
One feature of using LIKE when searching for DATETIME values is that,
since date and time entries may contain a variety of date parts, an equality
test has to be written carefully in order to succeed.
For example, if you insert the value 9:20 and the current date into a column
named arrival_time, the clause:
WHERE arrival_time = ’9:20’
fails to find the value, because the entry holds the date as well as the time.
However, the clause below would find the 9:20 value:
WHERE arrival_time LIKE ’%9:20%’

Using NOT LIKE
With NOT LIKE, you can use the same wildcard characters that you can use
with LIKE. To find all the phone numbers in the Contact table that do not
have 415 as the area code, you can use either of these queries:


SELECT phone
FROM Contact
WHERE phone NOT LIKE ’415%’
SELECT phone
FROM Contact
WHERE NOT phone LIKE ’415%’

Character strings and quotation marks


When you enter or search for character and date data, you must enclose it in
single quotation marks, as in the following example.
SELECT first_name, last_name
FROM contact
WHERE first_name = ’John’
If the quoted_identifier database option is set to OFF (it is ON by default),
you can also use double quotes around character or date data.

v To set the quoted_identifier option off for the current user ID:
♦ Type the following command:
SET OPTION quoted_identifier = ’OFF’

The quoted_identifier option is provided for compatibility with Adaptive
Server Enterprise. By default, the Adaptive Server Enterprise option is
quoted_identifier OFF and the Adaptive Server Anywhere option is
quoted_identifier ON.

Quotation marks in strings
There are two ways to specify literal quotations within a character entry. The
first method is to use two consecutive quotation marks. For example, if you
have begun a character entry with a single quotation mark and want to
include a single quotation mark as part of the entry, use two single quotation
marks:
With double quotation marks (quoted_identifier OFF):
"He said, ""It is not really confusing."""
The second method, applicable only with quoted_identifier OFF, is to enclose
a quotation in the other kind of quotation mark. In other words, surround an
entry containing double quotation marks with single quotation marks, or vice
versa. Here are some examples:
’George said, "There must be a better way."’
"Isn’t there a better way?"
’George asked, "Isn’’t there a better way?"’


Unknown Values: NULL


A NULL in a column means that the user or application has made no entry in
that column. A data value for the column is unknown or not available.
NULL does not mean the same as zero (numerical values) or blank
(character values). Rather, NULL values allow you to distinguish between a
deliberate entry of zero for numeric columns or blank for character columns
and a non-entry, which is NULL for both numeric and character columns.
Entering NULL
NULL can be entered in a column where NULL values are permitted, as
specified in the CREATE TABLE statement, in two ways:
♦ Default If no data is entered, and the column has no other default
setting, NULL is entered.
♦ Explicit entry You can explicitly enter the value NULL by typing the
word NULL (without quotation marks).
If the word NULL is typed in a character column with quotation marks,
it is treated as data, not as a null value.
For example, the dept_head_id column of the department table allows nulls.
You can enter two rows for departments with no manager as follows:
INSERT INTO department (dept_id, dept_name)
VALUES (201, ’Eastern Sales’)
INSERT INTO department
VALUES (202, ’Western Sales’, null)

When NULLs are retrieved
When NULLs are retrieved, displays of query results in Interactive SQL
show (NULL) in the appropriate position:
SELECT *
FROM department

dept_id dept_name dept_head_id


100 R&D 501
200 Sales 904
300 Finance 1293
400 Marketing 1576
500 Shipping 703
201 Eastern Sales (NULL)
202 Western Sales (NULL)


Testing a column for NULL


You can use IS NULL in search conditions to compare column values to
NULL and to select them or perform a particular action based on the results
of the comparison. Only columns that return a value of TRUE are selected or
result in the specified action; those that return FALSE or UNKNOWN do
not.
The following example selects only rows for which unit_price is less than
$15 or is NULL:
SELECT quantity , unit_price
FROM product
WHERE unit_price < 15
OR unit_price IS NULL
The result of comparing any value to NULL is UNKNOWN, since it is not
possible to determine whether NULL is equal (or not equal) to a given value
or to another NULL.
Some conditions can never return true, so queries using these conditions
return no rows. For example, the following comparison
can never be determined to be true, since NULL means having an unknown
value:
WHERE column1 > NULL
This logic also applies when you use two column names in a WHERE
clause, that is, when you join two tables. A clause containing the condition
WHERE column1 = column2
does not return rows where the columns contain NULL.
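If rows where both columns are NULL should match, you can add an explicit test, as in the following sketch (t1 and t2 are hypothetical tables):

```sql
-- The plain equality join skips rows where either column is NULL;
-- the IS NULL tests bring back the rows where both are NULL.
SELECT *
FROM t1, t2
WHERE t1.column1 = t2.column2
OR ( t1.column1 IS NULL AND t2.column2 IS NULL )
```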
You can also find NULL or non-NULL with this pattern:
WHERE column_name IS [NOT] NULL
For example:
WHERE advance < $5000
OR advance IS NULL
$ For more information, see "NULL value" on page 260 of the book ASA
Reference.

Properties of NULL
The following list expands on the properties of NULL.
♦ The difference between FALSE and UNKNOWN Although neither
FALSE nor UNKNOWN returns values, there is an important logical
difference between FALSE and UNKNOWN, because the opposite of
false ("not false") is true. For example,

1 = 2
evaluates to false and its opposite,
1 != 2
evaluates to true. But "not unknown" is still unknown. If null values are
included in a comparison, you cannot negate the expression to get the
opposite set of rows or the opposite truth value.
♦ Substituting a value for NULLs Use the ISNULL built-in function to
substitute a particular value for nulls. The substitution is made only for
display purposes; actual column values are not affected. The syntax is:
ISNULL( expression, value )
For example, use the following statement to select all the rows from test,
and display all the null values in column t1 with the value unknown.
SELECT ISNULL(t1, 'unknown')
FROM test
♦ Expressions that evaluate to NULL An expression with an arithmetic
or bitwise operator evaluates to NULL if any of the operands are null.
For example:
1 + column1
evaluates to NULL if column1 is NULL.
♦ Concatenating strings and NULL If you concatenate a string and
NULL, the expression evaluates to the string. For example:
SELECT 'abc' || NULL || 'def'
returns the string abcdef.
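The first property above can be sketched against the sample product table (a sketch only; it assumes unit_price permits NULLs):

```sql
-- Neither query returns rows where unit_price is NULL: the comparison
-- evaluates to UNKNOWN, and NOT UNKNOWN is still UNKNOWN.
SELECT id, unit_price FROM product WHERE unit_price < 15;
SELECT id, unit_price FROM product WHERE NOT ( unit_price < 15 );

-- To obtain the true complement of the first result set, test for
-- NULL explicitly:
SELECT id, unit_price FROM product
WHERE NOT ( unit_price < 15 ) OR unit_price IS NULL;
```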

Connecting conditions with logical operators


The logical operators AND, OR, and NOT are used to connect search
conditions in WHERE clauses.
Using AND The AND operator joins two or more conditions and returns results only
when all of the conditions are true. For example, the following query finds
only the rows in which the contact’s last name is Purcell and the contact’s
first name is Beth. It does not find the row for Beth Glassmann.
SELECT *
FROM contact
WHERE first_name = 'Beth'
AND last_name = 'Purcell'


Using OR The OR operator also connects two or more conditions, but it returns results
when any of the conditions is true. The following query searches for rows
containing variants of Elizabeth in the first_name column.
SELECT *
FROM contact
WHERE first_name = 'Beth'
OR first_name = 'Liz'

Using NOT The NOT operator negates the expression that follows it. The following
query lists all the contacts who do not live in California:
SELECT *
FROM contact
WHERE NOT state = 'CA'
When more than one logical operator is used in a statement, AND operators
are normally evaluated before OR operators. You can change the order of
execution with parentheses. For example:
SELECT *
FROM contact
WHERE ( city = 'Lexington'
OR city = 'Burlington' )
AND state = ’MA’

CHAPTER 6

Summarizing, Grouping, and Sorting Query Results

About this chapter Aggregate functions display summaries of the values in specified columns.
You can also use the GROUP BY clause, HAVING clause, and ORDER BY
clause to group and sort the results of queries using aggregate functions, and
the UNION operator to combine the results of queries.
This chapter describes how to group and sort query results.
Contents
Topic Page
Summarizing query results using aggregate functions 174
The GROUP BY clause: organizing query results into groups 178
Understanding GROUP BY 180
The HAVING clause: selecting groups of data 184
The ORDER BY clause: sorting query results 187
The UNION operation: combining queries 190
Standards and compatibility 192


Summarizing query results using aggregate functions
You can apply aggregate functions to all the rows in a table, to a subset of
the table specified by a WHERE clause, or to one or more groups of rows in
the table. From each set of rows to which an aggregate function is applied,
Adaptive Server Anywhere generates a single value.
Example
v To calculate the total payroll from the annual salaries in the employee table:
♦ Enter the following query:
SELECT SUM(salary)
FROM employee

To use the aggregate functions, you must give the function name followed by
an expression on whose values it will operate. The expression, which is the
salary column in this example, is the function’s argument and must be
specified inside parentheses.
The following aggregate functions are available:
♦ avg (expression ) The mean of the supplied expression over the
returned rows.
♦ count ( expression ) The number of rows in the supplied group where
the expression is not NULL.
♦ count(*) The number of rows in each group.
♦ list (string-expr) A string containing a comma-separated list
composed of all the values for string-expr in each group of rows.
♦ max (expression ) The maximum value of the expression, over the
returned rows.
♦ min (expression ) The minimum value of the expression, over the
returned rows.
♦ sum(expression ) The sum of the expression, over the returned rows.
You can use the optional keyword DISTINCT with AVG, SUM, LIST, and
COUNT to eliminate duplicate values before the aggregate function is
applied.
The expression to which the syntax statement refers is usually a column
name. It can also be a more general expression.


For example, with this statement you can find what the average price of all
products would be if one dollar were added to each price:
SELECT AVG (unit_price + 1)
FROM product

Where you can use aggregate functions


The aggregate functions can be used in a select list, as in the previous
examples, or in the HAVING clause of a select statement that includes a
GROUP BY clause.
$ For information about the HAVING clause, see "The HAVING clause:
selecting groups of data" on page 184.
You cannot use aggregate functions in a WHERE clause or in a JOIN
condition. However, a SELECT statement with aggregate functions in its
select list often includes a WHERE clause that restricts the rows to which the
aggregate is applied.
If a SELECT statement includes a WHERE clause, but not a GROUP BY
clause, an aggregate function produces a single value for the subset of rows
that the WHERE clause specifies.
Whenever an aggregate function is used in a SELECT statement that does
not include a GROUP BY clause, it produces a single value, called a scalar
aggregate. This is true whether it is operating on all the rows in a table or on
a subset of rows defined by a where clause.
You can use more than one aggregate function in the same select list, and
produce more than one scalar aggregate in a single SELECT statement.
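For example, the following statement (a sketch against the sample product table) produces three scalar aggregates at once:

```sql
SELECT MIN(unit_price), MAX(unit_price), COUNT(*)
FROM product
WHERE name LIKE '%shirt%'
```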

Aggregate functions and data types


There are some aggregate functions that have meaning only for certain kinds
of data. For example, you can use SUM and AVG with numeric columns
only.
However, you can use MIN to find the lowest value—the one closest to the
beginning of the alphabet—in a character type column:
SELECT MIN(last_name)
FROM contact


Using count (*)


The COUNT(*) function does not require an expression as an argument
because, by definition, it does not use information about any particular
column. The COUNT(*) function finds the total number of rows in a table.
This statement finds the total number of employees:
SELECT COUNT(*)
FROM employee
COUNT(*) returns the number of rows in the specified table without
eliminating duplicates. It counts each row separately, including rows that
contain NULL.
As with other aggregate functions, you can combine COUNT(*) with other
aggregates in the select list, with WHERE clauses, and so on:
SELECT count(*), AVG(unit_price)
FROM product
WHERE unit_price > 10

count(*) avg(unit_price)
5 18.200

Using aggregate functions with DISTINCT


The DISTINCT keyword is optional with SUM, AVG, and COUNT. When
you use DISTINCT, duplicate values are eliminated before calculating the
sum, average, or count.
For example, to find the number of different cities in which there are
contacts, type:
SELECT count(DISTINCT city)
FROM contact

count(distinct city)
16

Aggregate functions and NULL


Any NULLs in the column on which the aggregate function is operating are
ignored by the function. The exception is COUNT(*), which includes
them. If all the values in a column are NULL, COUNT(column_name)
returns 0.

If no rows meet the conditions specified in the WHERE clause, COUNT
returns a value of 0. The other functions all return NULL. Here are
examples:
SELECT COUNT (DISTINCT name)
FROM product
WHERE unit_price > 50

count(DISTINCT name)
0

SELECT AVG(unit_price)
FROM product
WHERE unit_price > 50

AVG ( unit_price)
( NULL )


The GROUP BY clause: organizing query results into groups
The GROUP BY clause divides the output of a table into groups. You can
group by one or more column names, or by the results of computed
expressions using numeric data types.
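For example, the following sketch groups products by a computed expression rather than by a simple column (the expression chosen here is purely illustrative):

```sql
-- One group for each value of quantity / 10; the grouped expression
-- also appears in the select list.
SELECT quantity / 10, COUNT(*)
FROM product
GROUP BY quantity / 10
```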

Using GROUP BY with aggregate functions


A GROUP BY clause almost always appears in statements that include
aggregate functions, in which case the aggregate produces a value for each
group. These values are called vector aggregates. (Remember that a scalar
aggregate is a single value produced by an aggregate function without a
GROUP BY clause.)
Example
v To list the average price of each kind of product:
♦ Enter the following command:
SELECT name, AVG(unit_price) AS Price
FROM product
GROUP BY name

name Price
Baseball Cap 9.500
Shorts 15.000
Sweatshirt 24.000
Tee Shirt 12.333
Visor 7.000

The summary values (vector aggregates) produced by SELECT statements
with aggregates and a GROUP BY appear as columns in each row of the
results. By contrast, the summary values (scalar aggregates) produced by
queries with aggregates and no GROUP BY also appear as columns, but with
only one row. For example:
SELECT AVG(unit_price)
FROM product

AVG(unit_price)
13.300000


Understanding GROUP BY
Understanding which queries are valid and which are not can be difficult
when the query involves a GROUP BY clause. This section describes a way
to think about queries with GROUP BY so that you may understand the
results and the validity of queries better.

How queries with GROUP BY are executed


Consider a single-table query of the following form:
SELECT select-list
FROM table
WHERE where-search-condition
GROUP BY group-by-expression
HAVING having-search-condition
This query can be thought of as being executed in the following manner:
1 Apply the WHERE clause This generates an intermediate result that
contains only some of the rows of the table.


2 Partition the result into groups This action generates an intermediate
result with one row for each group as dictated by the GROUP BY
clause. Each generated row contains the group-by-expression for each
group, and the computed aggregate functions in the select-list and
having-search-condition.

180
Chapter 6 Summarizing, Grouping, and Sorting Query Results


3 Apply the HAVING clause Any rows from this second intermediate
result that do not meet the criteria of the HAVING clause are removed at
this point.
4 Project out the results to display This action takes from step 3 only
those columns that need to be displayed in the result set of the query –
that is, it takes only those columns corresponding to the expressions
from the select-list.


This process imposes the following requirements on queries with a GROUP BY clause:
♦ The WHERE clause is evaluated first. Therefore, any aggregate
functions are evaluated only over those rows that satisfy the WHERE
clause.
♦ The final result set is built from the second intermediate result, which
holds the partitioned rows. The second intermediate result holds rows
corresponding to the group-by-expression. Therefore, if an expression
that is not an aggregate function appears in the select-list, then it must
also appear in the group-by-expression. No function evaluation can be
carried out during the projection step.
♦ An expression can be included in the group-by-expression but not in the
select-list. It is projected out in the result.


GROUP BY with multiple columns


You can list more than one expression in the GROUP BY clause in order to
nest groups—that is, you can group a table by any combination of
expressions.

v To list the average price of products, grouped first by name and then by size:
♦ Enter the following command:
SELECT name, size, AVG(unit_price)
FROM product
GROUP BY name, size

name size AVG(unit_price)


Tee Shirt Small 9.000
Tee Shirt Medium 14.000
Tee Shirt One size fits all 14.000
Baseball Cap One size fits all 9.500
Visor One size fits all 7.000
Sweatshirt Large 24.000
Shorts Medium 15.000

Columns in GROUP BY that are not in the select list: A Sybase extension to the
SQL/92 standard, supported by both Adaptive Server Enterprise and
Adaptive Server Anywhere, allows expressions in the GROUP BY clause
that are not in the select list. For example, the following query lists the
number of contacts in each city:
SELECT state, count(id)
FROM contact
GROUP BY state, city

WHERE clause and GROUP BY


You can use a WHERE clause in a statement with GROUP BY. The
WHERE clause is evaluated before the GROUP BY clause. Rows that do not
satisfy the conditions in the WHERE clause are eliminated before any
grouping is done. Here is an example:
SELECT name, AVG(unit_price)
FROM product
WHERE id > 400
GROUP BY name


Only the rows with id values of more than 400 are included in the groups that
are used to produce the query results.

An example
The following query illustrates the use of WHERE, GROUP BY, and
HAVING clauses in one query:
SELECT name, SUM(quantity)
FROM product
WHERE name LIKE '%shirt%'
GROUP BY name
HAVING SUM(quantity) > 100

name SUM(quantity)
Tee Shirt 157

In this example:
1 The WHERE clause includes only rows that have a name including the
word shirt (Tee Shirt, Sweatshirt).
2 The GROUP BY clause collects the rows with a common name.
3 The SUM aggregate calculates the total quantity of products available
for each group.
4 The HAVING clause excludes from the final results the groups whose
inventory totals do not exceed 100.


The HAVING clause: selecting groups of data


The HAVING clause restricts the rows returned by a query. It sets conditions
for the GROUP BY clause similar to the way in which WHERE sets
conditions for the SELECT clause.
The HAVING clause search conditions are identical to WHERE search
conditions except that WHERE search conditions cannot include aggregates,
while HAVING search conditions often do. The example below is legal:
HAVING AVG(unit_price) > 20
But this example is not:
WHERE AVG(unit_price) > 20

Using HAVING with aggregate functions: This statement is an example of a simple use of the HAVING clause with an aggregate function.
v To list those products available in more than one size or color:
♦ You need a query to group the rows in the product table by name, but
eliminate the groups that include only one distinct product:
SELECT name
FROM product
GROUP BY name
HAVING COUNT(*) > 1

name
Baseball Cap
Sweatshirt
Tee Shirt
Visor

Using HAVING without aggregate functions: The HAVING clause can also be used without aggregates.

v To list all product names that start with the letter B:
♦ The following query groups the products, and then restricts the result set
to only those groups for which the name starts with B.
SELECT name
FROM product
GROUP BY name
HAVING name LIKE 'B%'


name
Baseball Cap

More than one condition in HAVING: More than one condition can be included in the HAVING clause. Conditions are combined with the AND, OR, or NOT operators, as the following example shows.

v To list those products available in more than one size or color, for
which one version costs more than $10:
♦ You need a query to group the rows in the product table by name, but
eliminate the groups that include only one distinct product, and
eliminate those groups for which the maximum unit price is under $10:
SELECT name
FROM product
GROUP BY name
HAVING COUNT(*) > 1
AND MAX(unit_price) > 10

name
Sweatshirt
Tee Shirt

SQL extension: Some of the previous HAVING examples adhere to the SQL/92 standard,
which specifies that expressions in a HAVING clause must have a single
value, and must be in the select list or GROUP BY clause. However,
Adaptive Server Anywhere and Adaptive Server Enterprise support
extensions to HAVING that allow aggregate functions that are not in the
select list and not in the GROUP BY clause, as in the previous example.
Outer references in aggregate functions: A column reference in an aggregate function is called an outer reference. When an aggregate function contains an outer reference, the aggregate function must appear in a subquery of a HAVING clause. The following example illustrates this case.

v To list those products shipped during the year 1993, for which the
maximum shipped quantity is greater than the available quantity of
that product:
♦ Enter the following query:


SELECT p.id, p.name
FROM product p, sales_order_items s
WHERE p.id = s.product_id
AND ship_date >= '1993-01-01'
AND ship_date <= '1993-12-31'
GROUP BY p.id, p.name
HAVING EXISTS
( SELECT *
FROM product p
WHERE MAX(s.quantity) > p.quantity
AND p.id = s.product_id )

id name
301 Tee Shirt
301 Tee Shirt
401 Baseball Cap
500 Visor
501 Visor
600 Sweatshirt
601 Sweatshirt


The ORDER BY clause: sorting query results


The ORDER BY clause allows sorting of query results by one or more
columns. Each sort can be ascending (ASC) or descending (DESC). If
neither is specified, ASC is assumed.
A simple example The following query returns results ordered by name:
SELECT id, name
FROM product
ORDER BY name

id Name
400 Baseball Cap
401 Baseball Cap
700 Shorts
600 Sweatshirt
601 Sweatshirt
300 Tee Shirt
301 Tee Shirt
302 Tee Shirt
500 Visor
501 Visor

Sorting by more than one column: If you name more than one column in the ORDER BY clause, the sorts are nested.
The following statement sorts the shirts in the product table first by name in
ascending order, then by quantity (descending) within each name:
SELECT id, name, quantity
FROM product
WHERE name like '%shirt%'
ORDER BY name, quantity DESC


id name Quantity
600 Sweatshirt 39
601 Sweatshirt 32
302 Tee Shirt 75
301 Tee Shirt 54
300 Tee Shirt 28

Using the column position: You can use the position number of a column in the select list instead of the column name. Column names and select-list numbers can be mixed. Both of the following statements produce the same results as the preceding one.
SELECT id, name, quantity
FROM product
WHERE name like '%shirt%'
ORDER BY 2, 3 DESC
SELECT id, name, quantity
FROM product
WHERE name like '%shirt%'
ORDER BY 2, quantity DESC
Most versions of SQL require that ORDER BY items appear in the select
list, but Adaptive Server Anywhere has no such restriction. The following
query orders the results by quantity, although that column does not appear in
the select list.
SELECT id, name
FROM product
WHERE name like '%shirt%'
ORDER BY 2, quantity DESC

ORDER BY and NULL: With ORDER BY, NULL comes before all other values, whether the sort order is ascending or descending.

ORDER BY and case sensitivity: The effects of an ORDER BY clause on mixed-case data depend on the database collation and case sensitivity specified when the database is created.
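For example, using the department table shown earlier in this chapter (where dept_head_id allows NULLs), the departments without managers appear first whichever direction you sort in:

```sql
-- The rows with NULL dept_head_id sort first in both queries.
SELECT dept_name, dept_head_id
FROM department
ORDER BY dept_head_id ASC;

SELECT dept_name, dept_head_id
FROM department
ORDER BY dept_head_id DESC;
```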

Retrieving the first few rows of a query


You can limit the results of a query to the first few rows returned using the
FIRST or TOP keywords. While you can use these with any query, they are
most useful with queries that use the ORDER BY clause.
Examples ♦ The following query returns information about the first employee sorted
by last name:


SELECT FIRST *
FROM employee
ORDER BY emp_lname
♦ The following query returns the five employees whose last names
come earliest in the alphabet:
SELECT TOP 5 *
FROM employee
ORDER BY emp_lname

ORDER BY and GROUP BY


You can use an ORDER BY clause to order the results of a GROUP BY in a
particular way.
Example
v To find the average price of each type of product and order the results by average price:
♦ Enter the following statement:
SELECT name, AVG(unit_price)
FROM product
GROUP BY name
ORDER BY AVG(unit_price)

Name AVG(unit_price)
Visor 7.000
Baseball Cap 9.500
Tee Shirt 12.333
Shorts 15.000
Sweatshirt 24.000


The UNION operation: combining queries


The UNION operator combines the results of two or more queries into a
single result set.
By default, the UNION operator removes duplicate rows from the result set.
If you use the ALL option, duplicates are not removed. The columns in the
result set have the same names as the columns in the first table referenced.
Any number of union operators may be used. For example:
x UNION y UNION z
By default, a statement containing multiple UNION operators is evaluated
from left to right. Parentheses may be used to specify the order of evaluation.
For example, the following two expressions are not equivalent, due to the
way that duplicate rows are removed from result sets:
x UNION ALL (y UNION z)
(x UNION ALL y) UNION z
In the first expression, duplicates are eliminated in the UNION between y
and z. In the UNION between that set and x, duplicates are not eliminated. In
the second expression, duplicates are included in the union between x and y,
but are then eliminated in the subsequent union with z.
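To make this concrete, suppose x, y, and z are hypothetical one-column tables that each contain the single value 1:

```sql
-- The final UNION removes duplicates, so this returns one row:
( SELECT n FROM x UNION ALL SELECT n FROM y ) UNION SELECT n FROM z;

-- Here y UNION z yields one row, and UNION ALL keeps the duplicate
-- from x, so this returns two identical rows:
SELECT n FROM x UNION ALL ( SELECT n FROM y UNION SELECT n FROM z );
```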
Guidelines for UNION queries: The following are guidelines to observe when you use UNION statements:
♦ Same number of items in the select lists All select lists in the union
statement must have the same number of expressions (such as column
names, arithmetic expressions, and aggregate functions). The following
statement is invalid because the first select list is longer than the second:
-- This is an example of an invalid statement
SELECT stor_id, city, state
FROM stores
UNION
SELECT stor_id, city
FROM stores_east
♦ Data types must match Corresponding expressions in the SELECT
lists must be of the same data type, or an implicit data conversion must
be possible between the two data types, or an explicit conversion should
be supplied.
For example, a UNION is not possible between a column of the CHAR
data type and one of the INT data type, unless an explicit conversion is
supplied. However, a union is possible between a column of the
MONEY data type and one of the INT data type.


♦ Column ordering You must place corresponding expressions in the
individual queries of a UNION statement in the same order, because
UNION compares the expressions one to one in the order given in the
SELECT clauses of the individual queries.
♦ Multiple unions You can string several UNION operations together,
as in the following example:
SELECT city AS Cities
FROM contact
UNION
SELECT city
FROM customer
UNION
SELECT city
FROM employee
Only one ORDER BY clause is permitted, at the end of the statement.
That is, no individual SELECT statement in a UNION query may
contain an ORDER BY clause.
♦ Column headings The column names in the table resulting from a
UNION are taken from the first individual query in the statement. If you
want to define a new column heading for the result set, you must do so
in the first query, as in the following example:
SELECT city AS Cities
FROM contact
UNION
SELECT city
FROM customer
In the following query, the column heading remains as city, as it is
defined in the first query of the UNION statement.
SELECT city
FROM contact
UNION
SELECT city AS Cities
FROM customer
♦ You can use a single ORDER BY clause at the end of the list of queries,
but you must use integers rather than column names, as in the following
example:
SELECT Cities = city
FROM contact
UNION
SELECT city
FROM customer
ORDER BY 1
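The data type guideline above can be satisfied with an explicit conversion, as in this sketch (t1 and t2, and their columns, are hypothetical):

```sql
SELECT code FROM t1               -- code is a CHAR column
UNION
SELECT CAST( num AS CHAR(10) )    -- num is an INT column
FROM t2
```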


Standards and compatibility


This section describes standards and compatibility aspects of the Adaptive
Server Anywhere GROUP BY clause.

GROUP BY and the SQL/92 standard


The SQL/92 standard for GROUP BY requires the following:
♦ A column used in an expression in the select list must appear in the
GROUP BY clause, unless it appears only as the argument of an
aggregate function.
♦ A GROUP BY expression can contain only column names from the
select list, but not those used only as arguments for vector aggregates.
A standard GROUP BY with vector aggregate functions produces one
row per group, with one value for each aggregate.
Adaptive Server Anywhere and Adaptive Server Enterprise support
extensions to HAVING that allow aggregate functions not in the select list
and not in the GROUP BY clause.

Compatibility with Adaptive Server Enterprise


Adaptive Server Enterprise supports several extensions to the GROUP BY
clause that are not supported in Adaptive Server Anywhere. These include
the following:
♦ Non-grouped columns in the select list Adaptive Server Enterprise
permits column names in the select list that do not appear in the group
by clause. For example, the following is valid in Adaptive Server
Enterprise:
SELECT name, unit_price
FROM product
GROUP BY name
This syntax is not supported in Adaptive Server Anywhere.
♦ Nested aggregate functions The following query, which nests a
vector aggregate inside a scalar aggregate, is valid in Adaptive Server
Enterprise but not in Adaptive Server Anywhere:
SELECT MAX(AVG(unit_price))
FROM product
GROUP BY name


♦ GROUP BY and ALL Adaptive Server Anywhere does not support the
use of ALL in the GROUP BY clause.
♦ HAVING with no GROUP BY Adaptive Server Anywhere does not
support the use of HAVING with no GROUP BY clause unless all the
expressions in the select and having clauses are aggregate functions. For
example, the following query is supported in Adaptive Server Anywhere
because the functions MAX and COUNT are aggregate functions:
SELECT MAX(unit_price)
FROM product
HAVING COUNT(*) > 8;
♦ HAVING conditions Adaptive Server Enterprise supports extensions
to HAVING that allow non-aggregate functions not in the select list and
not in the GROUP BY clause. Only aggregate functions of this type are
allowed in Adaptive Server Anywhere.
♦ DISTINCT with ORDER BY or GROUP BY Adaptive Server
Enterprise permits the use of columns in the ORDER BY or GROUP
BY clause that do not appear in the select list, even in SELECT
DISTINCT queries. This can lead to repeated values in the SELECT
DISTINCT result set. Adaptive Server Anywhere does not support this
behavior.
♦ Column names in UNIONS Adaptive Server Enterprise permits the
use of columns in the ORDER BY clause in unions of queries. In
Adaptive Server Anywhere, the ORDER BY clause must use an integer
to mark the column by which the results are being ordered.

CHAPTER 7

Joins: Retrieving Data from Several Tables

About this chapter When you create a database, you normalize the data by placing information
specific to different objects in different tables, rather than in one large table
with many redundant entries.
A join operation recreates a larger table using the information from two or
more tables (or views). Using different joins, you can construct a variety of
these virtual tables, each suited to a particular task.
Before you start This chapter assumes some knowledge of queries and the syntax of the SELECT
statement. Information about queries appears in "Queries: Selecting Data
from a Table" on page 149.
Contents
Topic Page
How joins work 196
How joins are structured 198
Key joins 200
Natural joins 202
Joins using comparisons 203
Inner, left-outer, and right-outer joins 205
Self-joins and correlation names 209
Cross joins 211
How joins are processed 214
Joining more than two tables 216
Joins involving derived tables 219
Transact-SQL outer joins 220


How joins work


A relational database stores information about different types of objects in
different tables. For example, information particular to employees appears in
one table, and information that pertains to departments in another. The
employee table contains information such as an employee’s name and
address. The department table contains information about one department,
such as the name of the department and who the department head is.
Most questions can only be answered using a combination of information
from the various tables. For example, you may want to answer the question
"Who manages the Sales department?" To find the name of this person, you
must identify the correct person using information from the department table,
then look up that person’s name in the employee table.
Joins are a means of answering such questions by forming a new virtual table
that includes information from multiple tables. For example, you could
create a list of the department heads by combining the information contained
in the employee table and the department table. You specify which tables
contain the information you need using the FROM clause.
To make the join useful, you must combine the correct columns of each
table. To list department heads, each row of the combined table should
contain the name of a department and the name of the employee who
manages it. You control how columns are matched in the composite table
either by specifying a particular type of join operation or by using the ON
phrase.

Joins and the relational model


The join operation is the hallmark of the relational model of database
management. More than any other feature, the join distinguishes relational
database management systems from other types of database management
systems.
In structured database management systems, often known as network and
hierarchical systems, relationships between data values are predefined. Once
a database has been set up, it is difficult to make queries about unanticipated
relationships among the data.
In a relational database management system, on the other hand, relationships
among data values are left unstated in the definition of a database. They
become explicit when you manipulate the data — for example, when you
query the database — not when you create it. You can ask any question that
comes to mind about the data stored in the database, regardless of what your
intentions were when the database was set up.


According to the rules of good database design, called normalization rules,
each table should describe one kind of entity—a person, place, event, or
thing. That is why, when you want to compare information about two or
thing. That is why, when you want to compare information about two or
more kinds of entities, you need the join operation. You discover
relationships among data stored in different tables by joining the different
tables.
A corollary of this rule is that the join operation gives you unlimited
flexibility in adding new kinds of data to your database. You can always
create a new table that contains data about a different kind of entity. If the
new table has a field with values similar to those in some field of an existing
table or tables, it can be linked to those other tables by joining.


How joins are structured


A join operation may appear within a variety of statements, such as within
the FROM clause of a SELECT statement. The columns named after the
SELECT keyword are the columns to be included in the query results, in the
desired order.
When two or more tables contain a column with the same name, for example,
if the product table and the sales_order_items table in the sample database
both contain a column named id, you must qualify the column name
explicitly to avoid ambiguity. If only one table uses a particular column
name, the column name alone suffices.
SELECT product.id, sales_order_items.id, size
FROM …
You do not have to qualify the column name size because there is no
ambiguity about the table to which it belongs. However, these qualifiers
often make your statement clearer, so it is a good idea to get in the habit of
including them.
As in any SELECT statement, column names in the select list and table
names in the FROM clause must be separated by commas.
$ For information about queries using a single table, see "Queries:
Selecting Data from a Table" on page 149.

The FROM clause


Use the FROM clause to specify which tables and views to join. You can
name any two or more tables or views.
Join operators Adaptive Server Anywhere provides four join operations:
♦ key joins
♦ natural joins
♦ joins using a condition, such as equality
♦ cross joins
Key joins, natural joins and joins on a condition may be of type inner, left-
outer, or right-outer. These join types differ in the way they treat rows that
have no matching row in the other table.


Data types in join columns


The columns being joined must have the same or compatible data types. Use
the convert function when comparing columns whose data types cannot be
implicitly converted.
If the data types used in the join are compatible, Adaptive Server Anywhere
converts them automatically. For example, Adaptive Server Anywhere
converts among any of the numeric type columns, such as INT or FLOAT,
and among any of the character type and date columns, such as CHAR or
VARCHAR.
$ For the details of data type conversions, see "Data type conversions" on
page 293 of the book ASA Reference.
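For example, if one table stored an order number as a character string and another stored it as an integer, you could make the conversion explicit in the join condition. The following sketch is illustrative only; the table and column names are not part of the sample database.

SELECT i.memo_text, n.quantity
FROM order_memos i JOIN order_counts n
ON CONVERT( INTEGER, i.order_num ) = n.order_num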


Key joins
The simplest way to join tables is to connect them using the foreign key
relationships built into the database. This method is particularly economical
in syntax and especially efficient.
The following query answers the question, "Which orders has Beth Reiser
placed?"
SELECT customer.fname, customer.lname,
sales_order.id, sales_order.order_date
FROM customer KEY JOIN sales_order
WHERE customer.fname = 'Beth'
AND customer.lname = 'Reiser'

fname lname id order_date


Beth Reiser 2142 1993-01-22
Beth Reiser 2318 1993-09-04
Beth Reiser 2338 1993-09-24
Beth Reiser 2449 1993-12-14
Beth Reiser 2562 1994-03-17
Beth Reiser 2585 1994-04-08
Beth Reiser 2002 1993-03-20

Use the key join wherever possible. A key join is valid if and only if exactly
one foreign-key relationship exists between the two tables; otherwise, an
error results. The following constraints mean that key joins are not always
an available option.
♦ A foreign-key relationship must exist in the database. You cannot use a
key join to join two tables that are not related through a foreign key.
♦ Only one foreign key relationship can exist between the two tables. If
more than one such relationship exists, Adaptive Server Anywhere
cannot decide which relationship to use and generates an error indicating
the ambiguity. You cannot specify the suitable foreign key in your
statement since the syntax of the SQL language does not provide a
means to do so.
♦ A suitable foreign key relationship must exist. You may need to create a
join using particular columns. A foreign-key relationship between the
two tables may not suit your purpose.


Key joins are the default
Key join is the default join type in Adaptive Server Anywhere. Adaptive
Server Anywhere performs a key join if you do not specify the type of join
explicitly, using a keyword such as KEY or NATURAL, or by including an
ON phrase.
For example, Adaptive Server Anywhere performs a key join when it
encounters the following statement.
SELECT *
FROM product JOIN sales_order_items
Similarly, the following join fails because there are two foreign key
relationships between these tables.
SELECT *
FROM employee JOIN department


Natural joins
A natural join matches the rows from two tables by comparing the values of
columns, one in each table, that have the same name; only rows in which
these values are equal appear in the results. An error results if the tables
have no common column name.
For example, you can join the employee and department tables using a
natural join because they have only one column name in common, namely
the dept_id column.
SELECT emp_fname, emp_lname, dept_name
FROM employee NATURAL JOIN department
ORDER BY dept_name, emp_lname, emp_fname

emp_fname emp_lname dept_name


Janet Bigelow Finance
Kristen Coe Finance
James Coleman Finance
Jo Ann Davidson Finance
Denis Higgins Finance
Julie Jordan Finance
John Letiecq Finance
Jennifer Litton Finance
Mary Anne Shea Finance
Alex Ahmed Marketing
Irene Barletta Marketing
Barbara Blaikie Marketing


Joins using comparisons


You can create a join using a join condition instead of using a KEY or
NATURAL join. You specify a join condition by inserting an ON phrase
immediately adjacent to the join to which it applies.
Natural joins and key joins use generated join conditions; that is, the
keyword KEY or NATURAL indicates a restriction on the join results. For a
natural join, the generated join condition is based on the names of columns in
the tables being joined. For a key join, the condition is based on a foreign
key relationship between the two tables.
In the sample database, the following are logically equivalent:
SELECT *
FROM sales_order JOIN customer
ON sales_order.cust_id = customer.id
SELECT *
FROM sales_order KEY JOIN customer
The first uses a join condition, the second a KEY join. The following two are
also equivalent:
SELECT *
FROM department JOIN employee
ON department.dept_id = employee.dept_id
SELECT *
FROM department NATURAL JOIN employee
When you join two tables, the columns you compare must have the same or
compatible data types.
Join types
There are several types of join conditions. The most common condition, the
equijoin, is based on equality. The following query lists each order number
and the name of the customer who placed the order.
SELECT sales_order.id, customer.fname, customer.lname
FROM sales_order JOIN customer
ON sales_order.cust_id = customer.id
The condition for joining the values in two columns does not need to be
equality (=). You can use any of the other comparison operators: not equal
(<>), greater than (>), less than (<), greater than or equal to (>=), and less
than or equal to (<=).
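For example, the following sketch uses an inequality to pair each sales order with every order placed on an earlier date. Such non-equijoins are valid, although they typically return far more rows than an equijoin.

SELECT o1.id, o2.id, o1.order_date
FROM sales_order AS o1 JOIN sales_order AS o2
ON o1.order_date > o2.order_date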
$ For more information about join types, see "Comparison operators" on
page 225 of the book ASA Reference.


Using the WHERE clause in join statements


You can use the WHERE clause to determine which rows to include in the
results. In this role, the WHERE clause acts exactly as it does when using a
single table, selecting only the rows that interest you.
The WHERE clause can also specify the connection between the tables and
views named in the FROM clause. In this role, it acts somewhat like the ON
phrase. In fact, in the case of inner joins, the behavior is identical. However,
in outer joins, the same condition can produce different results if moved from
an ON phrase to the WHERE clause, because null values are treated
differently in these two contexts. The ON phrase allows you to isolate the
join constraints and can make your join statement easier to read.


Inner, left-outer, and right-outer joins


Inner joins and outer joins differ in their treatment of rows that have no
match in the other table: rows appear in an inner join only if both tables
contain at least one row that satisfies the join condition.
Because inner joins are the default, you do not need to specify the INNER
keyword explicitly. Should you wish to use it for clarity, place it
immediately before the JOIN keyword.
For example, each row of
SELECT fname, lname, order_date
FROM customer
KEY INNER JOIN sales_order
ORDER BY order_date
contains the information from one customer row and one sales_order row,
satisfying the KEY join condition. If a particular customer has placed no
orders, the join will contain no information about that customer.

fname lname order_date


Hardy Mums 1993-01-02
Tommie Wooten 1993-01-03
Aram Najarian 1993-01-03
Alfredo Margolis 1993-01-06
Elmo Smythe 1993-01-06
Malcolm Naddem 1993-01-07

Because inner joins are the default, you obtain the same result using the
following clause.
FROM customer JOIN sales_order
By contrast, an outer join contains rows whether or not a row exists in the
opposite table to satisfy the join condition. Use the keywords LEFT or
RIGHT to identify the table that is to appear in its entirety.
♦ A LEFT OUTER JOIN contains every row in the left-hand table.
♦ A RIGHT OUTER JOIN contains every row in the right-hand table.
For example, the outer join


SELECT fname, lname, order_date


FROM customer
KEY LEFT OUTER JOIN sales_order
ORDER BY order_date
includes all customers, whether or not they have placed an order. If a
particular customer has placed no orders, each column in the join
corresponding to order information contains the NULL value.

fname lname order_date


Lewis N. Clark (NULL)
Jack Johnson (NULL)
Jane Doe (NULL)
John Glenn (NULL)
Dominic Johansen (NULL)
Stanley Jue (NULL)
Harry Jones (NULL)
Marie Curie (NULL)
Elizibeth Bordon (NULL)
Len Manager (NULL)
Tony Antolini (NULL)
Tom Cruz (NULL)
Janice O Toole (NULL)
Stevie Nickolas (NULL)
Philipe Fernandez (NULL)
Jennifer Stutzman (NULL)
William Thompson (NULL)
Hardy Mums 1993-01-02
Tommie Wooten 1993-01-03
Aram Najarian 1993-01-03
… … …

The keywords INNER, LEFT OUTER, and RIGHT OUTER may appear as
modifiers in key joins, natural joins, and joins that use a comparison. These
modifiers do not apply to cross joins.
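For example, the left-outer join shown earlier could instead be written as a right-outer join by reversing the order of the tables. The following sketch should produce the same rows as that query.

SELECT fname, lname, order_date
FROM sales_order
KEY RIGHT OUTER JOIN customer
ORDER BY order_date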


Outer joins and join conditions


A common mistake is to place a join condition, which should appear in an
ON phrase, in a WHERE clause. Here, the same condition often produces
different results. This difference is best explained through a conceptual
explanation of the way Adaptive Server Anywhere processes a select
statement.
1 First, Adaptive Server Anywhere logically completes all joins. When
doing so, it uses only conditions placed within an ON phrase. When the
values in one table are missing or null-valued, the behavior depends
upon the type of join: inner, left-outer, or right-outer.
2 Once the join is complete, Adaptive Server Anywhere logically deletes
those rows for which the condition within the WHERE clause evaluates
to either FALSE or UNKNOWN.
Because conditions in an ON phrase are treated differently than those in a
WHERE clause, moving a condition from one to the other usually converts
the join to an inner join, regardless of the type of join specified.
With inner joins, specifying a join condition in an ON phrase is equivalent
to adding the join condition to the WHERE clause. However, the same is not
true for outer joins.
For example, the following statement causes a left-outer join.
SELECT *
FROM customer LEFT OUTER JOIN sales_order
ON customer.id = sales_order.cust_id
In contrast, the following two statements both create inner joins and select
the same set of rows.
SELECT *
FROM customer KEY LEFT OUTER JOIN sales_order
WHERE customer.id = sales_order.cust_id
SELECT *
FROM customer INNER JOIN sales_order
ON customer.id = sales_order.cust_id
The first of these two statements can be thought of as follows: First, left-
outer join the customer table to the sales_order table. For those customers
who have not yet placed an order, fill the sales order fields with nulls. Next,
select those rows in which the customer id values are equal. For those
customers who have not placed orders, these values will be NULL. Since
comparing any value to NULL results in the special value UNKNOWN,
these rows are eliminated and the statement reduces to an inner join.


$ This methodology describes the logical effect of the statements you
type, not how Adaptive Server Anywhere goes about processing them. For
further information, see "How joins are processed" on page 214.


Self-joins and correlation names


Joins can compare values within the same column, or two different columns
of a single table. These joins are called self-joins. For example, you can
create a list of all the employees and the name of each person’s manager by
joining the employee table to itself.
In such a join, you cannot distinguish the columns by the conventional means
because the join contains two copies of every column. For example, suppose
you want to create a table of employees that includes the names of their
managers. The following query is syntactically incorrect and does not answer
this question.
SELECT *
FROM employee JOIN employee
ON employee.manager_id = employee.emp_id

Use correlation names to distinguish instances of a table
To distinguish an individual instance of a table, use a correlation name. A
correlation name is an alias for an instance of a table or view. You define a
correlation name in the FROM clause. Once defined, you must use the
correlation name in place of the table name elsewhere within your statement,
including the select list, wherever you refer to that instance of the table.
The following statement uses the correlation names report and manager to
distinguish the two instances of the employee table and so correctly creates
the list of employees and their managers.
SELECT report.emp_fname, report.emp_lname,
manager.emp_fname, manager.emp_lname
FROM employee AS report JOIN employee AS manager
ON report.manager_id = manager.emp_id
ORDER BY report.emp_lname, report.emp_fname
This statement produces the result shown partially below. The employee
names appear in the two left-hand columns and the names of their managers
on the right.

emp_fname emp_lname emp_fname emp_lname


Alex Ahmed Scott Evans
Joseph Barker Jose Martinez
Irene Barletta Scott Evans
Jeannette Bertrand Jose Martinez
Janet Bigelow Mary Anne Shea
Barbara Blaikie Scott Evans

Jane Braun Jose Martinez
Robert Breault David Scott
Matthew Bucceri Scott Evans
Joyce Butterfield Scott Evans
… … … …

Using correlation names
Choose short, concise correlation names to make your statements easier to
read. In many cases, names only one or two characters in length suffice.
While you must use correlation names for a self-join to distinguish multiple
instances of a table, they can make many other statements more readable too.
For example, the statement
SELECT customer.fname, customer.lname,
sales_order.id, sales_order.order_date
FROM customer KEY JOIN sales_order
WHERE customer.fname = 'Beth'
AND customer.lname = 'Reiser'
becomes more compact if you use the correlation name c for customer and
so for sales_order:
SELECT c.fname, c.lname, so.id, so.order_date
FROM customer AS c KEY JOIN sales_order AS so
WHERE c.fname = 'Beth'
AND c.lname = 'Reiser'
For brevity, you can even eliminate the keyword AS. It is redundant because
the syntax of the SQL language identifies the correlation names: they are
separated from the corresponding table name by only a space, not a comma.
SELECT c.fname, c.lname, so.id, so.order_date
FROM customer c KEY JOIN sales_order so
WHERE c.fname = 'Beth'
AND c.lname = 'Reiser'

$ For further details of the rules regarding correlation names and


instances of a table within a FROM clause, see "Joining more than two
tables" on page 216.


Cross joins
As for other types of joins, each row in a cross join is a combination of one
row from the first table and one row from the second table. Unlike other
joins, a cross join contains no restrictions. All possible combinations of rows
are present.
Each row of the first table appears exactly once with each row of the second
table. Hence, the number of rows in the join is the product of the number of
rows in the individual tables.
Inner and outer modifiers do not apply to cross joins
Except in the presence of additional restrictions in the WHERE clause, all
rows of both tables always appear in the result. Thus, the keywords INNER,
LEFT OUTER and RIGHT OUTER are not applicable to cross joins.
The query
SELECT *
FROM table1 CROSS JOIN table2
has a result set as follows:
♦ As long as table1 is not the same table as table2:
♦ A row in the result set includes all columns in table1 and all
columns in table2.
♦ There is one row in the result set for each combination of a row in
table1 and a row in table2. If table1 has n rows and table2 has m
rows, the query returns n x m rows.
♦ If table1 is the same table as table2, and neither is given a correlation
name, the result set is simply the rows of table1.

Self-joins and cross joins


The following self-join produces a list of pairs of employees. Each employee
name appears in combination with every employee name.
SELECT a.emp_fname, a.emp_lname,
b.emp_fname, b.emp_lname
FROM employee AS a CROSS JOIN employee AS b


emp_fname emp_lname emp_fname emp_lname


Fran Whitney Fran Whitney
Matthew Cobb Fran Whitney
Philip Chin Fran Whitney
Julie Jordan Fran Whitney
Robert Breault Fran Whitney
Melissa Espinoza Fran Whitney
Jeannette Bertrand Fran Whitney

Since the employee table has 75 rows, this join contains 75 x 75 = 5625
rows. It includes, as well, rows that list each employee with themselves. For
example, it contains the row

emp_fname emp_lname emp_fname emp_lname


Fran Whitney Fran Whitney

If you want to exclude these rows, use the following command.


SELECT a.emp_fname, a.emp_lname,
b.emp_fname, b.emp_lname
FROM employee AS a CROSS JOIN employee AS b
WHERE a.emp_id != b.emp_id
Without these rows, the join contains 75 x 74 = 5550 rows.
This new join contains rows that pair each employee with every other
employee, but because each pair of names can appear in two possible orders,
each pair appears twice. For example, the result of the above join contains
the following two rows.

emp_fname emp_lname emp_fname emp_lname


Matthew Cobb Fran Whitney
Fran Whitney Matthew Cobb

If the order of the names is not important, you can produce a list of the
(75 x 74)/2 = 2775 unique pairs.
SELECT a.emp_fname, a.emp_lname,
b.emp_fname, b.emp_lname
FROM employee AS a CROSS JOIN employee AS b
WHERE a.emp_id < b.emp_id


This statement eliminates duplicate lines by selecting only those rows in
which the emp_id of employee a is less than that of employee b.
$ For more information, see "Self-joins and correlation names" on
page 209.


How joins are processed


Knowing how joins are processed helps to understand them—and to figure
out why, when you incorrectly state a join, you sometimes get unexpected
results. This section describes the processing of joins in conceptual terms.
When executing your statements, Adaptive Server Anywhere uses a
sophisticated strategy to obtain the same results by more efficient means.
1 Processing a join uses the FROM clause to form the Cartesian product of
the tables—all the possible combinations of the rows from each of the
tables. The number of rows in a Cartesian product is the product of the
number of rows in the individual tables. This Cartesian product contains
rows composed of all columns from all tables.
2 Next, select the rows you want using conditions in the WHERE clause.
Whereas you may include NULL values for missing rows using a left- or
right-outer join, Adaptive Server Anywhere selects rows only if the
conditions evaluate to TRUE. It omits rows if the conditions evaluate to
either FALSE or UNKNOWN.
3 If you include a GROUP BY clause, the rows are partitioned according
to your conditions. Next, rows are selected from these partitions
according to any conditions in the HAVING clause.
4 If the statement includes an ORDER BY clause, then Adaptive Server
Anywhere uses it to order the remaining rows. When you do not specify
an ordering, make no assumptions regarding the order of the rows.
5 Finally, Adaptive Server Anywhere returns those columns you specified
in your select statement.

Tips
Adaptive Server Anywhere accepts a wide range of syntax. This
flexibility means that most queries result in an answer, but sometimes not
the one you intended. The following precautions may help you avoid this
peril.
1 Always use correlation names.
2 Try eliminating a WHERE clause when testing a new statement.
3 Avoid mixing inner joins with left-outer or right-outer joins.
4 Examine the plan for your query—does it include all the tables?


Performance considerations
Generally, Adaptive Server Anywhere prefers to process joins by selecting
information in one table, then performing an indexed look-up to get the rows
it needs from another. Adaptive Server Anywhere carefully optimizes each of
your statements before executing it. As long as your statement correctly
identifies the information you want, it usually doesn’t matter what syntax
you use.
In particular, Adaptive Server Anywhere is free to reconstruct your statement
to any form that is semantically equivalent. It almost always does so, to help
compute your result efficiently. You can determine the result of a statement
using the above methods, but Adaptive Server Anywhere usually obtains the
result by another means.
Adaptive Server Anywhere uses indexes whenever doing so improves
performance. Columns that are part of a primary or foreign key are indexed
automatically; other columns are not. Creating additional indexes on
columns involved in a join, either as part of a join condition or in a
WHERE clause, can improve performance dramatically.
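For example, if you frequently joined the customer table to another table on its city column, an index such as the following might improve performance. The index name here is illustrative.

CREATE INDEX customer_city_index
ON customer ( city )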
$ For further performance tips, see "Monitoring and Improving
Performance" on page 799.


Joining more than two tables


To carry out many queries, you need to join more than two tables. Here, you
have two options at your disposal.
The following statement answers the question "What items are listed on
order number 2015?"
SELECT product.id, product.name, size, sales_order_items.quantity
FROM sales_order
KEY JOIN sales_order_items
KEY JOIN product
WHERE sales_order.id = 2015

id name size quantity


300 Tee Shirt Small 24
301 Tee Shirt Medium 24
302 Tee Shirt One size fits all 24
700 Shorts Medium 24

When you want to join a number of tables sequentially, the above syntax
makes a lot of sense. However, sometimes you need to join a single table to
several others that surround it.

Star joins
Some joins must join a single table to several others around it. This type of
join is called a star join.
As an example, create a list of the names of the customers who have placed
orders with Rollin Overbey.
SELECT c.fname, c.lname, o.order_date
FROM sales_order AS o KEY JOIN customer AS c,
sales_order AS o KEY JOIN employee AS e
WHERE e.emp_fname = 'Rollin' AND e.emp_lname = 'Overbey'
ORDER BY o.order_date
Notice that one of the tables in the FROM clause, employee, does not
contribute any columns to the results. Nor do any of the columns that are
joined—such as customer.id or employee.id—appear in the results.
Nonetheless, this join is possible only using the employee table in the FROM
clause.


fname lname order_date


Tommie Wooten 1993-01-03
Michael Agliori 1993-01-08
Salton Pepper 1993-01-17
Tommie Wooten 1993-01-23
Michael Agliori 1993-01-24

The following statement uses a star join around the sales_order table. The
result is a list showing all the customers and the total quantity of each type of
product they have ordered. Some customers have not placed orders, so the
other values for these customers are NULL. In addition, it shows the name of
the manager of the sales person through whom they placed the orders.
SELECT c.fname, p.name, SUM(i.quantity), m.emp_fname
FROM sales_order o
KEY LEFT OUTER JOIN sales_order_items i
KEY LEFT OUTER JOIN product p, customer c,
sales_order o
KEY LEFT OUTER JOIN employee e
LEFT OUTER JOIN employee m
ON e.manager_id = m.emp_id
WHERE c.state = 'CA'
GROUP BY c.fname, p.name, m.emp_fname
ORDER BY SUM(i.quantity) DESC, c.fname
Note the following details of this statement:
♦ The join centers on the sales_order table.
♦ The keyword AS is optional and has been omitted.
♦ All joins must be outer joins to keep in the result set the customers who
haven’t placed any orders.
♦ The condition e.manager_id = m.emp_id must be placed in the ON
phrase instead of the WHERE clause. The result of this statement would
be an inner join if this condition were moved into the WHERE clause.
♦ The query is syntactically correct in Adaptive Server Anywhere only if
the EXTENDED_JOIN_SYNTAX option is ON.
$ For more information about the EXTENDED_JOIN_SYNTAX option,
see "Database Options" on page 155 of the book ASA Reference.
The statement produces the results partially shown in the table below.


fname name SUM(i.quantity) emp_fname


Harry (NULL) (NULL) (NULL)
Jane (NULL) (NULL) (NULL)
Philipe (NULL) (NULL) (NULL)
Sheng Baseball Cap 240 Moira
Laura Tee Shirt 192 Moira
Moe Tee Shirt 192 Moira
Leilani Sweatshirt 132 Moira
Almen Baseball Cap 108 Moira
… … … …


Joins involving derived tables


You can nest queries within a FROM clause in derived tables. Using
derived tables, you can perform grouping of groups or construct a join with a
group, without having to create a view.
In the following example, the inner SELECT statement (enclosed in
parentheses) creates a derived table, grouped by customer id values. The
outer SELECT statement assigns this table the correlation name
sales_order_counts and joins it to the customer table using a join condition.
SELECT lname, fname, number_of_orders
FROM customer join
( SELECT cust_id, count(*)
FROM sales_order
GROUP BY cust_id )
AS sales_order_counts (cust_id, number_of_orders)
ON (customer.id = sales_order_counts.cust_id)
WHERE number_of_orders > 3
The result is a table of the names of those customers who have placed more
than three orders, including the number of orders each has placed.
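For comparison, the same result could be obtained by defining a view and joining to it, as in the following sketch; the derived-table form avoids creating this extra database object.

CREATE VIEW sales_order_counts ( cust_id, number_of_orders ) AS
SELECT cust_id, count(*)
FROM sales_order
GROUP BY cust_id

SELECT lname, fname, number_of_orders
FROM customer JOIN sales_order_counts
ON customer.id = sales_order_counts.cust_id
WHERE number_of_orders > 3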


Transact-SQL outer joins


Joins that include all rows, regardless of whether they match the join
condition, are called outer joins. Adaptive Server Anywhere supports both
left and right outer joins via the LEFT OUTER and RIGHT OUTER
keywords. For compatibility with Adaptive Server Enterprise, Adaptive
Server Anywhere also supports the Transact-SQL counterparts of these
keywords.
In the Transact-SQL dialect, you create joins by separating table names with
commas in the FROM clause. The join conditions appear in the WHERE
clause, rather than in the ON phrase. Special conditional operators indicate
the type of join.

Transact-SQL left-outer joins


The left outer join operator, *=, selects all rows from the left-hand table that
meet the statement’s restrictions. The right-hand table supplies values where
there is a match on the join condition, and null values otherwise.
For example, the following left outer join lists all customers and finds their
order dates (if any):
SELECT fname, lname, order_date
FROM customer, sales_order
WHERE customer.id *= sales_order.cust_id
ORDER BY order_date

Preserved and null-supplying tables
For an outer join, a table is either preserved or null-supplying. If the join
operator is *=, the second table is the null-supplying table; if the join
operator is =*, the first table is the null-supplying table.
In addition to using it in the outer join, you can compare a column from the
preserved table to a constant. For example, you can use the following
statement to find information about customers in California.
SELECT fname, lname, order_date
FROM customer, sales_order
WHERE customer.state = 'CA'
AND customer.id *= sales_order.cust_id
ORDER BY order_date
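For reference, this Transact-SQL query corresponds to the following SQL/92 form: the join condition moves to an ON phrase, while the restriction on the preserved table stays in the WHERE clause.

SELECT fname, lname, order_date
FROM customer LEFT OUTER JOIN sales_order
ON customer.id = sales_order.cust_id
WHERE customer.state = 'CA'
ORDER BY order_date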
However, the null-supplying table in a Transact-SQL outer join cannot also
participate in another regular or outer join.


Bit columns
Since bit columns do not permit null values, a value of 0 appears in an
outer join when the bit column is in the null-supplying table, and this table
generates NULL values.

Transact-SQL right-outer joins


The right outer join, =*, selects all rows from the second table that meet the
statement’s restrictions. The first table generates values if there is a match on
the join condition. Otherwise, the first table generates null values.
The right outer join is specified with the comparison operator =*, which
indicates that all the rows in the second table are to be included in the results,
regardless of whether there is matching data in the first table.
Substituting this operator in the outer join query shown earlier gives this
result:
SELECT fname, lname, order_date
FROM sales_order, customer
WHERE sales_order.cust_id =* customer.id
ORDER BY order_date

Transact-SQL outer join restrictions


There are several restrictions for Transact-SQL outer joins:
♦ You cannot mix SQL/92 syntax and Transact-SQL outer join syntax in a
single query. This applies to views used by a query also: if a view is
defined using one dialect for an outer join, you must use the same dialect
for any outer-join queries on that view.
♦ A null-supplying table cannot participate in both a Transact-SQL outer
join and a regular join, or in two outer joins. For example, the following
WHERE clause is not allowed:
WHERE R.x *= S.x
AND S.y = T.y
When you cannot rewrite your query to avoid using a table in both an
outer join and a regular join clause, you must divide your statement into
two separate queries, or use only SQL/92 syntax.
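When the intent is to preserve all rows of R, one possible SQL/92 rewriting of the disallowed condition joins S and T first, then outer-joins R to the result, as in this sketch (R, S, and T stand for any tables related as shown).

SELECT *
FROM R LEFT OUTER JOIN ( S JOIN T ON S.y = T.y )
ON R.x = S.x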
♦ You cannot use a subquery that contains a join condition involving the
null-supplying table of an outer join. For example, the following
WHERE clause is not allowed:
WHERE R.x *= S.y


AND EXISTS ( SELECT *
FROM T
WHERE T.x = S.x )
♦ If you submit a query with an outer join and a qualification on a column
from the null-supplying table of the outer join, the results may not be
what you expect. The qualification in the query does not restrict the
number of rows returned, but rather affects which rows in the result set
contain the null value. For rows that do not meet the qualification, a null
value appears in the null-supplying table’s columns of those rows.

Views used with Transact-SQL outer joins


If you define a view with an outer join, and then query the view with a
qualification on a column from the null-supplying table of the outer join, the
results may not be what you expect. The query returns all rows from the null-
supplying table. Rows that do not meet the qualification show a NULL value
in the appropriate columns of those rows.
The following rules determine what types of updates you can make to
columns through views that contain outer joins:
♦ INSERT and DELETE statements are not allowed on outer join views.
♦ UPDATE statements are allowed on outer join views. If the view is
defined WITH CHECK OPTION, the update fails if any of the affected
columns appears in the WHERE clause in an expression that includes
columns from more than one table.
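As a sketch, using the sample database tables that appear elsewhere in this chapter, a view defined with a Transact-SQL outer join might look like this. UPDATE statements against such a view are allowed, but INSERT and DELETE statements are not:

```sql
-- A hypothetical outer join view over the sample tables.
-- customer is the preserved table; sales_order is null-supplying.
CREATE VIEW customer_orders AS
   SELECT fname, lname, order_date
   FROM customer, sales_order
   WHERE customer.id *= sales_order.cust_id
```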

How NULL affects Transact-SQL joins


NULL values in tables or views being joined will never match each other.
Since bit columns do not permit NULLs, a value of 0 appears in an outer join
when the null-supplying table generates NULL rows.
The result of comparing a NULL value with any other NULL value is
FALSE. Because null values represent unknown or inapplicable values,
Transact-SQL has no reason to believe that one unknown value matches
another.
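For example, consider two hypothetical one-column tables t1 and t2, each containing a NULL in column a (a sketch, not part of the sample database):

```sql
-- Rows where a IS NULL in t1 never join to rows where a IS NULL in t2,
-- because the comparison NULL = NULL does not evaluate to TRUE.
SELECT t1.a
FROM t1, t2
WHERE t1.a = t2.a
```

Only rows with matching non-NULL values of a appear in the result.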

CHAPTER 8

Using Subqueries

About this chapter
When you create a query, you use WHERE and HAVING clauses to restrict the rows the query displays. Sometimes, the rows you select depend on information stored in more than one table. A subquery in the WHERE or HAVING clause allows you to select rows from one table according to specifications obtained from another table. Additional ways to do this can be found in "Joins: Retrieving Data from Several Tables" on page 195.
Before you start
This chapter assumes some knowledge of queries and the syntax of the SELECT statement. Information about queries appears in "Queries: Selecting Data from a Table" on page 149.
Contents
Topic Page
What is a subquery? 224
Using Subqueries in the WHERE clause 225
Subqueries in the HAVING clause 226
Subquery comparison test 228
Quantified comparison tests with ANY and ALL 230
Testing set membership with IN conditions 233
Existence test 235
Outer references 237
Subqueries and joins 238
Nested subqueries 241
How subqueries work 243


What is a subquery?
A relational database stores information about different types of objects in
different tables. For example, you should store information particular to
products in one table, and information that pertains to sales orders in another.
The product table contains the information about the various products. The
sales order items table contains information about customers’ orders.
In general, only the simplest questions can be answered using only one table.
For example, if the company reorders products when there are fewer than 50
of them in stock, then it is possible to answer the question "Which products
are nearly out of stock?" with this query:
SELECT id, name, description, quantity
FROM product
WHERE quantity < 50
However, if "nearly out of stock" depends on how many items of each type
the typical customer orders, the number "50" will have to be replaced by a
value obtained from the sales_order_items table.
Structure of the subquery
A subquery is structured like a regular query, and appears in the main query's WHERE or HAVING clause. In the above example, for instance, you can use a subquery to select the average number of items that a customer orders, and then use that figure in the main query to find products that are nearly out of stock. The following query finds the names and descriptions of the products whose in-stock quantities are less than double the average number of items of each type that a customer orders.
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items
)
SQL subqueries always appear in the WHERE or HAVING clauses of the
main query. In the WHERE clause, they help select the rows from the tables
listed in the FROM clause that appear in the query results. In the HAVING
clause, they help select the row groups, as specified by the main query’s
GROUP BY clause, that appear in the query results.


Using Subqueries in the WHERE clause


Subqueries in the WHERE clause work as part of the row selection process. You use a subquery in the WHERE clause when the criteria you use to select rows depend on data in another table.
Example
Find the products whose in-stock quantities are less than double the average ordered quantity.
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)
This is a two-step query: first, find the average number of items requested per order; then find which in-stock products number less than double that quantity.
The query in two steps
The quantity column of the sales_order_items table stores the number of items requested per item type, customer, and order. The subquery is
It returns the average quantity of items in the sales_order_items table,
which is the number 25.851413.
The next query returns the names and descriptions of the items whose in-stock quantities are less than twice the previously extracted value:
SELECT name, description
FROM product
WHERE quantity < 2*25.851413
Using a subquery combines the two steps into a single operation.
Purpose of a subquery in the WHERE clause
A subquery in the WHERE clause is part of a search condition. The chapter "Queries: Selecting Data from a Table" on page 149 describes simple search conditions you can use in the WHERE clause.


Subqueries in the HAVING clause


Although you usually use subqueries as search conditions in the WHERE
clause, sometimes you can also use them in the HAVING clause of a query.
When a subquery appears in the HAVING clause, like any expression in the
HAVING clause, it is used as part of the row group selection.
Here is a request that lends itself naturally to a query with a subquery in the HAVING clause: "Which products’ average in-stock quantity is more than double the average number of each item ordered per customer?"
Example
SELECT name, avg(quantity)
FROM product
GROUP BY name
HAVING avg(quantity) > 2* (
SELECT avg(quantity)
FROM sales_order_items
)

name avg(quantity)
Baseball Cap 62.000000
Shorts 80.000000
Tee Shirt 52.333333

The query executes as follows:


♦ The subquery calculates the average quantity of items in the
sales_order_items table.
♦ The main query then goes through the product table, calculating the
average quantity per product, grouping by product name.
♦ The HAVING clause then checks if each average quantity is more than
double the quantity found by the subquery. If so, the main query returns
that row group; otherwise, it doesn’t.
♦ The SELECT clause produces one summary row for each group,
showing the name of each product and its in-stock average quantity.
You can also use outer references in a HAVING clause, as shown in this request, a slight variation on the one above:
Example
"Find the product ID numbers and line ID numbers of those products whose average ordered quantity is more than half the in-stock quantity of those products."


SELECT prod_id, line_id
FROM sales_order_items
GROUP BY prod_id, line_id
HAVING 2* avg(quantity) > (
SELECT quantity
FROM product
WHERE product.id = sales_order_items.prod_id)

prod_id line_id
300 1
401 2
500 1
501 2
600 1
… …

In this example, the subquery must produce the in-stock quantity of the
product corresponding to the row group being tested by the HAVING clause.
The subquery selects records for that particular product, using the outer
reference sales_order_items.prod_id.
A subquery with a comparison returns a single value
This query uses the comparison ">", suggesting that the subquery must return exactly one value. In this case, it does. Since the id field of the product table is a primary key, there is only one record in the product table corresponding to any particular product id.

Subquery tests
The chapter "Queries: Selecting Data from a Table" on page 149 describes
simple search conditions you can use in the HAVING clause. Since a
subquery is just an expression that appears in the WHERE or HAVING
clauses, the search conditions on subqueries may look familiar.
They include:
♦ Subquery comparison test Compares the value of an expression to a
single value produced by the subquery for each record in the table(s) in
the main query.
♦ Quantified comparison test Compares the value of an expression to
each of the set of values produced by a subquery.
♦ Subquery set membership test Checks if the value of an expression
matches one of the set of values produced by a subquery.
♦ Existence test Checks if the subquery produces any rows.

Subquery comparison test


The subquery comparison test (=, <>, <, <=, >, >=) is a modified version of
the simple comparison test; the only difference between the two is that in the
former, the expression following the operator is a subquery. This test is used
to compare a value from a row in the main query to a single value produced
by the subquery.
Example
This query contains an example of a subquery comparison test:
SELECT name, description, quantity
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)

name description quantity
Tee Shirt Tank Top 28
Baseball Cap Wool cap 12
Visor Cloth Visor 36
Visor Plastic Visor 28
Sweatshirt Hooded Sweatshirt 39
Sweatshirt Zipped Sweatshirt 32

The following subquery retrieves a single value, the average quantity of items of each type per customer's order, from the sales_order_items table.
SELECT avg(quantity)
FROM sales_order_items
Then the main query compares the quantity of each in-stock item to that
value.
A subquery in a comparison test returns one value
A subquery in a comparison test must return exactly one value. Consider this query, whose subquery extracts two columns from the sales_order_items table:
SELECT name, description, quantity
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity), max (quantity)
FROM sales_order_items)
It returns the error Subquery allowed only one select list item.
Similarly, the following query returns multiple values from the quantity
column – one for each row in the sales_order_items table.


SELECT name, description, quantity
FROM product
WHERE quantity < 2 * (
SELECT quantity
FROM sales_order_items)
It returns the error Subquery cannot return more than one result.
The subquery must appear to the right of a comparison operator
The subquery comparison test allows a subquery only on the right side of the comparison operator. Thus the comparison
main-query-expression < subquery
is acceptable, but the comparison
subquery < main-query-expression
is not acceptable.


Quantified comparison tests with ANY and ALL


The quantified comparison test has two categories, the ALL test and the
ANY test:

The ANY test


The ANY test, used in conjunction with one of the SQL comparison
operators (=, <>, <, <=, >, >=), compares a single value to the column of
data values produced by the subquery. To perform the test, SQL uses the
specified comparison operator to compare the test value to each data value in
the column. If any of the comparisons yields a TRUE result, the ANY test
returns TRUE.
A subquery used with ANY must return a single column.
Example
Find the order and customer IDs of those orders placed after the first product of order #2005 was shipped.
SELECT id, cust_id
FROM sales_order
WHERE order_date > ANY (
SELECT ship_date
FROM sales_order_items
WHERE id=2005)

id cust_id
2006 105
2007 106
2008 107
2009 108
… …

In executing this query, the main query tests the order dates for each order against the shipping dates of every product of order #2005. If an order date is greater than the shipping date for one shipment of order #2005, then that id and customer id from the sales_order table are part of the result set. The ANY test is thus analogous to the OR operator: the above query can be read, "Was this sales order placed after the first product of order #2005 was shipped, or after the second product of order #2005 was shipped, or…"

230
Chapter 8 Using Subqueries

Understanding the ANY operator
The ANY operator can be a bit confusing. It is tempting to read the query as "Return those orders placed after any products of order #2005 were shipped". But this means the query will return the order IDs and customer IDs for the orders placed after all products of order #2005 were shipped – which is not what the query does!
Instead, try reading the query like this: "Return the order and customer IDs for those orders placed after at least one product of order #2005 was shipped." Using the keyword SOME may provide a more intuitive way to phrase the query. The following query is equivalent to the previous query:
SELECT id, cust_id
FROM sales_order
WHERE order_date > SOME (
SELECT ship_date
FROM sales_order_items
WHERE id=2005)
The keyword SOME is equivalent to the keyword ANY.
Notes about the ANY operator
There are two additional important characteristics of the ANY test:
♦ Empty subquery result set If the subquery produces an empty result
set, the ANY test returns FALSE. This makes sense, since if there are no
results, then it is not true that at least one result satisfies the comparison
test.
♦ NULL values in subquery result set Assume that there is at least one
NULL value in the subquery result set. If the comparison test is FALSE
for all non-NULL data values in the result set, the ANY search returns
FALSE. This is because in this situation, you cannot conclusively state
whether there is a value for the subquery for which the comparison test
holds. There may or may not be a value, depending on the "correct"
values for the NULL data in the result set.
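As a sketch of these two characteristics, suppose the subquery returns the hypothetical values 5 and NULL:

```sql
-- Hypothetical values: the subquery result set contains 5 and NULL.
-- 10 > ANY (subquery)   -- TRUE: 10 > 5 holds for at least one value
--  2 > ANY (subquery)   -- FALSE per the rule above: 2 > 5 fails, and
--                       -- the NULL cannot conclusively satisfy the test
```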

The ALL test


Like the ANY test, the ALL test is used in conjunction with one of the six
SQL comparison operators (=, <>, <, <=, >, >=) to compare a single value to
the data values produced by the subquery. To perform the test, SQL uses the
specified comparison operator to compare the test value to each data value in
the result set. If all of the comparisons yield TRUE results, the ALL test
returns TRUE.
Example Here is a request naturally handled with the ALL test: "Find the order and
customer ID's of those orders placed after all products of order #2001 were
shipped."


SELECT id, cust_id
FROM sales_order
WHERE order_date > ALL (
SELECT ship_date
FROM sales_order_items
WHERE id=2001)

id cust_id
2002 102
2003 103
2004 104
2005 101
… …

In executing this query, the main query tests the order dates for each order against the shipping dates of every product of order #2001. If an order date is greater than the shipping date for every shipment of order #2001, then the id and customer id from the sales_order table are part of the result set. The ALL test is thus analogous to the AND operator: the above query can be read, "Was this sales order placed after the first product of order #2001 was shipped, and after the second product of order #2001 was shipped, and…"
Notes about the ALL operator
There are three additional important characteristics of the ALL test:
♦ Empty subquery result set If the subquery produces an empty result
set, the ALL test returns TRUE. This makes sense, since if there are no
results, then it is true that the comparison test holds for every value in
the result set.
♦ NULL values in subquery result set Assume that there is at least one
NULL value in the subquery result set. If the comparison test is FALSE
for all non NULL data values in the result set, the ALL search returns
FALSE. In this situation you cannot conclusively state whether the
comparison test holds for every value in the subquery result set; it may
or may not, depending on the "correct" values for the NULL data.
♦ Negating the ALL test The following expressions are not equivalent:
NOT a = ALL (subquery)
a <> ALL (subquery)
This is explained in detail in "Quantified comparison test" on page 245.
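A brief sketch of why the two forms differ, using hypothetical values:

```sql
-- Suppose the subquery returns the values 1 and 2, and a = 1.
-- NOT a = ALL (subquery)   -- TRUE: a does not equal every value
-- a <> ALL (subquery)      -- FALSE: a equals at least one value
```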


Testing set membership with IN conditions


You can use the subquery set membership test to compare a value from the
main query to more than one value in the subquery.
The subquery set membership test compares a single data value for each row in the main query to the single column of data values produced by the subquery. If the data value from the main query matches one of the data values in the column, the test returns TRUE.
Example
Select the names of the employees who head the Shipping or Finance departments:
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ or dept_name =
’Shipping’))

emp_fname emp_lname
Mary Anne Shea
Jose Martinez

The subquery in this example
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ OR dept_name = ’Shipping’)
extracts from the department table the id numbers that correspond to the
heads of the Shipping and Finance departments. The main query then returns
the names of the employees whose id numbers match one of the two found
by the subquery.
Set membership test is equivalent to =ANY test
The subquery set membership test is equivalent to the =ANY test. The following query is equivalent to the query from the above example.
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id =ANY (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ or dept_name =
’Shipping’))


Negation of the set membership test
You can also use the subquery set membership test to extract those rows whose column values are not equal to any of those produced by a subquery. To negate a set membership test, insert the word NOT in front of the keyword IN.
Example
This query returns the first and last names of the employees who do not head the Finance or Shipping departments.
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id NOT IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ OR dept_name =
’Shipping’))


Existence test
Subqueries used in the subquery comparison test and set membership test
both return data values from the subquery table. Sometimes, however, you
may be more concerned with whether the subquery returns any results, rather
than which results. The existence test (EXISTS) checks whether a subquery
produces any rows of query results. If the subquery produces one or more
rows of results, the EXISTS test returns TRUE. Otherwise, it returns FALSE.
Example
Here is an example of a request expressed using a subquery: "Which customers placed orders after July 13, 1994?"
SELECT fname, lname
FROM customer
WHERE EXISTS (
SELECT *
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id))

fname lname
Grover Pendelton
Ling Ling Andrews
Bubba Murphy
Almen de Joie

Explanation of the existence test
Here, for each row in the customer table, the subquery checks if that customer ID corresponds to one that has placed an order after July 13, 1994. If it does, the query extracts the first and last names of that customer from the main table.
The EXISTS test does not use the results of the subquery; it just checks if the subquery produces any rows. So the existence test applied to the following two subqueries returns the same results:
SELECT *
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id)
SELECT ship_date
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id)


It does not matter which columns from the sales_order table appear in the
SELECT statement, though by convention, the "SELECT *" notation is used.
Negating the existence test
You can reverse the logic of the EXISTS test using the NOT EXISTS form. In this case, the test returns TRUE if the subquery produces no rows, and FALSE otherwise.
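For example, reversing the query shown above finds the customers who placed no orders after July 13, 1994 (a sketch based on the same sample tables):

```sql
-- Customers for whom no qualifying sales_order row exists.
SELECT fname, lname
FROM customer
WHERE NOT EXISTS (
   SELECT *
   FROM sales_order
   WHERE (order_date > '1994-07-13') AND (customer.id =
      sales_order.cust_id))
```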
Correlated subqueries
You may have noticed that the subquery contains a reference to the id column from the customer table. References to columns or expressions in the main table(s) are called outer references, and the subquery is said to be correlated. Conceptually, SQL processes the above query by going through the customer table, and performing the subquery for each customer. If the order date in the sales_order table is after July 13, 1994, and the customer ID in the customer and sales_order tables match, then the first and last names from the customer table appear. Since the subquery references the main query, the subquery in this section, unlike those from previous sections, returns an error if you attempt to run it by itself.


Outer references
Within the body of a subquery, it is often necessary to refer to the value of a
column in the active row of the main query. Consider the following query:
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items
WHERE product.id = sales_order_items.prod_id)
This query extracts the names and descriptions of the products whose in-
stock quantities are less than double the average ordered quantity of that
product — specifically, the product being tested by the WHERE clause in the
main query. The subquery does this by scanning the sales_order_items
table. But the product.id column in the WHERE clause of the subquery
refers to a column in the table named in the FROM clause of the main query
— not the subquery. As SQL moves through each row of the product table,
it uses the id value of the current row when it evaluates the WHERE clause
of the subquery.
Description of an outer reference
The product.id column in this subquery is an example of an outer reference. A subquery that uses an outer reference is a correlated subquery. An outer reference is a column name that does not refer to any of the columns in any of the tables in the FROM clause of the subquery. Instead, the column name refers to a column of a table specified in the FROM clause of the main query. As the above example shows, the value of a column in an outer reference comes from the row currently being tested by the main query.


Subqueries and joins


The subquery optimizer automatically rewrites as joins many of the queries
that make use of subqueries.
Example
Consider the request, "When did Mrs. Clarke and Suresh place their orders, and by which sales representatives?" It can be handled by the following query:
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id IN (
SELECT id
FROM customer
WHERE lname = ’Clarke’ OR fname = ’Suresh’)

order_date sales_rep
1994-01-05 1596
1993-01-27 667
1993-11-11 467
1994-02-04 195
1994-02-19 195
1994-04-02 299
1993-11-09 129
1994-01-29 690
1994-05-25 299

The subquery yields a list of customer IDs that correspond to the two customers whose names are listed in the WHERE clause, and the main query finds the order dates and sales representatives corresponding to those two people's orders.
Replacing a subquery with a join
The same question can be answered using joins. Here is an alternative form of the query, using a two-table join:
SELECT order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND (lname = ’Clarke’ OR fname
= ’Suresh’)


This form of the query joins the sales_order table to the customer table to
find the orders for each customer, and then returns only those records for
Suresh and Mrs. Clarke.
Some joins cannot be written as subqueries
Both of these queries find the correct order dates and sales representatives, and neither is more correct than the other. Many people find the subquery form more natural, because the request doesn't ask for any information about customer IDs, and because it might seem odd to join the sales_order and customer tables together to answer the question.
If, however, the request changes to include some information from the customer table, the subquery form no longer works. For example, to answer the request "When did Mrs. Clarke and Suresh place their orders, by which representatives, and what are their full names?", it is necessary to include the customer table in the main query:
SELECT fname, lname, order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND (lname = ’Clarke’ OR fname
= ’Suresh’)

fname lname order_date sales_rep
Belinda Clarke 1994-01-05 1596
Belinda Clarke 1993-01-27 667
Belinda Clarke 1993-11-11 467
Belinda Clarke 1994-02-04 195
Belinda Clarke 1994-02-19 195
Suresh Naidu 1994-04-02 299
Suresh Naidu 1993-11-09 129
Suresh Naidu 1994-01-29 690
Suresh Naidu 1994-05-25 299

Some subqueries cannot be written as joins
Similarly, there are cases where a subquery will work but a join will not. For example:
SELECT name, description, quantity
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)


name description quantity
Tee Shirt Tank Top 28
Baseball Cap Wool cap 12
Visor Cloth Visor 36
… … …

In this case, the inner query is a summary query and the outer query is not, so
there is no way to combine the two queries by a simple join.
For more information on joins, see "Joins: Retrieving Data from Several Tables" on page 195.


Nested subqueries
As we have seen, subqueries always appear in the HAVING clause or the
WHERE clause of a query. A subquery may itself contain a WHERE clause
and/or a HAVING clause, and, consequently, a subquery may appear in
another subquery. Subqueries inside other subqueries are called nested
subqueries.
Examples
List the order IDs and line IDs of those order items shipped on the same day that any order assigned the ’Fees’ financial code was placed.
SELECT id, line_id
FROM sales_order_items
WHERE ship_date = ANY (
SELECT order_date
FROM sales_order
WHERE fin_code_id IN (
SELECT code
FROM fin_code
WHERE (description = ’Fees’)))

id line_id
2001 1
2001 2
2001 3
2002 1
2002 2
… …

Explanation of the nested subqueries
♦ In this example, the innermost subquery produces a column of financial codes whose descriptions are "Fees":
SELECT code
FROM fin_code
WHERE (description = ’Fees’)
♦ The next subquery finds the order dates of the items whose codes match
one of the codes selected in the innermost subquery:
SELECT order_date
FROM sales_order
WHERE fin_code_id IN (subquery)
♦ Finally, the outermost query finds the order IDs and line IDs of the
orders shipped on one of the dates found in the subquery.


SELECT id, line_id
FROM sales_order_items
WHERE ship_date = ANY (subquery)

Nested subqueries can also have more than three levels. Though there is no
maximum number of levels, queries with three or more levels take
considerably longer to run than do smaller queries.


How subqueries work


Understanding which queries are valid and which ones aren’t can be
complicated when a query contains a subquery. Similarly, figuring out what
a multi-level query does can also be very involved, and it helps to understand
how the database server processes subqueries. For general information about
processing queries, see "Summarizing, Grouping, and Sorting Query
Results" on page 173.

Correlated subqueries
The database server evaluates and processes the query's WHERE clause, including any subquery it contains, once for each row of the query. Sometimes, though, the subquery returns only one result, making it unnecessary for the database server to evaluate it more than once for the entire result set.
Uncorrelated subqueries
Consider this query:
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)
In this example, the subquery calculates exactly one value: the average
quantity from the sales_order_items table. In evaluating the query, the
database server computes this value once, and compares each value in the
quantity field of the product table to it to determine whether to select the
corresponding row.
Correlated subqueries
When a subquery contains an outer reference, you cannot use this shortcut. For instance, the subquery in the query
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items
WHERE product.id=sales_order_items.prod_id)
returns a value dependent upon the active row in the product table. Such
subqueries are called correlated subqueries. In these cases, the subquery
might return a different value for each row of the outer query, making it
necessary for the database server to perform more than one evaluation.


Converting subqueries in the WHERE clause to joins


In general, a query using joins executes faster than a multi-level query. For
this reason, whenever possible, the Adaptive Server Anywhere query
optimizer converts a multi-level query to a query using joins. The conversion
is carried out without any user action. This section describes which subqueries can be converted to joins so you can understand the performance of queries in your database.
Example
The question "When did Mrs. Clarke and Suresh place their orders, and by which sales representatives?" can be written as a two-level query:
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id IN (
SELECT id
FROM customer
WHERE lname = ’Clarke’ OR fname = ’Suresh’)
An alternate, and equally correct way to write the query uses joins:
SELECT fname, lname, order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND (lname = ’Clarke’ OR fname
= ’Suresh’)
The criteria that must be satisfied in order for a multi-level query to be able
to be rewritten with joins differ for the various types of operators. Recall that
when a subquery appears in the query’s WHERE clause, it is of the form
SELECT select-list
FROM table
WHERE
[NOT] expression comparison-operator (subquery) |
[NOT] expression comparison-operator ANY / SOME (subquery) |
[NOT] expression comparison-operator ALL (subquery) |
[NOT] expression IN (subquery) |
[NOT] EXISTS (subquery)
GROUP BY group-by-expression
HAVING search-condition
Whether a subquery can be converted to a join depends on a number of
factors, such as the type of operator and the structures of the query and of the
subquery.


Comparison operators
A subquery that follows a comparison operator (=, <>, <, <=, >, >=) must
satisfy certain conditions if it is to be converted into a join. Subqueries that
follow comparison operators in general are valid only if they return exactly
one value for each row of the main query. In addition to this criterion, a
subquery is converted to a join only if the subquery
♦ does not contain a GROUP BY clause
♦ does not contain the keyword DISTINCT
♦ is not a UNION query
♦ is not an aggregate query
Example
Suppose the request "When were Suresh's products ordered, and by which sales representative?" were phrased as the subquery
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id = (
SELECT id
FROM customer
WHERE fname = ’Suresh’)
This query satisfies the criteria, and therefore, it would be converted to a query using a join:
SELECT order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND fname = ’Suresh’
However, the request, "Find the products whose in-stock quantities are less
than double the average ordered quantity" cannot be converted to a join, as
the subquery contains the aggregate function avg:
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)

Quantified comparison test


A subquery that follows one of the keywords ALL, ANY and SOME is
converted into a join only if it satisfies certain criteria.
♦ The main query does not contain a GROUP BY clause, and is not an
aggregate query, or the subquery returns exactly one value.


♦ The subquery does not contain a GROUP BY clause.


♦ The subquery does not contain the keyword DISTINCT.
♦ The subquery is not a UNION query.
♦ The subquery is not an aggregate query.
♦ The conjunct 'expression comparison-operator ANY/SOME (subquery)'
must not be negated.
♦ The conjunct 'expression comparison-operator ALL (subquery)' must be
negated.
The first four of these conditions are relatively straightforward.
Example The request "When did Ms. Clarke and Suresh place their orders, and by
which sales representatives?" can be handled in subquery form:
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id = ANY (
SELECT id
FROM customer
WHERE lname = ’Clarke’ OR fname = ’Suresh’)
Alternatively, it can be phrased in join form:
SELECT fname, lname, order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND (lname = ’Clarke’ OR fname
= ’Suresh’)
However, the request, "When did Ms. Clarke, Suresh, and any employee who
is also a customer, place their orders?" would be phrased as a union query,
and thus cannot be converted to a join:
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id = ANY (
SELECT id
FROM customer
WHERE lname = ’Clarke’ OR fname = ’Suresh’
UNION
SELECT id
FROM employee)
Similarly, the request "Find the order IDs and customer IDs of those orders
that were not placed after all products of order #2001 were shipped," is
naturally expressed with a subquery:


SELECT id, cust_id


FROM sales_order
WHERE NOT order_date > ALL (
SELECT ship_date
FROM sales_order_items
WHERE id=2001)
It would be converted to the join:
SELECT sales_order.id, cust_id
FROM sales_order, sales_order_items
WHERE (sales_order_items.id=2001) and (order_date <=
ship_date)
However, the request "Find the order IDs and customer IDs of those orders
not shipped after the first shipping dates of all the products" would be
phrased as the aggregate query:
SELECT id, cust_id
FROM sales_order
WHERE NOT order_date > ALL (
SELECT first (ship_date)
FROM sales_order_items )
Therefore, it would not be converted to a join.
Negating subqueries with the ANY and ALL operators  The fifth criterion is a
little more puzzling: queries of the form
SELECT select-list
FROM table
WHERE NOT expression comparison-operator ALL (subquery)
are converted to joins, as are queries of the form
SELECT select-list
FROM table
WHERE expression comparison-operator ANY (subquery)
but the queries
SELECT select-list
FROM table
WHERE expression comparison-operator ALL (subquery)
and
SELECT select-list
FROM table
WHERE NOT expression comparison-operator ANY (subquery)
are not.
Logical equivalence of ANY and ALL expressions  This is because the first
two queries are in fact equivalent, as are the last two. Recall that the ANY
operator is analogous to the OR operator, but with a variable number of
arguments, and that the ALL operator is similarly analogous to the AND
operator. Just as the expression


NOT ((X > A) AND (X > B))


is equivalent to the expression
(X <= A) OR (X <= B)
the expression
NOT order_date > ALL (
SELECT first (ship_date)
FROM sales_order_items )
is equivalent to the expression
order_date <= ANY (
SELECT first (ship_date)
FROM sales_order_items )
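The AND/OR analogy can be checked mechanically outside the database. A small Python sketch, illustrative only, with a list standing in for the subquery's result set:

```python
# ALL behaves like AND over the subquery's rows; ANY behaves like OR.
# Negating an ALL test gives an ANY test with the inverse operator,
# just as NOT (p AND q) is (NOT p) OR (NOT q).
ship_dates = [3, 7, 9]  # stand-in values for the subquery's result set

for order_date in range(12):
    not_gt_all = not all(order_date > s for s in ship_dates)
    le_any = any(order_date <= s for s in ship_dates)
    assert not_gt_all == le_any  # the two forms agree for every value
```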

Negating the ANY and ALL expressions  In general, the expression
NOT column-name operator ANY (subquery)
is equivalent to the expression
column-name inverse-operator ALL (subquery)
and the expression
NOT column-name operator ALL (subquery)
is equivalent to the expression
column-name inverse-operator ANY (subquery)
where inverse-operator is obtained by negating operator, as shown in the
table:
Table of operators and their inverses  The following table lists the inverse of
each operator.
Operator    inverse-operator
=           <>
<           >=
>           <=
<=          >
>=          <
<>          =


Set membership test


A query containing a subquery that follows the keyword IN is converted into
a join only if:
♦ The main query does not contain a GROUP BY clause, and is not an
aggregate query, or the subquery returns exactly one value.
♦ The subquery does not contain a GROUP BY clause.
♦ The subquery does not contain the keyword DISTINCT.
♦ The subquery is not a UNION query.
♦ The subquery is not an aggregate query.
♦ The conjunct ’expression IN (subquery)’ must not be negated.
Example So, the request "Find the names of the employees who are also department
heads", expressed by the query:
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ or dept_name = ’Shipping’))

would be converted to a joined query, as it satisfies the conditions. However,


the request, "Find the names of the employees who are either department
heads or customers" would not be converted to a join if it were expressed by
the UNION query
A UNION query following the IN operator can't be converted
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id IN (
SELECT dept_head_id
FROM department
WHERE (dept_name = 'Finance' OR dept_name = 'Shipping')
UNION
SELECT cust_id
FROM sales_order)
Similarly, the request "Find the names of employees who are not department
heads" is formulated as the negated subquery
SELECT emp_fname, emp_lname
FROM employee
WHERE NOT emp_id IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ OR dept_name = ’Shipping’))

and would not be converted.


A query with an IN operator can be converted to one with an ANY operator
The conditions that must be fulfilled for a subquery that follows the IN
keyword or the ANY keyword to be converted to a join are identical. This is
no coincidence: the expression
WHERE column-name IN (subquery)
is logically equivalent to the expression
WHERE column-name = ANY (subquery)
So the query
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ or dept_name = ’Shipping’))

is equivalent to the query


SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id = ANY (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ or dept_name = ’Shipping’))

Conceptually, Adaptive Server Anywhere converts a query with the IN


operator to one with an ANY operator, and decides accordingly whether to
convert the subquery to a join.
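The equivalence between the IN form and the join form can be observed directly in any engine that supports IN subqueries. A sketch using Python's sqlite3 module, with SQLite as an illustrative stand-in for Adaptive Server Anywhere and invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (emp_id INTEGER, emp_fname TEXT);
    CREATE TABLE department (dept_head_id INTEGER, dept_name TEXT);
    INSERT INTO employee VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cam');
    INSERT INTO department VALUES (1, 'Finance'), (3, 'Shipping');
""")

# Subquery form: employees whose id appears in the subquery's result
sub = con.execute(
    "SELECT emp_fname FROM employee "
    "WHERE emp_id IN (SELECT dept_head_id FROM department)"
).fetchall()

# Join form: the conversion described in the text
join = con.execute(
    "SELECT emp_fname FROM employee, department "
    "WHERE emp_id = dept_head_id"
).fetchall()

assert sorted(sub) == sorted(join)  # same rows either way
```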

Existence test
A subquery that follows the keyword EXISTS is converted to a join only if it
satisfies the following conditions:
♦ The main query does not contain a GROUP BY clause, and is not an
aggregate query, or the subquery returns exactly one value.
♦ The conjunct ’EXISTS (subquery)’ is not negated.
♦ The subquery is correlated; that is, it contains an outer reference.
Example Therefore, the request, "Which customers placed orders after July 13,
1994?", which can be formulated by this query whose non-negated subquery
contains the outer reference customer.id = sales_order.cust_id, could be
converted to a join.


SELECT fname, lname


FROM customer
WHERE EXISTS (
SELECT *
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id))
The EXISTS keyword essentially tells the database server to check for empty
result sets. When using inner joins, the database server automatically
displays only the rows where there is data from all of the tables in the FROM
clause. So, this query returns the same rows as does the one with the
subquery:
SELECT fname, lname
FROM customer, sales_order
WHERE (sales_order.order_date > ’1994-07-13’) AND
(customer.id = sales_order.cust_id)
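The correspondence between the correlated EXISTS form and the join form can be sketched with Python's sqlite3 module (an illustrative stand-in; the table and column names echo the sample database, and each customer here has at most one qualifying order, since a customer with several would repeat in the join):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER, fname TEXT);
    CREATE TABLE sales_order (cust_id INTEGER, order_date TEXT);
    INSERT INTO customer VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO sales_order VALUES (1, '1994-08-01'), (2, '1994-01-01');
""")

# Correlated EXISTS form: keep customers with a qualifying order
exists_rows = con.execute("""
    SELECT fname FROM customer
    WHERE EXISTS (SELECT * FROM sales_order
                  WHERE order_date > '1994-07-13'
                  AND customer.id = cust_id)
""").fetchall()

# Join form: the conversion described in the text
join_rows = con.execute("""
    SELECT fname FROM customer, sales_order
    WHERE order_date > '1994-07-13' AND customer.id = cust_id
""").fetchall()

assert exists_rows == join_rows  # identical here: one order per customer
```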


C H A P T E R 9

Adding, Changing, and Deleting Data

About this chapter This chapter describes how to modify the data in a database.
Most of the chapter is devoted to the INSERT, UPDATE, and DELETE
statements, as well as statements for bulk loading and unloading.
Contents
Topic Page
Data modification statements 254
Adding data using INSERT 255
Changing data using UPDATE 259
Deleting data using DELETE 261


Data modification statements


The statements you use to add, change, or delete data are called data
modification statements. The most common such statements include:
♦ Insert adds new rows to a table
♦ Update changes existing rows in a table
♦ Delete removes specific rows from a table
Any single INSERT, UPDATE, or DELETE statement changes the data in
only one table or view.
In addition to the common statements, the LOAD TABLE and TRUNCATE
TABLE statements are especially useful for bulk loading and deleting of
data.
Sometimes, the data modification statements are collectively known as the
data modification language (DML) part of SQL.

Permissions for data modification


You can only execute data modification statements if you have the proper
permissions on the database tables you want to modify. The database
administrator and the owners of database objects use the GRANT and
REVOKE statements to decide who has access to which data modification
functions.
$ Permissions can be granted to individual users, groups, or the public
group. For more information on permissions, see "Managing User IDs and
Permissions" on page 735.

Transactions and data modification


When you modify data, the transaction log stores a copy of the old and new
state of each row affected by each data modification statement. This means
that if you begin a transaction, realize you have made a mistake, and roll the
transaction back, you also restore the database to its previous condition.
$ For more information about transactions, see "Using Transactions and
Isolation Levels" on page 381.


Adding data using INSERT


You add rows to the database using the INSERT statement. The INSERT
statement has two forms: you can use the VALUES keyword or a SELECT
statement:
INSERT using values  The VALUES keyword specifies values for some or all
of the columns in a new row. A simplified version of the syntax for the
INSERT statement using the VALUES keyword is:
INSERT [ INTO ] table-name [ ( column-name, ... ) ]
VALUES ( expression , ... )
You can omit the list of column names if you provide a value for each
column in the table, in the order in which they appear when you execute a
query using SELECT *.
INSERT from SELECT  You can use a SELECT statement in an INSERT
statement to pull values from one or more tables. A simplified version of the
syntax for the INSERT statement using a SELECT statement is:
INSERT [ INTO ] table-name ( column-name, ... )
select-statement

Inserting values into all columns of a row


The following INSERT statement adds a new row to the department table,
giving a value for every column in the row:
INSERT INTO department
VALUES ( 702, ’Eastern Sales’, 902 )

Notes ♦ Enter the values in the same order as the column names in the original
CREATE TABLE statement, that is, first the ID number, then the name,
then the department head ID.
♦ Surround the values by parentheses.
♦ Enclose all character data in single quotes.
♦ Use a separate insert statement for each row you add.


Inserting values into specific columns


You can add data to some columns in a row by specifying only those
columns and their values. All other columns, those not included in the
column list, must allow NULL or have defaults. If you skip a column that
has a default value, the default appears in that column.
Adding data in only two columns, for example, dept_id and dept_name,
requires a statement like this:
INSERT INTO department (dept_id, dept_name)
VALUES ( 703, ’Western Sales’ )
The dept_head_id column has no default, but allows NULL. A NULL is
assigned to that column.
The order in which you list the column names must match the order in which
you list the values. The following example produces the same results as the
previous one:
INSERT INTO department (dept_name, dept_id )
VALUES (’Western Sales’, 703)

Values for unspecified columns  When you specify values for only some of
the columns in a row, one of four things can happen to the columns with no
values specified:
♦ NULL entered NULL appears if the column allows NULL and no
default value exists for the column.
♦ A default value entered The default value appears if a default exists
for the column.
♦ A unique, sequential value entered A unique, sequential value
appears if the column has the AUTOINCREMENT default or the
IDENTITY property.
♦ INSERT rejected, and an error message appears An error message
appears if the column does not allow NULL and no default exists.
By default, columns allow NULL unless you explicitly state NOT NULL in
the column definition when creating tables. You can alter the default using
the ALLOW_NULLS_BY_DEFAULT option.
Restricting column data using constraints  You can create constraints for a
column or domain. Constraints govern the kind of data you can or cannot
add.
$ For information on constraints, see "Using table and column
constraints" on page 367.
Explicitly inserting NULL  You can explicitly insert NULL into a column by
entering NULL. Do not enclose it in quotes, or it will be taken as a string.


For example, the following statement explicitly inserts NULL into the
dept_head_id column:
INSERT INTO department
VALUES (703, ’Western Sales’, NULL )

Using defaults to supply values  You can define a column so that, even
though the column receives no value, a default value automatically appears
whenever a row is inserted. You do this by supplying a default for the
column.
$ For information about defaults, see "Using column defaults" on
page 362.

Adding new rows with SELECT


To pull values into a table from one or more other tables, you can use a
SELECT clause in the INSERT statement. The select clause can insert values
into some or all of the columns in a row.
Inserting values for only some columns can come in handy when you want to
take some values from an existing table. Then, you can use UPDATE to add the
values for the other columns.
Before inserting values for some, but not all, columns in a table, make sure
that either a default exists, or you specify NULL for the columns for which
you are not inserting values. Otherwise, an error appears.
When you insert rows from one table into another, the two tables must have
compatible structures; that is, the matching columns must either have the
same data types or be data types between which Adaptive Server Anywhere
automatically converts.
Example If the columns are in the same order in their CREATE TABLE statements, you do
not need to specify column names in either table. Suppose you have a table
named newproduct that contains some rows of product information in the
same format as in the product table. To add to product all the rows in
newproduct:
INSERT product
SELECT *
FROM newproduct
You can use expressions in a SELECT statement inside an INSERT
statement.
Inserting data into some columns  You can use the SELECT statement to add
data to some, but not all, columns in a row, just as you do with the VALUES
clause. Simply specify the columns to which you want to add data in the
INSERT clause.


Inserting data from the same table  You can insert data into a table based on
other data in the same table. Essentially, this means copying all or part of a
row.
For example, you can insert new products, based on existing products, into
the product table. The following statement adds new Extra Large Tee Shirts
(of Tank Top, V-neck, and Crew Neck varieties) into the product table. Each
new identification number is ten greater than that of the corresponding
existing shirt:
INSERT INTO product
SELECT id+ 10, name, description,
’Extra large’, color, 50, unit_price
FROM product
WHERE name = ’Tee Shirt’
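A sketch of the same-table copy using Python's sqlite3 module (illustrative only; the table is simplified, and a size guard is added so the copy cannot re-match its own output on engines that scan incrementally):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (id INTEGER, name TEXT, size TEXT)")
con.execute("INSERT INTO product VALUES (300, 'Tee Shirt', 'Small')")
con.execute("INSERT INTO product VALUES (301, 'Tee Shirt', 'Medium')")

# Copy the existing shirts, bumping the id by 10 and changing the size.
# The size guard (not in the original example) keeps the INSERT from
# selecting the rows it has just inserted.
con.execute("""
    INSERT INTO product
    SELECT id + 10, name, 'Extra large'
    FROM product
    WHERE name = 'Tee Shirt' AND size <> 'Extra large'
""")

rows = con.execute("SELECT id, size FROM product ORDER BY id").fetchall()
assert rows == [(300, 'Small'), (301, 'Medium'),
                (310, 'Extra large'), (311, 'Extra large')]
```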

Inserting documents and images


If you want to store documents or images in LONG BINARY columns in
your database, you can write an application that reads the contents of the file
into a variable, and supplies that variable as a value for an INSERT
statement.
$ For information about adding INSERT statements to applications, see
"How to use prepared statements" on page 266.
You can also use the xp_read_file system function to insert file contents into
a table. This function is useful if you want to insert file contents from
Interactive SQL, or some other environment that does not provide a full
programming language.
DBA authority is required to use this external function.
Example In this example, you create a table, and insert an image into a column of the
table. You can carry out these steps from Interactive SQL.
1 Create a table to hold some images.
CREATE TABLE pictures
( c1 INT DEFAULT AUTOINCREMENT PRIMARY KEY,
filename VARCHAR(254),
picture LONG BINARY )
2 Insert the contents of portrait.gif, in the current working directory of the
database server, into the table.
INSERT INTO pictures (filename, picture)
VALUES ( ’portrait.gif’,
xp_read_file( ’portrait.gif’ ) )

$ For more information, see "xp_read_file system procedure" on


page 985 of the book ASA Reference.


Changing data using UPDATE


You can use the UPDATE statement, followed by the name of the table or
view, to change single rows, groups of rows, or all rows in a table. As in all
data modification statements, you can change the data in only one table or
view at a time.
The UPDATE statement specifies the row or rows you want changed and the
new data. The new data can be a constant or an expression that you specify
or data pulled from other tables.
If an UPDATE statement violates an integrity constraint, the update does not
take place and an error message appears. For example, if one of the values
being added is the wrong data type, or if it violates a constraint defined for
one of the columns or data types involved, the update does not take place.
UPDATE syntax A simplified version of the UPDATE syntax is:
UPDATE table-name
SET column_name = expression
WHERE search-condition
If the company Newton Ent. (in the customer table of the sample database) is
taken over by Einstein, Inc., you can update the name of the company using a
statement such as the following:
UPDATE customer
SET company_name = ’Einstein, Inc.’
WHERE company_name = ’Newton Ent.’
You can use any expression in the WHERE clause. If you are not sure how
the company name was entered, you could try updating any company called
Newton, with a statement such as the following:
UPDATE customer
SET company_name = ’Einstein, Inc.’
WHERE company_name LIKE ’Newton%’
The search condition need not refer to the column being updated. The
company ID for Newton Entertainments is 109. As the ID value is the
primary key for the table, you could be sure of updating the correct row
using the following statement:
UPDATE customer
SET company_name = ’Einstein, Inc.’
WHERE id = 109

The SET clause The SET clause specifies the columns to be updated, and their new values.
The WHERE clause determines the row or rows to be updated. If you do not
have a WHERE clause, the specified columns of all rows are updated with
the values given in the SET clause.


You can provide any expression of the correct data type in the SET clause.
The WHERE clause  The WHERE clause specifies the rows to be updated.
For example, the following statement replaces the One Size Fits All Tee
Shirt with an Extra Large Tee Shirt:
UPDATE product
SET size = ’Extra Large’
WHERE name = ’Tee Shirt’
AND size = ’One Size Fits All’
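The narrowing effect of the compound WHERE clause can be reproduced with Python's sqlite3 module (an illustrative stand-in for Adaptive Server Anywhere, with invented rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (name TEXT, size TEXT)")
con.executemany("INSERT INTO product VALUES (?, ?)",
                [('Tee Shirt', 'One Size Fits All'),
                 ('Tee Shirt', 'Small'),
                 ('Visor', 'One Size Fits All')])

# Only the row matching both conditions is changed
con.execute("UPDATE product SET size = 'Extra Large' "
            "WHERE name = 'Tee Shirt' AND size = 'One Size Fits All'")

sizes = con.execute(
    "SELECT size FROM product WHERE name = 'Tee Shirt' ORDER BY size"
).fetchall()
assert sizes == [('Extra Large',), ('Small',)]
```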

The FROM clause You can use a FROM clause to pull data from one or more tables into the
table you are updating.


Deleting data using DELETE


Simple DELETE statements have the following form:
DELETE [ FROM ] table-name
WHERE column-name = expression
You can also use a more complex form, as follows
DELETE [ FROM ] table-name
FROM table-list
WHERE search-condition

The WHERE clause  Use the WHERE clause to specify which rows to
remove. If no WHERE clause appears, the DELETE statement removes all
rows in the table.
The FROM clause The FROM clause in the second position of a DELETE statement is a special
feature allowing you to select data from a table or tables and delete
corresponding data from the first-named table. The rows you select in the
FROM clause specify the conditions for the delete.
Example This example uses the sample database. To execute the statements in the
example, you should set the option WAIT_FOR_COMMIT to OFF. The
following statement does this for the current connection only:
SET TEMPORARY OPTION WAIT_FOR_COMMIT = ’OFF’
This allows you to delete rows even if they contain primary keys referenced
by a foreign key, but does not permit a COMMIT unless the corresponding
foreign key is deleted also.
The following view displays each product and the total value of that product
that has been sold:
CREATE VIEW ProductPopularity as
SELECT product.id,
SUM(product.unit_price * sales_order_items.quantity)
as "Value Sold"
FROM product JOIN sales_order_items
ON product.id = sales_order_items.prod_id
GROUP BY product.id
Using this view, you can delete those products which have sold less than
$20,000 from the product table.
DELETE
FROM product
FROM product NATURAL JOIN ProductPopularity
WHERE "Value Sold" < 20000
You should roll back your changes when you have completed the example:
ROLLBACK
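SQLite, used here through Python's sqlite3 module as an illustrative stand-in, does not support the two-FROM DELETE extension, so this sketch expresses the same kind of selection with an IN subquery; the tables and figures are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (id INTEGER, name TEXT);
    CREATE TABLE sales (prod_id INTEGER, value INTEGER);
    INSERT INTO product VALUES (1, 'Visor'), (2, 'Tee Shirt');
    INSERT INTO sales VALUES (1, 5000), (2, 30000);
""")

# Delete products whose total sales are under 20,000, with the
# selection phrased as an IN subquery instead of a second FROM clause
con.execute("""
    DELETE FROM product
    WHERE id IN (SELECT prod_id FROM sales
                 GROUP BY prod_id
                 HAVING SUM(value) < 20000)
""")

remaining = con.execute("SELECT name FROM product").fetchall()
assert remaining == [('Tee Shirt',)]  # only the high-value product survives
```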


Deleting all rows from a table


You can use the TRUNCATE TABLE statement as a fast method of deleting
all the rows in a table. It is faster than a DELETE statement with no
conditions, because DELETE logs each change individually in the
transaction log, while TRUNCATE TABLE operations are not recorded
individually.
The table definition for a table emptied with the TRUNCATE TABLE
statement remains in the database, along with its indexes and other
associated objects, unless you enter a DROP TABLE statement.
You cannot use TRUNCATE TABLE if another table has rows that
reference it through a referential integrity constraint. Delete the rows from
the foreign table, or truncate the foreign table and then truncate the primary
table.
TRUNCATE TABLE syntax  The syntax of the TRUNCATE TABLE
statement is:
TRUNCATE TABLE table-name
For example, to remove all the data in the sales_order table, type the
following:
TRUNCATE TABLE sales_order
A TRUNCATE TABLE statement does not fire triggers defined on the table.

C H A P T E R 1 0

Using SQL in Applications

About this chapter Previous chapters have described SQL statements as you execute them in
Interactive SQL or in some other interactive utility.
When you include SQL statements in an application there are other questions
you need to ask. For example, how does your application handle query result
sets? How can you make your application efficient?
While many aspects of database application development depend on your
application development tool, database interface, and programming
language, there are some common problems and principles that affect
multiple aspects of database application development. This chapter describes
some principles common to most or all interfaces and provides a few
pointers for more information. It does not provide a detailed guide for
programming using any one interface.
Contents
Topic Page
Executing SQL statements in applications 264
Preparing statements 266
Introduction to cursors 269
Types of cursor 272
Working with cursors 275
Describing result sets 281
Controlling transactions in applications 283


Executing SQL statements in applications


The way you include SQL statements in your application depends on the
application development tool and programming interface you use.
♦ ODBC If you are writing directly to the ODBC programming interface,
your SQL statements appear in function calls. For example, the
following C function call executes a DELETE statement:
SQLExecDirect( stmt,
"DELETE FROM employee
WHERE emp_id = 105",
SQL_NTS );
♦ JDBC If you are using the JDBC programming interface, you can
execute SQL statements by invoking methods of the statement object.
For example:
stmt.executeUpdate(
"DELETE FROM employee
WHERE emp_id = 105" );
♦ Embedded SQL If you are using Embedded SQL, you prefix your C
language SQL statements with the keyword EXEC SQL. The code is
then run through a preprocessor before compiling. For example:
EXEC SQL EXECUTE IMMEDIATE
’DELETE FROM employee
WHERE emp_id = 105’;
♦ Sybase Open Client If you use the Sybase Open Client interface, your
SQL statements appear in function calls. For example, the following pair
of calls executes a DELETE statement:
ret = ct_command(cmd, CS_LANG_CMD,
"DELETE FROM employee
WHERE emp_id=105"
CS_NULLTERM,
CS_UNUSED);
ret = ct_send(cmd);
♦ Application Development Tools Application development tools such
as the members of the Sybase Enterprise Application Studio family
provide their own SQL objects, which use either ODBC (PowerBuilder,
Power++) or JDBC (Power J) under the covers.

$ For more information  For detailed information on how to include SQL in
your application, see your development tool documentation. If you are using
ODBC or JDBC, consult the software development kit for those interfaces.


For a detailed description of Embedded SQL programming, see "The


Embedded SQL Interface" on page 7 of the book ASA Programming
Interfaces Guide.
Applications inside the server  In many ways, stored procedures and triggers
act as applications, or parts of applications, running inside the server. You
can use many of the techniques described here in stored procedures also.
Stored procedures use statements very similar to Embedded SQL statements.
$ For information about stored procedures and triggers, see "Using
Procedures, Triggers, and Batches" on page 435.
Java classes in the database can use the JDBC interface in the same way as
Java applications outside the server. This chapter discusses some aspects of
JDBC. For other information on using JDBC, see "Data Access Using
JDBC" on page 591.


Preparing statements
Each time a statement is sent to a database, the server must first prepare the
statement. Preparing the statement can include:
♦ Parsing the statement and transforming it into an internal form.
♦ Verifying the correctness of all references to database objects by
checking, for example, that columns named in a query actually exist.
♦ Causing the query optimizer to generate an access plan if the statement
involves joins or subqueries.
♦ Executing the statement after all these steps have been carried out.

Reusing prepared statements can improve performance  If you find yourself
using the same statement repeatedly, for example, inserting many rows into a
table, repeatedly preparing the statement causes significant and unnecessary
overhead. To remove this overhead, some database programming interfaces
provide ways of using prepared statements. Generally, using these methods
requires the following steps:
1 Prepare the statement In this step you generally provide the
statement with some placeholder character instead of the values.
2 Repeatedly execute the prepared statement In this step you supply
values to be used each time the statement is executed. The statement
does not have to be prepared each time.
3 Drop the statement In this step you free the resources associated with
the prepared statement. Some programming interfaces handle this step
automatically.
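The steps above can be sketched with Python's sqlite3 module, which prepares the parameterized statement once behind the `?` placeholders and reuses it for each row (illustrative only; not one of ASA's own interfaces):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE department (dept_id INTEGER, dept_name TEXT)")

# Step 1: one statement with placeholders instead of values
stmt = "INSERT INTO department (dept_id, dept_name) VALUES (?, ?)"

# Step 2: execute it repeatedly, supplying new bound values each time
rows = [(702, 'Eastern Sales'), (703, 'Western Sales'), (704, 'North Sales')]
con.executemany(stmt, rows)

# Step 3: resource cleanup is handled by the interface itself
count = con.execute("SELECT COUNT(*) FROM department").fetchone()[0]
assert count == 3
```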

Do not prepare statements that are used only once  In general, you should not
prepare statements if you will execute them only once. There is a slight
performance penalty for separate preparation and execution, and it
introduces unnecessary complexity into your application.
In some interfaces, however, you do need to prepare a statement to associate
it with a cursor. For information about cursors, see "Introduction to cursors"
on page 269.
The calls for preparing and executing statements are not a part of SQL, and
they differ from interface to interface. Each of the Adaptive Server
Anywhere programming interfaces provides a method for using prepared
statements.

How to use prepared statements


This section provides a brief overview of how to use prepared statements.


v To use a prepared statement:


1 Prepare the statement.
2 Set up bound parameters, which will hold values in the statement.
3 Assign values to the bound parameters in the statement.
4 Execute the statement.
5 Repeat steps 3 and 4 as needed.
6 Drop the statement when finished. This step is not required in JDBC, as
Java’s garbage collection mechanisms handle this for you.
The general procedure is the same, but the details vary from interface to
interface. Comparing how to use prepared statements in different interfaces
illustrates this point.

v To use a prepared statement in embedded SQL:


1 Prepare the statement using the EXEC SQL PREPARE command.
2 Assign values to the parameters in the statement.
3 Execute the statement using the EXEC SQL EXECUTE command.
4 Free the resources associated with the statement using the EXEC SQL
DROP command.
Using prepared statements in ODBC
v To use a prepared statement in ODBC:
1 Prepare the statement using SQLPrepare.
2 Bind the statement parameters using SQLBindParameter.
3 Execute the statement using SQLExecute.
4 Drop the statement using SQLFreeStmt.
$ For more information, see "Using prepared statements" on page 139 of
the book ASA Programming Interfaces Guide and the ODBC SDK
documentation.
To use a prepared statement with JDBC  You can use prepared statements
with JDBC both from a client application and inside the server.


v To use a prepared statement in JDBC:


1 Prepare the statement using the prepareStatement method of the
connection object. This returns a prepared statement object.
2 Set the statement parameters using the appropriate setType methods of
the prepared statement object. Here, Type is the data type assigned.
3 Execute the statement using the appropriate method of the prepared
statement object. For inserts, updates, and deletes this is the
executeUpdate method.
$ For more information on using prepared statements in JDBC, see
"Using prepared statements for more efficient access" on page 609.
To use a prepared statement with Sybase Open Client
v To use a prepared statement in Open Client:
1 Prepare the statement using the ct_dynamic function, with a
CS_PREPARE type parameter.
2 Set statement parameters using ct_param.
3 Execute the statement using ct_dynamic with a CS_EXECUTE type
parameter.
4 Free the resources associated with the statement using ct_dynamic with
a CS_DEALLOC type parameter.
$ For more information on using prepared statements in Open Client, see
"Using SQL in Open Client applications" on page 171 of the book ASA
Programming Interfaces Guide.


Introduction to cursors
When you execute a query in an application, the result set consists of a
number of rows. In general, you do not know how many rows you are going
to receive before you execute the query. Cursors provide a way of handling
query result sets in applications.
The way you use cursors, and the kinds of cursors available to you, depend
on the programming interface you use. JDBC 1.0 provides rudimentary
handling of result sets, while ODBC and Embedded SQL have many
different kinds of cursors. Open Client cursors can only move forward
through a result set.
$ For information on the kinds of cursors available through different
programming interfaces, see "Availability of cursors" on page 273.

What is a cursor?
A cursor is a symbolic name associated with a SELECT statement or stored
procedure that returns a result set. It consists of a cursor result set (the set of
rows resulting from the execution of a query associated with the cursor) and
a cursor position (a pointer to one row within the cursor result set).
A cursor is like a handle on the result set of a SELECT statement. It enables
you to examine and possibly manipulate one row at a time. In Adaptive
Server Anywhere, cursors support forward and backward movement through
the query results.
Cursor positions in an n-row result set:

Absolute row from start     Absolute row from end
0 (before first row)        -n - 1
1                           -n
2                           -n + 1
3                           -n + 2
...                         ...
n - 2                       -3
n - 1                       -2
n                           -1
n + 1 (after last row)      0

What you can do with cursors


With cursors, you can do the following:
♦ Loop over the results of a query.
♦ Carry out inserts, updates, and deletes at any point within a result set.
♦ In some programming interfaces, use special features to tune the way
result sets return to your application, providing substantial
performance benefits for your application.

Steps in using a cursor


Using a cursor in Embedded SQL differs from using a cursor in the other
interfaces.

v To use a cursor in Embedded SQL:


1 Prepare a statement Cursors generally use a statement handle rather
than a string. You need to prepare a statement to have a handle
available.
2 Declare the cursor Each cursor refers to a single SELECT or CALL
statement. When you declare a cursor, you state the name of the cursor
and the statement it refers to.
3 Open the cursor In the case of a CALL statement, opening the cursor
executes the query up to the point where the first row is about to be
obtained.
4 Fetch results Although simple fetch operations move the cursor to
the next row in the result set, Adaptive Server Anywhere permits more
complicated movement around the result set. How you declare the
cursor determines which fetch operations are available to you.
5 Close the cursor When you have finished with the cursor, close it.
6 Free the statement To free the memory associated with the cursor
and its associated statement you need to free the statement.

v To use a cursor in ODBC or Open Client:


1 Execute a statement Execute a statement using the usual method for
the interface. You can prepare and then execute the statement, or you
can execute the statement directly.
2 Test to see if the statement returns a result set A cursor is
implicitly opened when a statement that creates a result set is executed.
When the cursor is opened, it is positioned before the first row of the
result set.
3 Fetch results Although simple fetch operations move the cursor to
the next row in the result set, Adaptive Server Anywhere permits more
complicated movement around the result set.
4 Close the cursor When you have finished with the cursor, close it to
free associated resources.
5 Free the statement If you used a prepared statement, free it to reclaim
memory.

Prefetching rows
In some cases, the interface library may carry out performance
optimizations under the covers (such as prefetching results), so these
steps in the client application may not correspond exactly to software
operations.
Types of cursor
You can choose from several kinds of cursors in Adaptive Server Anywhere
when you declare the cursor. Cursors have the following properties:
♦ Unique or non-unique Declaring a cursor to be unique forces the
query to return all the columns required to uniquely identify each row.
Often this means returning all the columns in the primary key. Any
columns required but not specified are added to the result set. The
default cursor type is non-unique.
♦ Read only or updatable A cursor declared as read only may not be
used in an UPDATE (positioned) or a DELETE (positioned) operation.
The default cursor type is updatable.
♦ Scrollability You can declare cursors to behave in different ways as
you move through the result set.
♦ No Scroll Declaring a cursor NO SCROLL restricts fetching
operations to fetching the next row or the same row again. You
cannot rely on prefetches with no scroll cursors, so performance
may be compromised.
♦ Dynamic scroll cursors With DYNAMIC SCROLL cursors you
can carry out more flexible fetching operations. You can move
backwards and forwards in the result set, or move to an absolute
position.
♦ Scroll cursors Similar to DYNAMIC SCROLL cursors,
SCROLL cursors behave differently when the rows in the cursor are
modified or deleted after the first time the row is read. SCROLL
cursors have more predictable behavior when other connections
make changes to the database.
♦ Insensitive cursors Also called STATIC cursors in ODBC, a
cursor declared INSENSITIVE has its membership fixed when it is
opened: a temporary table is created with a copy of all the
original rows.
the effect of any other operation from a different cursor. It does see
the effect of operations on the same cursor. Also, ROLLBACK or
ROLLBACK TO SAVEPOINT do not affect INSENSITIVE
cursors; these operations do not change the cursor contents.
It is easier to write an application using INSENSITIVE cursors,
since you only have to worry about changes you make explicitly to
the cursor. You do not have to worry about actions taken by other
users or by other parts of your application.
INSENSITIVE cursors can be expensive if the cursor defines a large
result set.

Availability of cursors
Not all interfaces provide support for all kinds of cursors.
♦ JDBC 1.1 does not use cursors, although the ResultSet object does have
a next method that allows you to scroll through the results of a query in
the client application. JDBC 2 does provide cursor operations.
♦ ODBC supports all kinds of cursors.
ODBC provides a cursor type called a BLOCK cursor. When you use a
BLOCK cursor, you can use SQLFetchScroll or SQLExtendedFetch
to fetch a block of rows, rather than a single row. Block cursors behave
identically to ESQL ARRAY fetches.
♦ Embedded SQL supports all the kinds of cursors.
♦ Sybase Open Client supports only NO SCROLL cursors. Also, a severe
performance penalty results when using updateable, non-unique cursors.

Choosing a cursor type — dynamic vs scroll cursors


SCROLL cursors remember both rows and row positions within a cursor, so
your application can be assured that these positions remain unchanged. If
your program or another connection deletes one of these rows, it creates a
"hole" in the cursor. If you fetch the row at this "hole" with a SCROLL
cursor, you receive an error, indicating that there is no current row, and the
cursor is left positioned on the "hole". In contrast, a DYNAMIC SCROLL
cursor just skips the "hole" and retrieves the next row.
DYNAMIC SCROLL cursors are more efficient than SCROLL cursors
because they store less information. Therefore, use DYNAMIC SCROLL
cursors unless you require the consistent behavior of SCROLL cursors.
SCROLL cursors retain the result of previous fetch requests in a temporary
table, so that if an application attempts to retrieve the same row more than
once, it sees the original row image. This isolates the application from the
effects of concurrent updates. For complex queries in a SCROLL cursor, the
temporary table requirements can be significant, and this can impact
performance. Queries that may produce slow SCROLL cursor performance
include those with the following characteristics:
♦ Two or more conditions on the same column with an OR operator (such
as X = 5 OR X = 4).
♦ UNION ALL
♦ DISTINCT
♦ GROUP BY
♦ A subselect in the select list
♦ A subquery in the WHERE or HAVING clause
Example
For example, an application could remember that Cobb is the second row
in the cursor for the following query:
SELECT emp_lname
FROM employee
If someone deletes the first employee (Whitney) while the SCROLL cursor is
still open, a FETCH ABSOLUTE 2 still positions on Cobb while FETCH
ABSOLUTE 1 returns an error. Similarly, if the cursor is on Cobb, FETCH
PREVIOUS will return the Row Not Found error.
In addition, a fetch on a SCROLL cursor returns the warning
SQLE_ROW_UPDATED (104) if the row has changed since last reading.
The warning only happens once. Subsequent fetches of the same row do not
produce the warning.
Similarly, an UPDATE (positioned) or DELETE (positioned) statement on a
row modified since it was last fetched returns the
SQLE_ROW_UPDATED_SINCE_READ error. An application must fetch the
row again for the UPDATE or DELETE on a SCROLL cursor to work.
An update to any column causes the warning/error, even if the column is not
referenced by the cursor. For example, a cursor on a query returning
emp_lname would report the update even if only the salary column were
modified.

No warnings or errors in bulk operations mode
These update warning and error conditions do not occur in bulk
operations mode (-b database server command-line switch).
Working with cursors


This section describes how to carry out different kinds of operations using
cursors.

Cursor positioning
A cursor can be positioned at one of three places:
♦ On a row
♦ Before the first row
♦ After the last row

Cursor positions in an n-row result set:

Absolute row from start     Absolute row from end
0 (before first row)        -n - 1
1                           -n
2                           -n + 1
3                           -n + 2
...                         ...
n - 2                       -3
n - 1                       -2
n                           -1
n + 1 (after last row)      0

When a cursor is opened, it appears before the first row. You can move the
cursor position using the FETCH command (see "FETCH statement" on
page 523 of the book ASA Reference) to an absolute position from the start or
the end of the query results (using FETCH ABSOLUTE, FETCH FIRST, or
FETCH LAST), or to a position relative to the current cursor position (using
FETCH RELATIVE, FETCH PRIOR, or FETCH NEXT). The NEXT
keyword is the default qualifier for the FETCH statement.
The number of row positions you can fetch in a cursor is governed by the
size of an integer. You can fetch rows numbered up to 2147483646, which
is one less than the largest value that can be held in an integer. When
using negative numbers (rows from the end), you can fetch down to one
more than the largest negative value that can be held in an integer.
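The position scheme in the table above reduces to a formula: in an n-row result set, absolute position i from the start is position i - n - 1 from the end. A small plain-Python sketch of this correspondence (illustrative only, not ASA code):

```python
def position_from_end(i_from_start, n):
    # For an n-row result set, absolute position i counted from the
    # start corresponds to position i - n - 1 counted from the end.
    # Position 0 is "before the first row"; position n + 1 (that is,
    # 0 from the end) is "after the last row".
    return i_from_start - n - 1

n = 10                                     # a 10-row result set
assert position_from_end(0, n) == -n - 1   # before first row
assert position_from_end(1, n) == -n       # first row
assert position_from_end(n, n) == -1       # last row
assert position_from_end(n + 1, n) == 0    # after last row
```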
You can use special positioned versions of the UPDATE and DELETE
statements to update or delete the row at the current position of the cursor. If
the cursor is positioned before the first row or after the last row, a No current
row of cursor error will be returned.

Cursor positioning problems


Inserts and some updates to DYNAMIC SCROLL cursors can cause
problems with cursor positioning. The server will not put inserted rows at
a predictable position within a cursor unless there is an ORDER BY
clause on the SELECT statement. In some cases, the inserted row does not
appear at all until the cursor is closed and opened again.
With Adaptive Server Anywhere, this occurs if a temporary table had to
be created to open the cursor (see "Temporary tables used in query
processing" on page 824 for a description).
The UPDATE statement may cause a row to move in the cursor. This
happens if the cursor has an ORDER BY clause that uses an existing
index (a temporary table is not created). Using STATIC SCROLL cursors
alleviates these problems but requires more memory and processing.

Configuring cursors on opening


You can configure the following aspects of cursor behavior when you open
the cursor:
♦ Isolation level You can explicitly set the isolation level of operations
on a cursor to be different from the current isolation level of the
transaction.
♦ Holding By default, cursors in Embedded SQL close at the end of a
transaction. Opening a cursor with hold allows you to keep it open until
the end of a connection, or until you explicitly close it. ODBC, JDBC
and Open Client leave cursors open at the end of transactions by default.
Fetching rows through a cursor


The simplest way of processing the result set of a query using a cursor is to
loop through all the rows of the result set until there are no more rows. You
can create a loop by:
1 Declaring and opening the cursor (Embedded SQL), or executing a
statement that returns a result set (ODBC).
2 Continue to fetch the next row until you get a Row Not Found indication.
3 Closing the cursor.
How step 2 of this operation is carried out depends on the interface you use.
For example:
♦ In ODBC SQLFetch, SQLExtendedFetch or SQLFetchScroll
advances the cursor to the next row and returns the data.
$ For information on using cursors in ODBC, see "Working with
result sets" on page 141 of the book ASA Programming Interfaces
Guide.
♦ In JDBC, the next method of the ResultSet object advances the cursor
and returns the data.
$ For information on using the ResultSet object in JDBC, see
"Queries using JDBC" on page 607.
♦ In Embedded SQL, the FETCH statement carries out the same operation.
$ For information on using cursors in Embedded SQL, see "Cursors
in Embedded SQL" on page 33 of the book ASA Programming
Interfaces Guide.
♦ In Open Client, ct_fetch advances the cursor to the next row and returns
the data.
$ For information on using cursors in Open Client applications, see
"Using cursors" on page 172 of the book ASA Programming Interfaces
Guide.

Fetching multiple rows


This section discusses how fetching multiple rows at a time can improve
performance.
Multiple-row fetching should not be confused with prefetching rows,
which is described in the next section. Multiple-row fetching is done by
the application, while prefetching is transparent to the application and
provides a similar performance gain.
Multiple-row fetches
Some interfaces provide methods for fetching more than one row at a time
into the next several fields in an array. Generally, the fewer separate
fetch operations you execute, the fewer individual requests the server
must respond to, and the better the performance. Multiple-row fetches
are also sometimes called wide fetches. Cursors that use multiple-row
fetches are sometimes called block cursors or fat cursors.
Using multiple-row fetching
♦ In ODBC, you can set the number of rows that will be returned on each
call to SQLFetchScroll or SQLExtendedFetch by setting the
SQL_ROWSET_SIZE attribute.
♦ In Embedded SQL, the FETCH statement uses an ARRAY clause to
control the number of rows fetched at a time.
♦ Open Client and JDBC do not support multi-row fetches. They do use
prefetching.
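As a concrete illustration of a wide fetch, Python's stdlib sqlite3 (not an ASA interface; used here only as a stand-in) offers fetchmany, which returns a block of rows per call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.execute("SELECT x FROM t")
batch_sizes = []
while True:
    block = cur.fetchmany(4)        # up to 4 rows per request
    if not block:                   # empty list: no more rows
        break
    batch_sizes.append(len(block))
conn.close()

# Ten rows arrive in three requests instead of ten.
```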

Prefetching rows
Prefetches and multiple-row fetches are different. Prefetches can be carried
out without explicit instructions from the client application. Prefetching
retrieves rows from the server into a buffer on the client side, but does not
make those rows available to the client application until the application
fetches the appropriate row.
By default, the Adaptive Server Anywhere client library prefetches multiple
rows whenever an application fetches a single row. The Adaptive Server
Anywhere client library stores the additional rows in a buffer.
Prefetching assists performance by cutting down on client/server traffic, and
increases throughput by making many rows available without a separate
request to the server for each row or block of rows.
$ For information on controlling prefetches, see "PREFETCH option" on
page 206 of the book ASA Reference.
Controlling prefetching from an application
♦ The PREFETCH option controls whether or not prefetching occurs. You
can set the PREFETCH option to ON or OFF for a single connection. By
default it is set to ON.
♦ In Embedded SQL, you can control prefetching on a per-cursor basis
when you open a cursor, on an individual FETCH operation using the
BLOCK clause.
The application can specify a maximum number of rows contained in a
single fetch from the server by specifying the BLOCK clause. For
example, if you are fetching and displaying 5 rows at a time, you could
use BLOCK 5. Specifying BLOCK 0 fetches 1 record at a time and also
causes a FETCH RELATIVE 0 to always fetch the row from the server
again.
Although you can also turn off prefetch by setting a connection
parameter on the application, it is more efficient to set BLOCK=0 than
to set the PREFETCH option to OFF. For more information, see
"PREFETCH option" on page 206 of the book ASA Reference.
♦ In Open Client, you can control prefetching behavior using ct_cursor
with CS_CURSOR_ROWS after the cursor is declared, but before it is
opened.

Fetching with scrollable cursors


ODBC and Embedded SQL provide methods for using scrollable and
DYNAMIC cursors. These methods allow you to move several rows forward
at a time, or to move backwards through the result set.
Neither the JDBC nor the Open Client interface supports scrollable cursors.
Prefetching does not apply to scrollable operations. For example, fetching a
row in the reverse direction does not prefetch several previous rows.

Modifying rows through a cursor


Cursors can do more than just read result sets from a query. You can also
modify data in the database while processing a cursor. These operations are
commonly called positioned update and delete operations, or put operations
if the action is an insert.
Not all query result sets allow positioned updates and deletes. If you carry
out a query on a non-updateable view, then no changes occur to the
underlying tables. Also, if the query involves a join, then you must specify
which table you wish to delete from, or which columns you wish to update,
when you carry out the operations.
Insertions through a cursor can only be executed if any non-inserted columns
in the table allow NULL or have defaults.
ODBC, Embedded SQL, and Open Client permit data modification using
cursors, but JDBC 1.1 does not. With Open Client, you can delete and update
rows, but you can only insert rows on a single-table query.
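A positioned update addresses the row the cursor currently rests on. Python's stdlib sqlite3 (used here purely as an illustrative stand-in, since it has no UPDATE ... WHERE CURRENT OF) can emulate one by re-identifying the fetched row through its primary key — an assumption of this sketch, not how the ASA interfaces implement positioned operations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, salary REAL)")
conn.execute("INSERT INTO employee VALUES (102, 45000.0)")

cur = conn.execute("SELECT emp_id, salary FROM employee")
emp_id, salary = cur.fetchone()          # cursor is positioned on this row
# Emulated positioned update: address the current row by primary key.
conn.execute("UPDATE employee SET salary = ? WHERE emp_id = ?",
             (salary * 1.1, emp_id))
conn.commit()

new_salary = conn.execute(
    "SELECT salary FROM employee WHERE emp_id = ?", (102,)).fetchone()[0]
conn.close()
```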
Which table are rows deleted from?
If you attempt a positioned delete on a cursor, the table from which
rows are deleted is determined as follows:
1 If no FROM clause is included in the delete statement, the cursor must
be on a single table only.
2 If the cursor is for a joined query (including using a view containing a
join), then the FROM clause must be used. Only the current row of the
specified table is deleted. The other tables involved in the join are not
affected.
3 If a FROM clause is included, and no table owner is specified, the table-
spec value is first matched against any correlation names.
$ For more information, see the "FROM clause" on page 532 of the
book ASA Reference.
4 If a correlation name exists, the table-spec value is identified with the
correlation name.
5 If a correlation name does not exist, the table-spec value must be
unambiguously identifiable as a table name in the cursor.
6 If a FROM clause is included, and a table owner is specified, the table-
spec value must be unambiguously identifiable as a table name in the
cursor.
7 The positioned DELETE statement can be used on a cursor open on a
view as long as the view is updateable.

Canceling cursor operations


You can cancel a request through an interface function. From Interactive
SQL, you can cancel a request by pressing the Interrupt SQL Statement
button on the toolbar (or by choosing Stop in the SQL menu).
If you cancel a request that is carrying out a cursor operation, the position of
the cursor is indeterminate. After canceling the request, you must locate the
cursor by its absolute position, or close it.

Bookmarks and cursors


ODBC provides bookmarks, or values used to identify rows in a cursor.
Adaptive Server Anywhere supports bookmarks for all kinds of cursors
except DYNAMIC cursors.
Describing result sets


Some applications build SQL statements which cannot be completely
specified in the application. In some cases, for example, statements depend
on a response from the user before the application knows exactly what
information to retrieve, such as when a reporting application allows a user to
select which columns to display.
In such a case, the application needs a method for retrieving information
about both the nature of the result set and the contents of the result set. The
information about the nature of the result set, called a descriptor, identifies
the data structure, including the number and type of columns expected to be
returned. Once the application has determined the nature of the result set,
retrieving the contents is straightforward.
This result set metadata (information about the nature and content of the
data) is manipulated using descriptors. Obtaining and managing the result set
metadata is called describing.
Since cursors generally produce result sets, descriptors and cursors are
closely linked, although some interfaces hide the use of descriptors from the
user. Typically, statements needing descriptors are either SELECT
statements or stored procedures that return result sets.
A sequence for using a descriptor with a cursor-based operation is as
follows:
1 Allocate the descriptor. This may be done implicitly, although some
interfaces allow explicit allocation as well.
2 Prepare the statement.
3 Describe the statement. If the statement is a stored procedure call or
batch, and the result set is not defined by a result clause in the procedure
definition, then the describe should occur after opening the cursor.
4 Declare and open a cursor for the statement (Embedded SQL) or execute
the statement.
5 Get the descriptor and modify the allocated area if necessary. This is
often done implicitly.
6 Fetch and process the statement results.
7 Deallocate the descriptor.
8 Close the cursor.
9 Drop the statement. Some interfaces do this automatically.
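Most interfaces surface the describe step through a metadata object. As a sketch of the idea, Python's stdlib sqlite3 exposes a simple descriptor as cursor.description (the ASA interfaces use the SQLDA, ODBC descriptor handles, or java.sql.ResultSetMetaData instead — this example only illustrates the concept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER, emp_lname TEXT)")

cur = conn.execute("SELECT emp_id, emp_lname FROM employee")
# The describe step: the column count and names are known before any
# row is fetched, so the application can size its buffers accordingly.
column_names = [col[0] for col in cur.description]
column_count = len(cur.description)
conn.close()
```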
Implementation notes
♦ In Embedded SQL, a SQLDA (SQL Descriptor Area) structure holds the
descriptor information.
$ For more information, see "The SQL descriptor area (SQLDA)" on
page 45 of the book ASA Programming Interfaces Guide.
♦ In ODBC, a descriptor handle allocated using SQLAllocHandle
provides access to the fields of a descriptor. You can manipulate these
fields using SQLSetDescRec, SQLSetDescField, SQLGetDescRec,
and SQLGetDescField.
Alternatively, you can use SQLDescribeCol and SQLColAttributes to
obtain column information.
♦ In Open Client, you can use ct_dynamic to prepare a statement and
ct_describe to describe the result set of the statement. However, you can
also use ct_command to send a SQL statement without preparing it
first, and use ct_results to handle the returned rows one by one. This is
the more common way of operating in Open Client application
development.
♦ In JDBC, the java.sql.ResultSetMetaData class provides information
about result sets.
♦ You can also use descriptors for sending data to the engine (for example,
with the INSERT statement), however, this is a different kind of
descriptor than for result sets.
$ For more information about input and output parameters of the
DESCRIBE statement, see the "DESCRIBE statement" on page 500 of
the book ASA Reference.
Controlling transactions in applications


Transactions are atomic sets of SQL statements: either all statements in
the transaction are executed, or none. This section describes a few
aspects of transactions in applications.
$ For more information about transactions, see "Using Transactions and
Isolation Levels" on page 381.

Setting autocommit or manual commit mode


Some database programming interfaces have an autocommit mode (also
called unchained mode). You control autocommit behavior in the database
with the CHAINED database option, or in some database interfaces, by
setting an autocommit interface option.
If you wish to use transactions in your applications, you must use manual
commit mode (also called chained mode). If you are using ODBC, you
must turn autocommit mode off.
In Adaptive Server Anywhere, CHAINED=ON by default (manual commit
mode). You can set the current connection to operate in autocommit mode by
setting the CHAINED database option to OFF.
Using the CHAINED database option
In autocommit mode (CHAINED=OFF), the database treats each statement as
a transaction and automatically commits it after execution.
There is a distinct difference between the engine’s CHAINED mode and
ODBC autocommit mode. For example, if CHAINED=OFF in a procedure
call, each statement in the procedure is committed. However, if
CHAINED=ON and ODBC autocommit=ON, the ODBC driver issues a
commit after the entire procedure finishes executing.
The performance and behavior of your application may change, depending
on whether or not you are running in an autocommit mode. Autocommit is
not recommended for most purposes.
Setting autocommit mode
♦ ODBC By default, ODBC operates in autocommit mode. You can turn
this mode off using the SQL_ATTR_AUTOCOMMIT connection attribute.
ODBC autocommit and CHAINED options are independent of each other.
♦ JDBC By default, JDBC operates in autocommit mode. You can turn
this mode off using the setAutoCommit method of the connection
object:
conn.setAutoCommit( false );
JDBC autocommit and CHAINED options are independent of each other.
♦ Embedded SQL Embedded SQL uses the setting of the user’s
CHAINED option to govern the transaction behavior. By default, this
option is ON (manual commit).
♦ Open Client By default, a connection made through Open Client sets
the CHAINED mode to OFF. You can change this behavior by setting
the CHAINED database option to ON in your application (after
connecting).
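The difference between the two modes is easy to demonstrate. In this sketch, Python's stdlib sqlite3 stands in for an ASA interface: by default it behaves like manual commit mode, so an uncommitted change disappears on rollback and becomes permanent only after an explicit commit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # manual-commit style by default

# DDL runs outside the implicit transaction in sqlite3.
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()                      # the uncommitted insert is undone
rows_after_rollback = conn.execute(
    "SELECT COUNT(*) FROM t").fetchone()[0]

conn.execute("INSERT INTO t VALUES (2)")
conn.commit()                        # now the insert is permanent
rows_after_commit = conn.execute(
    "SELECT COUNT(*) FROM t").fetchone()[0]
conn.close()
```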

Controlling the isolation level


You can set the isolation level of a current connection using the
ISOLATION_LEVEL database option.
Some interfaces, such as ODBC, allow you to set the isolation level for a
connection at connection time. You can reset this level later using the
ISOLATION_LEVEL database option.

Cursors and transactions


In general, a cursor closes when a COMMIT is performed. There are two
exceptions to this behavior:
♦ The CLOSE_ON_ENDTRANS database option is set to OFF.
♦ A cursor is opened WITH HOLD, which is the default with Open Client
and JDBC.
If either of these two cases is true, the cursor remains open on a COMMIT.
ROLLBACK and cursors
If a transaction rolls back, then cursors close except for those cursors
opened WITH HOLD. However, do not rely on the contents of any cursor
after a rollback.
The draft ISO SQL3 standard states that on a rollback, all cursors (even those
cursors opened WITH HOLD) should close. You can obtain this behavior by
setting the ANSI_CLOSE_CURSORS_AT_ROLLBACK option to ON.
Savepoints
If a transaction rolls back to a savepoint, and the
ANSI_CLOSE_CURSORS_AT_ROLLBACK option is ON, then all cursors (even
those cursors opened WITH HOLD) opened after the SAVEPOINT close.
Cursors and isolation levels
You can change the isolation level of a connection during a transaction
using the SET OPTION statement to alter the ISOLATION_LEVEL option.
However, this change affects only closed cursors.
CHAPTER 11
International Languages and Character Sets

About this chapter
This chapter describes how to configure your Adaptive Server Anywhere
installation to handle international language issues.
Contents
Topic Page
Introduction to international languages and character sets 288
Understanding character sets in software 291
Understanding locales 297
Understanding collations 303
Understanding character set translation 311
Collation internals 314
International language and character set tasks 319
Introduction to international languages and


character sets
This section provides an introduction to the issues you may face when
working in an environment that uses more than one character set, or when
using languages other than English.
When you create a database, you specify a collating sequence or collation to
be used by the database. A collation is a combination of a character set and
a sort order for characters in the database.

Adaptive Server Anywhere international features


Adaptive Server Anywhere provides two sets of features that are of particular
interest when setting up databases for languages.
♦ Collations You can choose from a wide selection of supplied
collations when you create a database. By creating your database with
the proper collation, you ensure proper sorting of data.
Whenever the database compares strings, sorts strings, or carries out
other string operations such as case conversion, it does so using the
collation sequence. The database carries out sorting and string
comparison when statements such as the following are executed:
♦ Queries with an ORDER BY clause.
♦ Expressions that use string functions, such as LOCATE, SIMILAR,
SOUNDEX.
♦ Conditions using the LIKE keyword.
♦ Queries that use index lookups on character data.
The database also uses collations to identify valid or unique identifiers
(column names and so on).
♦ Character set translation You can set up Adaptive Server Anywhere
to convert data between the character set encoding on your server and
client systems, thus maintaining the integrity of your data even in mixed
character set environments.
Character set translation is provided between client and server, and also
by the ODBC driver. The Adaptive Server Anywhere ODBC driver
provides OEM to ANSI character set translation and Unicode support.
Using the default collation


If you use the default actions when creating a database, the Database
Creation utility infers a collation from the character set used by the operating
system on the machine at which you create the database.
$ For information on how to find the default collation in your
environment, see "Finding the default collation" on page 319.
If all the machines in your environment share the same character set, then
this choice of collation ensures that all the character data in the database and
in client applications is represented in the same manner. As long as the
collation provides a proper character set and sort order for your data, using
this default setting is a simple way of ensuring that characters are represented
consistently throughout the system.
If it is not possible to set up your system in this default manner, you need to
decide which collation to use in your database, and whether to use character
set translation to ensure that data is exchanged consistently between the
pieces of your database system. This chapter provides the information you
need to make and implement these decisions.

Character set questions and answers


The following table identifies where you can find answers to questions.
♦ How do I set up my computing environment to treat character sets
properly? See "Configuring your character set environment" on page 319.
♦ How do I decide which collation to use for my database? See
"Understanding collations" on page 303.
♦ How are characters represented in software, and Adaptive Server
Anywhere in particular? See "Understanding character sets in software"
on page 291.
♦ What collations does Adaptive Server Anywhere provide? See "Supplied
collations" on page 303.
♦ How do I ensure that error and informational messages sent from the
database server to client applications are sent in the proper language
and character set for my application? See "Character translation for
database messages" on page 311.
♦ I have a different character set on client machines from that in use
in the database. How can I get characters to be exchanged properly
between client and server? See "Starting a database server using
character set translation" on page 323.
♦ What character sets can I use for connection strings? See "Connection
strings and character sets" on page 312.
♦ How do I create a collation that is different from the supplied ones?
See "Creating a database with a custom collation" on page 326.
♦ How do I change the collation sequence of an existing database? See
"Changing a database from one collation to another" on page 327.
♦ How do I create a database for Windows CE? See "Creating databases
for Windows CE" on page 305.

290
Chapter 11 International Languages and Character Sets

Understanding character sets in software


This section provides general information about software issues related to
international languages and character sets.

Pieces in the character set puzzle


There are several distinct aspects to character storage and display by
computer software:
♦ Each piece of software works with a character set. A character set is a
set of symbols, including letters, digits, spaces and other symbols.
♦ To handle these characters, each piece of software employs a character
set encoding, in which each character is mapped onto one or more bytes
of information, typically represented as hexadecimal numbers. This
encoding is also called a code page.
♦ Database servers, which sort characters (for example, list names
alphabetically), use a collation. A collation is a combination of a
character encoding (a map between characters and hexadecimal
numbers) and a sort order for the characters. There may be more than
one sort order for each character set; for example, a case-sensitive order
and a case-insensitive order, or two languages may sort characters in a
different order.
♦ Characters are printed or displayed on a screen using a font, which is a
mapping between characters in the character set and their appearance.
Fonts are handled by the operating system.
♦ Operating systems also use a keyboard mapping to map keys or key
combinations on the keyboard to characters in the character set.

Language issues in client/server computing


Database users working at client applications may see or access strings from
the following sources:
♦ Data in the database Strings and other text data are stored in the
database. The database server processes these strings when responding
to requests.


For example, the database server may be asked to supply all the last
names beginning with a letter ordered less than N in a table. This request
requires string comparisons to be carried out, and assumes a character
set ordering.
The database server receives strings from client applications as streams
of bytes. It associates these bytes with characters according to the
database character set. If the data is held in an indexed column, the
index is sorted according to the sort order of the collation.
♦ Database server software messages Applications can cause
database errors to be generated. For example, an application may submit
a query that references a column that does not exist. In this case, the
database server returns a warning or error message. This message is held
in a language resource library, which is a DLL or shared library called
by Adaptive Server Anywhere.
♦ Client application The client application interface displays text, and
internally the client application may process text.
♦ Client software messages The client library uses the same language
library as the database server to provide messages to the client
application.
♦ Operating system The client operating system has text displayed on
its interface, and may also process text.
For a satisfactory working environment, all these sources of text must work
together. Loosely speaking, they must all be working in the user’s language
and/or character set.

Code pages
Many languages have few enough characters to be represented in a single-
byte character set. In such a character set, each character is represented by a
single byte: a two-digit hexadecimal number.
At most, 256 characters can be represented in a single byte. No single-byte
character set can hold all of the characters used internationally, including
accented characters. This problem was addressed by the development of a set
of code pages, each of which describes a set of characters appropriate for
one or more national languages. For example, code page 869 contains the
Greek character set, and code page 850 contains an international character
set suitable for representing many characters in a variety of languages.


Upper and lower pages
With few exceptions, characters 0 to 127 are the same for all the single-byte code pages. The mapping for this range of characters is called the ASCII character set. It includes the English language alphabet in upper and lower case, as well as common punctuation symbols and the digits. This range is often called the seven-bit range (because only seven bits are needed to represent the numbers up to 127) or the lower page. The characters from 128 to 255 are called extended characters, or upper code-page characters, and vary from code page to code page.
Problems with code page compatibility are rare if the only characters used
are from the English alphabet, as these are represented in the ASCII portion
of each code page (0 to 127). However, if other characters are used, as is
generally the case in any non-English environment, there can be problems if
the database and the application use different code pages.
Example
Suppose a database holding French language strings uses code page 850, and the client operating system uses code page 437. The character À (upper case A grave) is held in the database as character \xB7 (decimal value 183). In code page 437, character \xB7 is a graphical character. When the client application receives this byte and the operating system displays it on the screen, the user sees a graphical character instead of an A grave.
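This mismatch can be reproduced directly with Python's codec machinery — an illustrative sketch only; Adaptive Server Anywhere performs the equivalent mapping internally:

```python
# The single byte 0xB7 means different characters under different code pages.
raw = "À".encode("cp850")          # À is stored as the byte 0xB7 in code page 850
assert raw == b"\xb7"
assert raw.decode("cp437") == "╖"  # the same byte is a line-drawing character in 437
```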

ANSI and OEM code pages in Windows and Windows NT


For PC users, the issue is complicated because there are at least two code
pages in use on most PCs. MS-DOS, as well as character-mode applications
(those using the console or "DOS box") in Windows 95/98 and Windows
NT, use code pages taken from the IBM set. These are called OEM code
pages (Original Equipment Manufacturer) for historical reasons.
Windows operating systems do not require the line drawing characters that
were held in the extended characters of the OEM code pages, so they use a
different set of code pages. These pages are based on the ANSI standard and
are therefore commonly called ANSI code pages.
Adaptive Server Anywhere supports collations based on both OEM and
ANSI code pages.
Example
Consider the following situation:
♦ A PC is running the Windows 95 operating system with ANSI code page
1252.
♦ The code page for character-mode applications is OEM code page 437.
♦ Text is held in a database created using the collation corresponding to
OEM code page 850.


An upper case A grave in the database is stored as character 183. This value
is displayed as a graphical character in a character-mode application. The
same character is displayed as a dot in a Windows application.
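The two interpretations of byte 183 can be checked with Python's codecs (illustrative only; the product does not use Python):

```python
raw = bytes([183])
assert raw.decode("cp850") == "À"    # OEM code page 850: upper case A grave
assert raw.decode("cp1252") == "·"   # ANSI code page 1252: a middle dot
```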
$ For information about choosing a single-byte collation for your
database, see "Understanding collations" on page 303.

Multibyte character sets


Some languages, such as Japanese and Chinese, have many more than 256
characters. These characters cannot all be represented using a single byte, but
can be represented in multibyte character sets. In addition, some character
sets use the much larger number of characters available in a multibyte
representation to represent characters from many languages in a single, more
comprehensive, character set.
Multibyte character sets are of two types. Some are variable width, in which
some characters are single-byte characters, others are double-byte, and so on.
Other sets are fixed width, in which all characters in the set have the same
number of bytes. Adaptive Server Anywhere supports only variable-width
character sets.
Example
Characters in the Shift-JIS character set are either one or two bytes in length. If the value of the first byte is in the range of
hexadecimal values from \x81 to \x9F or from \xE0 to \xEF (decimal values
129-159 or 224-239) the character is a two-byte character and the subsequent
byte (called a follow byte) completes the character. If the first byte is outside
this range, the character is a single-byte character and the next byte is the
first byte of the following character.
♦ The properties of any Shift-JIS character can also be read from its first
byte. Characters with a first byte in the range \x09 to \x0D, or \x20 are
space characters.
♦ Characters in the ranges \x41 to \x5A, \x61 to \x7A, \x81 to \x9F or \xE0
to \xEF are considered to be alphabetic (letters).
♦ Characters in the range \x30 to \x39 are digits.

$ For information on the multibyte character sets, see "Using multibyte collations" on page 310.
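The first-byte rules above can be sketched as a small classifier. This is a hypothetical illustration in Python, not part of the product:

```python
def classify_sjis_first_byte(b: int) -> dict:
    """Classify a Shift-JIS first byte using the ranges described above."""
    two_byte = 0x81 <= b <= 0x9F or 0xE0 <= b <= 0xEF
    return {
        "two_byte": two_byte,                       # a follow byte completes it
        "space": 0x09 <= b <= 0x0D or b == 0x20,
        "alpha": 0x41 <= b <= 0x5A or 0x61 <= b <= 0x7A or two_byte,
        "digit": 0x30 <= b <= 0x39,
    }

assert classify_sjis_first_byte(0x82)["two_byte"]   # lead byte of a two-byte character
assert classify_sjis_first_byte(0x41)["alpha"]      # 'A' is a single-byte letter
assert not classify_sjis_first_byte(0x41)["two_byte"]
```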

Sorting characters using collations


The database collation sequence includes the notion of alphabetic ordering of
letters, and extends it to include all characters in the character set, including
digits and space characters.

Associating more than one character with each sort position
More than one character can be associated with each sort position. This is useful if you wish, for example, to treat an accented character the same as the character without an accent.
Two characters with the same sort position are considered identical in all
ways by the database. Therefore, if a collation assigned the characters a and
e to the same sort position, then a query with the following search condition:
WHERE col1 = 'want'
is satisfied by a row for which col1 contains the entry went.
At each sort position, lowercase and uppercase forms of a character can be
indicated. For case-sensitive databases, the lowercase and uppercase
characters are not treated as equivalent. For case-insensitive databases, the
lowercase and uppercase versions of the character are considered equivalent.
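As an illustrative sketch (not the server's implementation), a collation can be modeled as a map from characters to sort positions; two strings compare equal when their sort positions match:

```python
# Hypothetical collation: 'a' and 'e' are assigned the same sort position.
sort_pos = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
sort_pos["e"] = sort_pos["a"]

def collation_equal(s: str, t: str) -> bool:
    """Two strings are identical to the database if their positions match."""
    return len(s) == len(t) and all(sort_pos[a] == sort_pos[b] for a, b in zip(s, t))

assert collation_equal("want", "went")      # equal under this collation
assert not collation_equal("want", "wont")  # 'o' has its own sort position
```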

Tip
Any code that selects a default collation for a German system should
select 1252LATIN1, not 1252DEU. 1252DEU differentiates between
characters with and without an umlaut, while 1252LATIN1 does not.
1252LATIN1 considers Muller and Müller equal, but 1252DEU does not
consider them equal. Because 1252DEU views characters with umlauts as
separate characters, it has the following alphabetic ordering: ob, öa.

First-byte collation orderings for multibyte character sets


A sorting order for characters in a multibyte character set can be specified
only for the first byte. Characters that have the same first byte are sorted
according to the hexadecimal value of the following bytes.
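This two-level rule can be sketched as a sort key; the first-byte ranking table here is a stand-in identity mapping, not a real collation:

```python
# Collation order applies to the first byte only; follow bytes compare by
# raw hexadecimal value.
first_byte_rank = {b: b for b in range(256)}  # hypothetical per-collation table

def multibyte_sort_key(raw: bytes):
    return (first_byte_rank[raw[0]], raw[1:]) if raw else (-1, b"")

strings = [b"\x82\xa0", b"A", b"\x82\x9f"]
assert sorted(strings, key=multibyte_sort_key) == [b"A", b"\x82\x9f", b"\x82\xa0"]
```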

International aspects of case sensitivity


Adaptive Server Anywhere is always case preserving and case insensitive
for identifiers, such as table names and column names. This means that the
names are stored in the case in which they are created, but any access to the
identifiers is done in a case-insensitive manner.
For example, the names of the system tables are held in upper case
(SYSDOMAIN, SYSTABLE, and so on), but access is case insensitive, so
that the two following statements are equivalent:
SELECT *
FROM systable

SELECT *
FROM SYSTABLE
The equivalence of upper and lower case characters is enforced in the
collation. There are some collations where particular care may be needed
when assuming case insensitivity of identifiers.
Example In the Turkish 857TRK collation, the lower case i does not have the
character I as its upper case equivalent. Therefore, despite the case
insensitivity of identifiers, the following two statements are not equivalent in
this collation:
SELECT *
FROM sysdomain
SELECT *
FROM SYSDOMAIN
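The pitfall can be illustrated with an explicit case map — a hypothetical sketch; the actual 857TRK tables differ in detail:

```python
# Under a Turkish-style case map, upper('i') is 'İ' (dotted capital I), so a
# naive case-insensitive match against SYSDOMAIN fails.
turkish_upper = str.maketrans({"i": "\u0130", "\u0131": "I"})  # i→İ, ı→I

def fold(identifier: str, case_map=None) -> str:
    """Upper-case an identifier, optionally through a collation's case map."""
    return (identifier.translate(case_map) if case_map else identifier).upper()

assert fold("sysdomain") == "SYSDOMAIN"                  # default folding matches
assert fold("sysdomain", turkish_upper) != "SYSDOMAIN"   # Turkish folding does not
```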


Understanding locales
Both the database server and the client library recognize their language and
character set environment using a locale definition.

Introduction to locales
The application locale, or client locale, is used by the client library when
making requests to the database server, to determine the character set in
which results should be returned. If character-set translation is enabled, the
database server compares its own locale with the application locale to
determine whether character set translation is needed. Different databases on
a server may have different locale definitions.
$ For information on enabling character-set translation, see "Starting a
database server using character set translation" on page 323.
The locale consists of the following components:
♦ Language The language is a two-character string using the ISO-639
standard values: DE for German, FR for French, and so on. Both the
database server and the client have language values for their locale.
The database server uses the locale language to determine the following
behavior:
♦ Which language library to load.
♦ The language is used together with the character set to determine
which collation to use when creating databases, if no collation is
explicitly specified.
The client library uses the locale language to determine the following
behavior:
♦ Which language library to load.
♦ Which language to request from the database.
$ For more information, see "Understanding the locale language" on
page 298.
♦ Character set The character set is the code page in use. The client and
server both have character set values, and they may differ. If they differ,
character set translation may be required to enable interoperability.
For machines that use both OEM and ANSI code pages, the ANSI code
page is the value used here.


$ For more information, see "Understanding the locale character set" on page 299.
♦ Collation label The collation label is the Adaptive Server Anywhere
collation. The client side does not use a collation label. Different
databases on a database server may have different collation labels.
$ For more information, see "Understanding the locale collation
label" on page 302.

Understanding the locale language


The locale language is an indicator of the language being used by the user of
the client application, or expected to be used by users of the database server.
$ For how to find locale settings, see "Determining locale information"
on page 320.
The client library or database server determines the language component of
the locale as follows:
1 It checks the SQLLOCALE environment variable, if it exists.
$ For more information, see "Setting the SQLLOCALE environment
variable" on page 302.
2 On Windows and Windows NT, it checks the Adaptive Server
Anywhere language registry entry, as described in "Registry settings on
installation" on page 11 of the book ASA Reference.
3 On other operating systems, or if the registry setting is not present, it
checks the operating system language setting.

Language label values
The following table shows the valid language label values, together with the equivalent ISO 639 labels:

Language label    Alternative label    ISO 639 language code
chinese           simpchin             ZH
danish            N/A                  DA
french            N/A                  FR
german            N/A                  DE
italian           N/A                  IT
japanese          N/A                  JA
korean            N/A                  KO
norwegian         norweg               NO
polish            N/A                  PL
portuguese        portugue             PT
russian           N/A                  RU
spanish           N/A                  ES
swedish           N/A                  SV
tchinese          tradchin             TW
ukrainian         N/A                  UK
us_english        english              EN

Understanding the locale character set


Both application and server locale definitions have a character set. The
application uses its character set when requesting character strings from the
server. If character set translation is enabled, the database server compares its
character set with that of the application to determine whether character set
translation is needed.
The locale character set and language are used to determine which collation
to use when creating a database if none is explicitly specified.
$ For how to find locale settings, see "Determining locale information"
on page 320.
The client library or database server determines the character set as follows:
1 If the connection string specifies a character set, it is used.
$ For more information, see "CharSet connection parameter" on
page 50 of the book ASA Reference.
2 ODBC and Embedded SQL applications check the SQLLOCALE
environment variable, if it exists.
$ For more information, see "Setting the SQLLOCALE environment
variable" on page 302.
Open Client applications check the locales.dat file in the Sybase locales
directory.
3 Character set information from the operating system is used to determine
the locale:
♦ On Windows operating systems, use the GetACP system call. This
returns the ANSI character set, not the OEM character set.

♦ On UNIX, default to ISO8859-1.
♦ On other platforms, use code page 850.
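The precedence just described can be sketched as follows. This is illustrative Python; the parameter and function names are assumptions, not the client library's API:

```python
import os

def determine_character_set(conn_params: dict) -> str:
    """Sketch of the lookup order described above."""
    # 1. A character set given in the connection string wins.
    if conn_params.get("CharSet"):
        return conn_params["CharSet"]
    # 2. Otherwise use the CS component of SQLLOCALE, if set.
    for part in os.environ.get("SQLLOCALE", "").split(";"):
        if part.startswith("CS="):
            return part[3:]
    # 3. Otherwise fall back to an operating-system default:
    #    GetACP() on Windows, ISO8859-1 on UNIX, code page 850 elsewhere.
    return "iso_1"

assert determine_character_set({"CharSet": "sjis"}) == "sjis"
```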
Character set labels
The following table shows the valid character set label values, together with the equivalent IANA labels and a description:

Character set label   IANA label          Description
iso_1                 iso_8859-1:1987     ISO 8859-1 Latin-1
cp850                 <N/A>               IBM CP850 - European code set
cp437                 <N/A>               IBM CP437 - U.S. code set
roman8                hp-roman8           HP Roman-8
mac                   macintosh           Standard Mac coding
sjis                  shift_jis           Shift JIS (no extensions)
eucjis                euc-jp              Sun EUC JIS encoding
deckanji              <N/A>               DEC Unix JIS encoding
euccns                <N/A>               EUC CNS encoding: Traditional Chinese with extensions
eucgb                 <N/A>               EUC GB encoding: Simplified Chinese
cp932                 windows-31j         Microsoft CP932 = Win31J-DBCS
iso88592              iso_8859-2:1987     ISO 8859-2 Latin-2 Eastern Europe
iso88595              iso_8859-5:1988     ISO 8859-5 Latin/Cyrillic
iso88596              iso_8859-6:1987     ISO 8859-6 Latin/Arabic
iso88597              iso_8859-7:1987     ISO 8859-7 Latin/Greek
iso88598              iso_8859-8:1988     ISO 8859-8 Latin/Hebrew
iso88599              iso_8859-9:1989     ISO 8859-9 Latin-5 Turkish
iso15                 <N/A>               ISO 8859-15 Latin1 with Euro, etc.
mac_cyr               <N/A>               Macintosh Cyrillic
mac_ee                <N/A>               Macintosh Eastern European
macgrk2               <N/A>               Macintosh Greek
macturk               <N/A>               Macintosh Turkish
greek8                <N/A>               HP Greek-8
turkish8              <N/A>               HP Turkish-8
koi8                  <N/A>               KOI-8 Cyrillic
tis620                <N/A>               TIS-620 Thai standard
big5                  <N/A>               Traditional Chinese (cf. CP950)
eucksc                <N/A>               EUC KSC Korean encoding (cf. CP949)
cp852                 <N/A>               PC Eastern Europe
cp855                 <N/A>               IBM PC Cyrillic
cp856                 <N/A>               Alternate Hebrew
cp857                 <N/A>               IBM PC Turkish
cp860                 <N/A>               PC Portuguese
cp861                 <N/A>               PC Icelandic
cp862                 <N/A>               PC Hebrew
cp863                 <N/A>               IBM PC Canadian French code page
cp864                 <N/A>               PC Arabic
cp865                 <N/A>               PC Nordic
cp866                 <N/A>               PC Russian
cp869                 <N/A>               IBM PC Greek
cp874                 <N/A>               Microsoft Thai SB code page
cp936                 <N/A>               Simplified Chinese
cp949                 <N/A>               Korean
cp950                 <N/A>               PC (MS) Traditional Chinese
cp1250                <N/A>               MS Windows 3.1 Eastern European
cp1251                <N/A>               MS Windows 3.1 Cyrillic
cp1252                <N/A>               MS Windows 3.1 US (ANSI)
cp1253                <N/A>               MS Windows 3.1 Greek
cp1254                <N/A>               MS Windows 3.1 Turkish
cp1255                <N/A>               MS Windows Hebrew
cp1256                <N/A>               MS Windows Arabic
cp1257                <N/A>               MS Windows Baltic
cp1258                <N/A>               MS Windows Vietnamese
utf8                  utf-8               UTF-8 treated as a character set


Understanding the locale collation label


Each database has its own collation. The collation label taken from the locale
definition is used by the database server to determine which code page to use
when initializing a database.
$ For how to find locale settings, see "Determining locale information"
on page 320.
The database server determines the collation label as follows:
1 It checks the SQLLOCALE environment variable, if it exists.
$ For more information, see "Setting the SQLLOCALE environment
variable" on page 302.
2 It uses an internal table to find a collation label corresponding to the
language and character set.
Collation label values
The collation label is a label for one of the supplied Adaptive Server Anywhere collations, as listed in "Understanding collations" on page 303.

Setting the SQLLOCALE environment variable


The SQLLOCALE environment variable is a single string that consists of
three semi-colon-separated assignments. It has the following form:
CS=cslabel;LANG=langlabel;LABEL=colabel
where cslabel, langlabel, and colabel are labels as defined in the previous
sections.
$ For information on how to set environment variables, see "Setting
environment variables" on page 6 of the book ASA Reference.
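A SQLLOCALE value can be pulled apart with a few lines of Python — an illustrative parser, not a supplied tool:

```python
def parse_sqllocale(value: str) -> dict:
    """Split 'CS=cslabel;LANG=langlabel;LABEL=colabel' into its components."""
    parts = {}
    for assignment in value.split(";"):
        key, sep, label = assignment.partition("=")
        if sep:
            parts[key.strip().upper()] = label.strip()
    return parts

assert parse_sqllocale("CS=sjis;LANG=japanese;LABEL=932JPN") == {
    "CS": "sjis", "LANG": "japanese", "LABEL": "932JPN"
}
```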


Understanding collations
This section describes the supplied collations, and provides suggestions as to
which collations to use under certain circumstances.
$ For information on how to create a database with a specific collation,
see "Creating a database with a named collation" on page 322. For
information on changing a database from one collation to another, see
"Changing a database from one collation to another" on page 327.

Supplied collations
The following collations are supplied with Adaptive Server Anywhere. You
can obtain this list by entering the following command at a system command
line:
dbinit -l

Collation label   Type        Description
437LATIN1         OEM         Code Page 437, Latin 1, Western
437ESP            OEM         Code Page 437, Spanish
437SVE            OEM         Code Page 437, Swedish/Finnish
819CYR            ANSI        Code Page 819, Cyrillic
819DAN            ANSI        Code Page 819, Danish
819ELL            ANSI        Code Page 819, Greek
819ESP            ANSI        Code Page 819, Spanish
819ISL            ANSI        Code Page 819, Icelandic
819LATIN1         ANSI        Code Page 819, Latin 1, Western
819LATIN2         ANSI        Code Page 819, Latin 2, Central/Eastern European
819NOR            ANSI        Code Page 819, Norwegian
819RUS            ANSI        Code Page 819, Russian
819SVE            ANSI        Code Page 819, Swedish/Finnish
819TRK            ANSI        Code Page 819, Turkish
850CYR            OEM         Code Page 850, Cyrillic, Western
850DAN            OEM         Code Page 850, Danish
850ELL            OEM         Code Page 850, Greek
850ESP            OEM         Code Page 850, Spanish
850ISL            OEM         Code Page 850, Icelandic
850LATIN1         OEM         Code Page 850, Latin 1
850LATIN2         OEM         Code Page 850, Latin 2, Central/Eastern European
850NOR            OEM         Code Page 850, Norwegian
850RUS            OEM         Code Page 850, Russian
850SVE            OEM         Code Page 850, Swedish/Finnish
850TRK            OEM         Code Page 850, Turkish
852LATIN2         OEM         Code Page 852, Latin 2, Central/Eastern European
852CYR            OEM         Code Page 852, Cyrillic
852POL            OEM         Code Page 852, Polish
855CYR            OEM         Code Page 855, Cyrillic
856HEB            OEM         Code Page 856, Hebrew
857TRK            OEM         Code Page 857, Turkish
860LATIN1         OEM         Code Page 860, Latin 1, Western
861ISL            OEM         Code Page 861, Icelandic
862HEB            OEM         Code Page 862, Hebrew
863LATIN1         OEM         Code Page 863, Latin 1, Western
865NOR            OEM         Code Page 865, Norwegian
866RUS            OEM         Code Page 866, Russian
869ELL            OEM         Code Page 869, Greek
920TRK            ANSI        Code Page 920, Turkish, ISO-8859-9
932JPN            Multibyte   Code Page 932, Japanese Shift-JIS encoding
936ZHO            Multibyte   Code Page 936, Simplified Chinese, GB 2312-80 8-bit encoding
949KOR            Multibyte   Code Page 949, Korean KS C 5601-1987 encoding, Wansung
950TWN            Multibyte   Code Page 950, Traditional Chinese, Big 5 Encoding
1250LATIN2        ANSI        Code Page 1250, Windows Latin 2, Central/Eastern European
1250POL           ANSI        Code Page 1250, Windows Latin 2, Polish
1251CYR           ANSI        Code Page 1251, Windows Cyrillic
1252DEU           ANSI        Code Page 1252, Windows Specialty German, Umlaut chars not equal
1252LATIN1        ANSI        Code Page 1252, Windows Latin 1, Western
1254TRK           ANSI        Code Page 1254, Windows Latin 1, Turkish, ISO 8859-9 with extensions
SJIS              Multibyte   Japanese Shift-JIS Encoding
SJIS2             Multibyte   Japanese Shift-JIS Encoding, Sybase Adaptive Server Enterprise-compatible
EUC_JAPAN         Multibyte   Japanese EUC JIS X 0208-1990 and JIS X 0212-1990 Encoding
EUC_CHINA         Multibyte   Simplified Chinese GB 2312-80 Encoding
EUC_TAIWAN        Multibyte   Taiwanese Big 5 Encoding
EUC_KOREA         Multibyte   Korean KS C 5601-1992 Encoding, Johab, Code Page 1361
ISO_1             ANSI        ISO8859-1, Latin 1, Western
ISO_BINENG        ANSI        Binary ordering, English ISO/ASCII 7-bit letter case mappings
ISO1LATIN1        ANSI        ISO8859-1, ISO Latin 1, Western, Latin 1 Ordering
ISO9LATIN1        ANSI        ISO8859-15, ISO Latin 9, Western, Latin 1 Ordering
WIN_LATIN1        ANSI        Code Page 1252, Windows Latin 1, Western, ISO8859-1 with extensions
WIN_LATIN5        ANSI        Code Page 1254, Windows Latin 5, Turkish, ISO8859-9 with extensions
UTF8              Multibyte   UCS-4 Transformation Format

Creating databases for Windows CE


Windows CE is a Unicode-based operating system. The Adaptive Server
Anywhere ODBC driver supports either ASCII (8-bit) or Unicode strings,
and carries out character set translation as needed. If developing Embedded
SQL applications, you can use Windows API functions to get the Unicode
versions of strings from the database.


When creating databases for Windows CE, you should use a collation based
on the same single- or multi-byte character set that Windows would use for
the language of interest. For example, if you are using English, French, or
German, use the 1252LATIN1 collation. If you are using Japanese, use the
SJIS2 collation, and if you are using Korean, use the 949KOR collation.

ANSI or OEM?
Adaptive Server Anywhere collations are based on code pages that are
designated as either ANSI or OEM. In most cases, use of an ANSI code page
is recommended.
If you choose to use an ANSI code page, you must not use the ODBC
translation driver in the ODBC data source configuration window.
If you choose to use an OEM code page, you must do the following:
♦ Choose a code page that matches the OEM code pages on your users’
client machines.
♦ When setting up data sources for Windows-based ODBC applications,
do choose the Adaptive Server Anywhere translation driver in the
ODBC data source configuration.
The translation driver converts between the OEM code page on your
machine and the ANSI code page used by Windows. If the database
collation is a different OEM code page than the one on your machine, an
incorrect translation will be applied.

Notes on ANSI collations


The ISO_1 collation
ISO_1 is provided for compatibility with the Adaptive Server Enterprise default ISO_1 collation. The differences are as follows:
♦ The lower case letter sharp s (\xDF) sorts with the lower case s in
Adaptive Server Anywhere, but after ss in Adaptive Server Enterprise.
♦ The ligatures corresponding to AE and ae (\xC6 and \xE6) sort after A
and a respectively in Adaptive Server Anywhere, but after AE and ae in
Adaptive Server Enterprise.

The 1252LATIN1 collation
This collation is the same as WIN_LATIN1 (see below), but includes the euro currency symbol and several other characters (Z-with-caron and z-with-caron). For single-byte Windows operating systems, this is the recommended collation in most cases.


Windows NT Service Pack 4 changes the default character set in many locales to a new 1252 character set on which 1252LATIN1 is based. If you have this service pack, you should use this collation instead of WIN_LATIN1.
The euro symbol sorts with the other currency symbols.
The WIN_LATIN1 collation
WIN_LATIN1 is similar to ISO_1, except that Windows has defined characters in places where ISO_1 says "undefined", specifically the range \x80-\xBF. The differences from Adaptive Server Enterprise's ISO_1 are as follows:
♦ The upper case and lower case Icelandic Eth (\xD0 and \xF0) is sorted
with D in Adaptive Server Anywhere, but after all other letters in
Adaptive Server Enterprise.
♦ The upper case and lower case Icelandic Thorn (\xDE and \xFE) is sorted
with T in Adaptive Server Anywhere, but after all other letters in
Adaptive Server Enterprise.
♦ The upper-case Y-diaeresis (\x9F) is sorted with Y in Adaptive Server
Anywhere, and case converts with lowercase Y-diaeresis (\xFF). In
Adaptive Server Enterprise it is undefined and sorts after \x9E.
♦ The lower case letter sharp s (\xDF) sorts with the lower case s in
Adaptive Server Anywhere, but after ss in Adaptive Server Enterprise.
♦ Ligatures are two characters combined into a single character. The
ligatures corresponding to AE and ae (\xC6 and \xE6) sort after A and a
respectively in Adaptive Server Anywhere, but after AE and ae in
Adaptive Server Enterprise.
♦ The ligatures corresponding to OE and oe (\x8C and \x9C) sort with O
in Adaptive Server Anywhere, but after OE and oe in Adaptive Server
Enterprise.
♦ The upper case and lower case letter S with caron (\x8A and \x9A) sorts
with S in Adaptive Server Anywhere, but is undefined in Adaptive
Server Enterprise, sorting after \x89 and \x99.

The ISO1LATIN1 collation
This collation is the same as ISO_1, but with sorting for values in the range A0-BF. For compatibility with Adaptive Server Enterprise, the ISO_1 collation has no characters for 0xA0-0xBF. However, the ISO Latin 1 character set on which it is based does have characters in these positions. The ISO1LATIN1 collation reflects the characters in these positions.
If you are not concerned with Adaptive Server Enterprise compatibility, ISO1LATIN1 is generally recommended instead of ISO_1.
The ISO9LATIN1 collation
This collation is the same as ISO1LATIN1, but it includes the euro currency symbol and the other new characters included in the 1252LATIN1 collation.
If your machine uses the ISO Latin 9 character set, then you should use this collation.

Notes on OEM collations


The following table shows the built-in collations that correspond to OEM
code pages. The table and the corresponding collations were derived from
several manuals from IBM concerning National Language Support, subject
to the restrictions mentioned above. (This table represents the best
information available at the time of writing.)

Country          Language          Primary Code Page   Primary Collation   Secondary Code Page   Secondary Collation
Argentina        Spanish           850                 850ESP              437                   437ESP
Australia        English           437                 437LATIN1           850                   850LATIN1
Austria          German            850                 850LATIN1           437                   437LATIN1
Belgium          Belgian Dutch     850                 850LATIN1           437                   437LATIN1
Belgium          Belgian French    850                 850LATIN1           437                   437LATIN1
Belarus          Belarussian       855                 855CYR
Brazil           Portuguese        850                 850LATIN1           437                   437LATIN1
Bulgaria         Bulgarian         855                 855CYR              850                   850CYR
Canada           Cdn French        850                 850LATIN1           863                   863LATIN1
Canada           English           437                 437LATIN1           850                   850LATIN1
Croatia          Croatian          852                 852LATIN2           850                   850LATIN2
Czech Republic   Czech             852                 852LATIN2           850                   850LATIN2
Denmark          Danish            850                 850DAN
Finland          Finnish           850                 850SVE              437                   437SVE
France           French            850                 850LATIN1           437                   437LATIN1
Germany          German            850                 850LATIN1           437                   437LATIN1
Greece           Greek             869                 869ELL              850                   850ELL
Hungary          Hungarian         852                 852LATIN2           850                   850LATIN2
Iceland          Icelandic         850                 850ISL              861                   861ISL
Ireland          English           850                 850LATIN1           437                   437LATIN1
Israel           Hebrew            862                 862HEB              856                   856HEB
Italy            Italian           850                 850LATIN1           437                   437LATIN1
Mexico           Spanish           850                 850ESP              437                   437ESP
Netherlands      Dutch             850                 850LATIN1           437                   437LATIN1
New Zealand      English           437                 437LATIN1           850                   850LATIN1
Norway           Norwegian         865                 865NOR              850                   850NOR
Peru             Spanish           850                 850ESP              437                   437ESP
Poland           Polish            852                 852LATIN2           850                   850LATIN2
Portugal         Portuguese        850                 850LATIN1           860                   860LATIN1
Romania          Romanian          852                 852LATIN2           850                   850LATIN2
Russia           Russian           866                 866RUS              850                   850RUS
S. Africa        Afrikaans         437                 437LATIN1           850                   850LATIN1
S. Africa        English           437                 437LATIN1           850                   850LATIN1
Slovak Republic  Slovakian         852                 852LATIN2           850                   850LATIN2
Slovenia         Slovenian         852                 852LATIN2           850                   850LATIN2
Spain            Spanish           850                 850ESP              437                   437ESP
Sweden           Swedish           850                 850SVE              437                   437SVE
Switzerland      French            850                 850LATIN1           437                   437LATIN1
Switzerland      German            850                 850LATIN1           437                   437LATIN1
Switzerland      Italian           850                 850LATIN1           437                   437LATIN1
Turkey           Turkish           857                 857TRK              850                   850TRK
UK               English           850                 850LATIN1           437                   437LATIN1
USA              English           437                 437LATIN1           850                   850LATIN1
Venezuela        Spanish           850                 850ESP              437                   437ESP
Yugoslavia       Macedonian        852                 852LATIN2           850                   850LATIN2
Yugoslavia       Serbian Cyrillic  855                 855CYR              852                   852CYR
Yugoslavia       Serbian Latin     852                 852LATIN2           850                   850LATIN2


Using multibyte collations


This section describes how multibyte character sets are handled. The
description applies to the supported collations and to any multibyte custom
collations you may create.
Adaptive Server Anywhere provides collations using several multibyte
character sets.
$ For a complete listing, see "Understanding collations" on page 303.
Adaptive Server Anywhere supports variable width character sets. In these
sets, some characters are represented by one byte, and some by more than
one, to a maximum of four bytes. The value of the first byte in any character
indicates the number of bytes used for that character, and also indicates
whether the character is a space character, a digit, or an alphabetic (alpha)
character.
Adaptive Server Anywhere does not support fixed-length multibyte character
sets such as 2-byte Unicode (UCS-2) or 4-byte Unicode (UCS-4).
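As a rough illustration of the lead-byte scheme described above, the following Python sketch walks a variable-width byte string. The lead-byte range used here (0x81-0x9F introducing two-byte characters) is a hypothetical example for illustration; real ranges come from a collation's Encodings section, and this is not Adaptive Server Anywhere's internal implementation.

```python
# Sketch: splitting a variable-width byte string into characters, where
# the value of the first byte of each character indicates its width.
# The lead-byte range below is hypothetical.
def split_characters(data: bytes) -> list:
    """Split a byte string into characters using each character's lead byte."""
    chars = []
    i = 0
    while i < len(data):
        lead = data[i]
        width = 2 if 0x81 <= lead <= 0x9F else 1  # hypothetical lead-byte range
        chars.append(data[i:i + width])
        i += width
    return chars

assert split_characters(b"A\x82\xa0B") == [b"A", b"\x82\xa0", b"B"]
assert split_characters(b"abc") == [b"a", b"b", b"c"]
```

The same walk, with different range tests, extends to characters up to four bytes wide.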

310
Chapter 11 International Languages and Character Sets

Understanding character set translation


Adaptive Server Anywhere can carry out character set translation among
character sets that represent the same characters, but at different positions in
the character set or code page. There needs to be a degree of compatibility
between the character sets for this to be possible. For example, character set
translation is possible between EUC-JIS and Shift-JIS character sets, but not
between EUC-JIS and OEM code page 850.
This section describes how Adaptive Server Anywhere carries out character
set translation. This information is provided for advanced users, such as
those who may be deploying applications or databases in a multi-character-
set environment.

Character translation for database messages


Error and other messages from the database software are held in a language
resource library. Localized versions of this library are provided with
localized versions of Adaptive Server Anywhere.
Client application users may see messages from the database as well as data
from the database. Some database messages, which are strings from the
language library, may include placeholders that are filled by characters from
the database. For example, if you execute a query with a column that does
not exist, the returned error message is:
Column column-name not found
where column-name is filled in from the database.
To present this kind of information to the client application in a consistent
manner, even if the database is in a different character set from the language
library, the database server automatically translates the characters of the
messages so that they match the character set used in the database collation.
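What this translation amounts to can be sketched with Python's standard codecs, which is not how the database server implements it, but shows the effect: the byte representing a character changes while the character itself is preserved.

```python
# Sketch: re-mapping message bytes from one single-byte code page to another.
# 'é' is byte 0x82 in OEM code page 850 but 0xE9 in Windows code page 1252.
def translate(data: bytes, from_cs: str, to_cs: str) -> bytes:
    return data.decode(from_cs).encode(to_cs)

msg_cp850 = b"Column caf\x82 not found"   # 'é' encoded as cp850
msg_cp1252 = translate(msg_cp850, "cp850", "cp1252")
assert msg_cp1252 == b"Column caf\xe9 not found"
```

Translation is only safe when the character exists in both character sets, which is why the procedure below asks you to check the characters of interest.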

v To use character translation for database messages:


♦ Ensure that the collation for your database is compatible with the
character set used on your computer, and with the character set used in
the Adaptive Server Anywhere language resource library. The language
resource library differs among different localized versions of Adaptive
Server Anywhere.
You must check that the characters of interest to you exist in each
character set.


Messages are always translated into the database collation character set,
regardless of whether the -ct command-line option is used.

[Diagram: messages from a Shift-JIS language library are translated to the EUC-JIS database collation character set, and then translated again to the client character set.]

A further character set translation is carried out if the database server -ct
command-line option is used, and if the client character set is different from
that used in the database collation.

Connection strings and character sets


Connection strings present a special case for character set translation. The
connection string is parsed by the client library, in order to locate or start a
database server. This parsing is done with no knowledge of the server
character set or language.
The interface library parses the connection string as follows:
1 It is broken down into its keyword = value components. This can be
done independently of the character set, as long as you do not use the
curly braces {} around CommLinks parameters. Instead, use the
recommended parentheses (). Curly braces are valid follow bytes (bytes
other than the first byte) in some multi-byte character sets.
2 The server is located. The server name is interpreted according to the
character set of the client machine. In the case of Windows operating
systems, the ANSI character set is used. Extended characters can be used
unless they cause character set conversion issues between the client and
server machines.
For maximum compatibility among different machines, you should use
server names built from ASCII characters 33 to 126 (the seven-bit range
excluding control characters and the space character), using no punctuation
characters. Server names are truncated at 40 characters.
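The naming advice above can be expressed as a simple check. The sketch below mirrors the guideline, not the server's actual enforcement, and restricts names to ASCII letters and digits only.

```python
import string

# Sketch: a conservative validity check for server names, following the
# guideline above (printable ASCII, no punctuation, truncation at 40 chars).
def safe_server_name(name: str) -> bool:
    name = name[:40]  # server names are truncated at 40 characters
    allowed = string.ascii_letters + string.digits
    return len(name) > 0 and all(c in allowed for c in name)

assert safe_server_name("myserver01")
assert not safe_server_name("my server")   # space not allowed
assert not safe_server_name("caf\xe9")     # extended character
```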


3 The DatabaseName or DatabaseFile parameter is interpreted in the
database server character set.
4 Once the database is located, the remaining connection parameters are
interpreted according to its character set.
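Step 1 of this sequence can be sketched as follows. This is a deliberately simplified model of the keyword=value split, not the actual interface-library parser (which also handles quoting, parentheses, and nested CommLinks parameters); the point is that the split itself needs no knowledge of the character set.

```python
# Sketch: a character-set-agnostic keyword=value split of a connection string.
def parse_connection_string(s: str) -> dict:
    params = {}
    for part in s.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip().upper()] = value.strip()
    return params

conn = parse_connection_string("UID=dba;PWD=sql;DBN=asademo")
assert conn == {"UID": "dba", "PWD": "sql", "DBN": "asademo"}
```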

Avoiding character-set translation


There is a performance cost associated with character set translation. If you
can set up an environment such that no character set translation is required,
then you do not have to pay this cost, and your setup is simpler to maintain.
If you work with a single-byte character set and are concerned only with
seven-bit ASCII characters (values 0 through 127), then you do not need
character set translation. Even if the code pages are different in the database
and on the client operating system, they are compatible over this range of
characters. Many English-language installations will meet these
requirements.
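The claim that different single-byte code pages agree over seven-bit ASCII can be checked directly; the sketch below does so for code pages 850 (OEM) and 1252 (Windows ANSI) using Python's codecs.

```python
# Sketch: verifying that code pages 850 and 1252 decode bytes 0-127
# to the same characters, so no translation is needed over that range.
same = all(
    bytes([b]).decode("cp850") == bytes([b]).decode("cp1252")
    for b in range(128)
)
assert same
```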
If you do require use of extended characters, there are other steps you may be
able to take:
♦ If the code page on your client machine operating system matches that
used in the database, no character set translation is needed for data in the
database.
For example, in many environments it is appropriate to use the
1252LATIN1 collation in your database, which corresponds to the
Windows NT code page in many single-byte environments.
♦ If you are able to use a version of Adaptive Server Anywhere built for
your language, and if you use the code page on your operating system,
no character set translation is needed for database messages. The
character set used in the Adaptive Server Anywhere message strings is
as follows:

Language Character set


English cp1252
French cp1252
German cp1252
Japanese cp932 (Shift-JIS)

Also, recall that client/server character set translation takes place only if the
database server is started using the -ct command-line switch.


Collation internals
This section describes internal technical details of collations, including the
file format of collation files.
$ This section is of particular use if you want to create a database using a
custom collation. For information on the steps involved, see "Creating a
custom collation" on page 325, and "Creating a database with a custom
collation" on page 326.
You can create a database using a collation different from the supplied
collations. This section describes how to build databases using such a
custom collation.
In building multibyte custom collations, you can specify which ranges of
values for the first byte signify single- and double-byte (or more) characters,
and which specify space, alpha, and digit characters. However, all first bytes
of value less than \x40 must be single-byte characters, and no follow bytes
may have values less than \x40. This restriction is satisfied by all supported
encodings.
Collation files may include the following elements:
♦ Comment lines, which are ignored by the database.
♦ A title line.
♦ A collation sequence section.
♦ An Encodings section.
♦ A Properties section.

Comment lines
In the collation file, spaces are generally ignored. Comment lines start with
either the percent sign (%) or two dashes (--).

The title line


The first non-comment line must be of the form:
Collation label (name)
In this statement:


Descriptions of arguments

Argument    Description
Collation   A required keyword.
label       The collation label, which appears in the system tables as
            SYS.SYSCOLLATION.collation_label and
            SYS.SYSINFO.default_collation. The label must contain no
            more than 10 characters.
name        A descriptive term, used for documentation purposes. The
            name should contain no more than 128 characters.

For example, the Shift-JIS collation file contains the following collation line,
with label SJIS and name (Japanese Shift-JIS Encoding):
Collation SJIS (Japanese Shift-JIS Encoding)

The collation sequence section


After the title line, each non-comment line describes one position in the
collation. The ordering of the lines determines the sort ordering used by the
database, and determines the result of comparisons. Characters on lines
appearing higher in the file (closer to the beginning) sort before characters
that appear later.
The form of each line in the sequence is:
[sort-position] : character [ [, character ] ...]
or
[sort-position] : character [lowercase uppercase]

Descriptions of arguments

Argument        Description
sort-position   Optional. Specifies the position at which the characters
                on that line will sort. Smaller numbers represent a lesser
                value, so will sort closer to the beginning of the sorted
                set. Typically, the sort-position is omitted, and the
                characters sort immediately following the characters from
                the previous sort position.
character       The character whose sort-position is being specified.
lowercase       Optional. Specifies the lowercase equivalent of the
                character. If not specified, the character has no
                lowercase equivalent.
uppercase       Optional. Specifies the uppercase equivalent of the
                character. If not specified, the character has no
                uppercase equivalent.


Multiple characters may appear on one line, separated by commas (,). In
this case, these characters are sorted and compared as if they were the
same character.
Specifying character and sort-position
Each character and sort position is specified in one of the following ways:

Specification   Description
\dnnn           Decimal number, using digits 0-9 (such as \d001)
\xhh            Hexadecimal number, using digits 0-9 and letters a-f or
                A-F (such as \xB4)
’c’             Any character in place of c (such as ’,’)
c               Any character other than quote (’), backslash (\),
                colon (:) or comma (,). These characters must use one of
                the previous forms.
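The four specification forms above can be decoded mechanically. The sketch below is a simplified reading of the format (it handles one token at a time and does not cover every edge case in the real collation-file parser).

```python
# Sketch: decoding a character specification (\dnnn, \xhh, 'c', or c)
# into the byte value it denotes.
def parse_char_spec(token: str) -> int:
    if token.startswith("\\d"):
        return int(token[2:])             # \dnnn: decimal number
    if token.startswith("\\x"):
        return int(token[2:], 16)         # \xhh: hexadecimal number
    if len(token) == 3 and token[0] == token[2] == "'":
        return ord(token[1])              # 'c': quoted character
    return ord(token)                     # c: bare character

assert parse_char_spec("\\d001") == 1
assert parse_char_spec("\\xB4") == 0xB4
assert parse_char_spec("','") == ord(",")
assert parse_char_spec("A") == ord("A")
```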

The following are some sample lines for a collation:


% Sort some special characters at the beginning:
: ’ ’
: _
: \xF2
: \xEE
: \xF0
: -
: ’,’
: ;
: ’:’
: !
% Sort some letters in alphabetical order
: A a A
: a a A
: B b B
: b b B
% Sort some E’s from code page 850,
% including some accented extended characters:
: e e E, \x82 \x82 \x90, \x8A \x8A \xD4
: E e E, \x90 \x82 \x90, \xD4 \x8A \xD4
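The effect of such a file on sorting can be sketched as follows: each line is assigned a sort position, characters on the same line share a position, and strings compare by their positions. The line contents below are invented for illustration and are not a real collation.

```python
# Sketch: ordering strings by a collation sequence. Each inner list is
# one line of the collation file; earlier lines sort first, and characters
# on one line share a sort position (here, case-insensitive letters).
collation_lines = [
    [" "], ["_"], ["-"], [","],            # special characters first
    ["A", "a"], ["B", "b"],                # letters, upper and lower together
    ["e", "E", "\x82", "\x90"],            # plain and accented E together
]
sort_key = {}
for position, chars in enumerate(collation_lines):
    for c in chars:
        sort_key[c] = position

def collate(s: str) -> list:
    return [sort_key[c] for c in s]

assert sorted(["Ba", "ab", "A-"], key=collate) == ["A-", "ab", "Ba"]
assert collate("e") == collate("E")        # same position: compare equal
```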

Other syntax notes
For databases using case-insensitive sorting and comparison (no -c specified
on the dbinit command line), the lowercase and uppercase mappings are
used to find the lowercase and uppercase characters that will be sorted
together.
For multibyte character sets, the first byte of a character is listed in the
collation sequence, and all characters with the same first byte are sorted
together, and ordered according to the value of the following bytes. For
example, the following is part of the Shift-JIS collation file:
: \xfb
: \xfc
: \xfd
In this collation, all characters with first byte \xfc come after all characters
with first byte \xfb and before all characters with first byte \xfd. The
two-byte character \xfc\x01 would be ordered before the two-byte character
\xfc\x02.
Any characters omitted from the collation are added to the end of the
collation. The tool that processes the collation file issues a warning.

The Encodings section


The Encodings section is optional, and follows the collation sequence. It is
not useful for single-byte character sets.
For multibyte character sets, the Encodings section lists which byte values
are lead bytes and which are valid follow bytes.
For example, the Shift-JIS Encodings section is as follows:
Encodings:
[\x00-\x80,\xa0-\xdf,\xf0-\xff]
[\x81-\x9f,\xe0-\xef][\x00-\xff]
The first line following the section title lists valid single-byte characters. The
square brackets enclose a comma-separated list of ranges. Each range is
listed as a hyphen-separated pair of values. In the Shift-JIS collation, values
\x00 to \x80 are valid single-byte characters, but \x81 is not a valid single-
byte character.
The second line following the section title lists valid multibyte characters:
a lead byte from the first bracketed list, followed by a follow byte from the
second. Therefore \x81\x00 is a valid double-byte character, but \xd0\x00
is not, because \xd0 is not a valid lead byte.
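The Shift-JIS Encodings section quoted above can be turned into a validity check. This sketch follows the ranges exactly as printed (single bytes \x00-\x80, \xa0-\xdf, \xf0-\xff; lead bytes \x81-\x9f, \xe0-\xef followed by any byte); it is an illustration of the format, not the server's validator.

```python
# Sketch: validating a byte string against the Shift-JIS Encodings ranges.
def is_single(b: int) -> bool:
    return b <= 0x80 or 0xA0 <= b <= 0xDF or 0xF0 <= b

def is_lead(b: int) -> bool:
    return 0x81 <= b <= 0x9F or 0xE0 <= b <= 0xEF

def valid_sjis_like(data: bytes) -> bool:
    i = 0
    while i < len(data):
        if is_single(data[i]):
            i += 1                          # one single-byte character
        elif is_lead(data[i]) and i + 1 < len(data):
            i += 2                          # lead byte plus follow byte
        else:
            return False                    # lead byte with no follow byte
    return True

assert valid_sjis_like(b"\x81\x00")         # valid double-byte character
assert not valid_sjis_like(b"\x81")         # truncated: lead byte alone
```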

The Properties section


The Properties section is optional, and follows the Encodings section.
If a Properties section is supplied, an Encodings section must be supplied
also.
The Properties section lists values for the first-byte of each character that
represent alphabetic characters, digits, or spaces.
The Shift-JIS Properties section is as follows:
Properties:
space: [\x09-\x0d,\x20]
digit: [\x30-\x39]
alpha: [\x41-\x5a,\x61-\x7a,\x81-\x9f,\xe0-\xef]
This indicates that characters with first bytes \x09 to \x0d, as well as \x20,
are to be treated as space characters, digits are found in the range \x30 to
\x39 inclusive, and alphabetic characters in the four ranges \x41-\x5a, \x61-
\x7a, \x81-\x9f, and \xe0-\xef.
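Classifying a character by the first byte, as the Properties section specifies, can be sketched directly from those ranges. This mirrors the Shift-JIS values quoted above and is an illustration only.

```python
# Sketch: classifying a character's first byte using the Shift-JIS
# Properties ranges (space, digit, alpha) quoted above.
SPACE = set(range(0x09, 0x0E)) | {0x20}
DIGIT = set(range(0x30, 0x3A))
ALPHA = (set(range(0x41, 0x5B)) | set(range(0x61, 0x7B))
         | set(range(0x81, 0xA0)) | set(range(0xE0, 0xF0)))

def classify(first_byte: int) -> str:
    if first_byte in SPACE:
        return "space"
    if first_byte in DIGIT:
        return "digit"
    if first_byte in ALPHA:
        return "alpha"
    return "other"

assert classify(0x20) == "space"
assert classify(0x35) == "digit"
assert classify(0x81) == "alpha"   # lead byte of a multibyte alpha character
```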


International language and character set tasks


This section groups together the tasks associated with international language
and character set issues.

Finding the default collation


If you do not explicitly specify a collation when creating a database, a
default collation is used. The default collation depends on the operating
system you are working on.

v To find the default collation for your machine:


1 Start Interactive SQL. Connect to the sample database.
2 Enter the following command:
SELECT PROPERTY( ’DefaultCollation’ )
The default collation is returned. For more information about this
collation, see "Supplied collations" on page 303.

Configuring your character set environment


This section describes how to set up your computing environment so that
character set issues are handled properly. If you set your locale environments
properly, then you do not need to explicitly choose collations for your
databases, and you do not need to turn on character set translation between
client and server.

v To configure your character set environment:


1 Determine the default locale of each computing platform in your
environment. The default locale is the character set and language of each
computer. On Windows operating systems, the character set is the ANSI
code page.
$ For how to find locale information, see "Determining locale
information" on page 320.
2 Decide whether the locale settings are appropriate for your environment.
$ For more information, see "Understanding collations" on page 303.


3 If the default settings are inappropriate, decide on a character set,
language, and database collation that matches your data and avoids
character set translation.
$ For more information, see "Avoiding character-set translation" on
page 313.
4 Set locales on each of the machines in the environment to these values.
$ For more information, see "Setting locales" on page 321.
5 Create your database using the default collation. If the default collation
does not match your needs, create a database using a named collation.
$ For more information, see "Creating a database with a named
collation" on page 322, and "Changing a database from one collation to
another" on page 327.
When choosing the collation for your database,
♦ Choose a collation that uses a character set and sort order appropriate for
the data in the database. It is often the case that there are several
alternative collations that meet this requirement, including some that are
OEM collations and some that are ANSI collations.
♦ There is a performance cost, as well as extra complexity in system
configuration, when you use character set translation. Choose a collation
that avoids the need for character set translation.
You can avoid character set translation by using a collation sequence in
the database that matches the character set in use on your client machine
operating system. In the case of Windows operating systems on the
client machine, choose the ANSI character set.
$ For information, see "Avoiding character-set translation" on
page 313.

Determining locale information


You can determine locale information using system functions.
$ For a complete list, see "System functions" on page 310 of the book
ASA Reference.

v To determine the locale of a database server:


1 Start Interactive SQL, and connect to a database server.
2 Execute the following statement to determine the database server
character set:


SELECT PROPERTY( ’CharSet’ )


The query returns one of the supported character sets listed in "Character
set labels" on page 300.
3 Execute the following statement to determine the database server
language:
SELECT PROPERTY( ’Language’ )
The query returns one of the supported languages listed in "Language
label values" on page 298.
4 Execute the following statement to determine the database server default
collation:
SELECT PROPERTY( ’DefaultCollation’ )
The query returns one of the collations listed in "Supplied collations" on
page 303.

Notes
To obtain client locale information, connect to a database server running on
your current machine.
To obtain the character set for an individual database, execute the following
statement:
SELECT DB_PROPERTY ( ’CharSet’ )

Setting locales
You can use the default locale on your operating system, or explicitly set a
locale for use by the Adaptive Server Anywhere components on your
machine.

v To set the Adaptive Server Anywhere locale on a computer:


1 If the default locale is appropriate for your needs, you do not need to
take any action.
$ To find out the default locale of your operating system, see
"Determining locale information" on page 320.
2 If you need to change the locale, create a SQLLOCALE environment
variable with the following value:
Charset=cslabel;Language=langlabel;CollationLabel=collabel


where cslabel is a character set label from the list in "Character set
labels" on page 300, langlabel is a language label from the list in
"Language label values" on page 298, and collabel is taken from
the list in "Understanding collations" on page 303, or is a custom
collation label.
$ For information on how to set environment variables on different
operating systems, see "Setting environment variables" on page 6 of the
book ASA Reference.
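The SQLLOCALE value above follows the same keyword=value form as a connection string, so splitting it apart is straightforward. The labels in the example below are illustrative values, not a recommendation.

```python
# Sketch: splitting a SQLLOCALE value into its three components.
def parse_sqllocale(value: str) -> dict:
    pairs = (item.partition("=") for item in value.split(";") if item)
    return {k.strip(): v.strip() for k, _, v in pairs}

locale = parse_sqllocale("Charset=cp1252;Language=english;CollationLabel=1252LATIN1")
assert locale["Charset"] == "cp1252"
assert locale["CollationLabel"] == "1252LATIN1"
```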

Creating a database with a named collation


You may specify the collation for each database when you create the
database. The default collation is inferred from the code page and sort order
of the operating system on the database server’s computer.

v To specify a database collation when creating a database (Sybase Central):
♦ You can use the Create Database wizard in Sybase Central to create a
database. The wizard has a Collation Sequence page where you choose a
collation from a list.

v To specify a database collation when creating a database (Command line):
1 List the supplied collation sequences:
dbinit -l


The first column of the list is the collation label, which you supply when
creating the database.
437LATIN1 Code Page 437, Latin 1, Western
437ESP Code Page 437, Spanish
437SVE Code Page 437, Swedish/Finnish
819CYR Code Page 819, Cyrillic
819DAN Code Page 819, Danish
819ELL Code Page 819, Greek
...
2 Create a database using the dbinit utility, specifying a collation sequence
using the -z option. The following command creates a database with a
Greek collation.
dbinit -z 869ELL mydb.db

v To specify a database collation when creating a database (SQL):


♦ You can use the CREATE DATABASE statement to create a database.
The following statement creates a database with a Greek collation:
CREATE DATABASE ’mydb.db’
COLLATION ’819ELL’

Starting a database server using character set translation


Character set translation takes place if the client and server locales are
different, but only if you specifically turn on character set conversion on the
database server command line.

v To enable character-set translation on a database server:


♦ Start the database server using the -ct command-line option. For
example:
dbsrv7 -ct asademo.db

Using ODBC code page translation


Adaptive Server Anywhere provides an ODBC translation driver. This
driver converts characters between OEM and ANSI code pages. It allows
Windows applications using ANSI code pages to be compatible with
databases that use OEM code pages in their collations.


Not needed if you use ANSI character sets


If you use an ANSI character set in your database, and are using ANSI
character set applications, you do not need to use this translation driver.

The translation driver carries out a mapping between the OEM code page in
use in the "DOS box" and the ANSI code page used in the Windows
operating system. If your database uses the same code page as the OEM code
page, the characters are translated properly. If your database does not use the
same code page as your machine’s OEM code page, you will still have
compatibility problems.
Embedded SQL does not provide any such code page translation mechanism.

v To use the ODBC translation driver:


1 In the ODBC Administrator, choose Add to create a new Adaptive
Server Anywhere data source or Configure to edit an existing Adaptive
Server Anywhere data source.
2 On the ODBC tab of the ODBC Configuration for Adaptive Server
Anywhere window, click Select and choose Adaptive Server Anywhere
7.0 Translator from the list of translation drivers.

Using character set translation for Sybase Central and Interactive SQL
Interactive SQL and Sybase Central both employ internal OEM to ANSI
code page translation if the database uses an OEM character set. As with the
ODBC translation driver, there is an assumption that the OEM code page on
the local machine matches the data in the database.

v To turn off character set translation in Interactive SQL:


♦ Set the Interactive SQL option CHAR_OEM_TRANSLATION to a value
of OFF:
SET OPTION CHAR_OEM_TRANSLATION = ’OFF’

$ For more information on OEM to ANSI character set translation in


Interactive SQL, see "CHAR_OEM_TRANSLATION option" on page 176
of the book ASA Reference.


Creating a custom collation


If none of the supplied collations meet your needs, you can modify a
supplied collation to create a custom collation. You can then use this custom
collation when creating a database.
$ For a list of supplied collations, see "Supplied collations" on page 303.

v To create a custom collation:


1 Decide on a starting collation. You should choose a collation as close
as possible to the one you want to create as a starting point for your
custom collation.
For a listing of supplied collations, see "Understanding collations" on
page 303. Alternatively, run dbinit with the -l (lower case L) option:
dbinit -l
2 Create a custom collation file. You do this using the Collation utility.
The output is a collation file.
For example, the following statement extracts the 1252LATIN1
collation into a file named mycol.col:
dbcollat -z 1252LATIN1 mycol.col
3 Edit the custom collation file. Open the collation file (in this case
mycol.col) in a text editor.
4 Change the name of the collation. The name of the collation is
specified on a line near the top of the file, starting with Collation. You
should edit this line to provide a new name. The name you need to
change is the second word on the line not counting switches: in the
example above, it is 1252LATIN1.
The other entries on this line relate the Adaptive Server Anywhere
collation label to the names that Java and the Sybase TDS interface give
to the same collation information. If you are not using these interfaces
you do not need to alter these entries.
The Collation line takes the following form:
Collation ASA_NAME asa_description charset
so_sensitive so_insensitive jdk
where the character set label (charset) and the two sort-order labels
(so_sensitive and so_insensitive) state which Open Client character
set and sort order are the closest to the current collation. The jdk label is
the closest character set known to Java.
5 Change the collation definition. Make the changes you wish in the
custom collation file to define your new collation.

$ For information on the collation file contents and format, see


"Collation internals" on page 314.
6 Convert the file to SQL scripts. You do this using the dbcollat
command-line utility using the -d switch.
For example, the following command line creates the mycustmap.sql and
mycustom.sql files from the mycol.col collation file:
dbcollat -d mycol.col mycustmap.sql mycustom.sql
7 Add the SQL scripts to the scripts in your installation. The scripts
used when creating databases are held in the scripts subdirectory of your
Adaptive Server Anywhere installation directory. Append the contents
of mycustmap.sql to custmap.sql, and the contents of mycustom.sql to
the end of custom.sql.
The new collation is now in place, and can be used when creating a
database.

Creating a database with a custom collation


If none of the supplied collations meet your needs, you can create a database
using a custom collation. The custom collation is used in indexes and any
string comparisons.

v To create a database with a custom collation:


1 Create a custom collation.
You must have a custom collation in place to use when creating a
database.
$ For instructions on how to create custom collations, see "Creating
a custom collation" on page 325.
2 Create the new database.
Use the Initialization utility, specifying the name of your custom
collation.
For example, the following command line creates a database named
newcol.db using the custom collation sequence newcol.
dbinit -z newcol newcol.db
You can also use the Initialization utility from Sybase Central.


Changing a database from one collation to another


Changing your database from one collation to another may be a good idea for
any number of reasons. It can be especially useful, for example, if you want
to:
♦ avoid the need for character set translation across your setup
♦ unify the character set you are using with the collation in your database.
Using the same character set defined in your database is especially
important for sorting purposes.
♦ switch from one character set to another. You may, for example, want to
move from a DOS character set to a Windows character set.

Simply modifying the collation in an existing database is not permitted, since


it would invalidate all the indexes for that database. In order to change the
collation for a database, you must rebuild the database. Rebuilding a
database creates a new database with new settings (including collation
settings), using the old database’s data.
When you change the collation for a database, there are two main scenarios
to consider. The difference between the two lies in whether the character set
of the data needs to be changed.
Example 1
In the first example, only the collation needs to be changed. The data should
not change character sets. To resolve the collation issue, you need to rebuild
the database with new collation settings using the old data.
Consider an old database using the 850LATIN1 collation. If the database
contains data inserted from a Windows ’windowed’ application, it is likely
that the data is actually from the CP1252 character set, which does not match
CP850 used by the 850LATIN1 collation. This situation will often be
discovered when an ORDER BY clause seems to sort accented characters
incorrectly. To correct this problem, you would create a new database using
the 1252LATIN1 collation, and move the data from the old database to the
new database without translation, since the data is already in the character set
(CP1252) that matches the new database’s collation.
The simplest way to ensure that translation does not occur is to start the
server without the -ct switch.
$ For more information about rebuilding a database, see "Rebuilding a
database" on page 697.
$ For more information about specifying collations when creating
databases, see "Creating a database with a named collation" on page 322.


Example 2
In the second situation, both the collation and the character set need to be
changed. To resolve the collation and character set issues, you need to
rebuild the database with the new collation settings, and change the character
set of the data.
Suppose that the 850LATIN1 database had been used properly such that it
contains characters from the CP850 character set. However, you want to
update both the collation and the character set, perhaps to avoid extra
translation. You would create a new database using 1252LATIN1, and move
the data from the old database to the new database with translation, thus
converting the CP850 characters to CP1252.
The translation of the database data from one character set to another occurs
using the client-server translation feature of the server. This feature, invoked
using the -ct command line switch, translates the data during the
communication between the client application and the server. The database’s
collation determines the server’s character set. The locale setting of the
operating system determines the client’s default character set, however, the
client's character set can be overridden by the CHARSET parameter of the
SQLLOCALE environment variable.
Since character set translation takes place during the communication between
the client application and the server, an external unload or reload is
necessary. If you use internal unload and reload, you will avoid character set
translation altogether, and end up where you began. Similarly, if character
set translation occurs in both the unload and the reload steps, you will
perform the translation and then immediately undo the translation and still
end up where you began. Character set translation can occur in either the
unload or the reload steps, but not in both.

v To convert a database from one collation to another, and translate the data’s character set (using translation on reload):
1 Unload the data from the source database.
You can use the Unload utility to produce a reload.sql file and a set of
data files in the character set of the source database. Since we do not
want any translation during this phase, ensure that the 6.0 server running
the source database does not use the -ct switch.
If the unload/reload is occurring on a single machine, use the -ix switch
to do an internal unload and an external reload. If the unload/reload
occurs across machines, use the Unload utility with the -xx switch to
force an external unload and an external reload.


Remember that an “external” unload or reload means that an application
(dbunload and DBISQL) opens a cursor on the database and either reads
or writes the data to disk. An “internal” unload or reload means that an
UNLOAD TABLE or LOAD TABLE is used so that the server reads or
writes the data itself.
If you want to unload specific tables, use the Interactive SQL OUTPUT
statement.
$ For more information on the Unload utility, see "The Unload
utility" on page 138 of the book ASA Reference.
2 Create a target database with the appropriate collation using the
Initialization utility and the -z switch.
To reload the new database, you must be using at least version 6.0.1. The database should be created using the version of the server and tools that you will use to run it.
$ For more information on specifying collations when creating
databases, see "Creating a database with a named collation" on
page 322.
3 Start the target database using the 6.0 engine/server running with the -ct and -z switches.

The -ct switch enables the database server to carry out character set
translation into the character set of the target database. The -ct switch is
required.
The -z switch, while not required, lets you verify which character sets will be used and that translation will occur.
$ For more information, see "Starting a database server using
character set translation" on page 323.
4 Create the SQLLOCALE environment variable on the client, with the CHARSET parameter set to the character set of the source database.
The value of the CHARSET parameter is based on the character set of
the data, not on the old collation.
$ For more information, see "Setting the SQLLOCALE environment
variable" on page 302.
5 Start Interactive SQL and connect to the server started in step 3.
Ensure that the engine message window shows that the client is using
the SQLLOCALE character set. There should be a message similar to:
server character translation is on, database charset is "iso-1", client charset is "cp1252"

329
International language and character set tasks

6 From Interactive SQL, use the Read command to run the reload.sql
script, or use the INPUT statement to load the data files created in step
1.
The data transferred to the target database will be translated to the
character set appropriate to the collation defined when the database was
created in step 2.
7 Once the conversion is complete, shut down the engine and Interactive SQL, and unset the SQLLOCALE environment variable. If you no longer require character set translation, remove the -ct switch from the engine command line for subsequent runs.

PART THREE

Relational Database Concepts

This part describes key concepts and strategies for effective use of Adaptive
Server Anywhere.

CHAPTER 12

Designing Your Database

About this chapter This chapter introduces the basic concepts of relational database design and
gives you step-by-step suggestions for designing your own databases. It uses
the expedient technique known as conceptual data modeling, which focuses
on entities and the relationships between them.
Contents
Topic Page
Introduction 334
Database design concepts 335
The design process 341
Designing the database table properties 355

Introduction
While designing a database is not a difficult task for small and medium-sized
databases, it is an important one. Bad database design can lead to an
inefficient and possibly unreliable database system. Because client
applications are built to work on specific parts of a database, and rely on the
database design, a bad design can be difficult to revise at a later date.
$ This chapter covers database design in an elementary manner. For more advanced information, you may wish to consult the DataArchitect documentation.
DataArchitect is a component of Sybase PowerDesigner, a database design
tool.
$ You may also wish to consult an introductory book such as A Database
Primer by C. J. Date. If you are interested in pursuing database theory,
C. J. Date’s An Introduction to Database Systems is an excellent textbook on
the subject.
Java classes and database design
The addition of Java classes to the available data types extends the relational database concepts on which this chapter is based. Database design involving Java classes is not discussed in this chapter.
$ For information on designing databases that take advantage of Java
class data types, see "Java database design" on page 583.


Database design concepts


In designing a database, you plan what things you want to store information
about, and what information you will keep about each one. You also
determine how these things are related. In the common language of database
design, what you are creating during this step is a conceptual database
model.
Entities and relationships
The distinguishable objects or things that you want to store information about are called entities. The associations between them are called relationships. You might like to think of the entities as nouns in the language of database description and the relationships as verbs.
Conceptual models are useful because they make a clean distinction between
the entities and relationships. These models hide the details involved in
implementing a design in any particular database management system. They
allow you to focus on fundamental database structure. Hence, they also form
a common language for the discussion of database design.
Entity-relationship diagrams
The main component of a conceptual database model is a diagram that shows the entities and relationships. This diagram is commonly called an entity-relationship diagram. In consequence, many people use the name entity-relationship modeling to refer to the task of creating a conceptual database model.
Conceptual database design is a top-down design method. There are now
sophisticated tools such as Sybase PowerDesigner that help you pursue this
method, or other approaches. This chapter is an introductory chapter only,
but it does contain enough information for the design of straightforward
databases.

Entities
An entity is the database equivalent of a noun. Distinguishable objects such
as employees, order items, departments and products are all examples of
entities. In a database, a table represents each entity. The entities that you
build into your database arise from the activities for which you will be using
the database, whether that be tracking sales calls, maintaining employee
information, or some other activity.
Attributes and identifiers
Each entity contains a number of attributes. Attributes are particular characteristics of the things that you would like to store. For example, in an employee entity, you might want to store an employee ID number, first and last names, an address, and other particular information that pertains to a particular employee. Attributes are also known as properties.


You depict an entity using a rectangular box. Inside, you list the attributes associated with that entity.

Employee
Employee Number
First Name
Last Name
Address

An identifier is one or more attributes on which all the other attributes depend. It uniquely identifies an item in the entity. Underline the names of attributes that you wish to form part of an identifier.
In the Employee entity, above, the Employee Number uniquely identifies an
employee. All the other attributes store information that pertains only to that
one employee. For example, an employee number uniquely determines an
employee’s name and address. Two employees might have the same name or
the same address, but you can make sure that they don’t have the same
employee number. Employee Number is underlined to show that it is an
identifier.
It is good practice to create an identifier for each entity. As will be explained
later, these identifiers become primary keys within your tables. Primary key
values must be unique and cannot be null or undefined. They identify each
row in a table uniquely and improve the performance of the database server.
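The same rule can be sketched at the table level. In the snippet below, Python's built-in sqlite3 module stands in for Adaptive Server Anywhere (the SQL dialect differs in details), and the table and column names are invented for the example: the identifier becomes a primary key that the database refuses to duplicate.

```python
import sqlite3

# Hypothetical Employee entity; SQLite stands in for ASA here and the
# names are invented. The identifier becomes the primary key, which
# must be unique and cannot be null.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE employee (
        emp_number INTEGER NOT NULL PRIMARY KEY,
        first_name VARCHAR(20),
        last_name  VARCHAR(20),
        address    VARCHAR(40)
    )
""")
con.execute("INSERT INTO employee VALUES (105, 'Ann', 'Taylor', '7 Pleasant St')")

# A second row with the same employee number violates the identifier
try:
    con.execute("INSERT INTO employee VALUES (105, 'Sam', 'Singer', '539 Pond St')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
assert duplicate_allowed is False
```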

Relationships
A relationship between entities is the database equivalent of a verb. An
employee is a member of a department, or an office is located in a city.
Relationships in a database may appear as foreign key relationships between
tables, or may appear as separate tables themselves. You will see examples
of each in this chapter.
The relationships in the database are an encoding of rules or practices that
govern the data in the entities. If each department has one department head,
you can create a one-to-one relationship between departments and employees
to identify the department head.
Once a relationship is built into the structure of the database, there is no
provision for exceptions. There is nowhere to put a second department head.
Duplicating the department entry would involve duplicating the department
ID, which is the identifier. Duplicate identifiers are not allowed.


Tip
Strict database structure can benefit you, because it can eliminate
inconsistencies, such as a department with two managers. On the other
hand, you as the designer should make your design flexible enough to
allow some expansion for unforeseen uses. Extending a well-designed
database is usually not too difficult, but modifying the existing table
structure can render an entire database and its client applications obsolete.

Cardinality of relationships
There are three kinds of relationships between tables. These correspond to the cardinality (number) of the entities involved in the relationship.
♦ One-to-one relationships You depict a relationship by drawing a line
between two entities. The line may have other markings on it such as the
two little circles shown. Later sections explain the purpose of these
marks.

[Diagram: the Management Relationship links Department and Employee. One employee manages one department.]


♦ One-to-many relationships The fact that one item contained in
Entity 1 can be associated with multiple entities in Entity 2 is denoted by
the multiple lines forming the attachment to Entity 2.

[Diagram: the Phone Location Relationship links Office and Telephones. One office can have many telephones.]


♦ Many-to-many relationships In this case, draw multiple lines for the
connections to both entities.

[Diagram: the Storage Relationship links Parts and Warehouses. One warehouse can hold many different parts, and one type of part can be stored at many warehouses.]

Roles You can describe each relationship with two roles. Roles are verbs or
phrases that describe the relationship from each point of view. For example,
a relationship between employees and departments might be described by the
following two roles.
1 An employee is a member of a department.

2 A department contains an employee.

[Diagram: Employee (Employee Number, First Name, Last Name, Address) is a member of Department (Department ID, Department Name); a Department contains Employees.]

Roles are very important because they afford you a convenient and effective
means of verifying your work.

Tip
Whether reading from left-to-right or from right-to-left, the following rule
makes it easy to read these diagrams: Read the
1 name of the first entity,
2 role next to the first entity,
3 cardinality from the connection to the second entity, and
4 name of the second entity.

Mandatory elements
The little circles just before the end of the line that denotes the relation serve an important purpose. A circle means that an element can exist in the one entity without a corresponding element in the other entity.
If a cross bar appears in place of the circle, that entity must contain at least one element for each element in the other entity. An example will clarify these statements.

[Diagram: Publisher (ID Number, Publisher Name) publishes Books (ID Number, Title); a Book is published by a Publisher. Authors (ID Number, First Name, Last Name) write Books; a Book is written by Authors.]
This diagram corresponds to the following four statements.


1 A publisher publishes zero or more books.
2 A book is published by exactly one publisher.
3 A book is written by one or more authors.
4 An author writes zero or more books.


Tip
Think of the little circle as the digit 0 and the cross bar as the number one.
The circle means at least zero. The cross bar means at least one.

Reflexive relationships
Sometimes, a relationship will exist between entries in a single entity. In this case, the relationship is called reflexive. Both ends of the relationship attach to a single entity.

[Diagram: the Employee entity (Employee Number, First Name, Last Name, Address) with a reflexive relationship whose two roles are manages and reports to.]

This diagram corresponds to the following two statements.


1 An employee reports to at most one other employee.
2 An employee manages zero or more employees.
Notice that in the case of this relation, it is essential that the relation be
optional in both directions. Some employees are not managers. Similarly, at
least one employee should head the organization and hence report to no one.
$ Naturally, you would also like to specify that an employee cannot be his
or her own manager. This restriction is a type of business rule. Business rules
are discussed later as part of "The design process" on page 341.
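A reflexive relationship like this one typically resolves to a self-referencing foreign key. The sketch below uses Python's built-in sqlite3 module in place of Adaptive Server Anywhere, with invented names and data:

```python
import sqlite3

# A reflexive "reports to" relationship becomes a self-referencing
# foreign key: manager_id holds the employee number of another row in
# the same table. The column is nullable because the relationship is
# optional in both directions (the head of the organization reports
# to no one). SQLite stands in for ASA; all names are invented.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""
    CREATE TABLE employee (
        emp_number INTEGER PRIMARY KEY,
        last_name  VARCHAR(20),
        manager_id INTEGER REFERENCES employee(emp_number)
    )
""")
con.execute("INSERT INTO employee VALUES (501, 'Scott', NULL)")   # heads the organization
con.execute("INSERT INTO employee VALUES (667, 'Garcia', 501)")   # reports to Scott

# "An employee cannot be his or her own manager" is a business rule;
# the structure alone does not enforce it (a CHECK constraint could).
rows = con.execute(
    "SELECT m.last_name FROM employee e JOIN employee m "
    "ON e.manager_id = m.emp_number WHERE e.emp_number = 667"
).fetchall()
assert rows == [("Scott",)]
```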

Changing many-to-many relationships into entities


When you have attributes associated with a relationship, rather than an
entity, you can change the relationship into an entity. This situation
sometimes arises with many-to-many relationships, when you have attributes
that are particular to the relationship and so you cannot reasonably add them
to either entity.
Suppose that your parts inventory is located at a number of different
warehouses. You have drawn the following diagram.

[Diagram: Parts (Part Number, Description) stored at Warehouse (Warehouse ID, Address); a Warehouse contains Parts.]


But you wish to record the quantity of each part stored at each location. This
attribute can only be associated with the relationship. Each quantity depends
on both the parts and the warehouse involved. To represent this situation,
you can redraw the diagram as follows:

[Diagram: a new Inventory entity (Quantity) sits between Parts (Part Number, Description) and Warehouse (Warehouse ID, Address), joined by the stored at and contains relationships.]

Notice the following details of the transformation:


1 Two new relationships join the new entity with each of the two original entities. They inherit their names from the two roles of the original relationship: stored at and contains, respectively.
2 Each entry in the Inventory entity demands one mandatory entry in the
Parts entity and one mandatory entry in the Warehouse entity. These
relationships are mandatory because a storage relationship only makes
sense if it is associated with one particular part and one particular
warehouse.
3 The new entity is dependent on both the Parts entity and on the
Warehouse entity, meaning that the new entity is identified by the
identifiers of both of these entities. In this new diagram, one identifier
from the Parts entity and one identifier from the Warehouse entity
uniquely identify an entry in the Inventory entity. The triangles that
appear between the circles and the multiple lines that join the two new
relationships to the new Inventory entity denote the dependencies.
Do not add either a Part Number or Warehouse ID attribute to the Inventory
entity. Each entry in the Inventory entity does depend on both a particular
part and a particular warehouse, but the triangles denote this dependence
more clearly.


The design process


There are five major steps in the design process.
♦ "Step 1: Identify entities and relationships" on page 341.
♦ "Step 2: Identify the required data" on page 344.
♦ "Step 3: Normalize the data" on page 346.
♦ "Step 4: Resolve the relationships" on page 349.
♦ "Step 5: Verify the design" on page 352.

$ For information about implementing the database design, see "Working with Database Objects" on page 111.

Step 1: Identify entities and relationships

v To identify the entities in your design and their relationship to each other:
1 Define high-level activities Identify the general activities for which
you will use this database. For example, you may want to keep track of
information about employees.
2 Identify entities For the list of activities, identify the subject areas you need to maintain information about. These subjects will become entities. For example: employees, departments, and skills.
3 Identify relationships Look at the activities and determine what the
relationships will be between the entities. For example, there is a
relationship between parts and warehouses. Define two roles to describe
each relationship.
4 Break down the activities You started out with high-level activities.
Now, examine these activities more carefully to see if some of them can
be broken down into lower-level activities. For example, a high-level
activity such as maintain employee information can be broken down
into:
♦ Add new employees.
♦ Change existing employee information.
♦ Delete terminated employees.


5 Identify business rules Look at your business description and see what rules you follow. For example, one business rule might be that a department has one and only one department head. These rules will be built into the structure of the database.

Entity and relationship example


Example ACME Corporation is a small company with offices in five locations.
Currently, 75 employees work for ACME. The company is preparing for
rapid growth and has identified nine departments, each with its own
department head.
To help in its search for new employees, the personnel department has
identified 68 skills that it believes the company will need in its future
employee base. When an employee is hired, the employee’s level of expertise
for each skill is identified.
Define high-level activities
Some of the high-level activities for ACME Corporation are:
♦ Hire employees.
♦ Terminate employees.
♦ Maintain personal employee information.
♦ Maintain information on skills required for the company.
♦ Maintain information on which employees have which skills.
♦ Maintain information on departments.
♦ Maintain information on offices.

Identify the entities and relationships
Identify the entities (subjects) and the relationships (roles) that connect them. Create a diagram based on the description and high-level activities.
Use boxes to show entities and lines to show relationships. Use the two roles
to label each relationship. You should also identify those relationships that
are one-to-many, one-to-one, and many-to-many using the appropriate
annotation.
Below is a rough entity-relationship diagram. It will be refined throughout the chapter.


[Diagram: entities Skill, Department, Employee, and Office. An Employee is capable of Skills (a Skill is acquired by Employees); a Department is headed by an Employee (an Employee manages a Department); a Department contains Employees (an Employee is a member of a Department); an Office contains Employees (an Employee works out of an Office); and Employees manage and report to other Employees (reflexive).]

Break down the high-level activities
The following lower-level activities are based on the high-level activities listed above:
♦ Add or delete an employee.
♦ Add or delete an office.
♦ List employees for a department.
♦ Add a skill to the skill list.
♦ Identify the skills of an employee.
♦ Identify an employee’s skill level for each skill.
♦ Identify all employees that have the same skill level for a particular skill.
♦ Change an employee’s skill level.
These lower-level activities can be used to identify whether any new tables or relationships are needed.
Identify business rules
Business rules often identify one-to-many, one-to-one, and many-to-many relationships.
The kind of business rules that may be relevant include the following:
♦ There are now five offices; expansion plans allow for a maximum of ten.
♦ Employees can change department or office.
♦ Each department has one department head.
♦ Each office has a maximum of three telephone numbers.
♦ Each telephone number has one or more extensions.


♦ When an employee is hired, the level of expertise in each of several skills is identified.
♦ Each employee can have from three to twenty skills.
♦ An employee may or may not be assigned to an office.

Step 2: Identify the required data

v To identify the required data:
1 Identify supporting data.
2 List all the data you need to track.
3 Set up data for each entity. List the available data for each entity. The data that describes an entity (subject) answers the questions who, what, where, when, and why.
4 Set up data for each relationship. List the data, if any, that applies to each relationship (verb).

Identify supporting data
The supporting data you identify will become the names of the attributes of the entity. For example, the data below might apply to the Employee entity, the Skill entity, and the Expert In relationship.

Employee: Employee ID, Employee first name, Employee last name, Employee department, Employee office, Employee address
Skill: Skill ID, Skill name, Description of skill
Expert In: Skill level, Date skill was acquired

If you make a diagram of this data, it will look something like this picture:

[Diagram: Employee (Employee ID, First name, Last name, Home address) is capable of Skill (Skill ID, Skill name, Skill description); a Skill is acquired by Employees.]


Observe that not all of the attributes you listed appear in this diagram. The
missing items fall into two categories:
1 Some are contained implicitly in other relationships; for example,
Employee department and Employee office are denoted by the relations
to the Department and Office entities, respectively.
2 Others are not present because they are not associated with either of these entities, but rather with the relationship between them. The above diagram is inadequate.
The first category of items will fall naturally into place when you draw the
entire entity-relationship diagram.
You can add the second category by converting this many-to-many
relationship into an entity.

[Diagram: a new Expert In entity (Skill level, Date acquired) sits between Employee (Employee ID, First name, Last name, Home address) and Skill (Skill ID, Skill name, Skill description), joined by the is capable of and is acquired by relationships.]

The new entity depends on both the Employee and the Skill entities. It
borrows its identifiers from these entities because it depends on both of them.
Things to remember
♦ When you are identifying the supporting data, be sure to refer to the activities you identified earlier to see how you will access the data.
For example, you may need to list employees by first name in some
situations and by last name in others. To accommodate this requirement,
create a First Name attribute and a Last Name attribute, rather than a
single attribute that contains both names. With the names separate, you
can later create two indexes, one suited to each task.
♦ Choose consistent names. Consistency makes it easier to maintain your
database and easier to read reports and output windows.
For example, if you choose to use an abbreviated name such as
Emp_status for one attribute, you should not use a full name, such as
Employee_ID, for another attribute. Instead, the names should be
Emp_status and Emp_ID.
♦ At this stage, it is not crucial that the data be associated with the correct
entity. You can use your intuition. In the next section, you’ll apply tests
to check your judgment.
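The first point above can be illustrated with a short sketch, using Python's built-in sqlite3 module in place of Adaptive Server Anywhere (the table and index names are invented for the example): separate name attributes allow one index per listing order.

```python
import sqlite3

# With First Name and Last Name stored separately, you can later build
# one index suited to each way of listing employees. SQLite stands in
# for ASA; the table and index names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (
        emp_id     INTEGER PRIMARY KEY,
        first_name VARCHAR(20),
        last_name  VARCHAR(20)
    );
    CREATE INDEX emp_by_first ON employee (first_name);
    CREATE INDEX emp_by_last  ON employee (last_name);
""")
names = [row[1] for row in con.execute("PRAGMA index_list('employee')")]
assert "emp_by_first" in names and "emp_by_last" in names
```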


Step 3: Normalize the data


Normalization is a series of tests that eliminate redundancy in the data and
make sure the data is associated with the correct entity or relationship. There
are five tests. This section presents the first three of them. These three tests
are the most important and so the most frequently used.

Why normalize?
The goals of normalization are to remove redundancy and to improve
consistency. For example, if you store a customer's address in multiple locations, it is difficult to update all copies correctly when the customer moves.

$ For more information about the normalization tests, see a book on


database design.
Normal forms There are several tests for data normalization. When your data passes the
first test, it is considered to be in first normal form. When it passes the
second test, it is in second normal form, and when it passes the third test, it is
in third normal form.

v To normalize data in a database:


1 List the data.
♦ Identify at least one key for each entity. Each entity must have an
identifier.
♦ Identify keys for relationships. The keys for a relationship are the
keys from the two entities that it joins.
♦ Check for calculated data in your supporting data list. Calculated
data is not normally stored in a relational database.
2 Put data in first normal form.
♦ If an attribute can have several different values for the same entry,
remove these repeated values.
♦ Create one or more entities or relationships with the data that you
remove.
3 Put data in second normal form.
♦ Identify entities and relationships with more than one key.
♦ Remove data that depends on only one part of the key.
♦ Create one or more entities and relationships with the data that you
remove.
4 Put data in third normal form.


♦ Remove data that depends on other data in the entity or relationship, not on the key.
♦ Create one or more entities and relationships with the data that you remove.

Data and identifiers
Before you begin to normalize (test your design), simply list the data and identify a unique identifier for each table. The identifier can be made up of one piece of data (attribute) or several (a compound identifier).
The identifier is the set of attributes that uniquely identifies each row in an
entity. The identifier for the Employee entity is the Employee ID attribute.
The identifier for the Works In relationship consists of the Office Code and
Employee ID attributes. You can make an identifier for each relationship in
your database by taking the identifiers from each of the entities that it
connects. In the following table, the attributes identified with an asterisk are
the identifiers for the entity or relationship.

Entity or Relationship: Attributes (* marks identifier attributes)
Office: *Office code, Office address, Phone number
Works in: *Office code, *Employee ID
Department: *Department ID, Department name
Heads: *Department ID, *Employee ID
Member of: *Department ID, *Employee ID
Skill: *Skill ID, Skill name, Skill description
Expert in: *Skill ID, *Employee ID, Skill level, Date acquired
Employee: *Employee ID, Last name, First name, Social security number, Address, Phone number, Date of birth


Putting data in first normal form
♦ To test for first normal form, look for attributes that can have repeating values.
♦ Remove attributes when multiple values can apply to a single item. Move these repeating attributes to a new entity.
In the entity below, Phone number can repeat—an office can have more than
one telephone number.

Office and Phone


Office code
Office address
Phone number

Remove the repeating attribute and make a new entity called Telephone. Set
up a relationship between Office and Telephone.

[Diagram: Office (Office code, Office address) has Telephone (Phone number); a Telephone is located at an Office.]
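The first-normal-form split can be sketched as follows, using Python's built-in sqlite3 module in place of Adaptive Server Anywhere (column names and data values are invented):

```python
import sqlite3

# After the first-normal-form split, each telephone number is one row
# in its own Telephone entity, keyed back to the office. SQLite stands
# in for ASA; the names and data values are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE office (
        office_code    VARCHAR(4) PRIMARY KEY,
        office_address VARCHAR(40)
    );
    CREATE TABLE telephone (
        phone_number VARCHAR(12) PRIMARY KEY,
        office_code  VARCHAR(4) REFERENCES office(office_code)
    );
    INSERT INTO office VALUES ('HQ', '1 Main St');
    INSERT INTO telephone VALUES ('555-0100', 'HQ');
    INSERT INTO telephone VALUES ('555-0101', 'HQ');
""")
# One office, many telephones: no repeating attribute inside office
count = con.execute(
    "SELECT COUNT(*) FROM telephone WHERE office_code = 'HQ'"
).fetchone()[0]
assert count == 2
```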

Putting data in second normal form
♦ Remove data that does not depend on the whole key.
♦ Look only at entities and relationships whose identifier is composed of more than one attribute. To test for second normal form, remove any data that does not depend on the whole identifier. Each attribute should depend on all of the attributes that comprise the identifier.
In this example, the identifier of the Employee and Department entity is
composed of two attributes. Some of the data does not depend on both
identifier attributes; for example, the department name depends on only one
of those attributes, Department ID, and Employee first name depends only on
Employee ID.

Employee and Department


Employee ID
Department ID
Employee first name
Employee last name
Department name

Move the identifier Department ID, which the other employee data does not depend on, to an entity of its own called Department. Also move any attributes that depend on it. Create a relationship between Employee and Department.


[Diagram: Employee (Employee ID, Employee first name, Employee last name) works in Department (Department ID, Department name); a Department contains Employees.]

Putting data in third normal form
♦ Remove data that doesn't depend directly on the key.
♦ To test for third normal form, remove any attributes that depend on other attributes, rather than directly on the identifier.
In this example, the Employee and Office entity contains some attributes that
depend on its identifier, Employee ID. However, attributes such as Office
location and Office phone depend on another attribute, Office code. They do
not depend directly on the identifier, Employee ID.

Employee and Office


Employee ID
Employee first name
Employee last name
Office code
Office location
Office phone

Remove Office code and those attributes that depend on it. Make another
entity called Office. Then, create a relationship that connects Employee with
Office.

[Diagram: Employee (Employee ID, Employee first name, Employee last name) works out of Office (Office code, Office location, Office phone); an Office houses Employees.]

Step 4: Resolve the relationships


When you finish the normalization process, your design is almost complete.
All you need to do is to generate the physical data model that corresponds
to your conceptual data model. This process is also known as resolving the
relationships, because a large portion of the task involves converting the
relationships in the conceptual model into the corresponding tables and
foreign-key relationships.
Whereas the conceptual model is largely independent of implementation
details, the physical data model is tightly bound to the table structure and
options available in a particular database application. In this case, that
application is Adaptive Server Anywhere.

Resolving relationships that do not carry data
In order to implement relationships that do not carry data, you define foreign keys. A foreign key is a column or set of columns that contains primary key values from another table. The foreign key allows you to access data from more than one table at one time.
A database design tool such as the DataArchitect component of Sybase
PowerDesigner can generate the physical data model for you. However, if
you’re doing it yourself there are some basic rules that help you decide
where to put the keys.
♦ One to many A one-to-many relationship always becomes an entity and a foreign key relationship.

[Diagram: Employee (Employee Number, First Name, Last Name, Address) is a member of Department (Department ID, Department Name); a Department contains Employees.]

Notice that entities become tables. Identifiers in entities become (at least
part of) the primary key in a table. Attributes become columns. In a one-
to-many relationship, the identifier in the one entity will appear as a new
foreign key column in the many table.

[Diagram: the physical employee and department tables, joined on dept_id.
employee: emp_id integer, manager_id integer, emp_fname char(20), emp_lname char(20), dept_id integer, street char(40), city char(20), state char(4), zip_code char(9), phone char(10), status char(1), ss_number char(11), salary numeric(20,3), start_date date, termination_date date, birth_date date, bene_health_ins char(1), bene_life_ins char(1), bene_day_care char(1), sex char(1)
department: dept_id integer, dept_name char(40), dept_head_id integer]

In this example, the Employee entity becomes an Employee table. Similarly, the Department entity becomes a Department table. A foreign key called Department ID appears in the Employee table.


♦ One to one In a one-to-one relationship, the foreign key can go into either table. If the relationship is mandatory on one side, but optional on the other, it should go on the optional side. In this example, put the foreign key (Vehicle ID) in the Truck table because a vehicle does not have to be a truck.

[Diagram: Vehicle (Vehicle ID, Model, Price) may be a Truck (Weight rating); a Truck is a type of Vehicle.]

The above entity-relationship model thus resolves to the database structure shown below.

[Diagram: Vehicle (Vehicle ID <pk>, Model, Price) and Truck (Vehicle ID <fk>, Weight rating), joined on Vehicle ID.]

♦ Many to many In a many-to-many relationship, a new table is created with two foreign keys. This arrangement is necessary to make the database efficient.

[Diagram: Parts (Part Number, Description) stored at Warehouse (Warehouse ID, Address); a Warehouse contains Parts.]

The new Storage Location table relates the Parts and Warehouse tables.

[Diagram: the Storage Location table (Part Number <pk,fk>,
Warehouse ID <pk,fk>) joins the Parts table (Part Number <pk>,
Description) on Part Number = Part Number, and the Warehouse table
(Warehouse ID <pk>, Address) on Warehouse ID = Warehouse ID.]

Resolving relationships that carry data
Some of your relationships may carry data. This situation often occurs in
many-to-many relationships.

351
The design process

[Diagram: the Parts entity (Part Number, Description) is stored at the
Warehouse entity (Warehouse ID, Address), which contains parts; the
relationship itself, named Inventory, carries a Quantity attribute.]

If this is the case, each entity resolves to a table. Each role becomes a foreign
key that points to another table.

[Diagram: the Inventory table (Warehouse ID <pk,fk>,
Part Number <pk,fk>, Quantity) joins the Parts table (Part Number <pk>,
Description) on Part Number = Part Number, and the Warehouse table
(Warehouse ID <pk>, Address) on Warehouse ID = Warehouse ID.]

The Inventory entity borrows its identifiers from the Parts and Warehouse
tables, because it depends on both of them. Once resolved, these borrowed
identifiers form the primary key of the Inventory table.
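
This resolution could be sketched in SQL as follows. The table and column names are illustrative, taken from the diagram, and the Parts and Warehouse tables are assumed to exist already.

```sql
-- Illustrative sketch: a relationship that carries data resolves to a
-- table whose primary key combines the two borrowed identifiers, plus
-- a column for the relationship's own attribute (Quantity).
CREATE TABLE inventory (
    part_number   INTEGER NOT NULL,
    warehouse_id  INTEGER NOT NULL,
    quantity      INTEGER NOT NULL,
    PRIMARY KEY ( part_number, warehouse_id ),
    FOREIGN KEY ( part_number ) REFERENCES parts ( part_number ),
    FOREIGN KEY ( warehouse_id ) REFERENCES warehouse ( warehouse_id )
);
```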

Tip
A conceptual data model simplifies the design process because it hides a
lot of details. For example, a many-to-many relationship always generates
an extra table and two foreign key references. In a conceptual data model,
you can usually denote all of this structure with a single connection.

Step 5: Verify the design


Before you implement your design, you need to make sure that it supports
your needs. Examine the activities you identified at the start of the design
process and make sure you can access all of the data that the activities
require.
♦ Can you find a path to get the information you need?
♦ Does the design meet your needs?
♦ Is all of the required data available?
If you can answer yes to all the questions above, you are ready to implement
your design.


Final design Applying steps 1 through 3 to the database for the little company produces
the following entity-relationship diagram. This database is now in third
normal form.

[Diagram: the final entity-relationship model. The Skill entity (ID Number,
Skill name, Skill description) is linked to Employee through the Expert In
relationship, which carries Skill Level and Date Acquired. The Department
entity (Department ID, Department name) is headed by an Employee, and
contains Employees who are members of it. The Employee entity
(Employee ID, First name, Last name, Home address) is capable of Skills,
works out of an Office, and manages and reports to other Employees. The
Office entity (ID Number, Office name, Address) houses Employees.]

The corresponding physical data model appears below.


[Diagram: the physical data model. Skill (ID Number <pk>, Skill name,
Skill description); Expert In (ID Number <pk,fk>, Employee ID <pk,fk>,
Skill Level, Date Acquired); Department (Department ID <pk>,
Employee ID <fk>, Department name); Department/Employee
(Department ID <pk,fk>, Employee ID <pk,fk>); Employee
(Employee ID <pk>, ID Number <fk>, Emp_Employee ID <fk>,
First name, Last name, Home address); Office (ID Number <pk>,
Office name, Address). The tables are joined on ID Number = ID Number,
Employee ID = Employee ID, Department ID = Department ID, and
Employee ID = Emp_Employee ID.]


Designing the database table properties


The database design specifies which tables you have and what columns each
table contains. This section describes how to specify each column’s
properties.
For each column, you must decide the column name, the data type and size,
whether or not NULL values are allowed, and whether you want the database
to restrict the values allowed in the column.

Choosing column names


A column name can be any set of letters, numbers or symbols. However, you
must enclose a column name in double quotes if it contains characters other
than letters, numbers, or underscores, if it does not begin with a letter, or if it
is the same as a keyword.
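
For example, the following illustrative definition (hypothetical table and column names) shows names that do and do not need quoting:

```sql
-- Illustrative sketch: which column names require double quotes.
CREATE TABLE sample_names (
    plain_name  CHAR(20),   -- letters, digits, underscores: no quotes needed
    "order"     CHAR(20),   -- quoted because it matches a keyword
    "2nd phone" CHAR(12)    -- quoted: starts with a digit, contains a space
);
```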

Choosing data types for columns


Available data types in Adaptive Server Anywhere include the following:
♦ Integer data types
♦ Decimal data types
♦ Floating-point data types
♦ Character data types
♦ Binary data types
♦ Date/time data types
♦ Domains (user-defined data types)
♦ Java class data types
$ For a description of data types, see "SQL Data Types" on page 263 of
the book ASA Reference.
The long binary data type can be used to store information such as images
(for instance, stored as bitmaps) or word-processing documents in a
database. These types of information are commonly called binary large
objects, or BLOBS.
$ For a complete description of each data type, see "SQL Data Types" on
page 263 of the book ASA Reference.


NULL and NOT NULL
If the column value is mandatory for a record, you define the column as
being NOT NULL. Otherwise, the column is allowed to contain the NULL
value, which represents no value. The default in SQL is to allow NULL
values, but you should explicitly declare columns NOT NULL unless there is
a good reason to allow NULL values.
$ For a complete description of the NULL value, see "NULL value" on
page 260 of the book ASA Reference. For information on its use in
comparisons, see "Search conditions" on page 239 of the book ASA
Reference.

Choosing constraints
Although the data type of a column restricts the values that are allowed in
that column (for example, only numbers or only dates), you may want to
further restrict the allowed values.
You can restrict the values of any column by specifying a CHECK
constraint. You can use any valid condition that could appear in a WHERE
clause to restrict the allowed values. Most CHECK constraints use either the
BETWEEN or IN condition.
$ For more information about valid conditions, see "Search conditions"
on page 239 of the book ASA Reference. For more information about
assigning constraints to tables and columns, see "Ensuring Data Integrity"
on page 357.
Example The sample database has a table called Department, which has columns
named dept_id, dept_name, and dept_head_id. Its definition is as follows:

Column         Data Type   Size   Null/Not Null   Constraint
dept_id        integer     —      not null        None
dept_name      char        40     not null        None
dept_head_id   integer     —      null            None

If you specify NOT NULL, a column value must be supplied for every row
in the table.
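
The definition above corresponds roughly to the following CREATE TABLE statement. This is a sketch: the PRIMARY KEY clause is an assumption, since the table above lists no constraints.

```sql
-- Sketch of the Department table described above.
CREATE TABLE department (
    dept_id       INTEGER  NOT NULL,
    dept_name     CHAR(40) NOT NULL,
    dept_head_id  INTEGER  NULL,
    PRIMARY KEY ( dept_id )   -- assumed, not shown in the table above
);
```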

356
CHAPTER 13

Ensuring Data Integrity

About this chapter Building integrity constraints right into the database is the surest way to
make sure your data stays in good shape. This chapter describes the facilities
in Adaptive Server Anywhere for ensuring that the data in your database is
valid and reliable.
You can enforce several types of integrity constraints. For example, you can
ensure individual entries are correct by imposing constraints and CHECK
conditions on tables and columns. Setting column properties by choosing an
appropriate data type or setting special default values assists this task.
The SQL statements in this chapter use the CREATE TABLE and ALTER
TABLE statements, basic forms of which were introduced in "Working with
Database Objects" on page 111.
Contents
Topic Page
Data integrity overview 358
Using column defaults 362
Using table and column constraints 367
Using domains 371
Enforcing entity and referential integrity 374
Integrity rules in the system tables 379


Data integrity overview


If data has integrity, it means that the data is valid—correct and accurate—
and that the relational structure of the database is intact. Referential integrity
constraints enforce the relational structure of the database. These rules
maintain the consistency of data between tables.
Adaptive Server Anywhere supports stored procedures, which give you
detailed control over how data enters the database. You can also create
triggers, customized stored procedures invoked automatically when a
certain action, such as an update of a particular column, occurs.
$ For more information on procedures and triggers see "Using Procedures,
Triggers, and Batches" on page 435.

How data can become invalid


Here are a few examples of how the data in a database may become invalid if
proper checks are not made. You can prevent each of these examples from
occurring using facilities described in this chapter.
Incorrect information
♦ an operator enters the date of a sales transaction incorrectly
♦ an employee's salary becomes ten times too small because the operator
missed a digit

Duplicated data
♦ two different people add the same new department (with dept_id 200) to
the department table of the organization's database

Foreign key relations invalidated
♦ the department identified by dept_id 300 closes down, and one
employee record inadvertently remains unassigned to a new department

Integrity constraints belong in the database


To ensure the validity of data in a database, you need to formulate checks to
define valid and invalid data, and design rules to which data must adhere
(also known as business rules). Together, checks and rules become
constraints.
Build integrity constraints into the database
Constraints built into the database itself are inherently more reliable than
those built into client applications or spelled out as instructions to database
users. Constraints built into the database become part of the definition of the
database itself, and the database enforces them consistently across all
applications. Setting a constraint once in the database imposes it for all
subsequent interactions with the database, no matter from what source.

In contrast, constraints built into client applications are vulnerable every time
the software changes, and may need to be imposed in several applications, or
in several places in a single client application.

How database contents change


Changes occur to information in database tables when you submit SQL
statements from client applications. Only a few SQL statements actually
modify the information in a database. You can:
♦ Update information in a row of a table using the UPDATE statement.
♦ Delete an existing row of a table using the DELETE statement.
♦ Insert a new row into a table using the INSERT statement.

Data integrity tools


To assist in maintaining data integrity, you can use defaults, data constraints,
and constraints that maintain the referential structure of the database.
Defaults You can assign default values to columns to make certain kinds of data entry
more reliable. For example:
♦ A column can have a current date default for recording the date of
transactions with any user or client application action.
♦ Another type of default allows column values to increment
automatically without any specific user action other than entering a new
row. With this feature, you can guarantee that items (such as purchase
orders for example) are unique, sequential numbers.
$ For more information on these and other column defaults, see "Using
column defaults" on page 362.
Constraints You can apply several types of constraints to the data in individual columns
or tables. For example:
♦ A NOT NULL constraint prevents a column from containing a null
entry.
♦ A CHECK condition assigned to a column can ensure that every item in
the column meets a particular condition. For example, you can ensure
that salary column entries fit within a specified range and thus protect
against user error when typing in new values.


♦ CHECK conditions can be made on the relative values in different
columns. For example, you can ensure that a date_returned entry is later
than a date_borrowed entry in a library database.
♦ Triggers can enforce more sophisticated CHECK conditions. For more
information on triggers, see "Using Procedures, Triggers, and Batches"
on page 435.
In addition, column constraints can be inherited from domains. For more
information on these and other table and column constraints, see "Using table
and column constraints" on page 367.
Entity and referential integrity
Relationships, defined by the primary keys and foreign keys, tie together the
information in relational database tables. You must build these relations
directly into the database design. The following integrity rules maintain the
structure of the database:
♦ Entity integrity   Keeps track of the primary keys. It guarantees that
every row of a given table can be uniquely identified by a primary key
that is guaranteed to be not NULL.
♦ Referential integrity   Keeps track of the foreign keys that define the
relationships between tables. It guarantees that all foreign key values
either match a value in the corresponding primary key or contain the
NULL value if they are defined to allow NULL.
$ For more information about enforcing referential integrity, see
"Enforcing entity and referential integrity" on page 374. For more
information about designing appropriate primary and foreign key relations,
see "Designing Your Database" on page 333.
Triggers for advanced integrity rules
You can also use triggers to maintain data integrity. A trigger is a procedure
stored in the database and executed automatically whenever the information
in a specified table changes. Triggers are a powerful mechanism for database
administrators and developers to ensure that data remains reliable.
$ For a full description of triggers, see "Using Procedures, Triggers, and
Batches" on page 435.

SQL statements for implementing integrity constraints


The following SQL statements implement integrity constraints:
♦ CREATE TABLE statement This statement implements integrity
constraints during creation of the database.
♦ ALTER TABLE statement This statement adds integrity constraints to
an existing database, or modifies constraints for an existing database.


♦ CREATE TRIGGER statement   This statement creates triggers that
enforce more complex business rules.
$ For full descriptions of the syntax of these statements, see "SQL
Statements" on page 377 of the book ASA Reference.

361
Using column defaults

Using column defaults


Column defaults automatically assign a specified value to particular columns
whenever someone enters a new row into a database table. The default value
requires no action on the part of the client application; however, if the client
application does specify a value for the column, the new value overrides the
column default value.
Column defaults can quickly and automatically fill columns with
information, such as the date or time a row is inserted, or the user ID of the
person entering the information. Using column defaults encourages data
integrity, but does not enforce it. Client applications can always override
defaults.
Supported default values
SQL supports the following default values:
♦ A string specified in the CREATE TABLE statement or ALTER
TABLE statement
♦ A number specified in the CREATE TABLE statement or ALTER
TABLE statement
♦ An automatically incremented number: one more than the previous
highest value in the column
♦ The current date, time, or timestamp
♦ The current user ID of the database user
♦ A NULL value
♦ A constant expression, as long as it does not reference database objects.
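
A single table definition can combine several of these defaults. The following sketch uses hypothetical names:

```sql
-- Illustrative sketch combining several supported default values.
CREATE TABLE phone_call (
    id         INTEGER   DEFAULT AUTOINCREMENT PRIMARY KEY,
    call_date  DATE      DEFAULT CURRENT DATE,    -- current date
    taken_by   CHAR(40)  DEFAULT USER,            -- current user ID
    duration   INTEGER   DEFAULT 0,               -- a number
    notes      CHAR(100) DEFAULT 'none'           -- a string
);
```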

Creating column defaults


You can use the CREATE TABLE statement to create column defaults at the
time a table is created, or the ALTER TABLE statement to add column
defaults at a later time.
Example The following statement adds a condition to an existing column named id in
the sales_order table, so that it automatically increments (unless a client
application specifies a value):
ALTER TABLE sales_order
MODIFY id DEFAULT AUTOINCREMENT
$ Each of the other default values is specified in a similar manner. For a
detailed description of the syntax, see "CREATE TABLE statement" on
page 466 of the book ASA Reference.


Modifying and deleting column defaults


You can change or remove column defaults using the same form of the
ALTER TABLE statement you used to create defaults. The following
statement changes the default value of a column named order_date from its
current setting to CURRENT DATE:
ALTER TABLE sales_order
MODIFY order_date DEFAULT CURRENT DATE
You can remove column defaults by modifying them to be NULL. The
following statement removes the default from the order_date column:
ALTER TABLE sales_order
MODIFY order_date DEFAULT NULL

Working with column defaults in Sybase Central


You can add, alter, and delete column defaults in Sybase Central using the
Default tab of the column properties sheet.

v To display the property sheet for a column:


1 Connect to the database.
2 Open the Tables folder for that database.
3 Double-click the table holding the column you want to change.
4 Open the Columns folder for that table.
5 Right-click the column and choose Properties from the popup menu.

Current date and time defaults


For columns with the DATE, TIME, or TIMESTAMP data type, you can use
the current date, current time, or current timestamp as a default. The default
you choose must be compatible with the column’s data type.
Useful examples of current date defaults
A current date default might be useful to record:
♦ dates of phone calls in a contact database
♦ dates of orders in a sales entry database
♦ the date a patron borrows a book in a library database


Current timestamp The current timestamp is similar to the current date default, but offers greater
accuracy. For example, a user of a contact management application may have
several contacts with a single customer in one day: the current timestamp
default would be useful to distinguish these contacts.
Since it records a date and the time down to a precision of millionths of a
second, you may also find the current timestamp useful when the sequence of
events is important in a database.
$ For more information about timestamps, times, and dates, see "SQL
Data Types" on page 263 of the book ASA Reference.

The user ID default


Assigning a DEFAULT USER to a column is an easy and reliable way of
identifying the person making an entry in a database. This information may
be required, for example, when salespeople are working on commission.
Building a user ID default into the primary key of a table is a useful
technique for occasionally connected users, and helps to prevent conflicts
during information updates. These users can make a copy of tables relevant
to their work on a portable computer, make changes while not connected to a
multiuser database, and then apply the transaction log to the server when
they return.
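
For example, a hypothetical table recording commission entries could capture the user ID automatically:

```sql
-- Illustrative sketch: entered_by receives the connected user's ID
-- automatically whenever a row is inserted.
CREATE TABLE commission_entry (
    id          INTEGER DEFAULT AUTOINCREMENT PRIMARY KEY,
    amount      NUMERIC(10,2) NOT NULL,
    entered_by  CHAR(128) DEFAULT USER
);
```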

The AUTOINCREMENT default


The AUTOINCREMENT default is useful for numeric data fields where the
value of the number itself may have no meaning. The feature assigns each
new row a value of one greater than the previous highest value in the
column. You can use AUTOINCREMENT columns to record purchase
order numbers, to identify customer service calls or other entries where an
identifying number is required.
Autoincrement columns are typically primary key columns or columns
constrained to hold unique values (see "Enforcing entity integrity" on
page 374). For example, an autoincrement default is effective when the
column is the first column of an index, because the server uses an index or
key definition to find the highest value.
While using the autoincrement default in other cases is possible, doing so can
adversely affect database performance. For example, in cases where the next
value for each column is stored as a long integer (4 bytes), using values
greater than (2**31 - 1) or large double or numeric values may cause
wraparound to negative values.


You can retrieve the most recent value inserted into an autoincrement
column using the @@identity global variable. For more information, see
"@@identity global variable" on page 257 of the book ASA Reference.
Autoincrement and negative numbers
Autoincrement is intended to work with positive integers.
The initial autoincrement value is set to 0 when the table is created. If inserts
explicitly place only negative values in the column, this value remains the
highest value assigned. An insert that supplies no value then causes
AUTOINCREMENT to generate a value of 1, forcing any other generated
values to be positive. If only negative values were inserted and the database
is stopped and restarted, however, the server recalculates the highest value
for the column and then starts generating negative values.
In UltraLite applications, the autoincrement value is not set to 0 when the
table is created, and AUTOINCREMENT generates negative numbers when
a signed data type is used for the column.
You should define AUTOINCREMENT columns as unsigned to prevent
negative values from being used.
Autoincrement and the IDENTITY column
$ A column with the AUTOINCREMENT default is referred to in
Transact-SQL applications as an IDENTITY column. For information on
IDENTITY columns, see "The special IDENTITY column" on page 973.

The NULL default


For columns that allow NULL values, specifying a NULL default is exactly
the same as not specifying a default at all. If the client inserting the row does
not explicitly assign a value, the row automatically receives a NULL value.
You can use NULL defaults when information for some columns is optional
or not always available, and when it is not required for the data in the
database to be correct.
$ For more information on the NULL value, see "NULL value" on
page 260 of the book ASA Reference.

String and number defaults


You can specify a specific string or number as a default value, as long as the
column holds a string or number data type. You must ensure that the default
specified can be converted to the column’s data type.


Default strings and numbers are useful when there is a typical entry for a
given column. For example, if an organization has two offices, the
headquarters in city_1 and a small office in city_2, you may want to set a
default entry for a location column to city_1, to make data entry easier.
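
Continuing this example, such a default could be set as follows (the location column is hypothetical):

```sql
-- Illustrative sketch: make city_1 the default office location.
ALTER TABLE office
MODIFY location DEFAULT 'city_1'
```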

Constant expression defaults


You can use a constant expression as a default value, as long as it does not
reference database objects. Constant expressions allow column defaults to
contain entries such as the date fifteen days from today, which would be
entered as
... DEFAULT ( dateadd( day, 15, getdate() ) )


Using table and column constraints


Along with the basic table structure (number, name and data type of
columns, name and location of the table), the CREATE TABLE statement
and ALTER TABLE statement can specify many different attributes for a
table that allow control over data integrity.

Caution
Altering tables can interfere with other users of the database. Although
you can execute the ALTER TABLE statement while other connections are
active, you cannot execute the ALTER TABLE statement if any other
connection is using the table you want to alter. For large tables, ALTER
TABLE is a time-consuming operation, and all other requests referencing
the table being altered are prohibited while the statement is processing.

This section describes how to use constraints to help ensure the accuracy of
data entered in the table.

Using CHECK conditions on columns


You use a CHECK condition to ensure that the values in a column satisfy
some definite criterion or rule. For example, these rules or criteria may
simply be required for data to be reasonable, or they may be more rigid rules
that reflect organization policies and procedures.
CHECK conditions on individual column values are useful when only a
restricted range of values are valid for that column.
Example 1 ♦ You can enforce a particular formatting requirement. For example, if a
table has a column for phone numbers you may wish to ensure that users
enter them all in the same manner. For North American phone numbers,
you could use a constraint such as:
ALTER TABLE customer
MODIFY phone
CHECK ( phone LIKE ’(___) ___-____’ )

Example 2 ♦ You can ensure that the entry matches one of a limited number of
values. For example, to ensure that a city column only contains one of a
certain number of allowed cities (say, those cities where the organization
has offices), you could use a constraint such as:
ALTER TABLE office
MODIFY city
CHECK ( city IN ( ’city_1’, ’city_2’, ’city_3’ ) )


♦ By default, string comparisons are case insensitive unless the database is
explicitly created as a case-sensitive database.

Example 3 ♦ You can ensure that a date or number falls in a particular range. For
example, you may require that the start_date column of an employee
table must be between the date the organization was formed and the
current date using the following constraint:
ALTER TABLE employee
MODIFY start_date
CHECK ( start_date BETWEEN ’1983/06/27’
AND CURRENT DATE )
♦ You can use several date formats. The YYYY/MM/DD format in this
example has the virtue of always being recognized regardless of the
current option settings.
Column CHECK tests only fail if the condition returns a value of FALSE. If
the condition returns a value of UNKNOWN, the change is allowed.
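
For example, the phone constraint from Example 1 does not reject a NULL phone number: NULL LIKE '(___) ___-____' evaluates to UNKNOWN, not FALSE, so the row is accepted. The insert below is a sketch and assumes only the CHECK condition constrains the column.

```sql
-- The CHECK condition from Example 1 allows this insert, because the
-- condition evaluates to UNKNOWN for a NULL phone value. Declare the
-- column NOT NULL to forbid missing values.
INSERT INTO customer ( id, phone )
VALUES ( 9999, NULL );
```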

Column CHECK conditions in previous releases


There is a change in the way that column CHECK conditions are held in
this release. In previous releases, column CHECK conditions were
merged together with all other CHECK conditions on a table into a single
CHECK condition. Consequently, you could not replace or delete them
individually. In this release, column CHECK conditions are held
individually in the system tables, and you can replace or delete them
individually. Column CHECK conditions added before this release remain
under the single table constraint, even if you upgrade the database.

Column CHECK conditions from domains


You can attach CHECK conditions to domains. Columns defined on those
data types inherit the CHECK conditions. A CHECK condition explicitly
specified for the column overrides that from the domain.
In the following example, the domain accepts only positive integers, so any
column defined using the posint data type accepts only positive integers
unless the column itself has a CHECK condition explicitly specified. Since
any variable prefixed with the @ sign is replaced by the name of the column
when the CHECK condition is evaluated, any variable name prefixed with @
could be used instead of @col.
CREATE DATATYPE posint INT
CHECK ( @col > 0 )


An ALTER TABLE statement with the DELETE CHECK clause deletes all
CHECK conditions from the table definition, including those inherited from
domains.
$ For information on domains, see "Domains" on page 286 of the book
ASA Reference.

Working with table and column constraints in Sybase Central


You add, alter, and delete column constraints in Sybase Central on the
Constraints tab of the table or column property sheet.

v To manage constraints on a table:


1 Open the Tables folder.
2 Right-click the table and choose Properties from the popup menu.
3 Click the Constraints tab.
4 Make the appropriate changes.

v To manage constraints on a column:


1 Open the Tables folder and double-click a table to open it.
2 Open the Columns folder.
3 Right-click a column and choose Properties from the popup menu.
4 Click the Constraints tab.
5 Make the appropriate changes.

Using CHECK conditions on tables


A CHECK condition applied as a constraint on the table typically ensures
that two values in a row being added or modified have a proper relation to
each other. Column CHECK conditions, in contrast, are held individually in
the system tables, and you can replace or delete them individually. Since this
is more flexible behavior, use CHECK conditions on individual columns
where possible.
For example, in a library database, the date_borrowed must come before the
date_returned.
ALTER TABLE loan
ADD CHECK(date_returned >= date_borrowed)


Modifying and deleting CHECK conditions


There are several ways to alter the existing set of CHECK conditions on a
table.
♦ You can add a new CHECK condition to the table or to an individual
column, as described above.
♦ You can delete a CHECK condition on a column by setting it to NULL.
For example, the following statement removes the CHECK condition on
the phone column in the customer table:
ALTER TABLE customer
MODIFY phone CHECK NULL
♦ You can replace a CHECK condition on a column in the same way as
you would add a CHECK condition. For example, the following
statement adds or replaces a CHECK condition on the phone column of
the customer table:
ALTER TABLE customer
MODIFY phone
CHECK ( phone LIKE ’___-___-____’ )
♦ There are two ways of modifying a CHECK condition defined on the
table, as opposed to a CHECK condition defined on a column:
♦ You can add a new CHECK condition using ALTER TABLE with
an ADD table-constraint clause.
♦ You can delete all existing CHECK conditions, including column
CHECK conditions, using ALTER TABLE DELETE CHECK, and
then add in new CHECK conditions.
You can remove all CHECK conditions on a table (including CHECK
conditions on all its columns and CHECK conditions inherited from
domains) using the ALTER TABLE statement with the DELETE CHECK
clause, as follows:
ALTER TABLE table_name
DELETE CHECK
Deleting a column from a table does not delete CHECK conditions
associated with the column that are held in the table constraint. If you do not
remove these conditions, any attempt to query data in the table produces a
column not found error message.
Table CHECK conditions fail only if a value of FALSE is returned. A value
of UNKNOWN allows the change.


Using domains
A domain is a user-defined data type that, together with other attributes, can
restrict the range of acceptable values or provide defaults. A domain extends
one of the built-in data types. The range of permissible values is usually
restricted by a check constraint. In addition, a domain can specify a default
value and may or may not allow nulls.
You can define your own domains for a number of reasons.
♦ A number of common errors can be prevented if inappropriate values
cannot be entered. A constraint placed on a domain ensures that all
columns and variables intended to hold values in a desired range or
format can hold only the intended values. For example, a data type can
ensure that all credit card numbers entered into the database contain the
correct number of digits.
♦ Domains can make it much easier to understand applications and the
structure of a database.
♦ Domains can prove convenient. For example, you may intend that all
table identifiers are positive integers that, by default, auto-increment.
You could enforce this restriction by entering the appropriate constraints
and defaults each time you define a new table, but it is less work to
define a new domain, then simply state that the identifier can take only
values from the specified domain.
$ For more information about domains, see "SQL Data Types" on
page 263 of the book ASA Reference.

Creating domains (Sybase Central)


You can use Sybase Central to create a domain or assign it to a column.

v To create a new domain (Sybase Central):


1 Open the Domains folder.
2 In the right pane, double-click Add Domain.
3 Follow the instructions of the wizard.
All domains appear in the Domains folder in Sybase Central.

v To assign domains to columns (Sybase Central):


1 For the desired table, open the Columns folder.


2 Right-click the desired column and choose Properties from the popup
menu.
3 On the Data Type tab of the column’s property sheet, assign a domain.
$ For more information, see "Property Sheet Descriptions" on page 1061.

Creating domains (SQL)


You can use the CREATE DOMAIN statement in Interactive SQL to create
and define a domain.

v To create a new domain (SQL):


1 Connect to a database.
2 Execute a CREATE DOMAIN statement.

Example 1: Simple domains
Some columns in the database are to be used for people’s names and others
are to store addresses. You might then define the following domains.
CREATE DOMAIN persons_name CHAR(30)
CREATE DOMAIN street_address CHAR(35)
Having defined these domains, you can use them much as you would the
built-in data types. For example, you can use these definitions to define a
table as follows.
CREATE TABLE customer (
id INT DEFAULT AUTOINCREMENT PRIMARY KEY,
name persons_name,
address street_address
)

Example 2: Default values, check constraints, and identifiers   In the above example, the table’s primary key is specified to be of type integer. Indeed, many of your tables may require similar identifiers. Instead of specifying that these are integers, it is much more convenient to create an identifier domain for use in these applications.
When you create a domain, you can specify a default value and provide a
check constraint to ensure that no inappropriate values are entered into any
column of this type.
Integer values are commonly used as table identifiers. A good choice for
unique identifiers is to use positive integers. Since such identifiers are likely
to be used in many tables, you could define the following domain.
CREATE DOMAIN identifier INT
DEFAULT AUTOINCREMENT
CHECK ( @col > 0 )

Chapter 13 Ensuring Data Integrity

This check constraint uses the variable @col. Using this definition, you can
rewrite the definition of the customer table, shown above.
CREATE TABLE customer (
id identifier PRIMARY KEY,
name persons_name,
address street_address
)
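As a sketch of the domain’s constraint at work (the row values here are invented for illustration), an INSERT that supplies a non-positive identifier is rejected:

```sql
-- The id column inherits CHECK ( @col > 0 ) from the identifier domain,
-- so this statement fails with a check-constraint error rather than
-- storing an invalid identifier.
INSERT INTO customer ( id, name, address )
VALUES ( -5, 'Ann Smith', '33 Elm Street' )
```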

Example 3: Built-in domains   Adaptive Server Anywhere comes with some domains pre-defined. You can use these pre-defined domains as you would a domain that you created yourself. For example, the following monetary domain has already been created for you.
CREATE DOMAIN MONEY NUMERIC(19,4)
NULL
$ For more information, see "CREATE DOMAIN statement" on
page 433 of the book ASA Reference.

Deleting domains
You can use either Sybase Central or a DROP DOMAIN statement in
Interactive SQL to delete a domain.
Only the user DBA or the user who created a domain can drop it. In addition,
since a domain cannot be dropped if any variable or column in the database
is an instance of the domain, you need to first drop any columns or variables
of that type before you can drop the domain.
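For example, assuming the street_address domain defined earlier is used only by the address column of the customer table, the sequence might look like the following sketch; see the ALTER TABLE entry in the ASA Reference for the exact column-removal syntax.

```sql
-- Remove the only column declared with the domain...
ALTER TABLE customer DELETE address;
-- ...after which the domain itself can be dropped.
DROP DOMAIN street_address
```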

v To delete a domain (Sybase Central):


1 Open the Domains folder.
2 Right-click the desired domain and choose Delete from the popup menu.

v To delete a domain (SQL):


1 Connect to a database.
2 Execute a DROP DOMAIN statement.

Example The following statement drops the customer_name domain.


DROP DOMAIN customer_name
$ For more information, see "DROP statement" on page 505 of the book
ASA Reference.


Enforcing entity and referential integrity


The relational structure of the database enables the database server to identify
information within the database, and ensures that all the rows in each table
uphold the relationships between tables (described in the database structure).

Enforcing entity integrity


When a user inserts or updates a row, the database server ensures that the
primary key for the table is still valid: that each row in the table is uniquely
identified by the primary key.
Example 1 The employee table in the sample database uses an employee ID as the
primary key. When you add a new employee to the table, the database server
checks that the new employee ID value is unique and is not NULL.
Example 2 The sales_order_items table in the sample database uses two columns to
define a primary key.
This table holds information about items ordered. One column contains an id
specifying an order, but there may be several items on each order, so this
column by itself cannot be a primary key. An additional line_id column
identifies which line corresponds to the item. The columns id and line_id,
taken together, specify an item uniquely, and form the primary key.
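A composite key of this kind is declared with a table-level PRIMARY KEY clause. The following sketch shows the idea; the column types and the quantity column are assumptions for illustration, not copied from the sample database:

```sql
CREATE TABLE sales_order_items (
id INT NOT NULL,        -- identifies the order
line_id INT NOT NULL,   -- identifies the line within the order
quantity INT,           -- assumed column, for illustration only
PRIMARY KEY ( id, line_id )
)
```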

If a client application breaches entity integrity


Entity integrity requires that each value of a primary key be unique within
the table, and that no NULL values exist. A client application that attempts to
insert or update a primary key with values that are not unique breaches entity
integrity. A breach of entity integrity prevents the new information from
being added to the database; instead, the client application receives an error.
The application programmer should decide how to present this information
to the user and enable the user to take appropriate action. The appropriate
action is usually as simple as asking the user to provide a different, unique
value for the primary key.


Primary keys enforce entity integrity


Once you specify the primary key for each table, maintaining entity integrity
requires no further action by either client application developers or by the
database administrator.
The table owner defines the primary key for a table when they create it. If
they modify the structure of a table at a later date, they can also redefine the
primary key.
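For example, a table owner might declare the key when creating the table, or add it later with ALTER TABLE. This is a sketch using an invented product table; see the CREATE TABLE and ALTER TABLE entries in the ASA Reference for the exact syntax:

```sql
CREATE TABLE product (
sku CHAR(12) NOT NULL,
name CHAR(40)
);
-- Declare the primary key after the fact.
ALTER TABLE product ADD PRIMARY KEY ( sku )
```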
Some application development systems and database design tools allow you
to create and alter database tables. If you are using such a system, you may
not have to enter the CREATE TABLE or ALTER TABLE command
explicitly: the application may generate the statement itself from the
information you provide.
$ For information on creating primary keys, see "Managing primary
keys" on page 131. For the detailed syntax of the CREATE TABLE
statement, see "CREATE TABLE statement" on page 466 of the book ASA
Reference. For information about changing table structure, see the "ALTER
TABLE statement" on page 392 of the book ASA Reference.

Enforcing referential integrity


A foreign key (made up of a particular column or combination of columns)
relates the information in one table (the foreign table) to information in
another (referenced or primary) table. For the foreign key relationship to be
valid, the entries in the foreign key must correspond to the primary key
values of a row in the referenced table. Occasionally, some other unique
column combination may be referenced instead of a primary key.
Example 1 The sample database contains an employee table and a department table. The
primary key for the employee table is the employee ID, and the primary key
for the department table is the department ID. In the employee table, the
department ID is called a foreign key for the department table because each
department ID in the employee table corresponds exactly to a department ID
in the department table.
The foreign key relationship is a many-to-one relationship. Several entries in
the employee table have the same department ID entry, but the department
ID is the primary key for the department table, and so is unique. If a foreign
key could reference a column in the department table containing duplicate
entries, or entries with a NULL value, there would be no way of knowing
which row in the department table is the appropriate reference. This is a
mandatory foreign key.


Example 2 Suppose the database also contained an office table listing office locations.
The employee table might have a foreign key for the office table that
indicates which city the employee’s office is in. The database designer can
choose to leave an office location unassigned at the time the employee is
hired, for example, either because they haven’t been assigned to an office yet,
or because they don’t work out of an office. In this case, the foreign key can
allow NULL values, and is optional.

Foreign keys enforce referential integrity


As with primary keys, you use the CREATE TABLE or ALTER TABLE
statements to create foreign keys. Once you create a foreign key, the column
or columns in the key can contain only values that are present as primary key
values in the table associated with the foreign key.
$ For information on creating foreign keys, see "Managing primary keys"
on page 131.
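For example, the employee–department relationship described above could be declared as in the following sketch (the column names are assumed rather than copied from the sample database):

```sql
-- Each employee row must reference an existing department row.
ALTER TABLE employee
ADD FOREIGN KEY ( dept_id )
REFERENCES department ( dept_id )
```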

Losing referential integrity


Your database can lose referential integrity if someone:
♦ updates or deletes a primary key value. All the foreign keys referencing
that primary key would become invalid.
♦ adds a new row to the foreign table, and enters a value for the foreign
key that has no corresponding primary key value. The database would
become inconsistent.
Adaptive Server Anywhere provides protection against both types of
integrity loss.

If a client application breaches referential integrity


If a client application updates or deletes a primary key value in a table, and if
a foreign key references that primary key value elsewhere in the database,
there is a danger of a breach of referential integrity.
Example If the server allowed the primary key to be updated or deleted, and made no
alteration to the foreign keys that referenced it, the foreign key reference
would be invalid. Any attempt to use the foreign key reference, for example
in a SELECT statement using a KEY JOIN clause, would fail, as no
corresponding value in the referenced table exists.


While Adaptive Server Anywhere handles breaches of entity integrity in a
generally straightforward fashion by simply refusing to enter the data and
returning an error message, potential breaches of referential integrity become
more complicated. You have several options (known as referential integrity
actions) available to help you maintain referential integrity.

Referential integrity actions


Maintaining referential integrity when updating or deleting a referenced
primary key can be as simple as disallowing the update or delete. Often,
however, it is also possible to take a specific action on each foreign key to
maintain referential integrity. The CREATE TABLE and ALTER TABLE
statements allow database administrators and table owners to specify what
action to take on foreign keys that reference a modified primary key when a
breach occurs.
You can specify each of the available referential integrity actions separately
for updates and deletes of the primary key:
♦ RESTRICT Generates an error and prevents the modification if an
attempt to modify a referenced primary key value occurs. This is the
default referential integrity action.
♦ SET NULL Sets all foreign keys that reference the modified primary
key to NULL.
♦ SET DEFAULT Sets all foreign keys that reference the modified
primary key to the default value for that column (as specified in the table
definition).
♦ CASCADE When used with ON UPDATE, this action updates all
foreign keys that reference the updated primary key to the new value.
When used with ON DELETE, this action deletes all rows containing
foreign keys that reference the deleted primary key.
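These actions are specified as part of the foreign key definition. The following sketch (with an assumed dept_id column) cascades department-ID changes into the employee table, and sets the column to NULL if a department is deleted:

```sql
ALTER TABLE employee
ADD FOREIGN KEY ( dept_id )
REFERENCES department ( dept_id )
ON UPDATE CASCADE
ON DELETE SET NULL
```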
System triggers implement referential integrity actions. The trigger, defined
on the primary table, is executed using the permissions of the owner of the
secondary table. This behavior means that cascaded operations can take
place between tables with different owners, without additional permissions
having to be granted.


Referential integrity checking


For foreign keys defined to RESTRICT operations that would violate
referential integrity, default checks occur at the time a statement executes. If
you specify a CHECK ON COMMIT clause, then the checks occur only
when the transaction is committed.
Using a database option to control check time   Setting the WAIT_FOR_COMMIT database option controls the behavior when a foreign key is defined to restrict operations that would violate referential integrity. The CHECK ON COMMIT clause can override this option.
With the default WAIT_FOR_COMMIT set to OFF, operations that would
leave the database inconsistent cannot execute. For example, an attempt to
DELETE a department that still has employees in it is not allowed. The
following statement gives the error "primary key for row in table ’department’
is referenced in another table":
DELETE FROM department
WHERE dept_id = 200
Setting WAIT_FOR_COMMIT to ON causes referential integrity to remain
unchecked until a commit executes. If the database is in an inconsistent state,
the database disallows the commit and reports an error. In this mode, a
database user could delete a department with employees in it; however, the
user cannot commit the change to the database until they do one of the
following:
♦ Delete or reassign the employees belonging to that department.
♦ Insert the dept_id row back into the department table.
♦ Roll back the transaction to undo the DELETE operation.
(A SELECT statement with an appropriate search condition can identify any
rows that still violate referential integrity.)
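The sequence might look like the following sketch, which reuses the department example above (the replacement dept_id value is invented):

```sql
SET OPTION WAIT_FOR_COMMIT = 'ON';
-- Deferred checking: this succeeds even though employees still
-- reference the department.
DELETE FROM department WHERE dept_id = 200;
-- Restore consistency, here by reassigning the employees...
UPDATE employee SET dept_id = 100 WHERE dept_id = 200;
-- ...so that the commit is now allowed.
COMMIT
```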


Integrity rules in the system tables


All the information about database integrity checks and rules is held in the
following system tables:

System table Description


SYS.SYSTABLE The view_def column of SYS.SYSTABLE holds
CHECK constraints. For views, the view_def
holds the CREATE VIEW command that created
the view. You can check whether a particular table
is a base table or a view by looking at the
table_type column, which is BASE or VIEW.
SYS.SYSTRIGGER SYS.SYSTRIGGER holds referential integrity
actions. The referential_action column holds a
single character indicating whether the action is
cascade (C), delete (D), set null (N), or restrict
(R). The event column holds a single character
specifying the event that causes the action to
occur: a delete (D), insert (I), update (U), or
update of column-list (C). The trigger_time
column shows whether the action occurs after (A)
or before (B) the triggering event.
SYS.SYSFOREIGNKEYS This view presents the foreign key information
from the two tables SYS.SYSFOREIGNKEY and
SYS.SYSFKCOL in a more readable format.
SYS.SYSCOLUMNS This view presents the information from the
SYS.SYSCOLUMN table in a more readable
format. It includes default settings and primary
key information for columns.

$ For a description of the contents of each system table, see "System
Tables" on page 991 of the book ASA Reference. You can use Sybase Central
or Interactive SQL to browse these tables and views.

CHAPTER 14

Using Transactions and Isolation Levels

About this chapter You can group SQL statements into transactions, which have the property
that either all statements are executed or none is executed. You should design
each transaction to perform a task that changes your database from one
consistent state to another.
This chapter describes transactions and how to use them in applications. It
also describes how you can set isolation levels in Adaptive Server Anywhere
to limit the interference among concurrent transactions.
Contents
Topic Page
Introduction to transactions 382
Isolation levels and consistency 386
Transaction blocking and deadlock 392
Choosing isolation levels 394
Isolation level tutorials 398
How locking works 413
Particular concurrency issues 426
Replication and concurrency 428
Summary 430


Introduction to transactions
To ensure data integrity it is essential that you can identify states in which
the information in your database is consistent. The concept of consistency is
best illustrated through an example:
Consistency: an example   Suppose you use your database to handle financial accounts, and you wish to transfer money from one client’s account to another. The database is in a consistent state both before and after the money is transferred; but it is not in a consistent state after you have debited money from one account and before you have credited it to the second. During a transfer of money, the database is in a consistent state when the total amount of money in the clients’ accounts is as it was before any money was transferred. When the money has been half transferred, the database is in an inconsistent state. Either both or neither of the debit and the credit must be processed.
Transactions are logical units of work   A transaction is a logical unit of work. Each transaction is a sequence of logically related commands that accomplish one task and transform the database from one consistent state into another. The nature of a consistent state depends on your database.
The statements within a transaction are treated as an indivisible unit: either
all are executed or none is executed. At the end of each transaction, you
commit your changes to make them permanent. If for any reason some of the
commands in the transaction do not process properly, then any intermediate
changes are undone, or rolled back. Another way of saying this is that
transactions are atomic.
Grouping statements into transactions is key both to protecting the
consistency of your data even in the event of media or system failure, and to
managing concurrent database operations. Transactions may be safely
interleaved and the completion of each transaction marks a point at which the
information in the database is consistent.
In the event of a system failure or database crash during normal operation,
Adaptive Server Anywhere performs automatic recovery of your data when
the database is next started. The automatic recovery process recovers all
completed transactions, and rolls back any transactions that were
uncommitted when the failure occurred. The atomic character of transactions
ensures that databases are recovered to a consistent state.
$ For more information about database backups and data recovery, see
"Backup and Data Recovery" on page 645.
$ For further information about concurrent database usage, see
"Introduction to concurrency" on page 384.


Using transactions
Adaptive Server Anywhere expects you to group your commands into
transactions. Knowing which commands or actions signify the start or end of
a transaction lets you take full advantage of this feature.
Starting transactions   Transactions start with one of the following events:
♦ The first statement following a connection to a database
♦ The first statement following the end of a transaction

Completing transactions   Transactions complete with one of the following events:
♦ A COMMIT statement makes the changes to the database permanent.
♦ A ROLLBACK statement undoes all the changes made by the
transaction.
♦ A statement with a side effect of an automatic commit is executed: data
definition commands, such as ALTER, CREATE, COMMENT, and
DROP all have the side effect of an automatic commit.
♦ A disconnection from a database performs an implicit rollback.
♦ ODBC and JDBC have an autocommit setting that enforces a COMMIT
after each statement. By default, ODBC and JDBC require autocommit
to be on, and each statement is a single transaction. If you want to take
advantage of transaction design possibilities, then you should turn
autocommit off.
$ For more information on autocommit, see "Setting autocommit or
manual commit mode" on page 283.
♦ Setting the database option CHAINED to OFF is similar to enforcing an
autocommit after each statement. By default, connections that use
jConnect or Open Client applications have CHAINED set to OFF.
$ For more information, see "Setting autocommit or manual commit
mode" on page 283, and "CHAINED option" on page 175 of the book
ASA Reference.
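Putting these rules together, an explicit transaction is simply a group of statements ended by COMMIT or ROLLBACK. The following sketch, using an invented account table, mirrors the money-transfer example above:

```sql
-- Both updates become permanent together, or neither does.
UPDATE account SET balance = balance - 100 WHERE id = 1;
UPDATE account SET balance = balance + 100 WHERE id = 2;
COMMIT
-- If an error had occurred, ROLLBACK would undo both updates instead.
```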

Options in Interactive SQL   Interactive SQL lets you control when and how transactions from your application terminate:
♦ If you set the option AUTO_COMMIT to ON, Interactive SQL
automatically commits your results following every successful statement
and automatically performs a ROLLBACK after each failed statement.


♦ The setting of the option COMMIT_ON_EXIT controls what happens to
uncommitted changes when you exit Interactive SQL. If this option is
set to ON (the default), Interactive SQL does a COMMIT; otherwise it
undoes your uncommitted changes with a ROLLBACK statement.
$ Adaptive Server Anywhere also supports Transact-SQL commands
such as BEGIN TRANSACTION, for compatibility with Sybase Adaptive
Server Enterprise. For further information, see "Transact-SQL
Compatibility" on page 957.

Introduction to concurrency
Concurrency is the ability of the database server to process multiple
transactions at the same time. Were it not for special mechanisms within the
database server, concurrent transactions could interfere with each other to
produce inconsistent and incorrect information.
Example A database system in a department store must allow many clerks to update
customer accounts concurrently. Each clerk must be able to update the status
of the accounts as they assist each customer: they cannot afford to wait until
no one else is using the database.
Who needs to know about concurrency   Concurrency is a concern to all database administrators and developers. Even if you are working with a single-user database, you must be concerned with concurrency if you wish to process instructions from multiple applications or even from multiple connections from a single application. These applications and connections can interfere with each other in exactly the same way as multiple users in a network setting.
Transaction size affects concurrency   The way you group SQL statements into transactions can have significant effects on data integrity and on system performance. If you make a transaction too short and it does not contain an entire logical unit of work, then inconsistencies can be introduced into the database. If you write a transaction that is too long and contains several unrelated actions, then there is a greater chance that a ROLLBACK will unnecessarily undo work that could have been committed quite safely into the database.
If your transactions are long, they can lower concurrency by preventing other
transactions from being processed concurrently.
There are many factors that determine the appropriate length of a transaction,
depending on the type of application and the environment.


Savepoints within transactions


You may identify important states within a transaction and return to them
selectively by using savepoints to separate groups of related statements.
A SAVEPOINT statement defines an intermediate point during a transaction.
You can undo all changes after that point using a ROLLBACK TO
SAVEPOINT statement. Once a RELEASE SAVEPOINT statement has
been executed or the transaction has ended, you can no longer use the
savepoint.
No locks are released by the RELEASE SAVEPOINT or ROLLBACK TO
SAVEPOINT commands: locks are released only at the end of a transaction.
Naming and nesting savepoints   By using named, nested savepoints, you can have many active savepoints within a transaction. Changes between a SAVEPOINT and a RELEASE
SAVEPOINT can be canceled by rolling back to a previous savepoint or
rolling back the transaction itself. Changes within a transaction are not a
permanent part of the database until the transaction is committed. All
savepoints are released when a transaction ends.
Savepoints cannot be used in bulk operations mode. There is very little
additional overhead in using savepoints.
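A sketch of a savepoint in use (the account table and values are invented):

```sql
UPDATE account SET balance = balance - 100 WHERE id = 1;
SAVEPOINT before_fee;
-- A tentative service fee.
UPDATE account SET balance = balance - 2 WHERE id = 1;
-- Undo only the statements executed since the savepoint...
ROLLBACK TO SAVEPOINT before_fee;
-- ...then commit the remaining change.
COMMIT
```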


Isolation levels and consistency


There are four isolation levels   Adaptive Server Anywhere allows you to control the degree to which the operations in one transaction are visible to the operations in other concurrent transactions. You do so by setting a database option called the isolation level. Adaptive Server Anywhere has four different isolation levels (numbered 0 through 3) that prevent some or all inconsistent behavior. Level 3 provides the highest level of isolation. Lower levels allow more inconsistencies, but typically have better performance. Level 0 is the default setting.
All isolation levels guarantee that each transaction will execute completely
or not at all, and that no updates will be lost.

Typical types of inconsistency


There are three typical types of inconsistency that can occur during the
execution of concurrent transactions. This list is not exhaustive as other types
of inconsistencies can also occur. These three types are mentioned in the ISO
SQL/92 standard and are important because behavior at lower isolation
levels is defined in terms of them.
♦ Dirty read Transaction A modifies a row, but does not commit or roll
back the change. Transaction B reads the modified row. Transaction A
then either further changes the row before performing a COMMIT, or
rolls back its modification. In either case, transaction B has seen the row
in a state which was never committed.
$ For an example of how isolation levels create dirty reads, see
"Tutorial 1: The dirty read" on page 398.
♦ Non-repeatable read Transaction A reads a row. Transaction B then
modifies or deletes the row and performs a COMMIT. If transaction A
then attempts to read the same row again, the row will have been
changed or deleted.
$ For an example of a non-repeatable read, see "Tutorial 2 – The
non-repeatable read" on page 401.
♦ Phantom row Transaction A reads a set of rows that satisfy some
condition. Transaction B then executes an INSERT, or an UPDATE on a
row which did not previously meet A's condition. Transaction B
commits these changes. These newly committed rows now satisfy the
condition. Transaction A then repeats the initial read and obtains a
different set of rows.


$ For an example of a phantom row, see "Tutorial 3 – A phantom
row" on page 405.
Other types of inconsistencies can also exist. These three were chosen for the
ISO SQL/92 standard because they are typical problems and because it was
convenient to describe amounts of locking between transactions in terms of
them.
Isolation levels and dirty reads, non-repeatable reads, and phantom rows   The isolation levels are different with respect to the type of inconsistent behavior that Adaptive Server Anywhere allows. In the following table, an x means that the behavior is prevented, and a ✓ means that the behavior may occur.

Isolation level    Dirty reads    Non-repeatable reads    Phantom rows
0                  ✓              ✓                       ✓
1                  x              ✓                       ✓
2                  x              x                       ✓
3                  x              x                       x

This table demonstrates two points:


♦ Each isolation level eliminates one of the three typical types of
inconsistencies.
♦ Each level eliminates the types of inconsistencies eliminated at all lower
levels.
The four isolation levels have different names under ODBC. These names
are based on the names of the inconsistencies that they prevent, and are
described in "The ValuePtr parameter" on page 389.

Cursor instability
Another significant inconsistency is cursor instability. When this
inconsistency is present, a transaction can modify a row that is being
referenced by another transaction's cursor.
Example Transaction A reads a row using a cursor. Transaction B modifies that row.
Not realizing that the row has been modified, Transaction A modifies it,
rendering the affected row's data incorrect.


Eliminating cursor instability   Adaptive Server Anywhere achieves cursor stability at isolation levels 1, 2, and 3. Cursor stability ensures that no other transactions can modify
information that is contained in the present row of your cursor. The
information in a row of a cursor may be the copy of information contained in
a particular table or may be a combination of data from different rows of
multiple tables. More than one table will likely be involved whenever you
use a join or sub-selection within a SELECT statement.
$ For information on programming SQL procedures and cursors, see
"Using Procedures, Triggers, and Batches" on page 435.
$ Cursors are used only when you are using Adaptive Server Anywhere
through another application. For more information, see "Using SQL in
Applications" on page 263.

Setting the isolation level


Each connection to the database has its own isolation level. In addition, the
database can store a default isolation level for each user or group. The
PUBLIC setting enables you to set a single default isolation level for all
users of the database.
The isolation level is a database option. You change database options using
the SET OPTION statement. For example, the following command sets the
isolation level for the current user to 3, the highest level.
SET OPTION ISOLATION_LEVEL = 3
You can change the isolation of your connection and the default level
associated with your user ID using the SET OPTION command. If you have
permission, you can also change the isolation level for other users or groups.

v To set the isolation level for the current user ID:


♦ Execute the SET OPTION statement. For example, the following
statement sets the isolation level to 3 for the current user:
SET TEMPORARY OPTION ISOLATION_LEVEL = 3

v To set the isolation level for a user or group:


1 Connect to the database as a user with DBA authority.
2 Execute the SET OPTION statement, adding the name of the group and
a period before ISOLATION_LEVEL. For example, the following
command sets the default isolation for the special group PUBLIC to 3.
SET OPTION PUBLIC.ISOLATION_LEVEL = 3


v To set the isolation level just for your present session:


♦ Execute the SET OPTION statement using the TEMPORARY keyword.
For example, the following statement sets the isolation level to 3 for the
duration of your connection:
SET TEMPORARY OPTION ISOLATION_LEVEL = 3
Once you disconnect, your isolation level reverts to its previous value.
Default isolation level   When you connect to a database, the database server determines your initial isolation level as follows:
1 A default isolation level may be set for each user and group. If a level is
stored in the database for your user ID, then the database server uses it.
2 If not, the database server checks the groups to which you belong until it
finds a level. All users are members of the special group PUBLIC. If it
finds no other setting first, then Adaptive Server Anywhere will use the
level assigned to that group.
$ For further information about users and groups, please refer to
"Managing User IDs and Permissions" on page 735.
$ For a description of the SET OPTION statement syntax, see "SET
OPTION statement" on page 612 of the book ASA Reference.
$ You may wish to change the isolation level in mid-transaction if, for
example, just one table or group of tables requires serialized access. For
information about changing the isolation level within a transaction, see
"Changing the isolation level within a transaction" on page 390.

Setting the isolation level from an ODBC-enabled application


ODBC applications call SQLSetConnectAttr with Attribute set to
SQL_ATTR_TXN_ISOLATION and ValuePtr set according to the
corresponding isolation level:
The ValuePtr parameter
ValuePtr                           Isolation level
SQL_TXN_READ_UNCOMMITTED           0
SQL_TXN_READ_COMMITTED             1
SQL_TXN_REPEATABLE_READ            2
SQL_TXN_SERIALIZABLE               3

Changing an isolation level via ODBC   You can change the isolation level of your connection via ODBC by using the function SQLSetConnectOption in the library ODBC32.dll.


The SQLSetConnectOption function takes three parameters: the ODBC
connection handle, the option you wish to set (the isolation level), and the
value corresponding to the isolation level. These values appear in the table
below.

String                             Value
SQL_TXN_ISOLATION                  108
SQL_TXN_READ_UNCOMMITTED           1
SQL_TXN_READ_COMMITTED             2
SQL_TXN_REPEATABLE_READ            4
SQL_TXN_SERIALIZABLE               8

Example The following function call sets the isolation level of the connection
MyConnection to isolation level 2:
SQLSetConnectOption(MyConnection.hDbc, SQL_TXN_ISOLATION, SQL_TXN_REPEATABLE_READ)

ODBC uses the isolation feature to support assorted database lock options.
For example, in PowerBuilder you can use the Lock attribute of the
transaction object to set the isolation level when you connect to the database.
The Lock attribute is a string, and is set as follows:
SQLCA.lock = "RU"
The Lock option is honored only at the moment the CONNECT occurs.
Changes to the Lock attribute after the CONNECT have no effect on the
connection.

Changing the isolation level within a transaction


Sometimes you will find that different isolation levels are suitable for
different parts of a single transaction. Adaptive Server Anywhere allows you
to change the isolation level of your database in the middle of a transaction.
When you change the ISOLATION_LEVEL option in the middle of a
transaction, the new setting affects only the following:
♦ Any cursors opened after the change
♦ Any statements executed after the change
You may wish to change the isolation level during a transaction, as doing so
affords you control over the number of locks your transaction places. You
may find a transaction needs to read a large table, but perform detailed work
with only a few of the rows. If an inconsistency would not seriously affect
your transaction, set the isolation to a low level while you scan the large
table to avoid delaying the work of others.
Chapter 14 Using Transactions and Isolation Levels

You may also wish to change the isolation level in mid transaction if, for
example, just one table or group of tables requires serialized access.
$ For an example in which the isolation level is changed in the middle of
a transaction, see "Tutorial 3 – A phantom row" on page 405.

Viewing the isolation level


You can inspect the isolation level of the current connection using the
CONNECTION_PROPERTY function.

v To view the isolation level for the current connection:


♦ Execute the following statement:
SELECT CONNECTION_PROPERTY('ISOLATION_LEVEL')


Transaction blocking and deadlock


When a transaction is being executed, the database server places locks on
rows to prevent other transactions from interfering with the affected rows.
Locks can interfere with other transactions that are trying to access the
locked rows.
Adaptive Server Anywhere uses transaction blocking to allow transactions
to execute concurrently without interference, or with limited interference.
Any transaction can acquire a lock to prevent other concurrent transactions
from modifying or even accessing a particular row. This transaction blocking
scheme always stops some types of interference. For example, a transaction
that is updating a particular row of a table always acquires a lock on that row
to ensure that no other transaction can update or delete the same row at the
same time.

Transaction blocking
When a transaction attempts to carry out an operation, but is forbidden by a
lock held by another transaction, a conflict arises and the progress of the
transaction attempting to carry out the operation is impeded or blocked.
$ "Two-phase locking" on page 422 describes deadlock, which occurs
when two or more transactions are blocked by each other in such a way that
none can proceed.
$ Sometimes a set of transactions arrive at a state where none of them can
proceed. For more information see "Deadlock" on page 393.

The BLOCKING option


If two transactions have each acquired a read lock on a single row, the behavior when one of them attempts to modify that row depends on the database option BLOCKING. To modify the row, that transaction must acquire a write lock, but it cannot do so while the other transaction holds its read lock.
♦ If BLOCKING is ON (the default setting), then the transaction that
attempts to write waits until the other transaction releases its read lock.
At that time, the write goes through.
♦ If BLOCKING has been set to OFF, then the transaction that attempts to
write receives an error.

When BLOCKING is set to OFF, the transaction terminates instead of waiting, and any changes it has made are rolled back. In this event, try executing the transaction again later.
Blocking is more likely to occur at higher isolation levels because more
locking and more checking is done. Higher isolation levels usually provide
less concurrency. How much less depends on the individual natures of the
concurrent transactions.
$ For information about the BLOCKING option, see "BLOCKING
option" on page 175 of the book ASA Reference.

Deadlock
Transaction blocking can lead to deadlock, a situation in which a set of
transactions arrive at a state where none of them can proceed.
Reasons for deadlocks
A deadlock can arise for two reasons:
♦ A cyclical blocking conflict Transaction A is blocked on transaction
B, and transaction B is blocked on transaction A. Clearly, more time will
not solve the problem, and one of the transactions must be canceled,
allowing the other to proceed. The same situation can arise with more
than two transactions blocked in a cycle.
♦ All active database threads are blocked When a transaction becomes
blocked, its database thread is not relinquished. If the database is
configured with three threads and transactions A, B, and C are blocked
on transaction D which is not currently executing a request, then a
deadlock situation has arisen since there are no available threads.
Adaptive Server Anywhere automatically cancels the last transaction that
became blocked (eliminating the deadlock situation), and returns an error to
that transaction indicating which form of deadlock occurred.
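The cyclical case corresponds to a cycle in a wait-for graph: transaction A waits for B, and B waits for A. As an illustration (not ASA's actual detection algorithm), such a cycle can be found by following wait-for edges until a transaction repeats:

```python
# Detect a cycle in a wait-for graph: waits_for[a] = b means transaction a
# is blocked on transaction b. Illustrative only, not ASA's algorithm.

def find_deadlock(waits_for):
    """Return a list of transactions forming a cycle, or None."""
    for start in waits_for:
        seen = []
        node = start
        while node in waits_for:
            if node in seen:
                return seen[seen.index(node):]   # the cycle itself
            seen.append(node)
            node = waits_for[node]
    return None

# A blocked on B, and B blocked on A: a cyclical blocking conflict.
print(find_deadlock({"A": "B", "B": "A"}))       # -> ['A', 'B']
print(find_deadlock({"A": "B", "B": "C"}))       # -> None
```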
$ The number of database threads that the server uses depends on the
individual database’s setting. For information on setting the number of
database threads, see "THREAD_COUNT option" on page 213 of the book
ASA Reference and "–ge command-line option" on page 27 of the book ASA
Reference.
Determining who is blocked
You can use the sa_conn_info system procedure to determine which connections are blocked on which other connections. This procedure returns a result set consisting of a row for each connection. One column of the result set lists whether the connection is blocked, and if so which other connection it is blocked on.
$ For more information, see "sa_conn_info system procedure" on
page 964 of the book ASA Reference.


Choosing isolation levels


The choice of isolation level depends on the kind of task an application is
carrying out. This section gives some guidelines for choosing isolation
levels.
When you choose an appropriate isolation level you must balance the need
for consistency and accuracy with the need for concurrent transactions to
proceed unimpeded. If a transaction involves only one or two specific values in one table, it is unlikely to interfere as much with other processes as one that searches many large tables, may need to lock many rows or entire tables, and may take a very long time to complete.
For example, if your transactions involve transferring money between bank
accounts or even checking account balances, you will likely want to do your
utmost to ensure that the information you return is correct. On the other
hand, if you just want a rough estimate of the proportion of inactive
accounts, then you may not care whether your transaction waits for others or
not and indeed may be willing to sacrifice some accuracy to avoid interfering
with other users of the database.
Furthermore, a transfer may affect only the two rows which contain the two
account balances, whereas all the accounts must be read in order to calculate
the estimate. For this reason, the transfer is less likely to delay other
transactions.
Adaptive Server Anywhere provides four levels of isolation: levels 0, 1, 2,
and 3. Level 3 provides complete isolation and ensures that transactions are
interleaved in such a manner that the schedule is serializable.

Serializable schedules
To process transactions concurrently, the database server must execute some
component statements of one transaction, then some from other transactions,
before continuing to process further operations from the first. The order in
which the component operations of the various transactions are interleaved is
called the schedule.
Applying transactions concurrently in this manner can result in many
possible outcomes, including the three particular inconsistencies described in
the previous section. Sometimes, the final state of the database also could
have been achieved had the transactions been executed sequentially, meaning
that one transaction was always completed in its entirety before the next was
started. A schedule is called serializable whenever executing the
transactions sequentially, in some order, could have left the database in the
same state as the actual schedule.
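A brute-force way to make this definition concrete: an observed final state is serializable if it matches the outcome of running the transactions to completion in some sequential order. The following toy sketch (illustrative only; prices are kept in integer cents to avoid rounding issues) simply tries every ordering:

```python
# Toy serializability check (illustrative, not ASA's method): an observed
# final state is serializable if some sequential ordering of the
# transactions, each run to completion, produces the same state.
from itertools import permutations

def run_serially(initial_db, transactions, order):
    state = dict(initial_db)              # fresh copy of the database
    for i in order:
        transactions[i](state)            # run each transaction in full
    return state

def is_serializable(initial_db, transactions, observed_final_state):
    return any(
        run_serially(initial_db, transactions, order) == observed_final_state
        for order in permutations(range(len(transactions)))
    )

def t1(state):                            # add 95 cents to the price
    state["price"] += 95

def t2(state):                            # double the price
    state["price"] *= 2

db = {"price": 900}                       # prices in integer cents
print(is_serializable(db, [t1, t2], {"price": 1990}))  # t1 then t2 -> True
print(is_serializable(db, [t1, t2], {"price": 1800}))  # lost update -> False
```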

$ For information about how Adaptive Server Anywhere handles serialization, see "Two-phase locking" on page 422.
Serializability is the commonly accepted criterion for correctness. A
serializable schedule is accepted as correct because the database is not
influenced by the concurrent execution of the transactions.
The isolation level affects a transaction’s serializability. At isolation level 3,
all schedules are serializable. The default setting is 0.
Serializable means that concurrency has added no effect
Even when transactions are executed sequentially, the final state of the database can depend upon the order in which these transactions are executed. For example, if one transaction sets a particular cell to the value 5 and another sets it to the number 6, then the final value of the cell is determined by which transaction executes last.
Knowing a schedule is serializable does not settle which order transactions
would best be executed, but rather states that concurrency has added no
effect. Outcomes which may be achieved by executing the set of transactions
sequentially in some order are all assumed correct.
Unserializable schedules introduce inconsistencies
The inconsistencies introduced in "Typical types of inconsistency" on page 386 are typical of the types of problems that appear when the schedule is not serializable. In each case, the inconsistency appeared because the statements were interleaved in such a way as to produce a result that would not be possible if all transactions were executed sequentially. For example, a dirty read can only occur if one transaction can select rows while another transaction is in the middle of inserting or updating data in the same row.

Typical transactions at various isolation levels


Various isolation levels lend themselves to particular types of tasks. Use the
information below to help you decide which level is best suited to each
particular operation.
Typical level 0 transactions
Transactions that involve browsing or performing data entry may last several minutes, and read a large number of rows. If isolation level 2 or 3 is used, concurrency can suffer. Isolation level 0 or 1 is typically used for this kind of transaction.
For example, a decision support application that reads large amounts of
information from the database to produce statistical summaries may not be
significantly affected if it reads a few rows that are later modified. If high
isolation is required for such an application, it may acquire read locks on
large amounts of data, not allowing other applications write access to it.


Typical level 1 transactions
Isolation level 1 is particularly useful in conjunction with cursors, because this combination ensures cursor stability without greatly increasing locking requirements. Adaptive Server Anywhere achieves this benefit through the early release of read locks acquired for the present row of a cursor. These locks must persist until the end of the transaction at either level 2 or level 3 in order to guarantee repeatable reads.
For example, a transaction that updates inventory levels through a cursor is
particularly suited to this level, because each of the adjustments to inventory
levels as items are received and sold would not be lost, yet these frequent
adjustments would have minimal impact on other transactions.
Typical level 2 transactions
At isolation level 2, rows that match your criterion cannot be changed by other transactions. You can thus employ this level when you must read rows more than once and rely on the fact that rows contained in your first result set won't change.
Because of the relatively large number of read locks required, you should use
this isolation level with care. As with level 3 transactions, careful design of
your database and indexes reduce the number of locks acquired and hence
can improve the performance of your database significantly.
Typical level 3 transactions
Isolation level 3 is appropriate for transactions that demand the most in security. The elimination of phantom rows lets you perform multi-step operations on a set of rows without fear that new rows will appear partway through your operations and corrupt the result.
However much integrity it provides, isolation level 3 should be used sparingly on large systems that are required to support a large number of concurrent transactions. Adaptive Server Anywhere places more locks at this level than at any other, raising the likelihood that one transaction will impede the progress of many others.

Improving concurrency at isolation levels 2 and 3


Isolation levels 2 and 3 use a lot of locks and so good design is of particular
importance for databases that make regular use of these isolation levels.
When you must make use of serializable transactions, it is important that you
design your database, in particular the indices, with the business rules of
your project in mind. You may also improve performance by breaking large
transactions into several smaller ones, thus shortening the length of time that
rows are locked.

Although serializable transactions have the most potential to block other
transactions, they are not necessarily less efficient. When processing these
transactions, Adaptive Server Anywhere can perform certain optimizations
that may improve performance, in spite of the increased number of locks. For
example, since all rows read must be locked whether or not they match the
search criteria, the database server is free to combine the operation of reading
rows and placing locks.

Reducing the impact of locking


You should avoid running transactions at isolation level 3 whenever
practical. They tend to place a large number of locks and hence impact the
efficient execution of other concurrent transactions.
When the nature of an operation demands that it run at isolation level 3, you
can lower its impact on concurrency by designing the query to read as few
rows and index entries as possible. These steps will help the level 3
transaction run more quickly and, of possibly greater importance, will reduce
the number of locks it places.
In particular, you may find that adding an index may greatly help speed up
transactions, particularly when at least one of them must execute at isolation
level 3. An index can have two benefits:
♦ An index enables rows to be located in an efficient manner
♦ Searches that make use of the index may need fewer locks.
$ Further information on the details of the locking methods employed by
Adaptive Server Anywhere is located in "How locking works" on page 413.
$ For information on performance and how Adaptive Server Anywhere
plans its access of information to execute your commands, see "Monitoring
and Improving Performance" on page 799.


Isolation level tutorials


The different isolation levels behave in very different ways, and which one
you will want to use depends on your database and on the operations you are
carrying out. The following set of tutorials will help you determine which
isolation levels are suitable for different tasks.

Tutorial 1 – The dirty read


The following tutorial demonstrates one type of inconsistency that can occur
when multiple transactions are executed concurrently. Two employees at a
typical small merchandising company both access the corporate database at
the same time. The first person is the company’s Sales Manager. The second
is the Accountant.
The Sales Manager wants to increase the price of one of the tee shirts sold by
their firm by $0.95, but is having a little trouble with the syntax of the SQL
language. At the same time, unknown to the Sales Manager, the Accountant
is trying to calculate the retail value of the current inventory to include in a
report he volunteered to bring to the next management meeting.
In this example, you will play the role of two people, both using the
demonstration database concurrently.
1 Start Interactive SQL.
2 Connect to the sample database as the Sales Manager.
In the Connect dialog, choose the ODBC data source ASA 7.0 Sample.
On the Advanced tab, enter the following string to make the window
easier to identify:
ConnectionName=Sales Manager
Click OK to connect.
3 Start a second copy of Interactive SQL.
4 Connect to the sample database as the Accountant.
In the Connect dialog, choose the ODBC data source ASA 7.0 Sample.
On the Advanced tab, enter the following string to make the window
easier to identify:
ConnectionName=Accountant
Click OK to connect.
5 As the Sales Manager, raise the price of all the tee shirts by $0.95.

In the window labeled Sales Manager, execute the following commands:
SELECT id, name, unit_price
FROM product;
UPDATE PRODUCT
SET unit_price = unit_price + 95
WHERE NAME = 'Tee Shirt';
The result is:

id name unit_price
300 Tee Shirt 104.00
301 Tee Shirt 109.00
302 Tee Shirt 109.00
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 7.00
600 Sweatshirt 24.00
601 Sweatshirt 24.00
700 Shorts 15.00

You observe immediately that you should have entered 0.95 instead
of 95, but before you can fix your error, the Accountant accesses the
database from another office.
6 The company’s Accountant is worried that too much money is tied up in
inventory. As the Accountant, execute the following commands to
calculate the total retail value of all the merchandise in stock:
SELECT SUM( quantity * unit_price )
AS inventory
FROM product
The result is:

inventory
21453.00

Unfortunately, this calculation is not accurate. The Sales Manager accidentally raised the price of the tee shirts by $95, and the result reflects this
erroneous price. This mistake demonstrates one typical type of
inconsistency known as a dirty read. You, as the Accountant, accessed
data which the Sales Manager has entered, but has not yet committed.
$ You can eliminate dirty reads and other inconsistencies explained
in "Isolation levels and consistency" on page 386.
7 As the Sales Manager, fix the error by rolling back your first changes
and entering the correct UPDATE command. Check that your new
values are correct.
ROLLBACK;
UPDATE product
SET unit_price = unit_price + 0.95
WHERE NAME = 'Tee Shirt';

id name unit_price
300 Tee Shirt 9.95
301 Tee Shirt 14.95
302 Tee Shirt 14.95
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 7.00
600 Sweatshirt 24.00
601 Sweatshirt 24.00
700 Shorts 15.00

8 The Accountant does not know that the amount he calculated was in
error. You can see the correct value by executing his SELECT statement
again in his window.
SELECT SUM( quantity * unit_price )
AS inventory
FROM product;

inventory
6687.15


9 Finish the transaction in the Sales Manager's window. She would enter a
COMMIT statement to make her changes permanent, but you may wish
to enter a ROLLBACK, instead, to avoid changing the copy of the
demonstration database on your machine.
ROLLBACK;

The Accountant unknowingly receives erroneous information from the database because the database server is processing the work of both the Sales Manager and the Accountant concurrently.
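The tutorial's behavior can be modeled in a few lines. This is a deliberate simplification: in Adaptive Server Anywhere a level 1 reader actually waits for the writer's lock to be released rather than reading around it, but the sketch shows why level 0 alone can return uncommitted data. The Row class is invented for illustration.

```python
# Toy model of a dirty read (illustrative only): at isolation level 0 a
# reader sees uncommitted changes; at level 1 and above it does not.
# (In ASA, a level 1 reader would actually block until the writer commits.)

class Row:
    def __init__(self, value):
        self.committed = value
        self.pending = None           # uncommitted change, if any

    def write(self, value):
        self.pending = value          # change made but not yet committed

    def rollback(self):
        self.pending = None           # discard the uncommitted change

    def read(self, isolation_level):
        if isolation_level == 0 and self.pending is not None:
            return self.pending       # dirty read of uncommitted data
        return self.committed

price = Row(9.00)
price.write(104.00)                   # the Sales Manager's mistaken update
print(price.read(isolation_level=0))  # -> 104.0 (dirty read)
print(price.read(isolation_level=1))  # -> 9.0
price.rollback()
```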

Tutorial 2 – The non-repeatable read


The example in section "Introduction to concurrency" on page 384
demonstrated the first type of inconsistency, namely the dirty read. In that
example, an Accountant made a calculation while the Sales Manager was in
the process of updating a price. The Accountant’s calculation used erroneous
information which the Sales Manager had entered and was in the process of
fixing.
The following example demonstrates another type of inconsistency: non-
repeatable reads. In this example, you will play the role of the same two
people, both using the demonstration database concurrently. The Sales
Manager wishes to offer a new sales price on plastic visors. The Accountant
wishes to verify the prices of some items that appear on a recent order.
This example begins with both connections at isolation level 1, rather than at
isolation level 0, which is the default for the demonstration database supplied
with Adaptive Server Anywhere. By setting the isolation level to 1, you
eliminate the type of inconsistency which the previous tutorial demonstrated,
namely the dirty read.
1 Start Interactive SQL.
2 Connect to the sample database as the Sales Manager.
In the Connect dialog, choose the ODBC data source ASA 7.0 Sample.
On the Advanced tab, enter the following string to make the window
easier to identify:
ConnectionName=Sales Manager
Click OK to connect.
3 Start a second copy of Interactive SQL.
4 Connect to the sample database as the Accountant.
In the Connect dialog, choose the ODBC data source ASA 7.0 Sample.


On the Advanced tab, enter the following string to make the window
easier to identify:
ConnectionName=Accountant
Click OK to connect.
5 Set the isolation level to 1 for the Accountant’s connection by executing
the following command.
SET TEMPORARY OPTION ISOLATION_LEVEL = 1;
6 Set the isolation level to 1 in the Sales Manager’s window by executing
the following command:
SET TEMPORARY OPTION ISOLATION_LEVEL = 1;
7 The Accountant decides to list the prices of the visors. As the
Accountant, execute the following command:
SELECT id, name, unit_price FROM product

id name unit_price
300 Tee Shirt 9.00
301 Tee Shirt 14.00
302 Tee Shirt 14.00
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 7.00

8 The Sales Manager decides to introduce a new sale price for the plastic
visor. As the Sales Manager, execute the following command:
SELECT id, name, unit_price FROM product
WHERE name = 'Visor';
UPDATE product
SET unit_price = 5.95 WHERE id = 501;
COMMIT;

id name unit_price
500 Visor 7.00
501 Visor 5.95


9 Compare the price of the visor in the Sales Manager window with the
price for the same visor in the Accountant window. The Accountant
window still shows the old price, even though the Sales Manager has
entered the new price and committed the change.
This inconsistency is called a non-repeatable read, because if the
Accountant did the same select a second time in the same transaction,
he wouldn’t get the same results. Try it for yourself. As the Accountant,
execute the select command again. Observe that the Sales Manager’s
sale price now displays.
SELECT id, name, unit_price
FROM product;

id name unit_price
300 Tee Shirt 9.00
301 Tee Shirt 14.00
302 Tee Shirt 14.00
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 5.95

Of course, if the Accountant had finished his transaction, for example by issuing a COMMIT or ROLLBACK command before using SELECT
again, it would be a different matter. The database is available for
simultaneous use by multiple users and it is completely permissible for
someone to change values either before or after the Accountant’s
transaction. The change in results is only inconsistent because it happens
in the middle of his transaction. Such an event makes the schedule
unserializable.
10 The Accountant notices this behavior and decides that from now on he
doesn't want the prices changing while he looks at them. Non-repeatable
reads are eliminated at isolation level 2. Play the role of the Accountant:
SET TEMPORARY OPTION ISOLATION_LEVEL = 2;
SELECT id, name, unit_price
FROM product;
11 The Sales Manager decides that it would be better to delay the sale on
the plastic visor until next week so that she won’t have to give the lower
price on a big order that she’s expecting will arrive tomorrow. In her
window, try to execute the following statements. The command will
start to execute, then her window will appear to freeze.


UPDATE product
SET unit_price = 7.00
WHERE id = 501
The database server must guarantee repeatable reads at isolation level 2.
To do so, it places a read lock on each row of the product table that the
Accountant reads. When the Sales Manager tries to change the price
back, her transaction must acquire a write lock on the plastic visor row
of the product table. Since write locks are exclusive, her transaction
must wait until the Accountant’s transaction releases its read lock.
12 The Accountant is finished looking at the prices. He doesn’t want to risk
accidentally changing the database, so he completes his transaction with
a ROLLBACK statement.
ROLLBACK
Observe that as soon as the database server executes this statement, the
Sales Manager’s transaction completes.

id name unit_price
500 Visor 7.00
501 Visor 7.00

13 The Sales Manager can finish now. She wishes to commit her change to
restore the original price.
COMMIT

Types of locks and different isolation levels
When you upgraded the Accountant's isolation from level 1 to level 2, the database server used read locks where none had previously been acquired. In general, each isolation level is characterized by the types of locks needed and by how locks held by other transactions are treated.
At isolation level 0, the database server needs only write locks. It makes use
of these locks to ensure that no two transactions make modifications that
conflict. For example, a level 0 transaction acquires a write lock on a row
before it updates or deletes it, and inserts any new rows with a write lock
already in place.
Level 0 transactions perform no checks on the rows they are reading. For
example, when a level 0 transaction reads a row, it doesn’t bother to check
what locks may or may not have been acquired on that row by other
transactions. Since no checks are needed, level 0 transactions are particularly
fast. This speed comes at the expense of consistency. Whenever they read a
row which is write locked by another transaction, they risk returning dirty
data.


At level 1, transactions check for write locks before they read a row.
Although one more operation is required, these transactions are assured that
all the data they read is committed. Try repeating the first tutorial with the
isolation level set to 1 instead of 0. You will find that the Accountant’s
computation cannot proceed while the Sales Manager’s transaction, which
updates the price of the tee shirts, remains incomplete.
When the Accountant raised his isolation to level 2, the database server
began using read locks. From then on, it acquired a read lock for his
transaction on each row that matched his selection.
Transaction blocking
In step 11 of the above tutorial, the Sales Manager window froze during the execution of her UPDATE command. The database server began to execute
her command, then found that the Accountant’s transaction had acquired a
read lock on the row that the Sales Manager needed to change. At this point,
the database server simply paused the execution of the UPDATE. Once the
Accountant finished his transaction with the ROLLBACK, the database
server automatically released his locks. Finding no further obstructions, it
then proceeded to complete execution of the Sales Manager’s UPDATE.
In general, a locking conflict occurs when one transaction attempts to acquire
an exclusive lock on a row on which another transaction holds a lock, or
attempts to acquire a shared lock on a row on which another transaction
holds an exclusive lock. One transaction must wait for another transaction to
complete. The transaction that must wait is said to be blocked by another
transaction.
When the database server identifies a locking conflict which prohibits a
transaction from proceeding immediately, it can either pause execution of the
transaction, or it can terminate the transaction, roll back any changes, and
return an error. You control the route by setting the BLOCKING option.
When BLOCKING is set to ON, then the second transaction waits as in the
above tutorial.
$ For further information regarding the blocking option, see "The
BLOCKING option" on page 392.

Tutorial 3 – A phantom row


The following continues the same scenario. In this case, the Accountant
views the department table while the Sales Manager creates a new
department. You will observe the appearance of a phantom row.
If you have not done so, do steps 1 through 4 of the previous tutorial,
"Tutorial 2 – The non-repeatable read" on page 401. These steps describe
how to open two copies of Interactive SQL.

1 Set the isolation level to 2 in the Sales Manager window by executing the following command.
SET TEMPORARY OPTION ISOLATION_LEVEL = 2;
2 Set the isolation level to 2 for the Accountant window by executing the
following command.
SET TEMPORARY OPTION ISOLATION_LEVEL = 2;
3 In the Accountant window, enter the following command to list all the
departments.
SELECT * FROM department
ORDER BY dept_id;

dept_id dept_name dept_head_id
100 R&D 501
200 Sales 902
300 Finance 1293
400 Marketing 1576
500 Shipping 703

4 The Sales Manager decides to set up a new department to focus on the foreign market. Philip Chin, who has employee number 129, will head the new department.
INSERT INTO department
(dept_id, dept_name, dept_head_id)
VALUES(600, 'Foreign Sales', 129);
The final command creates the new entry for the new department. It
appears as a new row at the bottom of the table in the Sales Manager’s
window.
5 The Accountant, however, is not aware of the new department. At
isolation level 2, the database server places locks to ensure that no row
changes, but places no locks that stop other transactions from inserting
new rows.
The Accountant will only discover the new row if he should execute his
select command again. In the Accountant’s window, execute the
SELECT statement again. You will see the new row appended to the
table.
SELECT *
FROM department
ORDER BY dept_id;


dept_id dept_name dept_head_id
100 R&D 501
200 Sales 902
300 Finance 1293
400 Marketing 1576
500 Shipping 703
600 Foreign Sales 129

The new row that appears is called a phantom row because, from the
Accountant’s point of view, it appears like an apparition, seemingly from
nowhere. The Accountant is connected at isolation level 2. At that level,
the database server acquires locks only on the rows that he is using.
Other rows are left untouched and hence there is nothing to prevent the
Sales Manager from inserting a new row.
6 The Accountant would prefer to avoid such surprises in future, so he
raises the isolation level of his current transaction to level 3. Enter the
following commands for the Accountant.
SET TEMPORARY OPTION ISOLATION_LEVEL = 3
SELECT *
FROM department
ORDER BY dept_id
7 The Sales Manager would like to add a second department to handle a
sales initiative aimed at large corporate partners. Execute the following
command in the Sales Manager’s window.
INSERT INTO department
(dept_id, dept_name, dept_head_id)
VALUES(700, 'Major Account Sales', 902)
The Sales Manager’s window will pause during execution because the
Accountant’s locks block the command. Click the Interrupt the SQL
Statement button on the toolbar (or click Stop in the SQL menu) to
interrupt this entry.
8 To avoid changing the demonstration database that comes with Adaptive
Server Anywhere, you should roll back the insertion of the new
departments. Execute the following command in the Sales Manager's
window:
ROLLBACK


When the Accountant raised his isolation to level 3 and again selected all
rows in the department table, the database server placed anti-insert locks on
each row in the table, and one extra phantom lock to avoid insertion at the
end of the table. When the Sales Manager attempted to insert a new row at
the end of the table, it was this final lock that blocked her command.
Notice that the Sales Manager’s command was blocked even though the
Sales Manager is still connected at isolation level 2. The database server
places anti-insert locks, like read locks, as demanded by the isolation level
and statements of each transaction. Once placed, these locks must be
respected by all other concurrent transactions.
$ For more information on locking, see "How locking works" on
page 413.

Tutorial 4 – Practical locking implications


The following continues the same scenario. In this tutorial, the Accountant
and the Sales Manager both have tasks that involve the sales order and sales
order items tables. The Accountant needs to verify the amounts of the
commission checks paid to the sales employees for the sales they made
during the month of April 1994. The Sales Manager notices that a few orders
have not been added to the database and wants to add them.
Their work demonstrates phantom locking. When a transaction at isolation
level 3 selects rows which match a given criterion, the database server places
anti-insert locks to stop other transactions from inserting rows which would
also match. The number of locks placed on your behalf depends both on the
search criterion and on the design of your database.
If you have not done so, do steps 1 through 3 of the previous tutorial which
describe how to start two copies of Interactive SQL.
1 Set the isolation level to 2 in both the Sales Manager window and the
Accountant window by executing the following command.
SET TEMPORARY OPTION ISOLATION_LEVEL = 2
2 Each month, the sales representatives are paid a commission, which is
calculated as a percentage of their sales for that month. The Accountant
is preparing the commission checks for the month of April 1994. His
first task is to calculate the total sales of each representative during this
month.
Enter the following command in the Accountant’s window. Prices, sales
order information, and employee data are stored in separate tables. Join
these tables using the foreign key relationships to combine the necessary
pieces of information.


SELECT emp_id, emp_fname, emp_lname,
SUM(sales_order_items.quantity * unit_price)
AS "April sales"
FROM employee
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN product
WHERE '1994-04-01' <= order_date
AND order_date < '1994-05-01'
GROUP BY emp_id, emp_fname, emp_lname

emp_id emp_fname emp_lname April sales


129 Philip Chin 2160.00
195 Marc Dill 2568.00
299 Rollin Overbey 5760.00
467 James Klobucher 3228.00
667 Mary Garcia 2712.00
690 Kathleen Poitras 2124.00
856 Samuel Singer 5076.00
902 Moira Kelly 5460.00
949 Pamela Savarino 2592.00
1142 Alison Clark 2184.00
1596 Catherine Pickett 1788.00

3 The Sales Manager notices that a big order sold by Philip Chin was not
entered into the database. Philip likes to be paid his commission
promptly, so the Sales Manager enters the missing order, which was
placed on April 25.
In the Sales Manager’s window, enter the following commands. The
sales order and the items are entered in separate tables because one
order can contain many items. You should create the entry for the sales
order before you add items to it. To maintain referential integrity, the
database server allows a transaction to add items to an order only if that
order already exists.
INSERT INTO sales_order
VALUES ( 2653, 174, '1994-04-22', 'r1',
'Central', 129);
INSERT INTO sales_order_items
VALUES ( 2653, 1, 601, 100, '1994-04-25' );
COMMIT;


4 The Accountant has no way of knowing that the Sales Manager has just
added a new order. Had the new order been entered earlier, it would
have been included in the calculation of Philip Chin’s April sales.
In the Accountant’s window, calculate the April sales totals again. Use
the same command, and observe that Philip Chin’s April sales total changes to
$4560.00.

emp_id emp_fname emp_lname April sales


129 Philip Chin 4560.00
195 Marc Dill 2568.00
299 Rollin Overbey 5760.00
467 James Klobucher 3228.00
667 Mary Garcia 2712.00
690 Kathleen Poitras 2124.00
856 Samuel Singer 5076.00
902 Moira Kelly 5460.00
949 Pamela Savarino 2592.00
1142 Alison Clark 2184.00
1596 Catherine Pickett 1788.00

Imagine that the Accountant now marks all orders placed in April to
indicate that commission has been paid. The order that the Sales
Manager just entered might be found in the second search and marked as
paid, even though it was not included in Philip’s total April sales!
5 At isolation level 3, the database server places anti-insert locks to ensure
that no other transactions can add a row which matches the criterion of a
search or select.
First, roll back the insertion of Philip’s missing order. Execute the
following statement in the Sales Manager’s window.
ROLLBACK
6 In the Accountant’s window, execute the following two statements.
ROLLBACK;
SET TEMPORARY OPTION ISOLATION_LEVEL = 3;
7 In the Sales Manager’s window, execute the following statements to
remove the new order.


DELETE
FROM sales_order_items
WHERE id = 2653;
DELETE
FROM sales_order
WHERE id = 2653;
COMMIT;
8 In the Accountant’s window, execute the same query as before.
SELECT emp_id, emp_fname, emp_lname,
SUM(sales_order_items.quantity * unit_price)
AS "April sales"
FROM employee
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN product
WHERE '1994-04-01' <= order_date
AND order_date < '1994-05-01'
GROUP BY emp_id, emp_fname, emp_lname
Because you set the isolation to level 3, the database server will
automatically place anti-insert locks to ensure that the Sales Manager
can’t insert April order items until the Accountant finishes his
transaction.
9 Return to the Sales Manager’s window. Again attempt to enter Philip
Chin’s missing order.
INSERT INTO sales_order
VALUES ( 2653, 174, '1994-04-22',
'r1', 'Central', 129)
The Sales Manager’s window will hang; the operation will not complete.
Click the Interrupt the SQL Statement button on the toolbar (or click
Stop in the SQL menu) to interrupt this entry.
10 The Sales Manager can’t enter the order in April, but you might think
that she could still enter it in May.
Change the date of the command to May 05 and try again.
INSERT INTO sales_order
VALUES ( 2653, 174, '1994-05-05', 'r1',
'Central', 129)
The Sales Manager’s window will hang again. Click the Interrupt the
SQL Statement button on the toolbar (or click Stop in the SQL menu) to
interrupt this entry. Although the database server places no more locks
than necessary to prevent insertions, these locks have the potential to
interfere with a large number of other transactions.


The database server places locks in table indices. For example, it places
a phantom lock in an index so a new row cannot be inserted immediately
before it. However, when no suitable index is present, it must lock every
row in the table.
In some situations, anti-insert locks may block some insertions into a
table, yet allow others.
11 The Sales Manager wishes to add a second item to order 2651. Use the
following command.
INSERT INTO sales_order_items
VALUES ( 2651, 2, 302, 4, '1994-05-22' )
All goes well, so the Sales Manager decides to add the following item to
order 2652 as well.
INSERT INTO sales_order_items
VALUES ( 2652, 2, 600, 12, '1994-05-25' )
The Sales Manager’s window will hang. Click the Interrupt the SQL
Statement button on the toolbar (or click Stop in the SQL menu) to
interrupt this entry.
12 Conclude this tutorial by undoing any changes to avoid changing the
demonstration database. Enter the following command in the Sales
Manager’s window.
ROLLBACK
Enter the same command in the Accountant’s window.
ROLLBACK
You may now close both windows.


How locking works


When the database server processes a transaction, it can lock one or more
rows of a table. The locks maintain the reliability of information stored in the
database by preventing concurrent access by other transactions. They also
improve the accuracy of query results by identifying information which is in
the process of being updated.
The database server places these locks automatically and needs no explicit
instruction. It holds all the locks acquired by a transaction until the
transaction is completed, for example by either a COMMIT or ROLLBACK
statement, with a single exception noted in "Early release of read locks—an
exception" on page 424.
The transaction that has access to the row is said to hold the lock. Depending
on the type of lock, other transactions may have limited access to the locked
row, or none at all.
You can use the sa_locks system procedure to list information about locks
that are held in the database. For more information, see "sa_locks system
procedure" on page 969 of the book ASA Reference.
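For example, while a transaction from one of the tutorials above holds its locks, another connection can inspect them. The call below is a sketch; the exact parameters of sa_locks vary by version, so consult the ASA Reference entry cited above.

```sql
-- List the locks currently held in the database. Depending on your
-- version, sa_locks accepts optional parameters to filter the output
-- (for example, by connection or table); see the ASA Reference.
CALL sa_locks();
```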

Objects that can be locked


Adaptive Server Anywhere places locks on the following objects.
♦ Rows in tables A transaction can lock a particular row to prevent
another transaction from changing it. A transaction must place a write
lock on a row if it intends to modify the row.
♦ Insertion points between rows Transactions typically scan rows using
the ordering imposed by an index, or scan rows sequentially. In either
case, a lock can be placed on the scan position. For example, placing a
lock in an index can prevent another transaction from inserting a row
with a specific value or range of values.
♦ Table schemas A transaction can lock the schema of a table,
preventing other transactions from modifying the table’s structure.
Of these objects, rows are likely the most intuitive. It is understandable that a
transaction reading, updating, deleting, or inserting a row should limit
simultaneous access to that row by other transactions. Similarly, a transaction changing
the structure of a table, perhaps inserting a new column, could greatly impact
other transactions. In such a case, it is essential to limit the access of other
transactions to prevent errors.


Row orderings
You can use an index to order rows based on a particular criterion
established when the index was constructed.
When there is no index, Adaptive Server Anywhere orders rows by their
physical placement on disk; in the case of a sequential scan, the specific
ordering is defined by the internal workings of the database server. You
should not rely on the order of rows in a sequential scan. From the point of
view of scanning the rows, however, Adaptive Server Anywhere treats the
request similarly to an indexed scan, albeit using an ordering of its own
choosing. It can place locks on positions in the scan as it would were it using
an index.
Through locking a scan position, a transaction prevents some actions by
other transactions relating to a particular range of values in that ordering of
the rows. Insert and anti-insert locks are always placed on scan positions.
For example, a transaction might delete a row, hence deleting a particular
primary key value. Until this transaction either commits the change or rolls it
back, it must protect its right to do either. In the case of a deleted row, it
must ensure that no other transaction can insert a row using the same primary
key value, hence making a rollback operation impossible. A lock on the scan
position this row occupied reserves this right while having the least impact
on other transactions.

The types of locks


Adaptive Server Anywhere uses four distinct types of locks to implement its
locking scheme and ensure appropriate levels of isolation between
transactions:
♦ read lock (shared)
♦ phantom lock or anti-insert lock (shared)
♦ write lock (exclusive)
♦ anti-phantom lock or insert lock (shared)

Each of these locks has a separate purpose, and they all work together. Each
prevents a particular set of inconsistencies that could occur in their absence.
Depending on the isolation level you select, the database server will use
some or all of them to maintain the degree of consistency you require.
The above types of locks have the following uses:
♦ A transaction acquires a write lock whenever it inserts, updates, or
deletes a row. No other transaction can obtain either a read or a write
lock on the same row when a write lock is set. A write lock is an
exclusive lock.

♦ A transaction can acquire a read lock when it reads a row. Several
transactions can acquire read locks on the same row (a read lock is a
shared or nonexclusive lock). Once a row has been read locked, no other
transaction can obtain a write lock on it. Thus, a transaction can ensure
that no other transaction modifies or deletes a row by acquiring a read
lock.
♦ An anti-insert lock, or phantom lock, is a shared lock placed on an
indexed scan position to prevent phantom rows. It prevents other
transactions from inserting a row into a table immediately before the
row which is anti-insert locked. Preventing phantoms for lookups using
indexes requires a read lock on each row that is read, and one extra read
lock to prevent insertions into the index at the end of the result set.
Preventing phantoms for lookups that do not use indexes requires a read
lock on all rows in a table to stop insertions from altering the result set,
and so can have a bad effect on concurrency.
♦ An insert lock, or anti-phantom lock, is a shared lock placed on an
indexed scan position to reserve the right to insert a row. Once one
transaction acquires an insert lock on a row, no other transaction can
acquire an anti-insert lock on the same row. A read lock on the
corresponding row is always acquired at the same time as an insert lock
to ensure that no other process can update or destroy the row, thereby
bypassing the insert lock.
Adaptive Server Anywhere uses these four types of locks as necessary to
ensure the level of consistency that you require. You do not need to
explicitly request the use of a particular lock. Instead, you control the level of
consistency, as is explained in the next section. Knowledge of the types of
locks will guide you in choosing isolation levels and understanding the
impact of each level on performance.
Exclusive versus shared locks
These four types of locks each fall into one of two categories:
♦ Exclusive locks Only one transaction can hold an exclusive lock on a
row of a table at one time. No transaction can obtain an exclusive lock
while any other transaction holds a lock of any type on the same row.
Once a transaction acquires an exclusive lock, requests to lock the row
by other transactions will be denied.
Write locks are exclusive.
♦ Shared locks Any number of transactions may acquire shared locks on
any one row at the same time. Shared locks are sometimes referred to as
non-exclusive locks.
Read locks, insert locks, and anti-insert locks are shared.


Only one transaction should change any one row at one time. Otherwise, two
simultaneous transactions might try to change one value to two different new
ones. Hence, it is important that a write lock be exclusive.
By contrast, no difficulty arises if more than one transaction wants to read a
row. Since neither is changing it, there is no conflict of interest. Hence, read
locks may be shared.
You may apply similar reasoning to anti-insert and insert locks. Many
transactions can prevent the insertion of a row in a particular scan position by
each acquiring an anti-insert lock. Similar logic applies for insert locks.
When a particular transaction requires exclusive access, it can easily achieve
exclusive access by obtaining both an anti-insert and an insert lock on the
same row. These locks to not conflict when they are held by the same
transaction.
Which specific locks conflict?
The following table identifies the combinations of locks that conflict.

             read       write      anti-insert  insert
read                    conflict
write        conflict   conflict
anti-insert                                     conflict
insert                             conflict

These conflicts arise only when the locks are held by different transactions.
For example, one transaction can obtain both anti-insert and insert locks on a
single scan position to obtain exclusive access to a location.

Locking during queries


The locks that Adaptive Server Anywhere uses when a user enters a
SELECT statement depend on the transaction’s isolation level.
SELECT statements at isolation level 0
No locking operations are required when executing a SELECT statement at
isolation level 0. Each transaction is not protected from changes introduced
by other transactions. It is the responsibility of the programmer or database
user to interpret the result of these queries with this limitation in mind.
SELECT statements at isolation level 1
You may be surprised to learn that Adaptive Server Anywhere uses almost
no more locks when running a transaction at isolation level 1 than it does at
isolation level 0. Indeed, the database server modifies its operation in only
two ways.


The first difference in operation has nothing to do with acquiring locks, but
rather with respecting them. At isolation level 0, a transaction is free to read
any row, whether or not another transaction has acquired a write lock on it.
By contrast, before reading each row an isolation level 1 transaction must
check whether a write lock is in place. It cannot read past any write-locked
rows because doing so might entail reading dirty data.
The second difference in operation creates cursor stability. Cursor stability is
achieved by acquiring a read lock on the current row of a cursor. This read
lock is released when the cursor is moved. More than one row may be
affected if the contents of the cursor are the result of a join. In this case, the
database server acquires read locks on all rows which have contributed
information to the cursor’s current row and removes all these locks as soon as
another row of the cursor is selected as current. A read lock placed to ensure
cursor stability is the only type of lock that does not persist until the end of a
transaction.
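As an illustration of cursor stability, consider a cursor loop inside a stored procedure. The procedure below is an invented sketch, not part of the sample database; at isolation level 1, each department row is read locked only while the cursor is positioned on it.

```sql
-- Hypothetical procedure: scan the department table with a cursor.
CREATE PROCEDURE scan_departments()
BEGIN
    FOR dept_loop AS dept_cursor CURSOR FOR
        SELECT dept_id, dept_name FROM department
    DO
        -- At isolation level 1, the current row holds a read lock here.
        -- The lock is released as soon as the cursor moves to the next row.
        MESSAGE 'Processing ' || dept_name;
    END FOR;
END
```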
SELECT statements at isolation level 2
At isolation level 2, Adaptive Server Anywhere modifies its procedures to
ensure that your reads are repeatable. If your SELECT command returns
values from every row in a table, then the database server acquires a read
lock on each row of the table as it reads it. If, instead, your SELECT contains
a WHERE clause, or another condition that restricts the rows selected,
then the database server instead reads each row, tests the values in the row
against your criterion, and then acquires a read lock on the row if it meets
your criterion.
As at all isolation levels, the locks acquired at level 2 include all those set at
levels 1 and 0. Thus, cursor stability is again ensured and dirty reads are not
permitted.
SELECT statements at isolation level 3
When operating at isolation level 3, Adaptive Server Anywhere is obligated
to ensure that all schedules are serializable. In particular, in addition to the
requirements imposed at each of the lower levels, it must eliminate phantom
rows.
To accommodate this requirement, the database server uses read locks and
anti-insert locks. When you make a selection, the database server acquires a
read lock on each row that contributes information to your result set. Doing
so ensures that no other transactions can modify that material before you
have finished using it.


This requirement is similar to the procedures that the database server uses at
isolation level 2, but differs in that a lock must be acquired for each row
read, whether or not it meets any attached criteria. For example, if you select
the names of all employees in the sales department, then the server must lock
all the rows which contain information about a sales person, whether the
transaction is executing at isolation level 2 or 3. At isolation level 3,
however, it must also acquire read locks on each of the rows of employees
which are not in the sales department. Otherwise, someone else accessing the
database could potentially transfer another employee to the sales department
while you were still using your results.
The fact that a read lock must be acquired on each row whether or not it
meets your criteria has two important implications.
♦ The database server may need to place many more locks than would be
necessary at isolation level 2.
♦ The database server can operate a little more efficiently: it can
immediately acquire a read lock on each row as it reads it, since the
locks must be placed whether or not the information in the row is
accepted.
The number of anti-insert locks the server places can vary greatly and
depends upon your criteria and on the indexes available in the table. Suppose
you select information about the employee with Employee ID 123. If the
employee ID is the primary key of the employee table, then the database
server can economize its operations. It can use the index, which is
automatically built for a primary key, to locate the row efficiently. In
addition, there is no danger that another transaction could change another
employee’s ID to 123 because primary key values must be unique. The
server can guarantee that no second employee is assigned that ID number
simply by acquiring a read lock on only the one row containing information
about the employee with that number.
By contrast, the database server would acquire more locks were you instead
to select all the employees in the sales department. Since any number of
employees could be added to the department, the server will likely have to
read every row in the employee table and test whether each person is in sales.
If this is the case, both read and anti-insert locks must be acquired for each
row.
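The contrast between the two cases can be written out directly against the sample database (the employee ID and department ID below are illustrative):

```sql
-- Unique, indexed criterion: at isolation level 3 the server can
-- satisfy this with a read lock on the single matching row, because
-- the primary key index guarantees no second row can take ID 123.
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id = 123;

-- Non-unique criterion: the server may read and test every row,
-- acquiring read locks on all of them plus anti-insert locks to stop
-- new employees from being added to the department.
SELECT emp_fname, emp_lname
FROM employee
WHERE dept_id = 200;
```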

Locking during inserts


INSERT operations create new rows. Adaptive Server Anywhere employs
the following procedure to ensure data integrity.


1 Make a location in memory to store the new row. The location is
initially hidden from the rest of the database, so there is as yet no
concern that another transaction could access it.
2 Fill the new row with any supplied values.
3 Write lock the new row.
4 Place an insert lock in the table to which the row is being added. Insert
locks conflict with anti-insert locks, so once the insert lock is acquired, no
other transaction can block the insertion by acquiring an anti-insert lock.
5 Insert the row into the table. Other transactions can now, for the first
time, see that the new row exists. They can’t modify or delete it, though,
because of the write lock acquired earlier.
6 Update all affected indexes and verify both referential integrity and
uniqueness, where appropriate. Verifying referential integrity means
ensuring that no foreign key points to a primary key that does not exist.
Primary key values must be unique. Other columns may also be defined
to contain only unique values, and if any such columns exist, uniqueness
is verified.
7 The transaction can be committed provided referential integrity will not
be violated by doing so: record the operation in the transaction log file
and release all locks.
8 Insert other rows as required, if you have selected the cascade option,
and fire triggers.
$ For more information about how locks are used during inserts, see
"Anti-insert locks" on page 421.

Uniqueness
You can ensure that all values in a particular column, or combination of
columns, are unique. The database server always performs this task by
building an index for the unique column, even if you do not explicitly create
one.
In particular, all primary key values must be unique. The database server
automatically builds an index for the primary key of every table. Thus, you
should not ask the database server to create an index on a primary key, as
that index would be a redundant index.
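For example, in the hypothetical table below, the server builds one index for the PRIMARY KEY and another for the UNIQUE column on its own; creating an additional index on part_id by hand would be redundant.

```sql
-- Hypothetical table, not part of the demonstration database.
CREATE TABLE part (
    part_id     INTEGER NOT NULL PRIMARY KEY,  -- index built automatically
    part_number CHAR(20) NOT NULL UNIQUE,      -- index built automatically
    description CHAR(100)
);
```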
Orphans and referential integrity
A foreign key is a reference to a primary key, usually in another table. When
that primary key doesn’t exist, the offending foreign key is called an orphan.
Adaptive Server Anywhere automatically ensures that your database contains
no orphans. This process is referred to as verifying referential integrity.
The database server verifies referential integrity by counting orphans.


WAIT FOR COMMIT
You can ask the database server to delay verifying referential integrity to the
end of your transaction. In this mode, you can insert one row which contains
a foreign key, then insert a second row which contains the missing primary
key. You must perform both operations in the same transaction. Otherwise,
the database server will not allow your operations.
To request that the database server delay referential integrity checks until
commit time, set the value of the option WAIT_FOR_COMMIT to ON. By
default, this option is OFF. To turn it on, issue the following command:
SET OPTION WAIT_FOR_COMMIT = ON;
Before committing a transaction, the database server verifies that referential
integrity is maintained by checking the number of orphans your transaction
has created. At the end of every transaction, that number must be zero.
Even if the necessary primary key exists at the time you insert the row, the
database server must ensure that it still exists when you commit your results.
It does so by placing a read lock on the target row. With the read lock in
place, any other transaction is still free to read that row, but none can delete
or alter it.
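The pattern can be sketched using an invented sales order (order number 2654 is hypothetical). The item is inserted before the order it references, which is only legal because the integrity check is deferred to commit time:

```sql
SET OPTION WAIT_FOR_COMMIT = ON;

-- Insert an item for an order that does not exist yet.
-- The row is temporarily an orphan.
INSERT INTO sales_order_items
VALUES ( 2654, 1, 601, 50, '1994-05-30' );

-- Supply the missing primary key in the same transaction.
INSERT INTO sales_order
VALUES ( 2654, 174, '1994-05-28', 'r1', 'Central', 129 );

-- The orphan count is now zero, so the commit succeeds.
COMMIT;
```

To leave the demonstration database unchanged, substitute ROLLBACK for the final COMMIT.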

Locking during updates


The database server modifies the information contained in a particular record
using the following procedure.
1 Write lock the affected row.
2 If any entries changed are included in an index, delete each index entry
corresponding to the old values. Make a record of any orphans created
by doing so.
3 Update each of the affected values.
4 If indexed values were changed, add new index entries. Verify
uniqueness where appropriate and verify referential integrity if a
primary or foreign key was changed.
5 The transaction can be committed provided referential integrity will not
be violated by doing so: record the operation in the transaction log file,
including the previous values of all entries in the row, and release all
locks.
6 Cascade the insert or delete operations, if you have selected this option
and primary or secondary keys are affected.


You may be surprised to see that the deceptively simple operation of
changing a value in a table can necessitate a rather large number of
operations. The amount of work that the database server needs to do is much
less if the value you are changing is not part of a primary or foreign key. It is
lower still if it is not contained in an index, either explicitly or implicitly
because you have declared that attribute unique.
The operation of verifying referential integrity during an UPDATE operation
is no simpler than when the verification is performed during an INSERT.
In fact, when you change the value of a primary key, you may create
orphans. When you insert the replacement value, the database server must
check for orphans once more.

Locking during deletes


The DELETE operation follows almost the same steps as the INSERT
operation, except in the opposite order.
1 Write lock the affected row.
2 Delete each index entry present for any values in the row.
Immediately prior to deleting each index entry, acquire one or more
anti-insert locks as necessary to prevent another transaction from inserting a
similar entry before the delete is committed. In order to verify referential
integrity, the database server also keeps track of any orphans created as
a side effect of the deletion.
3 Remove the row from the table so that it is no longer visible to other
transactions. The row cannot be destroyed until the transaction is
committed because doing so would remove the option of rolling back
the transaction.
4 The transaction can be committed provided referential integrity will not
be violated by doing so: record the operation in the transaction log file
including the values of all entries in the row, release all locks, and
destroy the row.
5 Cascade the delete operation, if you have selected this option and have
modified a primary or foreign key.

Anti-insert locks
The database server must ensure that the DELETE operation can be rolled
back. It does so in part by acquiring anti-insert locks. These locks are not
exclusive; however, they deny other transactions the right to insert rows that
make it impossible to roll back the DELETE operation. For example, the row
deleted may have contained a primary key value, or another unique value.
Were another transaction allowed to insert a row with the same value, the
DELETE could not be undone without violating the uniqueness property.


Adaptive Server Anywhere enforces uniqueness constraints through indexes.
In the case of a simple table with only a one-attribute primary key, a single
phantom lock may suffice. Other arrangements can quickly escalate the
number of locks required. For example, the table may have no primary key
or other index associated with any of the attributes. Since the rows in a table
have no fundamental ordering, the only way of preventing inserts may be to
anti-insert lock the entire table.
Deleting a row can mean acquiring a great many locks. You can minimize
the effect on concurrency in your database in a number of ways. As
described earlier, indexes and primary keys reduce the number of locks
required because they impose an ordering on the rows in the table. The
database server automatically takes advantage of these orderings. Instead of
acquiring locks on every row in the table, it can simply lock the next row.
Without the index, the rows have no order and thus the concept of a next row
is meaningless.
The database server acquires anti-insert locks on the row following the row
deleted. Should you delete the last row of a table, the database server simply
places the anti-insert lock on an invisible end row. In fact, if the table
contains no index, the number of anti-insert locks required is one more than
the number of rows in the table.
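For example, if deletes on a table are frequently blocked because no usable index exists, adding one gives the rows an ordering, so the server can anti-insert lock only the scan position following a deleted row. The table and column names here are hypothetical:

```sql
-- With this index in place, a DELETE of one customer_note row needs
-- an anti-insert lock only on the neighboring index position, rather
-- than on every row in the table plus the invisible end row.
CREATE INDEX note_created_ix
ON customer_note ( created_on );
```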
Anti-insert locks and read locks
While one or more anti-insert locks exclude an insert lock and one or more
read locks exclude a write lock, no interaction exists between
anti-insert/insert locks and read/write locks. For example, although a write lock
cannot be acquired on a row that contains a read lock, it can be acquired on a
row that has only an anti-insert lock. More options are open to the database
server because of this flexible arrangement, but it means that the server must
generally take the extra precaution of acquiring a read lock when acquiring
an anti-insert lock. Otherwise, another transaction could delete the row.

Two-phase locking
Often, the general information about locking provided in the earlier sections
will suffice to meet your needs. There are times, however, when you may
benefit from more knowledge of what goes on inside the database server
when you perform basic types of operations. This knowledge will provide
you with a better basis from which to understand and predict potential
problems that users of your database may encounter.
Two-phase locking is important in the context of ensuring that schedules are
serializable. The two-phase locking protocol specifies a procedure each
transaction follows.


This protocol is important because, if observed by all transactions, it will guarantee a serializable, and thus correct, schedule. It may also help you understand why some methods of locking permit some types of inconsistencies.
The two-phase locking protocol  1 Before operating on any row, a transaction must acquire a lock on that row.
2 After releasing a lock, a transaction must never acquire any more locks.
In practice, a transaction normally holds its locks until it terminates with either a COMMIT or a ROLLBACK statement. Releasing locks before the end of the transaction rules out rolling back the changes whenever doing so would require operating on rows to return them to an earlier state.
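The protocol can be illustrated with a simple transfer transaction. The table and column names here are illustrative only; the point is that the transaction acquires all its locks before releasing any, and releases them together at COMMIT:

   UPDATE account SET balance = balance - 100 WHERE account_id = 1;
      -- write lock acquired on the first row
   UPDATE account SET balance = balance + 100 WHERE account_id = 2;
      -- write lock acquired on the second row; none released yet
   COMMIT;
      -- all locks released together, satisfying the protocol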
The two-phase locking protocol allows the statement of the following
important theorem:

The two-phase locking theorem
If all transactions obey the two-phase locking protocol, then all possible interleaved schedules are serializable.

In other words, if all transactions follow the two-phase locking protocol, then
none of the inconsistencies mentioned above are possible.
This protocol defines the operations necessary to ensure complete
consistency of your data, but you may decide that some types of
inconsistencies are permissible during some operations on your database.
Eliminating all inconsistency often means reducing the efficiency of your
database.
Write locks are placed on modified, inserted, and deleted rows regardless of isolation level. They are always held until COMMIT or ROLLBACK.
Read locks at different isolation levels

Isolation level   Read locks
0                 None
1                 On rows that appear in the result set; they are held only while a cursor is positioned on the row
2                 On rows that appear in the result set; they are held until the user executes a COMMIT or a ROLLBACK
3                 On all rows read and all insertion points crossed in the computation of a result set
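You select the isolation level, and hence the read-locking behavior shown in the table above, using the ISOLATION_LEVEL database option. The following statements are sketches: the first sets the level temporarily for the current connection only, and the second sets it permanently for a particular user:

   SET TEMPORARY OPTION ISOLATION_LEVEL = 3;
   SET OPTION "DBA".ISOLATION_LEVEL = 2;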

How locking works

$ For more information, see "Serializable schedules" on page 394.


The details of locking are best broken into two sections: what happens during
an INSERT, UPDATE, DELETE or SELECT and how the various isolation
levels affect the placement of read, anti-insert, and insert locks.
Although you can control the amount of locking that takes place within the
database server by setting the isolation level, there is a good deal of locking
that occurs at all levels, even at level 0. These locking operations are
fundamental. For example, once one transaction updates a row, no other
transaction can modify the same row before the first transaction completes.
Without this precaution, you could not roll back the first transaction.
The locking operations that the database server performs at isolation level 0
are the best to learn first precisely because they represent the foundation. The
other levels add locking features, but do not remove any present in the lower
levels. Thus, moving to a higher isolation level adds operations not present at
lower levels.

Early release of read locks—an exception


At isolation level 3, a transaction acquires a read lock on every row it reads.
Ordinarily, a transaction never releases a lock before the end of the
transaction. Indeed, it is essential that a transaction does not release locks
early if the schedule is to be serializable.
Adaptive Server Anywhere always retains write locks until a transaction
completes. If it were to release a lock sooner, another transaction could
modify that row making it impossible to roll back the first transaction.
Read locks are released early in only one special circumstance. Under isolation level 1, a transaction acquires a read lock on a row only when that row becomes the current row of a cursor. When the row is no longer current, the lock is released. This behavior is acceptable because the database server does not need to guarantee repeatable reads at isolation level 1.
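For example, a cursor opened at isolation level 1 carries its read lock along as it moves. The cursor and variable names in this sketch are illustrative:

   BEGIN
      DECLARE name CHAR(20);
      DECLARE emp_curs CURSOR FOR
         SELECT emp_lname FROM employee;
      OPEN emp_curs;
      FETCH NEXT emp_curs INTO name;
         -- read lock acquired on the current row
      FETCH NEXT emp_curs INTO name;
         -- lock on the previous row released; the new current row is locked
      CLOSE emp_curs;
   END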
$ For more information about isolation levels, see "Choosing isolation
levels" on page 394.

Special optimizations
The previous sections describe the locks acquired when all transactions are
operating at a given isolation level. For example, when all transactions are
running at isolation level 2, locking is performed as described in the
appropriate section, above.

In practice, your database is likely to need to process multiple transactions that are at different levels. A few transactions, such as the transfer of money between accounts, must be serializable and so run at isolation level 3. For other operations, such as updating an address or calculating average daily sales, a lower isolation level will often suffice.
While the database server is not processing any transactions at level 3, it
optimizes some operations so as to improve performance. In particular, many
extra anti-insert and insert locks are often necessary to support a level 3
transaction. Under some circumstances, the database server can avoid either
placing or checking for some types of locks when no level 3 transactions are
present.
For example, the database server uses anti-insert locks for two distinct purposes:
1 To ensure that deletes in tables with unique attributes can be rolled back.
2 To eliminate phantom rows in level 3 transactions.
If no level 3 transactions are using a particular table, then the database server
need not place anti-insert locks in the index of a table that contains no unique
attributes. If, however, even one level 3 transaction is present, all
transactions, even those at level 0, must place anti-insert locks so that the
level 3 transactions can identify their operations.
Naturally, the database server always attaches notes to a table when it
attempts the types of optimizations described above. Should a level 3
transaction suddenly start, you can be confident that the necessary locks will
be put in place for it.
You may have little control over the mix of isolation levels in use at one time
as so much will depend on the particular operations that the various users of
your database wish to perform. Where possible, however, you may wish to
select the time that level 3 operations execute because they have the potential
to cause significant slowing of database operations. The impact is magnified
because the database server is forced to perform extra operations for lower-
level operations.

Particular concurrency issues


This section discusses the following particular concurrency issues:
♦ "Primary key generation" on page 426
♦ "Data definition statements and concurrency" on page 427

Primary key generation


You will encounter situations where the database should automatically
generate a unique number. For example, if you are building a table to store
sales invoices you might prefer that the database assign unique invoice
numbers automatically, rather than require sales staff to pick them.
There are many methods for generating such numbers.
Example For example, invoice numbers could be obtained by adding 1 to the previous
invoice number. This method will not work when there is more than one
person adding invoices to the database. Two people may decide to use the
same invoice number.
There is more than one solution to the problem:
♦ Assign a range of invoice numbers to each person who adds new invoices.
You could implement this scheme by creating a table with two columns, user name and invoice number. The table would have one row for each user that adds invoices. Each time a user adds an invoice, the number in the table would be incremented and used for the new invoice. To handle all tables in the database, the table should have three columns: table name, user name, and last key value. You should periodically check that each person still has a sufficient supply of numbers.
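This design might be sketched as follows; the table and column names are illustrative only:

   CREATE TABLE key_ranges (
      table_name CHAR(128) NOT NULL,
      user_name  CHAR(128) NOT NULL,
      last_key   INTEGER NOT NULL,
      PRIMARY KEY( table_name, user_name )
   );
   -- Each time this user adds an invoice:
   UPDATE key_ranges
   SET last_key = last_key + 1
   WHERE table_name = 'invoice'
   AND user_name = CURRENT USER;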
♦ Create a table with two columns: table name and last key value.
One row in this table would contain the last invoice number used. Each
time someone adds an invoice, establish a new connection, increment
the number in the table, and commit the change immediately. The
incremented number can be used for the new invoice. Other users will
be able to grab invoice numbers because you updated the row with a
separate transaction that only lasted an instant.
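A sketch of this counter table follows, with illustrative names. The UPDATE and COMMIT are issued on the separate connection, so the write lock on the counter row is held only for an instant:

   CREATE TABLE key_counters (
      table_name CHAR(128) NOT NULL PRIMARY KEY,
      last_key   INTEGER NOT NULL
   );
   -- On the separate connection:
   UPDATE key_counters
   SET last_key = last_key + 1
   WHERE table_name = 'invoice';
   SELECT last_key FROM key_counters
   WHERE table_name = 'invoice';
   COMMIT;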
♦ Probably the best solution is to use a column with a default value of
AUTOINCREMENT.
For example,

CREATE TABLE orders (
   order_id   INTEGER NOT NULL DEFAULT AUTOINCREMENT,
   order_date DATE,
   PRIMARY KEY( order_id )
)
On inserts into the table, if a value is not specified for the autoincrement
column, a unique value is generated. If a value is specified, it will be
used. If the value is larger than the current maximum value for the
column, that value will be used as a starting point for subsequent inserts.
The value most recently inserted into an autoincrement column is available as the global variable @@identity.

Unique values in replicated databases


Different techniques are required if you replicate your database and more
than one person can add entries which must later be merged.
$ See "Replication and concurrency" on page 428.

Data definition statements and concurrency


Data definition statements that change an entire table, such as CREATE INDEX, ALTER TABLE, and TRUNCATE TABLE, are prevented whenever the affected table is in use by another connection. These data definition statements can be time consuming, and the database server will not process requests referencing the same table while the statement is being processed.
The CREATE TABLE statement does not cause any concurrency conflicts.
The GRANT statement, REVOKE statement, and SET OPTION statement
also do not cause concurrency conflicts. These commands affect any new
SQL statements sent to the database server, but do not affect existing
outstanding statements.
GRANT and REVOKE for a user are not allowed if that user is connected to
the database.

Data definition statements and replicated databases


Using data definition statements in replicated databases requires special
care. For more information see the separate manual entitled Data
Replication with SQL Remote.

Replication and concurrency


Some computers on your network might be portable computers that people
take away from the office or which are occasionally connected to the
network. There may be several database applications that they would like to
use while not connected to the network.
Database replication is the ideal solution to this problem. Using SQL Remote
or MobiLink synchronization, you can publish information in a consolidated,
or master, database to any number of other computers. You can control
precisely the information replicated on any particular computer. Any person
can receive particular tables, or even portions of the rows or columns of a
table. By customizing the information each receives, you can ensure that
their copy of the database is no larger than necessary to contain the
information they require.
$ Extensive information on replication is provided in the separate manual
entitled Replication and Synchronization Guide. The information in this
section is, thus, not intended to be complete. Rather, it introduces concepts
related directly to locking and concurrency considerations.
SQL Remote and MobiLink allow replicated databases to be updated from a
central, consolidated database, as well as updating this same central data as
the results of transactions processed on the remote machine. Since updates
can occur in either direction, this ability is referred to as bi-directional
replication.
Since the results of transactions can affect the consolidated database, whether
they are processed on the central machine or on a remote one, the effect is
that of allowing concurrent transactions.
Transactions may happen at the same time on different machines. They may
even involve the same data. In this case, though, the machines may not be
physically connected. No means may exist by which the remote machine can
contact the consolidated database to set any form of lock or identify which
rows have changed. Thus, locks can not prevent inconsistencies as they do
when all transactions are processed by a single server.
An added complication is introduced by the fact that any given remote
machine may not hold a full copy of the database. Consider a transaction
executed directly on the main, consolidated database. It may affect rows in
two or more tables. The same transaction might not execute on a remote
database, as there is no guarantee that one or both of the affected tables is
replicated on that machine. Even if the same tables exist, they may not
contain exactly the same information, depending upon how recently the
information in the two databases has been synchronized.

To accommodate the above constraints, replication is not based on transactions, but rather on operations. An operation is a change to one row in a table. This change could be the result of an UPDATE, INSERT, or DELETE statement. An operation resulting from an UPDATE or DELETE identifies the initial values of each column, and an operation resulting from an INSERT or UPDATE records the final values.
A transaction may result in none, one, or more than one operation. One
operation will never result from two or more transactions. If two transactions
modify a table, then two or more corresponding operations will result.
If an operation results from a transaction processed on a remote computer,
then it must be passed to the consolidated database so that the information
can be merged. If, on the other hand, an operation results from a transaction
on the consolidated computer, then the operation may need to be sent to
some remote sites, but not others. Since each remote site may contain a
replica of a portion of the complete database, SQL Remote knows to pass the
operation to a remote site only when it affects that portion of the database.
Transaction log based replication  SQL Remote uses a transaction log based replication mechanism. When you activate SQL Remote on a machine, it scans the transaction log to identify the operations it must transfer and prepares one or more messages.
SQL Remote can pass these messages between computers using a number of
methods. It can create files containing the messages and store them in a
designated directory. Alternatively, SQL Remote can pass messages using
any of the most common messaging protocols. You likely can use your
present e-mail system.
Conflicts may arise when merging operations from remote sites into the
consolidated database. For example, two people, each at a different remote
site, may have changed the same value in the same table. Whereas the
locking facility built into Adaptive Server Anywhere can eliminate conflict
between concurrent transactions handled by the same server, it is impossible
to automatically eliminate all conflicts between two remote users who both
have permission to change the same value.
As the database administrator, you can avoid this potential problem through
suitable database design or by writing conflict resolution algorithms. For
example, you can decide that only one person will be responsible for
updating a particular range of values in a particular table. If such a restriction
is impractical, then you can instead use the conflict resolution facilities of
SQL Remote to implement triggers and procedures which resolve conflicts in
a manner appropriate to the data involved.
$ SQL Remote provides the tools and programming facilities you need to
take full advantage of database replication. For further information, see the
manual Replication and Synchronization Guide.

Summary
Transactions and locking are perhaps second only in importance to relations
between tables. The integrity and performance of any database can benefit
from the judicious use of locking and careful construction of transactions.
Both are essential to creating databases that must execute a large number of
commands concurrently.
Transactions group SQL statements into logical units of work. You may end
each by either rolling back any changes you have made or by committing
these changes and so making them permanent.
Transactions are essential to data recovery in the event of system failure.
They also play a pivotal role in interweaving statements from concurrent
transactions.
To improve performance, multiple transactions must be executed
concurrently. Each transaction is composed of component SQL statements.
When two or more transactions are to be executed concurrently, the database
server must schedule the execution of the individual statements. Concurrent
transactions have the potential to introduce new, inconsistent results that
could not arise were these same transactions executed sequentially.
Many types of inconsistencies are possible, but four typical types are
particularly important because they are mentioned in the ISO SQL/92
standard and the isolation levels are defined in terms of them.
♦ Dirty read One transaction reads data modified, but not yet committed,
by another.
♦ Non-repeatable read A transaction reads the same row twice and gets
different values.
♦ Phantom row A transaction selects rows, using a certain criterion,
twice and finds new rows in the second result set.
♦ Lost Update One transaction’s changes to a row are completely lost
because another transaction is allowed to save an update based on earlier
data.
A schedule is called serializable whenever the effect of executing the
statements according to the schedule is the same as could be achieved by
executing each of the transactions sequentially. Schedules are said to be
correct if they are serializable. A serializable schedule will cause none of the
above inconsistencies.

Locking controls the amount and types of interference permitted. Adaptive Server Anywhere provides you with four levels of locking: isolation levels 0, 1, 2, and 3. At the highest isolation, level 3, Adaptive Server Anywhere guarantees that the schedule is serializable, meaning that the effect of executing all the transactions is equivalent to running them sequentially.
Unfortunately, locks acquired by one transaction may impede the progress of
other transactions. Because of this problem, lower isolation levels are
desirable whenever the inconsistencies they may allow are tolerable.
Increasing isolation to improve data consistency frequently means lowering concurrency, that is, the efficiency of the database at processing concurrent transactions. You must frequently balance the requirements for consistency
against the need for performance to determine the best isolation level for
each operation.
Conflicting locking requirements between different transactions may lead to
blocking or deadlock. Adaptive Server Anywhere contains mechanisms for
dealing with both these situations, and provides you with options to control
them.
Transactions at higher isolation levels do not, however, always impact
concurrency. Other transactions will be impeded only if they require access
to locked rows. You can improve concurrency through careful design of your
database and transactions. For example, you can shorten the time that locks
are held by dividing one transaction into two shorter ones, or you might find
that adding an index allows your transaction to operate at higher isolation
levels with fewer locks.
The increased popularity of portable computers will frequently mean that
your database may need to be replicated. Replication is an extremely
convenient feature of Adaptive Server Anywhere, but it introduces new
considerations related to concurrency. These topics are covered in a separate
manual.

P A R T F O U R

Adding Logic to the Database

This part describes how to build logic into your database using SQL stored
procedures, triggers, and Java. Storing logic in the database makes it available
automatically to all applications, providing consistency, performance, and
security benefits. The combined Java/Stored Procedure debugger is a powerful
tool for debugging all kinds of logic.

C H A P T E R 1 5

Using Procedures, Triggers, and Batches

About this chapter Procedures and triggers store procedural SQL statements in the database for
use by all applications. They enhance the security, efficiency, and
standardization of databases. User-defined functions are one kind of
procedure that return a value to the calling environment for use in queries
and other SQL statements. Batches are sets of SQL statements submitted to
the database server as a group. Many features available in procedures and
triggers, such as control statements, are also available in batches.
$ For many purposes, server-side JDBC provides a more flexible way to
build logic into the database than SQL stored procedures. For information on
JDBC, see "Data Access Using JDBC" on page 591.
Contents
Topic Page
Procedure and trigger overview 437
Benefits of procedures and triggers 438
Introduction to procedures 439
Introduction to user-defined functions 446
Introduction to triggers 450
Introduction to batches 457
Control statements 459
The structure of procedures and triggers 462
Returning results from procedures 466
Using cursors in procedures and triggers 471
Errors and warnings in procedures and triggers 474
Using the EXECUTE IMMEDIATE statement in procedures 483
Transactions and savepoints in procedures and triggers 484
Some tips for writing procedures 485
Statements allowed in batches 487
Calling external libraries from procedures 488

Procedure and trigger overview


Procedures and triggers store procedural SQL statements in a database for
use by all applications. They can include control statements that allow
repetition (LOOP statement) and conditional execution (IF statement and
CASE statement) of SQL statements.
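For example, a compound statement can combine these control statements; the following fragment is illustrative only:

   BEGIN
      DECLARE counter INT;
      SET counter = 0;
      lbl: LOOP
         SET counter = counter + 1;
         IF counter >= 5 THEN
            LEAVE lbl;
         END IF;
      END LOOP lbl;
   END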
Procedures are invoked with a CALL statement, and use parameters to
accept values and return values to the calling environment. Procedures can
return result sets to the caller, call other procedures or fire triggers. For
example, a user-defined function is a type of stored procedure that returns a
single value to the calling environment. User-defined functions do not
modify parameters passed to them, but rather, broaden the scope of functions
available to queries and other SQL statements.
Triggers are associated with specific database tables. They fire automatically
whenever someone inserts, updates or deletes rows of the associated table.
Triggers can call procedures and fire other triggers; however, they have no
parameters, nor can they be invoked by a CALL statement.
Procedure debugger  You can debug stored procedures and triggers using the combined stored procedure/Java debugger. For more information, see "Debugging Logic in the Database" on page 621.

Benefits of procedures and triggers


Definitions for procedures and triggers appear in the database, separately
from any one database application. This separation provides a number of
advantages.
Standardization Procedures and triggers standardize actions performed by more than one
application program. By coding the action once and storing it in the database
for future use, applications need only call the procedure or fire the trigger to
achieve the desired result repeatedly. And since changes occur in only one
place, all applications using the action automatically acquire the new
functionality if the implementation of the action changes.
Efficiency Procedures and triggers used in a network database server environment can
access data in the database without requiring network communication. This
means they execute faster and with less impact on network performance than
if they had been implemented in an application on one of the client
machines.
When you create a procedure or trigger, it is automatically checked for
correct syntax, and then stored in the system tables. The first time any
application calls or fires a procedure or trigger, it is compiled from the
system tables into the server’s virtual memory and executed from there. Since
one copy of the procedure or trigger remains in memory after the first
execution, repeated executions of the same procedure or trigger happen
instantly. As well, several applications can use a procedure or trigger
concurrently, or one application can use it recursively.
Procedures are less efficient if they contain simple queries and have many
arguments. For complex queries, procedures are more efficient.
Security Procedures and triggers provide security by allowing users limited access to
data in tables that they cannot directly examine or modify.
Triggers, for example, execute under the table permissions of the owner of
the associated table, but any user with permissions to insert, update or delete
rows in the table can fire them. Similarly, procedures (including user-defined
functions) execute with permissions of the procedure owner, but any user
granted permissions can call them. This means that procedures and triggers
can (and usually do) have different permissions than the user ID that invoked
them.

Introduction to procedures
To use procedures, you need to understand how to:
♦ Create procedures
♦ Call procedures from a database application
♦ Drop or remove procedures
♦ Control who has permissions to use procedures
This section discusses the above aspects of using procedures, as well as some
different applications of procedures.

Creating procedures
Adaptive Server Anywhere provides a number of tools that let you create a
new procedure.
In Sybase Central, you can use a wizard to provide necessary information
and then complete the code in a generic code editor. Sybase Central also
provides procedure templates (located in the Procedures & Functions folder)
that you can open and modify.
In Interactive SQL, you use the CREATE PROCEDURE statement to create
procedures. However, you must have RESOURCE authority. Where you
enter the statement depends on which tool you use.

v To create a new procedure (Sybase Central):


1 Connect to a database with DBA or Resource authority.
2 Open the Procedures & Functions folder of the database.
3 In the right pane, double-click Add Procedure/Function (Wizard).
4 Follow the instructions in the wizard.
5 When the Code Editor opens, complete the code of the procedure.
6 To execute the code in the database, choose File➤Save/Execute in
Database.
The new procedure appears in the Procedures & Functions folder.

v To create a new remote procedure (Sybase Central):


1 Connect to a database with DBA authority.

2 Open the Procedures & Functions folder of the database.


3 In the right pane, double-click Add Remote Procedure (Wizard).
4 Follow the instructions in the wizard.
5 When the Code Editor opens, complete the code of the procedure.

Tip
You can also create a remote procedure by right-clicking a remote server
in the Remote Servers folder and choosing Add Remote Procedure from
the popup menu.

v To create a procedure (SQL):


1 Launch Interactive SQL and connect to a database using DBA authority.
2 Type the commands for the procedure in the SQL Statements pane of the
Interactive SQL viewer.

v To create a procedure using a different tool:


♦ Follow the instructions for your tool. You may need to change the
command delimiter away from the semicolon before entering the
CREATE PROCEDURE statement.
$ For more information about connecting, see "Connecting to a
Database" on page 33.
Example The following simple example creates the procedure new_dept, which carries
out an INSERT into the department table of the sample database, creating a
new department.
CREATE PROCEDURE new_dept (
   IN id INT,
   IN name CHAR(35),
   IN head_id INT )
BEGIN
   INSERT
   INTO dba.department ( dept_id,
      dept_name,
      dept_head_id )
   VALUES ( id, name, head_id );
END
The body of a procedure is a compound statement. The compound
statement starts with a BEGIN statement and concludes with an END
statement. In the case of new_dept, the compound statement is a single
INSERT bracketed by BEGIN and END statements.

Parameters to procedures are marked as one of IN, OUT, or INOUT. All parameters to the new_dept procedure are IN parameters, as they are not changed by the procedure.
$ For more information, see "CREATE PROCEDURE statement" on
page 453 of the book ASA Reference, "ALTER PROCEDURE statement" on
page 389 of the book ASA Reference, and "Using compound statements" on
page 460

Altering procedures
You can modify an existing procedure using either Sybase Central or
Interactive SQL. You must have DBA authority or be the owner of the
procedure.
In Sybase Central, you cannot rename an existing procedure directly. Instead,
you must create a new procedure with the new name, copy the previous code
to it, and then delete the old procedure.
In Interactive SQL, you can use an ALTER PROCEDURE statement to
modify an existing procedure. You must include the entire new procedure in
this statement (in the same syntax as in the CREATE PROCEDURE
statement that created the procedure). You must also reassign user
permissions on the procedure.
$ For information on altering database object properties, see "Setting
properties for database objects" on page 120.
$ For information on granting or revoking permissions for procedures,
see "Granting permissions on procedures" on page 747 and "Revoking user
permissions" on page 750.

v To alter the code of a procedure (Sybase Central):


1 Open the Procedures & Functions folder.
2 Right-click the desired procedure.
3 From the popup menu, do one of the following:
♦ Choose Open as Watcom-SQL to edit the code in the Watcom-SQL
dialect.
♦ Choose Open as Transact-SQL to edit the code in the Transact-SQL
dialect.
4 In the Code Editor, edit the procedure’s code.
5 To execute the code in the database, choose File➤Save/Execute in
Database.

v To alter the code of a procedure (SQL):


1 Connect to the database.
2 Execute an ALTER PROCEDURE statement. Include the entire new
procedure in this statement.
$ For more information, see "ALTER PROCEDURE statement" on
page 389 of the book ASA Reference, "CREATE PROCEDURE statement"
on page 453 of the book ASA Reference, and "Creating procedures" on
page 439.

Calling procedures
CALL statements invoke procedures. Procedures can be called by an
application program, or by other procedures and triggers.
$ For more information, see "CALL statement" on page 410 of the book
ASA Reference.
The following statement calls the new_dept procedure to insert an Eastern
Sales department:
CALL new_dept( 210, 'Eastern Sales', 902 );
After this call, you may wish to check the department table to see that the
new department has been added.
All users who have been granted EXECUTE permissions for the procedure
can call the new_dept procedure, even if they have no permissions on the
department table.
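The owner of the procedure, or a user with DBA authority, grants that permission with a statement such as the following; the user ID shown is illustrative:

   GRANT EXECUTE ON new_dept TO sales_admin;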
$ For more information about EXECUTE permissions, see "EXECUTE
statement" on page 514 of the book ASA Reference.

Copying procedures in Sybase Central


In Sybase Central, you can copy procedures between databases. To do so, select a procedure in the right pane of Sybase Central and drag it to the Procedures & Functions folder of another connected database. A new procedure is then created, and the original procedure’s code is copied to it.
Note that only the procedure code is copied to the new procedure. The other
procedure properties (permissions, etc.) are not copied. A procedure can be
copied to the same database, provided it is given a new name.

Deleting procedures
Once you create a procedure, it remains in the database until someone
explicitly removes it. Only the owner of the procedure or a user with DBA
authority can drop the procedure from the database.

v To delete a procedure (Sybase Central):


1 Connect to a database with DBA authority or as the owner of the
procedure.
2 Open the Procedures & Functions folder.
3 Right-click the desired procedure and choose Delete from the popup
menu.

v To delete a procedure (SQL):


1 Connect to a database with DBA authority or as the owner of the
procedure.
2 Execute a DROP PROCEDURE statement.

Example The following statement removes the procedure new_dept from the database:
DROP PROCEDURE new_dept

Returning procedure results in parameters


Procedures return results to the calling environment in one of the following
ways:
♦ Individual values are returned as OUT or INOUT parameters.
♦ Result sets can be returned.
♦ A single result can be returned using a RETURN statement.
This section describes how to return results from procedures as parameters.
The following procedure on the sample database returns the average salary of
employees as an OUT parameter.
CREATE PROCEDURE AverageSalary( OUT avgsal
NUMERIC (20,3) )
BEGIN
SELECT AVG( salary )
INTO avgsal
FROM employee;
END


v To run this procedure and display its output (SQL):


1 Connect to the sample database from Interactive SQL with a user ID of
DBA and a password of SQL. For more information about connecting,
see "Connecting to a Database" on page 33.
2 In the SQL Statements pane, type the above procedure code.
3 Create a variable to hold the procedure output. In this case, the output
variable is numeric, with three decimal places, so create a variable as
follows:
CREATE VARIABLE Average NUMERIC(20,3)
4 Call the procedure using the created variable to hold the result:
CALL AverageSalary(Average)
If the procedure was created and run properly, the Interactive SQL
Messages pane does not display any errors.
5 Execute the SELECT Average statement to inspect the value of the
variable.
Look at the value of the output variable Average. The Interactive SQL
Results pane displays the value 49988.623 for this variable, the average
employee salary.

Returning procedure results in result sets


In addition to returning results to the calling environment in individual
parameters, procedures can return information in result sets. A result set is
typically the result of a query. The following procedure returns a result set
containing the salary for each employee in a given department:
CREATE PROCEDURE SalaryList ( IN department_id INT)
RESULT ( "Employee ID" INT, Salary NUMERIC(20,3) )
BEGIN
SELECT emp_id, salary
FROM employee
WHERE employee.dept_id = department_id;
END
If Interactive SQL calls this procedure, the names in the RESULT clause are
matched to the results of the query and used as column headings in the
displayed results.
To test this procedure from Interactive SQL, you can CALL it, specifying
one of the departments of the company. The results appear in the Interactive
SQL Results pane.


Example To list the salaries of employees in the R & D department (department ID 100), type the following:
CALL SalaryList (100)

Employee ID Salary
102 45700.000
105 62000.000
160 57490.000
243 72995.000
247 48023.690

Interactive SQL can only return multiple result sets if you have this option
enabled on the Commands tab of the Options dialog. For more information,
see "Returning multiple result sets from procedures" on page 469.
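To sketch what a procedure returning more than one result set might look like (a hypothetical example; each SELECT in the body contributes one result set when the option is enabled):

```sql
-- Hedged sketch: a procedure producing two result sets, one per
-- SELECT statement in the body.
CREATE PROCEDURE ListPeople()
RESULT ( lname CHAR(20), fname CHAR(20) )
BEGIN
   SELECT emp_lname, emp_fname FROM employee;
   SELECT lname, fname FROM customer;
END
```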


Introduction to user-defined functions


User-defined functions are a class of procedures that return a single value to
the calling environment. This section introduces creating, using, and
dropping user-defined functions.

Creating user-defined functions


You use the CREATE FUNCTION statement to create user-defined
functions. You must have RESOURCE authority to do so.
The following simple example creates a function that concatenates two
strings, together with a space, to form a full name from a first name and a
last name.
CREATE FUNCTION fullname (firstname CHAR(30),
lastname CHAR(30))
RETURNS CHAR(61)
BEGIN
DECLARE name CHAR(61);
SET name = firstname || ’ ’ || lastname;
RETURN ( name );
END

v To create this example using Interactive SQL:


1 Connect to the sample database from Interactive SQL with a user ID of
DBA and a password of SQL. For more information about connecting,
see "Connecting to a Database" on page 33.
2 In the SQL Statements pane, type the above function code.

Note
If you are using a tool other than Interactive SQL or Sybase Central, you
may need to change the command delimiter from the semicolon to another
character before entering the CREATE FUNCTION statement.

$ For more information, see "CREATE FUNCTION statement" on page 445 of the book ASA Reference.
The CREATE FUNCTION syntax differs slightly from that of the CREATE
PROCEDURE statement. The following are distinctive differences:
♦ No IN, OUT, or INOUT keywords are required, as all parameters are IN
parameters.


♦ The RETURNS clause is required to specify the data type being returned.
♦ The RETURN statement is required to specify the value being returned.

Calling user-defined functions


A user-defined function can be used, subject to permissions, in any place you
would use a built-in non-aggregate function.
The following statement in Interactive SQL returns a full name from two
columns containing a first and last name:
SELECT fullname (emp_fname, emp_lname)
FROM employee;

Fullname (emp_fname, emp_lname)


Fran Whitney
Matthew Cobb
Philip Chin
...

The following statement in Interactive SQL returns a full name from a supplied first and last name:
SELECT fullname (’Jane’, ’Smith’);

Fullname (’Jane’,’Smith’)
Jane Smith

Any user who has been granted EXECUTE permissions for the function can
use the fullname function.
Example The following user-defined function illustrates local declarations of
variables.
The customer table includes some Canadian customers sprinkled among
those from the USA, but there is no country column. The user-defined
function nationality uses the fact that the US zip code is numeric while the
Canadian postal code begins with a letter to distinguish Canadian and US
customers.
CREATE FUNCTION nationality( cust_id INT )
RETURNS CHAR( 20 )
BEGIN
DECLARE natl CHAR(20);


IF cust_id IN ( SELECT id FROM customer
WHERE LEFT(zip,1) > ’9’) THEN
SET natl = ’CDN’;
ELSE
SET natl = ’USA’;
END IF;
RETURN ( natl );
END
This example declares a variable natl to hold the nationality string, uses a
SET statement to set a value for the variable, and returns the value of the natl
string to the calling environment.
The following query lists all Canadian customers in the customer table:
SELECT *
FROM customer
WHERE nationality(id) = ’CDN’
Declarations of cursors and exceptions are discussed in later sections.
The same query restated without the function would perform better,
especially if an index on zip existed. For example,
SELECT *
FROM customer
WHERE zip > ’99999’

Notes While this function is useful for illustration, it may perform very poorly if
used in a SELECT involving many rows. For example, if you used the
SELECT query on a table containing 100 000 rows, of which 10 000 are
returned, the function will be called 10 000 times. If you use it in the
WHERE clause of the same query, it would be called 100 000 times.

Dropping user-defined functions


Once you create a user-defined function, it remains in the database until
someone explicitly removes it. Only the owner of the function or a user with
DBA authority can drop a function from the database.
The following statement removes the function fullname from the database:
DROP FUNCTION fullname

Permissions to execute user-defined functions


Ownership of a user-defined function belongs to the user who created it, and
that user can execute it without permission. The owner of a user-defined
function can grant permissions to other users with the GRANT EXECUTE
command.


For example, the creator of the function fullname could allow another_user
to use fullname with the statement:
GRANT EXECUTE ON fullname TO another_user
The following statement revokes permissions to use the function:
REVOKE EXECUTE ON fullname FROM another_user

$ For more information on managing user permissions on functions, see "Granting permissions on procedures" on page 747.


Introduction to triggers
You use triggers whenever referential integrity and other declarative
constraints are insufficient.
$ For information on referential integrity, see "Ensuring Data Integrity"
on page 357 and "CREATE TABLE statement" on page 466 of the book ASA
Reference.
You may want to enforce a more complex form of referential integrity
involving more detailed checking, or you may want to enforce checking on
new data but allow legacy data to violate constraints. Another use for triggers
is in logging the activity on database tables, independent of the applications
using the database.

Trigger execution permissions


Triggers execute with the permissions of the owner of the associated
table, not the user ID whose actions cause the trigger to fire. A trigger can
modify rows in a table that a user could not modify directly.

Triggers can be defined on one or more of the following triggering actions:

Action                  Description
INSERT                  Invokes the trigger whenever a new row is inserted into the table associated with the trigger.
DELETE                  Invokes the trigger whenever a row of the associated table is deleted.
UPDATE                  Invokes the trigger whenever a row of the associated table is updated.
UPDATE OF column-list   Invokes the trigger whenever a row of the associated table is updated such that a column in the column-list has been modified.
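As a sketch of the UPDATE OF form (a hypothetical trigger, not one of the manual's examples), the following fires only when the salary column is among those modified:

```sql
-- Hedged sketch: this trigger fires only for UPDATE statements
-- that modify the salary column of the employee table.
CREATE TRIGGER salary_watch
BEFORE UPDATE OF salary ON employee
REFERENCING OLD AS old_emp NEW AS new_emp
FOR EACH ROW
BEGIN
   IF new_emp.salary < old_emp.salary THEN
      MESSAGE 'Salary decreased' TO CLIENT;
   END IF;
END
```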

Triggers can be either row-level or statement-level. Row-level triggers
execute BEFORE or AFTER each row modified by the triggering INSERT,
UPDATE, or DELETE operation. Statement-level triggers execute after
the entire operation is performed.
Flexibility in trigger execution time is particularly useful for triggers that rely
on referential integrity actions such as cascaded updates or deletes being
carried out (or not) as they execute.


If an error occurs while a trigger is executing, the operation that fired the
trigger fails. INSERT, UPDATE, and DELETE are atomic operations (see
"Atomic compound statements" on page 461). When they fail, all effects of
the statement (including the effects of triggers and any procedures called by
triggers) revert to their pre-operation state.

Creating triggers
You create triggers using either Sybase Central or Interactive SQL. In
Sybase Central, you can compose the code in a Code Editor. In Interactive
SQL, you can use a CREATE TRIGGER statement. For both tools, you must
have DBA or RESOURCE authority to create a trigger and you must have
ALTER permissions on the table associated with the trigger.
The body of a trigger consists of a compound statement: a set of semicolon-delimited SQL statements bracketed by a BEGIN and an END statement.
You cannot use COMMIT or ROLLBACK statements, or some ROLLBACK
TO SAVEPOINT statements, within a trigger.
$ For more information, see the list of cross-references at the end of this
section.

v To create a new trigger for a given table (Sybase Central):


1 Open the Triggers folder of the desired table.
2 In the right pane, double-click Add Trigger.
3 Follow the instructions of the wizard.
4 When the wizard finishes and opens the Code Editor for you, complete
the code of the trigger.
5 To execute the code in the database, choose File➤Save/Execute in
Database.

v To create a new trigger for a given table (SQL):


1 Connect to a database.
2 Execute a CREATE TRIGGER statement.
Example 1: A row-level INSERT trigger
The following trigger is an example of a row-level INSERT trigger. It checks that the birthdate entered for a new employee is reasonable:
CREATE TRIGGER check_birth_date
AFTER INSERT ON Employee
REFERENCING NEW AS new_employee
FOR EACH ROW


BEGIN
DECLARE err_user_error EXCEPTION
FOR SQLSTATE ’99999’;
IF new_employee.birth_date > ’June 6, 1994’ THEN
SIGNAL err_user_error;
END IF;
END
This trigger fires after any row is inserted into the employee table. It detects
and disallows any new rows that correspond to birth dates later than June 6,
1994.
The phrase REFERENCING NEW AS new_employee allows statements in
the trigger code to refer to the data in the new row using the alias
new_employee.
Signaling an error causes the triggering statement, as well as any previous
effects of the trigger, to be undone.
For an INSERT statement that adds many rows to the employee table, the
check_birth_date trigger fires once for each new row. If the trigger fails for
any of the rows, all effects of the INSERT statement roll back.
You can specify that the trigger fires before the row is inserted rather than
after by changing the first line of the example to:
CREATE TRIGGER mytrigger BEFORE INSERT ON Employee
The REFERENCING NEW clause refers to the inserted values of the row; it
is independent of the timing (BEFORE or AFTER) of the trigger.
You may find it easier in some cases to enforce constraints using declarative
referential integrity or CHECK constraints, rather than triggers. For example,
implementing the above example with a column check constraint proves
more efficient and concise:
CHECK (@col <= ’June 6, 1994’)
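In context, that constraint would appear in the column definition, as in this sketch (the table is hypothetical; @col stands for the value of the column being checked):

```sql
-- Hedged sketch: enforcing the birth-date rule declaratively
-- on a hypothetical table.
CREATE TABLE new_employee (
   emp_id     INT PRIMARY KEY,
   birth_date DATE CHECK ( @col <= 'June 6, 1994' )
)
```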

Example 2: A row-level DELETE trigger example
The following CREATE TRIGGER statement defines a row-level DELETE trigger:
CREATE TRIGGER mytrigger BEFORE DELETE ON employee
REFERENCING OLD AS oldtable
FOR EACH ROW
BEGIN
...
END
The REFERENCING OLD clause enables the delete trigger code to refer to
the values in the row being deleted using the alias oldtable.
You can specify that the trigger fires after the row is deleted rather than
before, by changing the first line of the example to:


CREATE TRIGGER check_birth_date AFTER DELETE ON employee


The REFERENCING OLD clause is independent of the timing (BEFORE or
AFTER) of the trigger.
Example 3: A statement-level UPDATE trigger example
The following CREATE TRIGGER statement is appropriate for statement-level UPDATE triggers:
CREATE TRIGGER mytrigger AFTER UPDATE ON employee
REFERENCING NEW AS table_after_update
OLD AS table_before_update
FOR EACH STATEMENT
BEGIN
...
END
The REFERENCING NEW and REFERENCING OLD clauses allow the
UPDATE trigger code to refer to both the old and new values of the rows
being updated. The table alias table_after_update refers to columns in the
new row, and the table alias table_before_update refers to columns in the old
row.
The REFERENCING NEW and REFERENCING OLD clauses have a slightly
different meaning for statement-level and row-level triggers. For statement-level
triggers, the REFERENCING OLD or NEW aliases are table aliases,
while in row-level triggers they refer to the row being altered.
$ For more information, see "CREATE TRIGGER statement" on
page 477 of the book ASA Reference, and "Using compound statements" on
page 460.
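To make Example 3 concrete, the body of such a statement-level trigger might query the alias as a table, as in this sketch (the mod_log table is a hypothetical logging table):

```sql
-- Hedged sketch: a statement-level trigger that logs how many rows
-- the triggering UPDATE changed. Assumes a table
-- mod_log ( rows_changed INT ) exists.
CREATE TRIGGER log_updates AFTER UPDATE ON employee
REFERENCING NEW AS table_after_update
FOR EACH STATEMENT
BEGIN
   INSERT INTO mod_log ( rows_changed )
   SELECT COUNT(*) FROM table_after_update;
END
```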

Executing triggers
Triggers execute automatically whenever an INSERT, UPDATE, or
DELETE operation is performed on the table named in the trigger. A row-
level trigger fires once for each row affected, while a statement-level trigger
fires once for the entire statement.
When an INSERT, UPDATE, or DELETE fires a trigger, the order of
operation is as follows:
1 BEFORE triggers fire.
2 Referential actions are performed.
3 The operation itself is performed.
4 AFTER triggers fire.


If any of the steps encounter an error not handled within a procedure or trigger, the preceding steps are undone, the subsequent steps are not performed, and the operation that fired the trigger fails.

Altering triggers
You can modify an existing trigger using either Sybase Central or Interactive
SQL. You must be the owner of the table on which the trigger is defined, or
be DBA, or have ALTER permissions on the table and have RESOURCE
authority.
In Sybase Central, you cannot rename an existing trigger directly. Instead,
you must create a new trigger with the new name, copy the previous code to
it, and then delete the old trigger.
In Interactive SQL, you can use an ALTER TRIGGER statement to modify
an existing trigger. You must include the entire new trigger in this statement
(in the same syntax as in the CREATE TRIGGER statement that created the
trigger).
$ For information on altering database object properties, see "Setting
properties for database objects" on page 120.

v To alter the code of a trigger (Sybase Central):


1 Open the Triggers folder of the desired table.
2 Right-click the desired trigger.
3 From the popup menu, do one of the following:
♦ Choose Open as Watcom SQL to edit the code in the Watcom SQL
dialect.
♦ Choose Open as Transact SQL to edit the code in the Transact SQL
dialect.
4 In the Code Editor, edit the trigger’s code.
5 To execute the code in the database, choose File➤Save/Execute in
Database.

v To alter the code of a trigger (SQL):


1 Connect to the database.
2 Execute an ALTER TRIGGER statement. Include the entire new trigger
in this statement.


$ For more information, see "ALTER TRIGGER statement" on page 398 of the book ASA Reference.

Dropping triggers
Once you create a trigger, it remains in the database until someone explicitly
removes it. You must have ALTER permissions on the table associated with
the trigger to drop the trigger.

v To delete a trigger (Sybase Central):


1 Open the Triggers folder of the desired table.
2 Right-click the desired trigger and choose Delete from the popup menu.

v To delete a trigger (SQL):


1 Connect to a database.
2 Execute a DROP TRIGGER statement.

Example The following statement removes the trigger mytrigger from the database:
DROP TRIGGER mytrigger

$ For more information, see "DROP statement" on page 505 of the book
ASA Reference.

Trigger execution permissions


You cannot grant permissions to execute a trigger, since users cannot execute
triggers: Adaptive Server Anywhere fires them in response to actions on the
database. Nevertheless, a trigger does have permissions associated with it as
it executes, defining its right to carry out certain actions.
Triggers execute using the permissions of the owner of the table on which
they are defined, not the permissions of the user who caused the trigger to
fire, and not the permissions of the user who created the trigger.
When a trigger refers to a table, it uses the group memberships of the table
creator to locate tables with no explicit owner name specified. For example,
if a trigger on user_1.Table_A references Table_B and does not specify the
owner of Table_B, then either Table_B must have been created by user_1 or
user_1 must be a member of a group (directly or indirectly) that is the owner
of Table_B. If neither condition is met, a table not found message results
when the trigger fires.


Also, user_1 must have permissions to carry out the operations specified in
the trigger.


Introduction to batches
A simple batch consists of a set of SQL statements, separated by semicolons
or by a separate line containing just the word go. The use of go is
recommended. For example, the following set of statements forms a batch,
which creates an Eastern Sales department and transfers all sales reps from
Massachusetts to that department.
INSERT
INTO department ( dept_id, dept_name )
VALUES ( 220, ’Eastern Sales’ )
go
UPDATE employee
SET dept_id = 220
WHERE dept_id = 200
AND state = ’MA’
go
COMMIT
go
You can include this set of statements in an application and execute them
together.

Interactive SQL and batches


Interactive SQL parses a list of semicolon-separated statements, such as
the above, before sending them to the server. In this case, Interactive SQL
sends each statement to the server individually, not as a batch. Unless your
application contains similar parsing code, a list of semicolon-separated
statements is sent to the server and treated as a batch. Putting a BEGIN and
END around a set of statements causes Interactive SQL to treat them as a batch.

Many statements used in procedures and triggers can also be used in batches.
You can use control statements (CASE, IF, LOOP, and so on), including
compound statements (BEGIN and END), in batches. Compound statements
can include declarations of variables, exceptions, temporary tables, or
cursors inside the compound statement.
The following batch creates a table only if a table of that name does not
already exist:
IF NOT EXISTS (
SELECT * FROM SYSTABLE
WHERE table_name = ’t1’ ) THEN
CREATE TABLE t1 (
firstcol INT PRIMARY KEY,
secondcol CHAR( 30 )
);


ELSE
MESSAGE ’Table t1 already exists’ TO CLIENT;
END IF
If you run this batch twice from Interactive SQL, it creates the table the first
time you run it and displays the message in the Interactive SQL Messages
window the next time you run it.


Control statements
There are a number of control statements for logical flow and decision
making in the body of the procedure or trigger, or in a batch. Available
control statements include:

Control statement               Syntax

Compound statements             BEGIN [ ATOMIC ]
                                   statement-list
                                END
  $ For more information, see "BEGIN statement" on page 404 of the book ASA Reference.

Conditional execution: IF       IF condition THEN
                                   statement-list
                                ELSEIF condition THEN
                                   statement-list
                                ELSE
                                   statement-list
                                END IF
  $ For more information, see "IF statement" on page 545 of the book ASA Reference.

Conditional execution: CASE     CASE expression
                                WHEN value THEN
                                   statement-list
                                WHEN value THEN
                                   statement-list
                                ELSE
                                   statement-list
                                END CASE
  $ For more information, see "CASE statement" on page 412 of the book ASA Reference.

Repetition: WHILE, LOOP         WHILE condition LOOP
                                   statement-list
                                END LOOP
  $ For more information, see "LOOP statement" on page 567 of the book ASA Reference.

Repetition: FOR cursor loop     FOR loop-name AS cursor-name
                                CURSOR FOR select-statement
                                DO
                                   statement-list
                                END FOR
  $ For more information, see "FOR statement" on page 528 of the book ASA Reference.

Break: LEAVE                    LEAVE label
  $ For more information, see "LEAVE statement" on page 558 of the book ASA Reference.

CALL                            CALL procname( arg, ... )
  $ For more information, see "CALL statement" on page 410 of the book ASA Reference.

$ For complete descriptions of each, see the entries in "SQL Statements" on page 377 of the book ASA Reference.
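For instance, the WHILE form might appear in a batch as in this small sketch:

```sql
-- Hedged sketch: a WHILE loop inside a compound statement,
-- printing a message on each of five iterations.
BEGIN
   DECLARE i INT;
   SET i = 1;
   WHILE i <= 5 LOOP
      MESSAGE 'Iteration ' || i TO CLIENT;
      SET i = i + 1;
   END LOOP;
END
```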

Using compound statements


A compound statement starts with the keyword BEGIN and concludes with
the keyword END. The body of a procedure or trigger is a compound
statement. Compound statements can also be used in batches. Compound
statements can be nested, and combined with other control statements to
define execution flow in procedures and triggers or in batches.
A compound statement allows a set of SQL statements to be grouped
together and treated as a unit. Delimit SQL statements within a compound
statement with semicolons.
$ For more information about compound statements, see the "BEGIN
statement" on page 404 of the book ASA Reference.

Declarations in compound statements


Local declarations in a compound statement immediately follow the BEGIN
keyword. These local declarations exist only within the compound statement.
Within a compound statement you can declare:
♦ Variables
♦ Cursors
♦ Temporary tables
♦ Exceptions (error identifiers)
Local declarations can be referenced by any statement in that compound
statement, or in any compound statement nested within it. Local declarations
are not visible to other procedures called from the compound statement.
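Putting these together, a compound statement with local declarations might look like this sketch (the threshold and exception are illustrative):

```sql
-- Hedged sketch: local declarations follow BEGIN and are visible
-- only within this compound statement.
BEGIN
   DECLARE max_sal NUMERIC(20,3);
   DECLARE err_too_low EXCEPTION FOR SQLSTATE '99999';
   SELECT MAX( salary ) INTO max_sal FROM employee;
   IF max_sal < 10000 THEN
      SIGNAL err_too_low;
   END IF;
END
```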


Atomic compound statements


An atomic statement is a statement executed completely or not at all. For
example, an UPDATE statement that updates thousands of rows might
encounter an error after updating many rows. If the statement does not
complete, all changes revert to their original state. The UPDATE
statement is atomic.
All noncompound SQL statements are atomic. You can make a compound
statement atomic by adding the keyword ATOMIC after the BEGIN
keyword.
BEGIN ATOMIC
UPDATE employee
SET manager_ID = 501
WHERE emp_ID = 467;
UPDATE employee
SET birth_date = ’bad_data’;
END
In this example, the two update statements are part of an atomic compound
statement. They must either succeed or fail as one. The first update statement
would succeed. The second one causes a data conversion error since the
value being assigned to the birth_date column cannot be converted to a date.
The atomic compound statement fails and the effect of both UPDATE
statements is undone. Even if the currently executing transaction is
eventually committed, neither statement in the atomic compound statement
takes effect.
You cannot use COMMIT or ROLLBACK statements, or some ROLLBACK
TO SAVEPOINT statements, within an atomic compound statement (see
"Transactions and savepoints in procedures and triggers" on page 484).
There is a case where some, but not all, of the statements within an atomic
compound statement are executed. This happens when an exception handler
within the compound statement deals with an error.
$ For more information, see "Using exception handlers in procedures and
triggers" on page 479.


The structure of procedures and triggers


The body of a procedure or trigger consists of a compound statement as
discussed in "Using compound statements" on page 460. A compound
statement consists of a BEGIN and an END, enclosing a set of SQL
statements. Semicolons delimit each statement.

SQL statements allowed in procedures and triggers


You can use almost all SQL statements within procedures and triggers,
including the following:
♦ SELECT, UPDATE, DELETE, INSERT and SET VARIABLE.
♦ The CALL statement to execute other procedures.
♦ Control statements (see "Control statements" on page 459).
♦ Cursor statements (see "Using cursors in procedures and triggers" on
page 471).
♦ Exception handling statements (see "Using exception handlers in
procedures and triggers" on page 479).
♦ The EXECUTE IMMEDIATE statement.
Some SQL statements you cannot use within procedures and triggers
include:
♦ CONNECT statement
♦ DISCONNECT statement
You can use COMMIT, ROLLBACK and SAVEPOINT statements within
procedures and triggers with certain restrictions (see "Transactions and
savepoints in procedures and triggers" on page 484).
$ For details, see the Usage for each SQL statement in the chapter "SQL Statements" on page 377 of the book ASA Reference.


Declaring parameters for procedures


Procedure parameters appear as a list in the CREATE PROCEDURE
statement. Parameter names must conform to the rules for other database
identifiers such as column names. They must have valid data types (see
"SQL Data Types" on page 263 of the book ASA Reference), and must be
prefixed with one of the keywords IN, OUT or INOUT. These keywords
have the following meanings:
♦ IN The argument is an expression that provides a value to the
procedure.
♦ OUT The argument is a variable that could be given a value by the
procedure.
♦ INOUT The argument is a variable that provides a value to the
procedure, and could be given a new value by the procedure.
You can assign default values to procedure parameters in the CREATE
PROCEDURE statement. The default value must be a constant, which may
be NULL. For example, the following procedure uses the NULL default for
an IN parameter to avoid executing a query that would have no meaning:
CREATE PROCEDURE
CustomerProducts( IN customer_id
INTEGER DEFAULT NULL )
RESULT ( product_id INTEGER,
quantity_ordered INTEGER )
BEGIN
IF customer_id IS NULL THEN
RETURN;
ELSE
SELECT product.id,
sum( sales_order_items.quantity )
FROM product,
sales_order_items,
sales_order
WHERE sales_order.cust_id = customer_id
AND sales_order.id = sales_order_items.id
AND sales_order_items.prod_id = product.id
GROUP BY product.id;
END IF;
END
In the following call, the customer_id parameter takes its default value of
NULL, and the procedure returns immediately instead of executing the query.
CALL customerproducts();


Passing parameters to procedures


You can take advantage of default values of stored procedure parameters
with either of two forms of the CALL statement.
If the optional parameters are at the end of the argument list in the CREATE
PROCEDURE statement, they may be omitted from the CALL statement. As
an example, consider a procedure with three INOUT parameters:
CREATE PROCEDURE SampleProc( INOUT var1 INT
DEFAULT 1,
INOUT var2 int DEFAULT 2,
INOUT var3 int DEFAULT 3 )
...
We assume that the calling environment has set up three variables to hold the
values passed to the procedure:
CREATE VARIABLE V1 INT;
CREATE VARIABLE V2 INT;
CREATE VARIABLE V3 INT;
The procedure SampleProc may be called supplying only the first parameter
as follows:
CALL SampleProc( V1 )
in which case the default values are used for var2 and var3.
A more flexible method of calling procedures with optional arguments is to
pass the parameters by name. The SampleProc procedure may be called as
follows:
CALL SampleProc( var1 = V1, var3 = V3 )
or as follows:
CALL SampleProc( var3 = V3, var1 = V1 )

Passing parameters to functions


User-defined functions are not invoked with the CALL statement, but are
used in the same manner that built-in functions are. For example, the
following statement uses the fullname function defined in "Creating user-defined functions" on page 446 to retrieve the names of employees:

v To list the names of all employees:


♦ Type the following:
SELECT fullname(emp_fname, emp_lname) AS Name
FROM employee


Name
Fran Whitney
Matthew Cobb
Philip Chin
Julie Jordan
Robert Breault
...

Notes ♦ Default parameters can be used in calling functions. However, parameters cannot be passed to functions by name.
♦ Parameters are passed by value, not by reference. Even if the function
changes the value of the parameter, this change is not returned to the
calling environment.
♦ Output parameters cannot be used in user-defined functions.
♦ User-defined functions cannot return result sets.


Returning results from procedures


Procedures can return results in the form of a single row of data, or multiple
rows. Results consisting of a single row of data can be passed back as
arguments to the procedure. Results consisting of multiple rows of data are
passed back as result sets. Procedures can also return a single value given in
the RETURN statement.
$ For simple examples of how to return results from procedures, see
"Introduction to procedures" on page 439. For more detailed information, see
the following sections.

Returning a value using the RETURN statement


The RETURN statement returns a single integer value to the calling
environment, causing an immediate exit from the procedure. The RETURN
statement takes the form:
RETURN expression
The value of the supplied expression is returned to the calling environment.
To save the return value in a variable, use an extension of the CALL
statement:
CREATE VARIABLE returnval INTEGER ;
returnval = CALL myproc() ;

Returning results as procedure parameters


Procedures can return results to the calling environment in the parameters to
the procedure.
Within a procedure, parameters and variables can be assigned values using:
♦ the SET statement.
♦ a SELECT statement with an INTO clause.

Using the SET statement
The following somewhat artificial procedure returns a value in an OUT parameter assigned using a SET statement:
CREATE PROCEDURE greater ( IN a INT,
                           IN b INT,
                           OUT c INT )
BEGIN
   IF a > b THEN
      SET c = a;
   ELSE
      SET c = b;
   END IF ;
END
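You can test this procedure in Interactive SQL using statements such as the following sketch, which should display the value 10:
CREATE VARIABLE result INT;
CALL greater( 10, 5, result );
SELECT result;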

Using single-row SELECT statements
Single-row queries retrieve at most one row from the database. This type of query uses a SELECT statement with an INTO clause. The INTO clause follows the select list and precedes the FROM clause. It contains a list of variables to receive the value for each select list item. There must be the same number of variables as there are select list items.
When a SELECT statement executes, the server retrieves the results of the
SELECT statement and places the results in the variables. If the query results
contain more than one row, the server returns an error. For queries returning
more than one row, you must use cursors. For information about returning
more than one row from a procedure, see "Returning result sets from
procedures" on page 468.
If the query results in no rows being selected, a row not found warning
appears.
The following procedure returns the results of a single-row SELECT
statement in the procedure parameters.

v To return the number of orders placed by a given customer:


♦ Type the following:
CREATE PROCEDURE OrderCount ( IN customer_ID INT,
                              OUT Orders INT )
BEGIN
   SELECT COUNT( dba.sales_order.id )
   INTO Orders
   FROM dba.customer
        KEY LEFT OUTER JOIN dba.sales_order
   WHERE dba.customer.id = customer_ID;
END

You can test this procedure in Interactive SQL using the following
statements, which show the number of orders placed by the customer with ID
102:
CREATE VARIABLE orders INT;
CALL OrderCount ( 102, orders );
SELECT orders;

Notes
♦ The customer_ID parameter is declared as an IN parameter. This parameter holds the customer ID passed in to the procedure.
♦ The Orders parameter is declared as an OUT parameter. It holds the value of the orders variable that is returned to the calling environment.


♦ No DECLARE statement is necessary for the Orders variable, as it is declared in the procedure argument list.
♦ The SELECT statement returns a single row and places it into the variable Orders.

Returning result sets from procedures


Result sets allow a procedure to return more than one row of results to the
calling environment.
The following procedure returns a list of customers who have placed orders,
together with the total value of the orders placed. The procedure does not list
customers who have not placed orders.
CREATE PROCEDURE ListCustomerValue ()
RESULT ( "Company" CHAR(36), "Value" INT )
BEGIN
   SELECT company_name,
          CAST( sum( sales_order_items.quantity *
                     product.unit_price ) AS INTEGER ) AS value
   FROM customer
        INNER JOIN sales_order
        INNER JOIN sales_order_items
        INNER JOIN product
   GROUP BY company_name
   ORDER BY value DESC;
END
You can test the procedure by typing the following statement:
CALL ListCustomerValue ()

Company Value
Chadwicks 8076
Overland Army Navy 8064
Martins Landing 6888
Sterling & Co. 6804
Carmel Industries 6780
... ...

Notes
♦ The number of variables in the RESULT list must match the number of SELECT list items. Automatic data type conversion is carried out where possible if data types do not match.


♦ The RESULT clause is part of the CREATE PROCEDURE statement, and does not have a command delimiter.
♦ The names of the SELECT list items do not need to match those of the RESULT list.
♦ When testing this procedure, Interactive SQL displays only the first
result set by default. You can configure Interactive SQL to display more
than one result set by setting the Show multiple result sets option on the
Commands tab of the Options dialog.
♦ You can modify procedure result sets, unless they are generated from a view. The user calling the procedure requires the appropriate permissions on the underlying table to modify procedure results. This is different from the usual permissions for procedure execution, where the procedure owner must have permissions on the table.

Returning multiple result sets from procedures


Before Interactive SQL can return multiple result sets, you need to enable
this option on the Commands tab of the Options dialog. By default, this
option is disabled. If you change the setting, it takes effect in newly created
connections (such as new windows).

v To enable multiple result set functionality:


1 Choose Tools➤Options.
2 In the resulting Options dialog, click the Commands tab.
3 Select the Show Multiple Result Sets check box.
After you enable this option, a procedure can return more than one result set
to the calling environment. If a RESULT clause is employed, the result sets
must be compatible: they must have the same number of items in the
SELECT lists, and the data types must all be of types that can be
automatically converted to the data types listed in the RESULT list.
The following procedure lists the names of all employees, customers, and
contacts listed in the database:
CREATE PROCEDURE ListPeople()
RESULT ( lname CHAR(36), fname CHAR(36) )
BEGIN
   SELECT emp_lname, emp_fname
   FROM employee;
   SELECT lname, fname
   FROM customer;
   SELECT last_name, first_name
   FROM contact;
END

Notes ♦ To test this procedure in Interactive SQL, enter the following statement
in the SQL Statements pane:
CALL ListPeople ()

Returning variable result sets from procedures


The RESULT clause is optional in procedures. Omitting the RESULT clause allows you to write procedures that return different result sets, with different numbers or types of columns, depending on how they are executed.
If you do not use the variable result sets feature, you should use a RESULT clause for performance reasons.
For example, the following procedure returns two columns if the input variable is y, but only one column otherwise:
CREATE PROCEDURE names( IN formal CHAR(1) )
BEGIN
   IF formal = 'y' THEN
      SELECT emp_lname, emp_fname
      FROM employee
   ELSE
      SELECT emp_fname
      FROM employee
   END IF
END
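You can compare the two shapes of the result set in Interactive SQL with calls such as the following:
CALL names( 'y' );
CALL names( 'n' );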
The use of variable result sets in procedures is subject to some limitations,
depending on the interface used by the client application.
♦ Embedded SQL You must DESCRIBE the procedure call after the
cursor for the result set is opened, but before any rows are returned, in
order to get the proper shape of result set.
$ For information about the DESCRIBE statement, see "DESCRIBE
statement" on page 500 of the book ASA Reference.
♦ ODBC Variable result set procedures can be used by ODBC
applications. The Adaptive Server Anywhere ODBC driver carries out
the proper description of the variable result sets.
♦ Open Client applications Open Client applications can use variable
result set procedures. Adaptive Server Anywhere carries out the proper
description of the variable result sets.


Using cursors in procedures and triggers


Cursors retrieve rows one at a time from a query or stored procedure with
multiple rows in its result set. A cursor is a handle or an identifier for the
query or procedure, and for a current position within the result set.

Cursor management overview


Managing a cursor is similar to managing a file in a programming language.
The following steps manage cursors:
1 Declare a cursor for a particular SELECT statement or procedure using
the DECLARE statement.
2 Open the cursor using the OPEN statement.
3 Use the FETCH statement to retrieve results one row at a time from the
cursor.
4 The warning Row Not Found signals the end of the result set.
5 Close the cursor using the CLOSE statement.
By default, cursors are automatically closed at the end of a transaction (on COMMIT or ROLLBACK statements). Cursors opened using the WITH HOLD clause stay open for subsequent transactions until they are explicitly closed.
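In outline, these steps correspond to statements such as the following sketch (the cursor name and variable are placeholders):
BEGIN
   DECLARE name CHAR(20);
   DECLARE cur CURSOR FOR
      SELECT emp_lname FROM employee;
   OPEN cur;
   FETCH NEXT cur INTO name;
   -- ... fetch remaining rows until SQLSTATE is '02000' ...
   CLOSE cur;
END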
$ For more information on positioning cursors, see "Cursor positioning"
on page 275.

Using cursors on SELECT statements in procedures


The following procedure uses a cursor on a SELECT statement. Based on the
same query used in the ListCustomerValue procedure described in
"Returning result sets from procedures" on page 468, it illustrates several
features of the stored procedure language.
CREATE PROCEDURE TopCustomerValue
      ( OUT TopCompany CHAR(36),
        OUT TopValue INT )
BEGIN
   -- 1. Declare the "error not found" exception
   DECLARE err_notfound
      EXCEPTION FOR SQLSTATE '02000';
   -- 2. Declare variables to hold
   --    each company name and its value
   DECLARE ThisName CHAR(36);
   DECLARE ThisValue INT;
   -- 3. Declare the cursor ThisCompany
   --    for the query
   DECLARE ThisCompany CURSOR FOR
      SELECT company_name,
             CAST( sum( sales_order_items.quantity *
                        product.unit_price ) AS INTEGER )
             AS value
      FROM customer
           INNER JOIN sales_order
           INNER JOIN sales_order_items
           INNER JOIN product
      GROUP BY company_name;
   -- 4. Initialize the value of TopValue
   SET TopValue = 0;
   -- 5. Open the cursor
   OPEN ThisCompany;
   -- 6. Loop over the rows of the query
   CompanyLoop:
   LOOP
      FETCH NEXT ThisCompany
         INTO ThisName, ThisValue;
      IF SQLSTATE = err_notfound THEN
         LEAVE CompanyLoop;
      END IF;
      IF ThisValue > TopValue THEN
         SET TopCompany = ThisName;
         SET TopValue = ThisValue;
      END IF;
   END LOOP CompanyLoop;
   -- 7. Close the cursor
   CLOSE ThisCompany;
END

Notes
The TopCustomerValue procedure has the following notable features:
♦ The "error not found" exception is declared. This exception signals, later
in the procedure, when a loop over the results of a query completes.
$ For more information about exceptions, see "Errors and warnings
in procedures and triggers" on page 474.
♦ Two local variables ThisName and ThisValue are declared to hold the
results from each row of the query.
♦ The cursor ThisCompany is declared. The SELECT statement produces
a list of company names and the total value of the orders placed by that
company.
♦ The value of TopValue is set to an initial value of 0, for later use in the
loop.


♦ The ThisCompany cursor opens.


♦ The LOOP statement loops over each row of the query, placing each
company name in turn into the variables ThisName and ThisValue. If
ThisValue is greater than the current top value, TopCompany and
TopValue are reset to ThisName and ThisValue.
♦ The cursor closes at the end of the procedure.
♦ You can also write this procedure without a loop by adding an ORDER
BY value DESC clause to the SELECT statement. Then, only the first
row of the cursor needs to be fetched.
The LOOP construct in the TopCustomerValue procedure is a standard form, exiting after the last row is processed. You can rewrite this procedure in a more compact form using a FOR loop. The FOR statement combines several aspects of the above procedure into a single statement.
CREATE PROCEDURE TopCustomerValue2(
      OUT TopCompany CHAR(36),
      OUT TopValue INT )
BEGIN
   -- Initialize the TopValue variable
   SET TopValue = 0;
   -- Do the For Loop
   FOR CompanyFor AS ThisCompany
      CURSOR FOR
      SELECT company_name AS ThisName,
             CAST( sum( sales_order_items.quantity *
                        product.unit_price ) AS INTEGER )
             AS ThisValue
      FROM customer
           INNER JOIN sales_order
           INNER JOIN sales_order_items
           INNER JOIN product
      GROUP BY ThisName
   DO
      IF ThisValue > TopValue THEN
         SET TopCompany = ThisName;
         SET TopValue = ThisValue;
      END IF;
   END FOR;
END


Errors and warnings in procedures and triggers


After an application program executes a SQL statement, it can examine a
status code. This status code (or return code) indicates whether the statement
executed successfully or failed and gives the reason for the failure. You can
use the same mechanism to indicate the success or failure of a CALL
statement to a procedure.
Error reporting uses either the SQLCODE or SQLSTATE status
descriptions. For full descriptions of SQLCODE and SQLSTATE error and
warning values and their meanings, see "Database Error Messages" on
page 649 of the book ASA Reference. Whenever a SQL statement executes, a
value appears in special procedure variables called SQLSTATE and
SQLCODE. That value indicates whether or not there were any unusual
conditions encountered while the statement was being performed. You can
check the value of SQLSTATE or SQLCODE in an IF statement following a
SQL statement, and take actions depending on whether the statement
succeeded or failed.
For example, the SQLSTATE variable can be used to indicate if a row is
successfully fetched. The TopCustomerValue procedure presented in section
"Using cursors on SELECT statements in procedures" on page 471 used the
SQLSTATE test to detect that all rows of a SELECT statement had been
processed.
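For example, a fragment such as the following sketch (the particular UPDATE is illustrative) takes an action only if the preceding statement succeeded:
UPDATE employee
SET emp_fname = 'Frances'
WHERE emp_id = 102;
IF SQLSTATE = '00000' THEN
   MESSAGE 'Update succeeded.' TO CLIENT;
END IF;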

Default error handling in procedures and triggers


This section describes how Adaptive Server Anywhere handles errors that
occur during a procedure execution, if you have no error handling built in to
the procedure.
$ If you want to have a different behavior, you can use exception
handlers, described in "Using exception handlers in procedures and triggers"
on page 479. Warnings are handled in a slightly different manner from
errors: for a description, see "Default handling of warnings in procedures and
triggers" on page 478.
There are two ways of handling errors without using explicit error handling:
♦ Default error handling The procedure or trigger fails and returns an
error code to the calling environment.
♦ ON EXCEPTION RESUME If the ON EXCEPTION RESUME clause
appears in the CREATE PROCEDURE statement, the procedure carries
on executing after an error, resuming at the statement following the one
causing the error.


$ The precise behavior for procedures that use ON EXCEPTION
RESUME is dictated by the ON_TSQL_ERROR option setting. For
more information, see "ON_TSQL_ERROR option" on page 202 of the
book ASA Reference.

Default error handling
Generally, if a SQL statement in a procedure or trigger fails, the procedure or trigger terminates execution and control returns to the application program
with an appropriate setting for the SQLSTATE and SQLCODE values. This
is true even if the error occurred in a procedure or trigger invoked directly or
indirectly from the first one. In the case of a trigger, the operation causing
the trigger is also undone and the error is returned to the application.
The following demonstration procedures show what happens when an
application calls the procedure OuterProc, and OuterProc in turn calls the
procedure InnerProc, which then encounters an error.
CREATE PROCEDURE OuterProc()
BEGIN
   MESSAGE 'Hello from OuterProc.' TO CLIENT;
   CALL InnerProc();
   MESSAGE 'SQLSTATE set to ',
           SQLSTATE, ' in OuterProc.' TO CLIENT
END

CREATE PROCEDURE InnerProc()
BEGIN
   DECLARE column_not_found
      EXCEPTION FOR SQLSTATE '52003';
   MESSAGE 'Hello from InnerProc.' TO CLIENT;
   SIGNAL column_not_found;
   MESSAGE 'SQLSTATE set to ',
           SQLSTATE, ' in InnerProc.' TO CLIENT;
END

Notes
♦ The DECLARE statement in InnerProc declares a symbolic name for one of the predefined SQLSTATE values associated with error conditions already known to the server.
♦ The MESSAGE statement sends a message to the Interactive SQL
Messages window.
♦ The SIGNAL statement generates an error condition from within the
InnerProc procedure.

The following statement executes the OuterProc procedure:


CALL OuterProc();
The Interactive SQL Messages window displays the following:
Hello from OuterProc.
Hello from InnerProc.


None of the statements following the SIGNAL statement in InnerProc
execute: InnerProc immediately passes control back to the calling
environment, which in this case is the procedure OuterProc. None of the
statements following the CALL statement in OuterProc execute. The error
condition returns to the calling environment to be handled there. For
example, Interactive SQL handles the error by displaying a message window
describing the error.
The TRACEBACK function provides a list of the statements that were
executing when the error occurred. You can use the TRACEBACK function
from Interactive SQL by typing the following statement:
SELECT TRACEBACK(*)

Error handling with ON EXCEPTION RESUME


If the ON EXCEPTION RESUME clause appears in the CREATE
PROCEDURE statement, the procedure checks the following statement
when an error occurs. If the statement handles the error, then the procedure
continues executing, resuming at the statement after the one causing the
error. It does not return control to the calling environment when an error
occurs.
$ The behavior for procedures that use ON EXCEPTION RESUME can
be modified by the ON_TSQL_ERROR option setting. For more
information, see "ON_TSQL_ERROR option" on page 202 of the book ASA
Reference.
Error-handling statements include the following:
♦ IF
♦ SELECT @variable =
♦ CASE
♦ LOOP
♦ LEAVE
♦ CONTINUE
♦ CALL
♦ EXECUTE
♦ SIGNAL
♦ RESIGNAL
♦ DECLARE


♦ SET VARIABLE
The following example illustrates how this works.
Drop the procedures
Remember to drop both the InnerProc and OuterProc procedures by entering the following commands in the command window before continuing with the tutorial:
DROP PROCEDURE OuterProc;
DROP PROCEDURE InnerProc
The following demonstration procedures show what happens when an
application calls the procedure OuterProc; and OuterProc in turn calls the
procedure InnerProc, which then encounters an error. These demonstration
procedures are based on those used earlier in this section:
CREATE PROCEDURE OuterProc()
ON EXCEPTION RESUME
BEGIN
   DECLARE res CHAR(5);
   MESSAGE 'Hello from OuterProc.' TO CLIENT;
   CALL InnerProc();
   SELECT @res = SQLSTATE;
   IF res = '52003' THEN
      MESSAGE 'SQLSTATE set to ',
              res, ' in OuterProc.' TO CLIENT;
   END IF
END;

CREATE PROCEDURE InnerProc()
ON EXCEPTION RESUME
BEGIN
   DECLARE column_not_found
      EXCEPTION FOR SQLSTATE '52003';
   MESSAGE 'Hello from InnerProc.' TO CLIENT;
   SIGNAL column_not_found;
   MESSAGE 'SQLSTATE set to ',
           SQLSTATE, ' in InnerProc.' TO CLIENT;
END
The following statement executes the OuterProc procedure:
CALL OuterProc();
The Interactive SQL Messages window then displays the following:
Hello from OuterProc.
Hello from InnerProc.
SQLSTATE set to 52003 in OuterProc.
The execution path is as follows:
1 OuterProc executes and calls InnerProc


2 In InnerProc, the SIGNAL statement signals an error.


3 The MESSAGE statement is not an error-handling statement, so control
is passed back to OuterProc and the message is not displayed.
4 In OuterProc, the statement following the error assigns the SQLSTATE
value to the variable named res. This is an error-handling statement, and
so execution continues and the OuterProc message is displayed.

Default handling of warnings in procedures and triggers


Errors and warnings are handled differently. While the default action for
errors is to set a value for the SQLSTATE and SQLCODE variables, and
return control to the calling environment in the event of an error, the default
action for warnings is to set the SQLSTATE and SQLCODE values and
continue execution of the procedure.
Drop the procedures
Remember to drop both the InnerProc and OuterProc procedures by entering the following commands in the command window before continuing with the tutorial:
DROP PROCEDURE OuterProc;
DROP PROCEDURE InnerProc
The following demonstration procedures illustrate default handling of
warnings. These demonstration procedures are based on those used in
"Default error handling in procedures and triggers" on page 474. In this case,
the SIGNAL statement generates a row not found condition, which is a
warning rather than an error.
CREATE PROCEDURE OuterProc()
BEGIN
   MESSAGE 'Hello from OuterProc.' TO CLIENT;
   CALL InnerProc();
   MESSAGE 'SQLSTATE set to ',
           SQLSTATE, ' in OuterProc.' TO CLIENT;
END

CREATE PROCEDURE InnerProc()
BEGIN
   DECLARE row_not_found
      EXCEPTION FOR SQLSTATE '02000';
   MESSAGE 'Hello from InnerProc.' TO CLIENT;
   SIGNAL row_not_found;
   MESSAGE 'SQLSTATE set to ',
           SQLSTATE, ' in InnerProc.' TO CLIENT;
END
The following statement executes the OuterProc procedure:
CALL OuterProc();


The Interactive SQL Messages window then displays the following:


Hello from OuterProc.
Hello from InnerProc.
SQLSTATE set to 02000 in InnerProc.
SQLSTATE set to 00000 in OuterProc.
The procedures both continued executing after the warning was generated,
with SQLSTATE set by the warning (02000).
Execution of the second MESSAGE statement in InnerProc resets the warning. Successful execution of any SQL statement resets SQLSTATE to 00000 and SQLCODE to 0. If a procedure needs to save the error status, it must assign the value to a variable immediately after executing the statement that caused the error or warning.

Using exception handlers in procedures and triggers


It is often desirable to intercept certain types of errors and handle them
within a procedure or trigger, rather than pass the error back to the calling
environment. This is done through the use of an exception handler.
You define an exception handler with the EXCEPTION part of a compound
statement (see "Using compound statements" on page 460). Whenever an
error occurs in the compound statement, the exception handler executes.
Unlike errors, warnings do not cause exception handling code to be executed.
Exception handling code also executes if an error appears in a nested
compound statement or in a procedure or trigger invoked anywhere within
the compound statement.
Drop the procedures
Remember to drop both the InnerProc and OuterProc procedures by entering the following commands in the command window before continuing with the tutorial:
DROP PROCEDURE OuterProc;
DROP PROCEDURE InnerProc
The demonstration procedures used to illustrate exception handling are based
on those used in "Default error handling in procedures and triggers" on
page 474. In this case, additional code handles the column not found error in
the InnerProc procedure.
CREATE PROCEDURE OuterProc()
BEGIN
   MESSAGE 'Hello from OuterProc.' TO CLIENT;
   CALL InnerProc();
   MESSAGE 'SQLSTATE set to ',
           SQLSTATE, ' in OuterProc.' TO CLIENT
END

CREATE PROCEDURE InnerProc()
BEGIN
   DECLARE column_not_found
      EXCEPTION FOR SQLSTATE '52003';
   MESSAGE 'Hello from InnerProc.' TO CLIENT;
   SIGNAL column_not_found;
   MESSAGE 'Line following SIGNAL.' TO CLIENT;
   EXCEPTION
      WHEN column_not_found THEN
         MESSAGE 'Column not found handling.' TO CLIENT;
      WHEN OTHERS THEN
         RESIGNAL;
END
The EXCEPTION statement declares the exception handler itself. The lines
following the EXCEPTION statement do not execute unless an error occurs.
Each WHEN clause specifies an exception name (declared with a
DECLARE statement) and the statement or statements to be executed in the
event of that exception. The WHEN OTHERS THEN clause specifies the
statement(s) to be executed when the exception that occurred does not appear
in the preceding WHEN clauses.
In this example, the statement RESIGNAL passes the exception on to a
higher-level exception handler. RESIGNAL is the default action if WHEN
OTHERS THEN is not specified in an exception handler.
The following statement executes the OuterProc procedure:
CALL OuterProc();
The Interactive SQL Messages window then displays the following:
Hello from OuterProc.
Hello from InnerProc.
Column not found handling.
SQLSTATE set to 00000 in OuterProc.

Notes ♦ The EXCEPTION statements execute, rather than the lines following the
SIGNAL statement in InnerProc.
♦ As the error encountered was a column not found error, the MESSAGE
statement included to handle the error executes, and SQLSTATE resets
to zero (indicating no errors).
♦ After the exception handling code executes, control passes back to
OuterProc, which proceeds as if no error was encountered.


♦ You should not use ON EXCEPTION RESUME together with explicit exception handling. The exception handling code is not executed if ON EXCEPTION RESUME is included.
♦ If the error handling code for the column not found exception is simply a
RESIGNAL statement, control passes back to the OuterProc procedure
with SQLSTATE still set at the value 52003. This is just as if there were
no error handling code in InnerProc. Since there is no error handling
code in OuterProc, the procedure fails.

Exception handling and atomic compound statements
When an exception is handled inside a compound statement, the compound statement completes without an active exception and the changes before the exception are not reversed. This is true even for atomic compound statements. If an error occurs within an atomic compound statement and is explicitly handled, some but not all of the statements in the atomic compound statement are executed.

Nested compound statements and exception handlers


The code following a statement that causes an error executes only if an ON
EXCEPTION RESUME clause appears in a procedure definition.
You can use nested compound statements to give you more control over
which statements execute following an error and which do not.
Drop the procedures
Remember to drop both the InnerProc and OuterProc procedures by entering the following commands in the command window before continuing with the tutorial:
DROP PROCEDURE OuterProc;
DROP PROCEDURE InnerProc
The following demonstration procedure illustrates how nested compound
statements can be used to control flow. The procedure is based on that used
as an example in "Default error handling in procedures and triggers" on
page 474.
CREATE PROCEDURE InnerProc()
BEGIN
   BEGIN
      DECLARE column_not_found
         EXCEPTION FOR SQLSTATE VALUE '52003';
      MESSAGE 'Hello from InnerProc' TO CLIENT;
      SIGNAL column_not_found;
      MESSAGE 'Line following SIGNAL' TO CLIENT
      EXCEPTION
         WHEN column_not_found THEN
            MESSAGE 'Column not found handling' TO CLIENT;
         WHEN OTHERS THEN
            RESIGNAL;
   END;
   MESSAGE 'Outer compound statement' TO CLIENT;
END
The following statement executes the InnerProc procedure:
CALL InnerProc();
The Interactive SQL Messages window then displays the following:
Hello from InnerProc
Column not found handling
Outer compound statement
When the SIGNAL statement that causes the error is encountered, control
passes to the exception handler for the compound statement, and the Column
not found handling message prints. Control then passes back to the outer
compound statement and the Outer compound statement message prints.
If an error other than column not found is encountered in the inner compound
statement, the exception handler executes the RESIGNAL statement. The
RESIGNAL statement passes control directly back to the calling
environment, and the remainder of the outer compound statement is not
executed.


Using the EXECUTE IMMEDIATE statement in procedures
The EXECUTE IMMEDIATE statement allows statements to be constructed
inside procedures using a combination of literal strings (in quotes) and
variables.
For example, the following procedure includes an EXECUTE IMMEDIATE
statement that creates a table.
CREATE PROCEDURE CreateTableProc(
      IN tablename CHAR(30) )
BEGIN
   EXECUTE IMMEDIATE 'CREATE TABLE ' || tablename ||
      '(column1 INT PRIMARY KEY)'
END
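For example, the following statement (using a hypothetical table name) would create a table called mytab with a single integer column:
CALL CreateTableProc( 'mytab' )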
In ATOMIC compound statements, you cannot use an EXECUTE
IMMEDIATE statement that causes a COMMIT, as COMMITs are not
allowed in that context.
The EXECUTE IMMEDIATE statement does not support statements or
queries that return result sets.
$ For more information about the EXECUTE IMMEDIATE statement,
see "EXECUTE IMMEDIATE statement" on page 518 of the book ASA
Reference.


Transactions and savepoints in procedures and triggers
SQL statements in a procedure or trigger are part of the current transaction
(see "Using Transactions and Isolation Levels" on page 381). You can call
several procedures within one transaction or have several transactions in one
procedure.
COMMIT and ROLLBACK are not allowed within any atomic statement (see "Atomic compound statements" on page 461). Note that triggers are fired by INSERT, UPDATE, and DELETE statements, which are atomic. COMMIT and ROLLBACK are therefore not allowed in a trigger or in any procedures called by a trigger.
Savepoints (see "Savepoints within transactions" on page 385) can be used
within a procedure or trigger, but a ROLLBACK TO SAVEPOINT
statement can never refer to a savepoint before the atomic operation started.
Also, all savepoints within an atomic operation are released when the atomic
operation completes.
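For example, a procedure body might use a savepoint to undo part of its work without affecting the rest of the transaction (a sketch; the particular UPDATE shown is illustrative):
BEGIN
   SAVEPOINT sp1;
   UPDATE employee
   SET emp_fname = 'Temp'
   WHERE emp_id = 102;
   -- undo only the work done since the savepoint
   ROLLBACK TO SAVEPOINT sp1;
END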


Some tips for writing procedures


This section provides some pointers for developing procedures.

Check if you need to change the command delimiter


You do not need to change the command delimiter in Interactive SQL or
Sybase Central when you write procedures. However, if you create and test
procedures and triggers from some other browsing tool, you may need to
change the command delimiter from the semicolon to another character.
Each statement within the procedure ends with a semicolon. For some
browsing applications to parse the CREATE PROCEDURE statement itself,
you need the command delimiter to be something other than a semicolon.
If you are using an application that requires changing the command
delimiter, a good choice is to use two semicolons as the command delimiter
(;;) or a question mark (?) if the system does not permit a multicharacter
delimiter.

Remember to delimit statements within your procedure


You should terminate each statement within the procedure with a semicolon.
Although you can leave off semicolons for the last statement in a statement
list, it is good practice to use semicolons after each statement.
The CREATE PROCEDURE statement itself contains both the RESULT
specification and the compound statement that forms its body. No semicolon
is needed after the BEGIN or END keywords, or after the RESULT clause.

Use fully-qualified names for tables in procedures


If a procedure has references to tables in it, you should always preface the
table name with the name of the owner (creator) of the table.
When a procedure refers to a table, it uses the group memberships of the
procedure creator to locate tables with no explicit owner name specified. For
example, if a procedure created by user_1 references Table_B and does not
specify the owner of Table_B, then either Table_B must have been created by
user_1 or user_1 must be a member of a group (directly or indirectly) that is
the owner of Table_B. If neither condition is met, a table not found message
results when the procedure is called.


You can minimize the inconvenience of long fully qualified names by using
a correlation name to provide a convenient name to use for the table within a
statement. Correlation names are described in "FROM clause" on page 532
of the book ASA Reference.
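For example, the following sketch (assuming Table_B is owned by user_1 and has a column named column1, both hypothetical) uses a correlation name to shorten references within the statement:
SELECT b.column1
FROM user_1.Table_B b
WHERE b.column1 > 100;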

Specifying dates and times in procedures


When dates and times are sent to the database from procedures, they are sent
as strings. The date part of the string is interpreted according to the current
setting of the DATE_ORDER database option. As different connections may
set this option to different values, some strings may be converted incorrectly
to dates, or the database may not be able to convert the string to a date.
You should use the unambiguous date format yyyy-mm-dd or yyyy/mm/dd when using date strings within procedures. The server interprets these strings unambiguously as dates, regardless of the DATE_ORDER database option setting.
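For example, the comparison in the following statement is interpreted the same way regardless of the DATE_ORDER setting:
SELECT *
FROM sales_order
WHERE order_date > '2000-01-01';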
$ For more information on dates and times, see "Date and time data
types" on page 277 of the book ASA Reference.
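For example, the following sketch (the Orders table is hypothetical) passes a date string in the unambiguous format, so it is interpreted the same way under any DATE_ORDER setting:

```sql
CREATE PROCEDURE RecentOrders()
BEGIN
    -- '2000-12-31' is always read as year-month-day, regardless
    -- of the DATE_ORDER option of the calling connection.
    SELECT *
    FROM DBA.Orders
    WHERE date_ordered > '2000-12-31';
END
```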

Verifying that procedure input arguments are passed correctly


One way to verify input arguments is to display the value of the parameter in
the Interactive SQL Messages window using the MESSAGE statement. For
example, the following procedure simply displays the value of the input
parameter var:
CREATE PROCEDURE message_test (IN var char(40))
BEGIN
MESSAGE var TO CLIENT;
END
You can also use the stored procedure debugger.

Chapter 15 Using Procedures, Triggers, and Batches

Statements allowed in batches


All SQL statements are acceptable in batches (including data definition
statements such as CREATE TABLE, ALTER TABLE, and so on), with the
exception of the following:
♦ CONNECT or DISCONNECT statement
♦ ALTER PROCEDURE or ALTER FUNCTION statement
♦ CREATE TRIGGER statement
♦ Interactive SQL commands such as INPUT or OUTPUT
In addition, you cannot use host variables in batches.
The CREATE PROCEDURE statement is allowed, but must be the final
statement of the batch. Therefore a batch can contain only a single CREATE
PROCEDURE statement.
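For example, the following batch (the table and procedure names are illustrative) is valid because its single CREATE PROCEDURE statement comes last:

```sql
CREATE TABLE t1 ( id INT PRIMARY KEY, descr CHAR(40) );
INSERT INTO t1 VALUES ( 1, 'first row' );
-- CREATE PROCEDURE must be the final statement of the batch.
CREATE PROCEDURE show_t1()
BEGIN
    SELECT id, descr FROM t1;
END
```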

Using SELECT statements in batches


You can include one or more SELECT statements in a batch.
The following is a valid batch:
IF EXISTS( SELECT *
           FROM SYSTABLE
           WHERE table_name = 'employee' )
THEN
    SELECT emp_lname AS LastName,
           emp_fname AS FirstName
    FROM employee;
    SELECT lname, fname
    FROM customer;
    SELECT last_name, first_name
    FROM contact;
END IF
The alias for the result set is necessary only in the first SELECT statement,
as the server uses the first SELECT statement in the batch to describe the
result set.
A RESUME statement is necessary following each query to retrieve the next
result set.


Calling external libraries from procedures


You can call a function in an external library from a stored procedure or
user-defined function. You can call functions in a DLL under Windows
operating systems, in an NLM under NetWare, and in a shared object on
UNIX. You cannot call external functions on Windows CE.
This section describes how to use the external library calls in procedures.
External libraries called from procedures share the memory of the server. If
you call a DLL from a procedure and the DLL contains memory-handling
errors, you can crash the server or corrupt your database. Ensure that you
thoroughly test your libraries before deploying them on production
databases.
The API described in this section replaces an older API. Libraries written to
the older API, used in versions before version 7.0, are still supported, but in
new development you should use the new API.
Adaptive Server Anywhere includes a set of system procedures that make
use of this capability, for example to send MAPI e-mail messages.
$ For information on system procedures, see "System Procedures and
Functions" on page 961 of the book ASA Reference.

Creating procedures and functions with external calls


This section presents some examples of procedures and functions with
external calls.

DBA authority required


You must have DBA authority to create procedures or functions that
reference external libraries. This requirement is more strict than the
RESOURCE authority required for creating other procedures or functions.

Syntax
You can create a procedure that calls a function function_name in the DLL library.dll as follows:

CREATE PROCEDURE dll_proc ( parameter-list )
EXTERNAL NAME 'function_name@library.dll'
If you call an external DLL from a procedure, the procedure cannot carry out
any other tasks; it just forms a wrapper around the DLL.
An analogous CREATE FUNCTION statement is as follows:


CREATE FUNCTION dll_func ( parameter-list )
RETURNS data-type
EXTERNAL NAME 'function_name@library.dll'
In these statements, function_name is the exported name of a function in the
dynamic link library, and library.dll is the name of the library. The arguments
in parameter-list must correspond in type and order to the arguments expected
by the library function. The library function accesses the procedure arguments
using an API described in "External function prototypes" on page 490.
Any value returned by the external function is in turn returned by the
procedure to the calling environment.
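Once defined, such a function is called like any built-in SQL function. A sketch, assuming a hypothetical exported function full_name in mylib.dll that concatenates its two string arguments:

```sql
CREATE FUNCTION full_name ( fname CHAR(40), lname CHAR(40) )
RETURNS CHAR(81)
EXTERNAL NAME 'full_name@mylib.dll';

-- The two column values are passed to the external function,
-- and its return value becomes the result of the SQL function.
SELECT full_name( emp_fname, emp_lname ) FROM employee;
```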
No other statements permitted
A procedure that references an external function can include no other statements: its sole purpose is to take arguments for a function, call the function, and return any value and returned arguments from the function to the calling environment. You can use IN, INOUT, or OUT parameters in the procedure call in the same way as for other procedures: the input values get passed to the external function, and any parameters modified by the function are returned to the calling environment in OUT or INOUT parameters.
System-dependent calls
You can specify operating-system-dependent calls, so that a procedure calls one function when run on one operating system, and another function (presumably analogous) on another operating system. The syntax for such calls involves prefixing the function name with the operating system name. For example:

CREATE PROCEDURE dll_proc ( parameter-list )
EXTERNAL NAME
'Windows95:95_fn@95_lib.dll;WindowsNT:nt_fn@nt_lib.dll'
The operating system identifier must be one of WindowsNT, Windows95,
UNIX, or NetWare.
If the list of functions does not contain an entry for the operating system on
which the server is running, but the list does contain an entry without an
operating system specified, the database server calls the function in that
entry.
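For example, the following hypothetical declaration names a Windows NT entry and a final entry with no operating-system prefix; on platforms other than Windows NT, the server falls back to that last entry:

```sql
CREATE PROCEDURE dll_proc ( parameter-list )
EXTERNAL NAME
'WindowsNT:nt_fn@nt_lib.dll;generic_fn@generic_lib'
```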
NetWare calls have a slightly different format than the other operating
systems. All symbols are globally known under NetWare, so any symbol
(such as a function name) exported must be unique to all NLMs on the
system. Consequently, the NLM name is not necessary in the call, and the
call has the following syntax:
CREATE PROCEDURE dll_proc ( parameter-list )
EXTERNAL NAME 'NetWare:nw_fn'
There is no need to provide a library name. If you do provide one, it is
ignored.


$ For a full description of the CREATE PROCEDURE statement syntax, see "CREATE PROCEDURE statement" on page 453 of the book ASA Reference.
$ For a full description of the CREATE FUNCTION statement syntax,
see "CREATE FUNCTION statement" on page 445 of the book ASA
Reference.

External function prototypes


This section describes the API for functions in external libraries.
The API is defined by a header file named extfnapi.h, in the h subdirectory of
your SQL Anywhere Studio installation directory. This header file handles
the platform-dependent features of external function prototypes.
Declaring the API version
To notify the database server that the library is not written using the old API, you must provide a function as follows:
uint32 extfn_use_new_api( )
The function returns an unsigned 32-bit integer. If the return value is non-
zero, the database server assumes that you are not using the old API.
If the function is not exported by the DLL, the database server assumes that
the old API is in use. When using the new API, the returned value must be
the API version number defined in extfnapi.h.
Function prototypes
The name of the function must match that referenced in the CREATE PROCEDURE or CREATE FUNCTION statement. The function declaration must be as follows:
void function-name( an_extfn_api *api, void *argument-handle )
The function must return void, and must take as arguments a structure used
to pass the arguments, and a handle to the arguments provided by the SQL
procedure.
The an_extfn_api structure has the following form:


typedef struct an_extfn_api {
    short (SQL_CALLBACK *get_value)(
        void *          arg_handle,
        a_sql_uint32    arg_num,
        an_extfn_value *value
    );
    short (SQL_CALLBACK *get_piece)(
        void *          arg_handle,
        a_sql_uint32    arg_num,
        an_extfn_value *value,
        a_sql_uint32    offset
    );
    short (SQL_CALLBACK *set_value)(
        void *          arg_handle,
        a_sql_uint32    arg_num,
        an_extfn_value *value,
        short           append
    );
    void (SQL_CALLBACK *set_cancel)(
        void * arg_handle,
        void * cancel_handle
    );
} an_extfn_api;
The an_extfn_value structure has the following form:
typedef struct an_extfn_value {
void * data;
a_sql_uint32 piece_len;
union {
a_sql_uint32 total_len;
a_sql_uint32 remain_len;
} len;
a_sql_data_type type;
} an_extfn_value;

Notes
Calling get_value on an OUT parameter returns the data type of the argument, and returns data as NULL.
The get_piece function for any given argument can only be called immediately after the get_value function for the same argument.
To return NULL, set data to NULL in an_extfn_value.
The append field of set_value determines whether the supplied data replaces (false) or appends to (true) the existing data. You must call set_value with append=FALSE before calling it with append=TRUE for the same argument. The append field is ignored for fixed-length data types.
The header file itself contains some additional notes.
$ For information about passing parameters to external functions, see
"Passing parameters to external functions" on page 492.


Implementing cancel processing
An external function that expects to be canceled must inform the database server by calling the set_cancel API function. You must export a special function to enable external operations to be canceled. This function must have the following form:

void an_extfn_cancel( void * cancel_handle )

If the DLL does not export this function, the database server ignores any user interrupts for functions in the DLL. In this function, cancel_handle is the pointer that the external function supplies to the database server, through the set_cancel API function listed in the an_extfn_api structure above, on each call to the external function.

Passing parameters to external functions


Data types
The following SQL data types can be passed to an external library:

SQL data type              C type
CHAR                       Character data, with a specified length
VARCHAR                    Character data, with a specified length
LONG VARCHAR               Character data, with a specified length
BINARY                     Binary data, with a specified length
LONG BINARY                Binary data, with a specified length
TINYINT                    1-byte integer
[ UNSIGNED ] SMALLINT      [ Unsigned ] 2-byte integer
[ UNSIGNED ] INT           [ Unsigned ] 4-byte integer
[ UNSIGNED ] BIGINT        [ Unsigned ] 8-byte integer
VARBINARY                  Binary data, with a specified length
REAL                       Single-precision floating-point number
DOUBLE                     Double-precision floating-point number

You cannot use date or time data types, and you cannot use the exact
NUMERIC data type.
To provide values for INOUT or OUT parameters, use the set_value API
function. To read IN and INOUT parameters, use the get_value API
function.
Passing NULL
You can pass NULL as a valid value for all arguments. Functions in external libraries can supply NULL as a return value for any data type.


External function return types
The following table lists the supported return types, and how they map to the return type of the SQL function or procedure.

C data type      SQL data type
void             Used for external procedures
char *           Function returning CHAR()
long             Function returning INTEGER
float            Function returning FLOAT
double           Function returning DOUBLE

If a function in the external library returns NULL, and the SQL external function was declared to return CHAR(), then the return value of the SQL external function is NULL.

CHAPTER 16
Automating Tasks Using Schedules and Events

About this chapter
This chapter describes how to use the scheduling and event-handling features of Adaptive Server Anywhere to automate database administration and other tasks.
Contents
Topic                                        Page
Introduction                                  496
Understanding schedules                       498
Understanding events                          500
Understanding event handlers                  504
Schedule and event internals                  506
Scheduling and event handling tasks           508


Introduction
Many database administration tasks are best carried out systematically. For
example, a regular backup procedure is an important part of proper database
administration procedures.
You can automate routine tasks in Adaptive Server Anywhere by adding an
event to a database, and providing a schedule for the event. Whenever one
of the times in the schedule passes, a sequence of actions called an event
handler is executed by the database server.
Database administration also requires taking action when certain conditions
occur. For example, it may be appropriate to e-mail a notification to a system
administrator when a disk containing the transaction log is filling up, so that
the administrator can handle the situation. These tasks too can be automated
by defining event handlers for one of a set of system events.
Chapter contents
This chapter contains the following material:
♦ An introduction to scheduling and event handling (this section).
♦ Concepts and background information to help you design and use
schedules and event handlers:
♦ "Understanding schedules" on page 498.
♦ "Understanding events" on page 500.
♦ A discussion of techniques for developing event handlers:
♦ "Developing event handlers" on page 504.
♦ Internals information:
♦ "Schedule and event internals" on page 506.
♦ Step by step instructions for how to carry out automation tasks.
♦ "Scheduling and event handling tasks" on page 508.
Questions and answers
♦ What is a schedule? See "Understanding schedules" on page 498.
♦ What is a system event? See "Understanding events" on page 500.
♦ What is an event handler? See "Understanding event handlers" on page 504.
♦ How do I debug event handlers? See "Developing event handlers" on page 504.
♦ How does the database server use schedules to trigger event handlers? See "How the database server checks for scheduled times" on page 506.
♦ How can I schedule regular backups? For an example, see "Understanding schedules" on page 498.
♦ What kind of system events can the database server use to trigger event handlers? See "Understanding events" on page 500, and "CREATE EVENT statement" on page 435 of the book ASA Reference.
♦ What connection do event handlers get executed on? See "How event handlers are executed" on page 507.
♦ How do event handlers get information about what triggered them? See "Developing event handlers" on page 504, and "EVENT_PARAMETER function" on page 336 of the book ASA Reference.


Understanding schedules
By scheduling activities you can ensure that a set of actions is executed at a
set of preset times. The scheduling information and the event handler are
both stored in the database itself.
You can define complex schedules by associating more than one schedule
with a named event.
The following examples give some ideas for scheduled actions that may be
useful.
Examples
Carry out an incremental backup daily at 1:00 AM:

create event IncrementalBackup
schedule
    start time '1:00 AM' every 24 hours
handler
begin
    backup database directory 'c:\\backup'
        transaction log only
        transaction log rename match
end

Summarize orders at the end of each business day:

create event Summarize
schedule
    start time '6:00 pm'
    on ( 'Mon', 'Tue', 'Wed', 'Thu', 'Fri' )
handler
begin
    insert into dba.OrderSummary
    select max( date_ordered ),
           count( * ),
           sum( amount )
    from dba.Orders
    where date_ordered = current date
end

Defining schedules
Schedule definitions have several components to them, to permit flexibility:
♦ Name Each schedule definition has a name. You can assign more than
one schedule to a particular event, which can be useful in designing
complex schedules.
♦ Start time You can define a start time for the event, which is the time
that it is first executed.

♦ Range As an alternative to a start time, you can specify a range of times for which the event is active.
♦ Recurrence Each schedule can have a recurrence. The event is
triggered on a frequency that can be given in hours, minutes, or seconds,
on a set of days that can be specified as days of the week or days of the
month.
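As a sketch of combining schedules (the event name, schedule names, and called procedure are all hypothetical), the following event runs every two hours on business days and also once nightly — a pattern a single schedule cannot express:

```sql
create event Maintenance
schedule BusinessHours
    start time '9:00 am' every 2 hours
    on ( 'Mon', 'Tue', 'Wed', 'Thu', 'Fri' )
schedule Nightly
    start time '3:00 am' every 24 hours
handler
begin
    call dba.DoMaintenance()
end
```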


Understanding events
The database server tracks several kinds of system events. Event handlers are triggered when the database server checks the system event and finds that it satisfies a supplied trigger condition.
By defining event handlers to execute when a chosen system event occurs
and satisfies a trigger condition that you define, you can improve the security
and safety of your data, and help to ease administration.
$ For information on the available system events, see "Choosing a system
event" on page 500. For information on trigger conditions, see "Defining
trigger conditions for events" on page 501.

Choosing a system event


Adaptive Server Anywhere tracks several system events. Each system event
provides a hook on which you can hang a set of actions. The database server
tracks the events for you, and executes the actions (as defined in the event
handler) when needed.
The available system events include the following:
♦ Backup You can use the BackupEnd event type to take actions at the
end of a backup.
♦ DatabaseStart The database is started.
♦ Connection events When a connection is made (Connect) or when a
connection attempt fails (ConnectFailed). You may want to use these
events for security purposes.
♦ Free disk space Tracks the available disk space on the device holding
the database file (DBDiskSpace), the log file (LogDiskSpace), or
temporary file (TempDiskSpace). This system event is not available on
the following operating systems:
♦ Windows 95 before OSR2.
♦ Windows CE
You may want to use disk space events to alert administrators in case of
a disk space shortage.
♦ File size The file reaches a specified size. This can be used for the
database file (GrowDB), the transaction log (GrowLog), or the
temporary file (GrowTemp).


You may want to use file size events to track unusual actions on the
database, or monitor bulk operations.
♦ SQL errors When an error is triggered, you can use the RAISERROR
event type to take actions.
♦ Idle time The database server has been idle for a specified time. You
may want to use this event type to carry out routine maintenance
operations at quiet times.

Defining trigger conditions for events


Each event definition has a system event associated with it. It also has one or
more trigger conditions. The event handler is triggered when the trigger
conditions for the system event are satisfied.
The trigger conditions are included in the WHERE clause of the CREATE
EVENT statement, and can be combined using the AND keyword. Each
trigger condition is of the following form:
event_condition( condition-name ) comparison-operator value
The condition-name argument is one of a set of preset strings, which are
appropriate for different event types. For example, you can use DBSize (the
database file size in megabytes) to build a trigger condition suitable for the
GrowDB system event. The database server does not check that the
condition-name matches the event type: it is your responsibility to ensure
that the condition is meaningful in the context of the event type.
Examples
♦ Limit the transaction log size to 10 MB:

create event LogLimit
type GrowLog
where event_condition( 'LogSize' ) > 10
handler
begin
    backup database
        directory 'c:\\logs'
        transaction log only
        transaction log rename match
end
♦ Notify an administrator when free disk space on the device containing
the database file falls below 10%, but do not execute the handler more
than once every five minutes (300 seconds):


create event LowDBSpace
type DBDiskSpace
where event_condition( 'DBFreePercent' ) < 10
    and event_condition( 'Interval' ) >= 300
handler
begin
    call xp_sendmail( recipient='DBAdmin',
        subject='Low disk space',
        "message"='Database free disk space '
            || event_parameter( 'DBFreeSpace' ) );
end
♦ Notify an administrator of a possible attempt to break into the database:
create event SecurityCheck
type ConnectFailed
handler
begin
    declare num_failures int;
    declare mins int;

    insert into FailedConnections( log_time )
    values ( current timestamp );

    select count( * ) into num_failures
    from FailedConnections
    where log_time >= dateadd( minutes, -5,
        current timestamp );

    if( num_failures >= 3 ) then
        select datediff( minutes, last_notification,
            current timestamp ) into mins
        from Notification;

        if( mins > 30 ) then
            update Notification
            set last_notification = current timestamp;

            call xp_sendmail( recipient='DBAdmin',
                subject='Security Check',
                "message"=
                'over 3 failed connections in last 5 minutes' )
        end if
    end if
end
♦ Run a process when the server has been idle for ten minutes. Do not execute more frequently than once per hour:

create event Soak
type ServerIdle
where event_condition( 'IdleTime' ) >= 600
    and event_condition( 'Interval' ) >= 3600
handler
begin
    message ' Insert your code here ... '
end


Understanding event handlers


Event handlers execute on a separate connection from the action that
triggered the event, and so do not interact with client applications. They
execute with the permissions of the creator of the event.

Developing event handlers


Event handlers, whether for scheduled events or for system event handling,
contain compound statements, and are similar in many ways to stored
procedures. You can add loops, conditional execution, and so on, and you
can use the Adaptive Server Anywhere debugger to debug event handlers.
Context information for event handlers
One difference between event handlers and stored procedures is that event handlers do not take any arguments. Certain information about the context in which an event was triggered is available through the event_parameter function, which supplies information about the connection that caused an event to be triggered (connection ID, user ID), as well as the event name and the number of times it has been executed.
$ For more information, see "EVENT_PARAMETER function" on
page 336 of the book ASA Reference.
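For example, a sketch of a Connect handler that records who triggered it (the log table is hypothetical, and the parameter names 'ConnectionID' and 'User' are assumptions to be checked against the EVENT_PARAMETER documentation):

```sql
create event LogConnects
type Connect
handler
begin
    -- Record the connection that caused this event to fire.
    insert into dba.ConnectLog( log_time, conn_id, user_name )
    values ( current timestamp,
             event_parameter( 'ConnectionID' ),
             event_parameter( 'User' ) );
end
```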
Testing event handlers
During development, you want event handlers to be triggered at convenient times. You can use the TRIGGER EVENT statement to explicitly cause an event to execute, even when the trigger condition or scheduled time has not occurred. However, TRIGGER EVENT does not cause disabled event handlers to be executed.
$ For more information, see "TRIGGER EVENT statement" on page 630
of the book ASA Reference.
While it is not good practice to develop event handlers on a production
database, you can disable event handlers from Sybase Central or explicitly
using the ALTER EVENT statement.
It can be useful to use a single set of actions to handle multiple events. For
example, you may want to take a notification action if disk space is limited
on any of the devices holding the database or log files. To do this, create a
stored procedure and call it in the body of each event handler.
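For example (a sketch with hypothetical names, and 'LogFreePercent' assumed by analogy with the 'DBFreePercent' condition shown earlier), two disk-space events can share one notification procedure:

```sql
create procedure dba.NotifyLowSpace( in device_desc char(40) )
begin
    call xp_sendmail( recipient='DBAdmin',
        subject='Low disk space',
        "message"='Space is low on ' || device_desc );
end;

create event LowDBDevice
type DBDiskSpace
where event_condition( 'DBFreePercent' ) < 10
handler
begin
    call dba.NotifyLowSpace( 'the database device' )
end;

create event LowLogDevice
type LogDiskSpace
where event_condition( 'LogFreePercent' ) < 10
handler
begin
    call dba.NotifyLowSpace( 'the log device' )
end
```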
Debugging event handlers
Debugging event handlers is very similar to debugging stored procedures. The event handlers appear in the procedures list. One difference is that, because each event handler runs on its own connection, you must be sure to select All connections before setting a breakpoint in an event handler.


$ For step-by-step instructions, see "Debugging an event handler" on page 510.


Schedule and event internals


This section describes how the database server processes schedules and event
definitions.

How the database server checks for events


Events are classified according to their event type, as specified directly in
the CREATE EVENT statement or using Sybase Central. The event types
are of two kinds:
♦ Active event types Some event types are the result of action by the
database server itself. These active event types include growing database
files, the start and end of different database actions (BackupEnd and
so on), and RAISERROR.
When the database server takes the action, it checks to see whether the
trigger conditions defined in the WHERE clause are satisfied, and if so
triggers any events defined for that event type.
♦ Polled event types Some event types are not triggered solely by
database actions. The free disk space types (DBDiskSpace and so on) as
well as the IdleTime types are of this kind.
For these types of events, the database server polls every thirty seconds,
starting approximately thirty seconds after the database server is started.
For the IdleTime event type, the database server checks whether the
server has been idle for the entire thirty seconds. If no requests have
started and none are currently active, it adds the idle check interval time
in seconds to the idle time total; otherwise, the idle time total is reset to
0. The value for IdleTime is therefore always a multiple of thirty
seconds. When IdleTime is greater than the interval specified in the
trigger condition, event handlers associated with IdleTime are fired.

How the database server checks for scheduled times


The calculation of scheduled event times is done when the database server
starts, and each time a scheduled event handler completes.
The calculation of the next scheduled time is based on the increment
specified in the schedule definition, with the increment being added to the
previous start time. If the event handler takes longer to execute than the
specified increment, so that the next time is earlier than the current time, the
database server increments until the next scheduled time is in the future.


An event handler that takes sixty-five minutes to execute and is requested to run every hour between 9:00 and 5:00 will run every two hours, at 9:00, 11:00, 1:00, and so on.
To run a process such that it operates between 9:00 and 5:00 and delays for
some period before the next execution, you could define a handler to loop
until its completion time has passed, with a sleep instruction, perhaps using
xp_cmdshell, between each iteration.
If you are running a database server intermittently, and it is not running at a
scheduled time, the event handler does not run at startup. Instead, the next
scheduled time is computed at startup. If, for example, you schedule a
backup to take place every night at one o’clock, but regularly shut down the
database server at the end of each work day, the backup never takes place.

How event handlers are executed


When an event handler is triggered, a temporary internal connection is made,
on which the event handler is executed. The handler is not executed on the
connection that caused the handler to be triggered, and consequently
statements such as MESSAGE .. TO CLIENT, which interact with the client
application, are not meaningful within event handlers.
The temporary connection on which the handler is executed does not count
towards the connection limit for licensing purposes.
Event creation requires DBA authority, and events execute with the
permissions of their creator. If you wish event handlers to execute with non-
DBA authority, you can call a procedure from within the handler, as stored
procedures run with the permissions of their creator.
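For example, a sketch of this pattern (all names hypothetical): the event handler, created by a DBA, delegates its work to a procedure owned by a non-DBA user, so the work runs with that user's permissions:

```sql
-- Created by report_user, who needs only RESOURCE authority:
create procedure report_user.purge_old_rows()
begin
    delete from report_user.activity_log
    where log_time < dateadd( day, -30, current timestamp );
end;

-- Created by a DBA; the handler only calls the procedure:
create event NightlyPurge
schedule start time '2:00 am' every 24 hours
handler
begin
    call report_user.purge_old_rows()
end
```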
Any event errors are logged to the server console.


Scheduling and event handling tasks


This section collects together instructions for tasks related to automating
tasks with schedules and events.

Adding a schedule or event to a database


Schedules and events are handled in a similar fashion, both from Sybase
Central and in SQL.
$ For background information, see "Understanding schedules" on
page 498, and "Understanding events" on page 500.

v To add a schedule or event to a database (Sybase Central):


1 Connect to the database as a user with DBA authority.
2 Open the Events folder for your database.
3 Double-click Add Event. The Event Creation wizard appears.
4 Follow the instructions in the wizard.
The Wizard contains many options, depending on the schedule or event
you wish to create. These are explained in detail in other tasks.

v To add a schedule or event to a database (SQL):


1 Connect to the database as a user with DBA authority.
2 Execute a CREATE EVENT statement.
The CREATE EVENT statement contains many options, depending on the schedule or event you wish to create. These are explained in detail in other tasks.
$ For more information, see "CREATE EVENT statement" on
page 435 of the book ASA Reference.

Adding a manually-triggered event to a database


If you create an event handler without a schedule or system event to trigger
it, it is executed only when manually triggered.

v To add a manually-triggered event to a database (Sybase Central):


1 Connect to the database as a user with DBA authority.

2 Open the Events folder for your database.


3 Double-click Add Event. The Event Creation Wizard is displayed.
4 Enter a name for the event, and click Next.
5 Select Triggered Manually, and click Next.
6 Enter the SQL statements for your event handler, and click Next.
7 Select Event is Enabled, and select Execute at all Locations, and click
Next.
8 Enter a comment describing the event, and click Finish to add the event
to the database.
If you wish to accept the default values for all remaining options, you can
click Finish at an earlier stage of the wizard.

v To add a manually-triggered event to a database (SQL):


1 Connect to the database as a user with DBA authority.
2 Execute a CREATE EVENT statement with no schedule or WHERE
clause. The restricted syntax of the CREATE EVENT is as follows:
CREATE EVENT event-name
HANDLER
BEGIN
… event handler
END

If you are developing event handlers, you can add schedules or system events
to control the triggering of an event later, either using Sybase Central or the
ALTER EVENT statement.
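For example, a sketch (assuming the ADD SCHEDULE clause of the ALTER EVENT statement, as described in the ASA Reference; the names are hypothetical) that attaches a schedule to an event that was previously only triggered manually:

```sql
ALTER EVENT sales_report_event
ADD SCHEDULE monthly_run
    START TIME '8:00 am'
    ON ( 1 )    -- the first day of each month
```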
$ See also:
♦ For information on triggering events, see "Triggering an event handler"
on page 510.
♦ For information on altering events, see "ALTER EVENT statement" on
page 387 of the book ASA Reference.


Triggering an event handler


Any event handler can be triggered manually, in addition to those occasions
when it executes because of a schedule or system event. Triggering events
manually can be useful during development of event handlers, and also, for
certain events, in production environments. For example, you may have a
monthly sales report scheduled, but from time to time you may want to
obtain a sales report for a reason other than the end of the month.
$ For information on developing event handlers, see "Developing event
handlers" on page 504.

v To trigger an event handler (Sybase Central):


1 Connect to the database as a user with DBA authority.
2 Open the Events folder for your database.
3 Right-click the event you wish to trigger and choose Trigger from the
popup menu. The Event Parameters dialog is displayed.
4 Supply any parameters the event handler requires, in the following form:
parameter=value;parameter=value
Click OK to trigger the event handler.

v To trigger an event handler (SQL):


1 Connect to the database as a user with DBA authority.
2 Execute the TRIGGER EVENT statement, supplying the name of the
event. For example:
TRIGGER EVENT sales_report_event

$ For more information, see "TRIGGER EVENT statement" on page 630 of the book ASA Reference.

Debugging an event handler


Debugging is a regular part of any software development. Event handlers can
be debugged during the development process.
$ For information on developing event handlers, see "Developing event
handlers" on page 504.
$ For information on using the debugger, see "Debugging Logic in the
Database" on page 621.

510
Chapter 16 Automating Tasks Using Schedules and Events

v To debug an event handler:


1 Start the Adaptive Server Anywhere debugger.
From the Start menu, choose Programs➤Sybase SQL Anywhere 7➤
Adaptive Server Anywhere 7➤Debug Database Objects.
2 In the Connections window, double-click All Connections.
3 In the Procedures window, double-click the event you wish to debug.
The event definition is displayed in the Source window.
4 In the Source window, set a breakpoint.
5 From Interactive SQL or another application, trigger the event handler
using the TRIGGER EVENT statement.
6 Execution stops at the breakpoint you have set. You can now use the
debugger features to trace execution, examine local variables, and so on.

511

512
C H A P T E R 1 7

Welcome to Java in the Database

About this chapter This chapter provides motivation and concepts for using Java in the
database. Adaptive Server Anywhere is a runtime environment for the Java
platform. Java provides a natural extension to SQL, turning Adaptive Server
Anywhere into a platform for the next generation of enterprise applications.
Contents
Topic Page
Introduction to Java in the database 514
Java in the database Q & A 517
A Java seminar 523
The runtime environment for Java in the database 533
A Java in the database exercise 541

513

Introduction to Java in the database


Adaptive Server Anywhere is a runtime environment for Java. This means
that Java classes can be executed in the database server. Building a runtime
environment for Java classes into the database server provides powerful new
ways of managing and storing data and logic.
Java in the database offers the following:
♦ You can reuse Java components in the different layers of your
application—client, middle-tier, or server—and use them wherever
makes most sense to you. Adaptive Server Anywhere becomes a
platform for distributed computing.
♦ Java is a more powerful language than stored procedures for building
logic into the database.
♦ Java classes become rich user-defined data types.
♦ Methods of Java classes provide new functions accessible from SQL.
♦ Java can be used in the database without jeopardizing the integrity,
security and robustness of the database.

The SQLJ proposed standards The Adaptive Server Anywhere Java implementation is based on the
SQLJ Part 1 and SQLJ Part 2 proposed standards. SQLJ Part 1 provides
specifications for calling Java static methods as SQL stored procedures and
user-defined functions. SQLJ Part 2 provides specifications for using Java
classes as SQL domains.

Learning about Java in the database


Java is a relatively new programming language with a growing, but still
limited, knowledge base. Intended for a variety of Java developers, this
documentation will be useful for everyone from the experienced Java
developer to the many readers who are unfamiliar with the language, its
possibilities, syntax and use.
For those readers familiar with Java, there is much to learn about using Java
in a database. Sybase not only extends the capabilities of the database with
Java, but also extends the capabilities of Java with the database.
Java documentation The following table outlines the documentation regarding the use of
Java in the database.

514
Chapter 17 Welcome to Java in the Database

Title Purpose
"Welcome to Java in the Java concepts and how to apply them in
Database" on page 513 (this Adaptive Server Anywhere.
chapter)
"Using Java in the Database" Practical steps to using Java in the database.
on page 549
"Data Access Using JDBC" Accessing data from Java classes, including
on page 591 distributed computing.
"Debugging Logic in the Testing and debugging Java code running in the
Database" on page 621 database.
Adaptive Server Anywhere The Reference Manual includes material on the
Reference. SQL extensions that support Java in the
database.
Reference guide to Sun’s Java Online guide to Java API classes, fields and
API methods. Available as Windows Help only.
Thinking in Java by Bruce Online book that teaches how to program in
Eckel. Java. Supplied in Adobe PDF format in the
jxmp subdirectory of your Adaptive Server
Anywhere installation directory.

Using the Java documentation


The following table is a guide to which parts of the Java documentation
apply to you, depending on your interests and background. It is a guide only
and should not limit your efforts to learn more about Java in the database.

If you ... Consider reading ...


Are new to object-oriented programming. "A Java seminar" on page 523
Thinking in Java by Bruce Eckel.
Want an explanation of terms such as "A Java seminar" on page 523
instantiated, field and class method.
Are a Java developer who wants to just "The runtime environment for Java in
get started. the database" on page 533
"A Java in the database exercise" on
page 541
Want to know the key features of Java in "Java in the database Q & A" on
the database. page 517

515

If you ... Consider reading ...


Want to find out how to access data from "Data Access Using JDBC" on
Java. page 591
Want to prepare a database for Java. "Java-enabling a database" on
page 553
Want a complete list of supported Java "Java class data types" on page 288
APIs. of the book ASA Reference
Are trying to use a Java API class and The online guide to Java API classes
need Java reference information. (Windows Help only).
Want to see an example of distributed "Creating distributed applications" on
computing. page 616.

516

Java in the database Q & A


This section describes the key features of Java in Adaptive Server Anywhere.

What are the key features of Java in the database?


Detailed explanations of all the following points appear in later sections.
♦ You can run Java in the database server An internal Java Virtual
Machine (VM) runs Java code in the database server.
♦ You can call Java from SQL You can call Java functions (methods)
from SQL statements. Java methods provide a more powerful language
than SQL stored procedures for adding logic to the database.
♦ You can access data from Java An internal JDBC driver lets you
access data from Java.
♦ You can debug Java in the database You can use the Sybase Java
debugger to test and debug your Java classes in the database.
♦ You can use Java classes as data types Every Java class installed
in a database becomes available as a data type that can be used as the
data type of a column in a table or a variable.
♦ You can save Java objects in tables An instance of a Java class (a
Java object) can be saved as a value in a table. You can insert Java
objects into a table, execute SELECT statements against the fields and
methods of objects stored in a table, and retrieve Java objects from a
table.
With this ability, Adaptive Server Anywhere becomes an object-
relational database, supporting objects while not degrading existing
relational functionality.
♦ SQL is preserved The use of Java does not alter the behavior of
existing SQL statements or other aspects of non-Java relational database
behavior.

How do I store Java instructions in the database?


Java is an object-oriented language, so its instructions (source code) come in
the form of classes. To execute Java in a database, you write the Java
instructions outside the database, and compile them outside the database into
compiled classes (byte code), which are binary files holding Java
instructions.

517

You then install these compiled classes into a database. Once installed, you
can execute these classes in the database server.
Adaptive Server Anywhere is a runtime environment for Java classes, not a
Java development environment. You need a Java development environment,
such as Sybase PowerJ or the Sun Microsystems Java Development Kit, to
write and compile Java.

How does Java get executed in a database?


Adaptive Server Anywhere includes a Java Virtual Machine (VM), which
runs in the database environment. The Sybase Java VM interprets compiled
Java instructions and runs them in the database server.
In addition to the VM, the SQL request processor in the database server has
been extended so it can call into the VM to execute Java instructions. It can
also process requests from the VM, to enable data access from Java.
Differences from a There is a difference between executing Java code using a standard VM such
standalone VM as the Sun Java VM java.exe and executing Java code in the database. The
Sun VM is run from a command line, while the Adaptive Server Anywhere
Java VM is available at all times to perform a Java operation whenever it is
required as part of the execution of a SQL statement.
You cannot access the Sybase Java interpreter externally. It is only used
when the execution of a SQL statement requires a Java operation to take
place. The database server starts the VM automatically when needed: you do
not have to take any explicit action to start or stop the VM.

Why Java?
Java provides a number of features that make it ideal for use in the database:
♦ Thorough error checking at compile time.
♦ Built-in error handling with a well-defined error handling methodology.
♦ Built-in garbage collection (memory recovery).
♦ Elimination of many bug-prone programming techniques.
♦ Strong security features.
♦ Java code is interpreted, so no operations get executed without being
acceptable to the VM.

518

On what platforms is Java in the database supported?


Java in the database is not supported on Windows CE. It is supported on
Windows 95/98, Windows NT, UNIX, and NetWare.

How do I use Java and SQL together?


A guiding principle for the design of Java in the database is that it provides a
natural, open extension to existing SQL functionality.
♦ Java operations are invoked from SQL Sybase has extended the
range of SQL expressions to include properties and methods of Java
objects, so you can include Java operations in a SQL statement.
♦ Java classes become domains You store Java classes using the
same SQL statements as those used for traditional SQL data types.
You can use many of the classes that are part of the Java API as included in
the Sun Microsystems Java Development Kit version 1.1.8. You can also use
classes created and compiled by Java developers.

What is the Java API?


The Java Application Programming Interface (API) is a set of classes created
by Sun Microsystems. It provides a range of base functionality that can be
used and extended by Java developers. It is at the core of "what you can do"
with Java.
The Java API offers a tremendous amount of functionality in its own right. A
large portion of the Java API is available to any database able to use Java
code. This exposes the majority of non-visual classes from the Java API that
should be familiar to developers currently using the Sun Microsystems Java
Development Kit (JDK).
$ For a complete list of supported Java APIs, see "Supported Java
packages" on page 288 of the book ASA Reference.

How do I access Java from SQL?


In addition to using the Java API in classes, you can use it in stored
procedures and SQL statements. You can treat the Java API classes as
extensions to the available built-in functions provided by SQL.

519

For example, the SQL function PI(*) returns the value for pi. The Java API
class java.lang.Math has a parallel field named PI returning the same value.
But java.lang.Math also has a field named E that returns the base of the
natural logarithms, as well as a method that computes the remainder
operation on two arguments as prescribed by the IEEE 754 standard.
Other members of the Java API offer even more specialized functionality.
For example, java.util.Stack implements a last-in, first-out stack that can
store ordered lists; java.util.Hashtable maps keys to values; and
java.util.StringTokenizer breaks a string of characters into individual word
units.
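As a sketch of what two of these utility classes do, consider the following standalone example. It is hypothetical code, not taken from this manual, and it uses the raw (non-generic) java.util API as it existed in JDK 1.1:

```java
import java.util.Stack;
import java.util.StringTokenizer;

public class ApiDemo {
    // Count the word units in a string using java.util.StringTokenizer.
    public static int countWords(String text) {
        return new StringTokenizer(text).countTokens();
    }

    // Demonstrate the last-in, first-out behavior of java.util.Stack.
    public static String lastPushed(String first, String second) {
        Stack stack = new Stack();
        stack.push(first);
        stack.push(second);
        return (String) stack.pop(); // the most recently pushed element
    }

    public static void main(String[] args) {
        System.out.println(countWords("Java in the database")); // 4
        System.out.println(lastPushed("first", "second"));      // second
    }
}
```

Once such classes are installed in a database, the same calls can be made from SQL expressions.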

Which Java classes are supported?


The database does not support all Java API classes. Some classes, for
example the java.awt package containing user interface components for
applications, are inappropriate inside a database server. Other classes,
including parts of java.io, deal with writing information to disk, and this also
is unsupported in the database server environment.
$ For a list of supported and unsupported classes, see "Supported Java
packages" on page 288 of the book ASA Reference, and "Unsupported Java
packages and classes" on page 289 of the book ASA Reference.

How can I use my own Java classes in databases?


You can install your own Java classes into a database. For example, a
developer could design, write in Java, and compile with a Java compiler, a
user-created Employee class or Package class.
User-created Java classes can contain both information about the subject and
some computational logic. Once such classes are installed in a database,
Adaptive Server Anywhere lets you use them in all parts and operations of
the database and execute their functionality (in the form of class or instance
methods) as easily as calling a stored procedure.

Java classes and stored procedures are different


Java classes are different from stored procedures. Whereas stored
procedures are written in SQL, Java classes provide a more powerful
language, and can be called from client applications as easily and in the
same way as stored procedures.

520

When a Java class gets installed in a database, it becomes available as a new
domain. You can use a Java class in any situation where you would use
built-in SQL data types: as a column type in a table or a variable type.
For example, if a class called Address has been installed into a database, a
column in a table called Addr can be of type Address, which means only
objects based on the Address class can be saved as row values for that
column.
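In SQL, such a table might be sketched as follows. The table and column names are hypothetical, and the statement assumes the Address class has already been installed in the database:

```sql
-- Only objects of the installed Java class Address can be stored in Addr
CREATE TABLE customer (
   id   INTEGER NOT NULL PRIMARY KEY,
   Addr Address
)
```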

Can I access data using Java?


The JDBC interface is an industry standard, designed specifically to access
database systems. The JDBC classes are designed to connect to a database,
request data using SQL statements, and return result sets that can be
processed in the client application.
Normally, client applications use JDBC classes, and the database system
vendor supplies a JDBC driver that allows the JDBC classes to establish a
connection.
You can connect from a client application to Adaptive Server Anywhere via
JDBC, using jConnect or a JDBC/ODBC bridge. Adaptive Server Anywhere
also provides an internal JDBC driver, which permits Java classes installed
in a database to use JDBC classes that execute SQL statements.

Can I move classes from client to server?


You can create Java classes that can be moved between levels of an
enterprise application. The same Java class can be integrated into either the
client application, a middle tier, or the database—wherever is most
appropriate.
You can move a class containing business logic, data or a combination of
both to any level of the enterprise system, including the server, allowing you
complete flexibility to make the most appropriate use of resources. It also
enables enterprise customers to develop their applications using a single
programming language in a multi-tier architecture with unparalleled
flexibility.

521

Can I create distributed applications?


You can create an application that has some pieces operating in the database
and some on the client machine. You can pass Java objects from the server to
the client just as you pass SQL data such as character strings and numeric
values.
$ For an example, see "Creating distributed applications" on page 616.

What can I not do with Java in the database?


Adaptive Server Anywhere is a runtime environment for Java classes, not a
Java development environment.
You cannot carry out the following tasks in the database:
♦ Edit class source files (*.java files).
♦ Compile Java class source files (*.java files).
♦ Execute unsupported Java APIs, such as applet and visual classes.
♦ Execute Java methods that require the execution of native methods. All
user classes installed into the database must be 100% Java.
The Java classes used in Adaptive Server Anywhere must be written and
compiled using a Java application development tool, and then installed into a
database for use, testing, and debugging.

522

A Java seminar
This section introduces key Java concepts. After reading this section you
should be able to examine Java code, such as a simple class definition or the
invocation of a method, and understand what is taking place.

Java examples directory


Some of the classes used as examples in this manual are located in the
Java examples directory, jxmp, which is a subdirectory of your Adaptive
Server Anywhere installation directory.
Two files represent each Java class example: the Java source and the
compiled class. You can install the compiled versions of the Java class
examples into a database immediately, without modification.

Understanding Java classes


A Java class combines data and functionality: the ability to hold information
and perform computational operations. One way of understanding the
concept of a class is to view it as an entity, an abstract representation of a
thing.
You could design an Invoice class, for example, to mimic paper invoices,
such as those used every day in business operations. Just as a paper invoice
contains certain information (line-item details, who is being invoiced, the
date, payment amount, payment due-date), so also does an instance of an
Invoice class. Classes hold information in fields.
In addition to describing data, a class can make calculations and perform
logical operations. For example, the Invoice class could calculate the tax on a
list of line items for every Invoice object, and add it to the sub total to
produce a final total, without any user intervention. Such a class could also
ensure all essential pieces of information are present in the Invoice and even
indicate when payment is over due or partially paid. Calculations and other
logical operations are carried out by the methods of the class.
Example The following Java code declares a class called Invoice. This class
declaration would be stored in a file named Invoice.java, and then compiled
into a Java class using a Java compiler.

523

Compiling Java classes


Compiling the source for a Java class creates a new file with the same
name as the source file but with a different extension. Compiling
Invoice.java creates a file called Invoice.class which could be used in a
Java application and executed by a Java VM.
The Sun JDK tool for compiling class declarations is javac.exe.

public class Invoice {
    // So far, this class does nothing and knows nothing
}
The class keyword is used, followed by the name of the class. There is an
opening and closing brace: everything declared between the braces, such as
fields and methods, becomes part of the class.
In fact, no Java code exists outside class declarations. Even the Java
procedure that a Java interpreter runs automatically to create and manage
other objects — the main method that is often the start of your application
— is itself located within a class declaration.

Subclasses in Java
You can define classes as subclasses of other classes. A class that is a
subclass of another class can use the fields and methods of its parent: this is
called inheritance. You can define additional methods and fields that apply
only to the subclass, and redefine the meaning of inherited fields and
methods.
Java is a single-hierarchy language, meaning that all classes you create or use
eventually inherit from one class. This means the low-level classes (classes
further up in the hierarchy) must be present before higher-level classes can
be used. The base set of classes required to run Java applications is called the
runtime Java classes, or the Java API.

Understanding Java objects


A class is a template that defines what an object is capable of doing, just as
an invoice form is a template that defines what information the invoice
should contain.

524

Classes contain no specific information about objects. Rather, your
application creates or instantiates objects based on the class (template), and
the objects hold the data or perform calculations. The instantiated object is an
instance of the class. For example, an Invoice object is an instance of the
Invoice class. The class defines what the object is capable of but the object is
the incarnation of the class that gives the class meaning and usefulness.
In the invoice example, the invoice form defines what all invoices based on
that form can accomplish. There is one form and zero or many invoices
based on the form. The form contains the definition but the invoice
encapsulates the usefulness.
The Invoice object is created, stores information, is stored, retrieved, edited,
updated, and so on.
Just as one invoice template can create many invoices, with each invoice
separate and distinct from the other in its details, you can generate many
objects from one class.
Methods and fields A method is a part of a class that does something—a function that performs
a calculation or interacts with other objects—on behalf of the class. Methods
can accept arguments, and return a value to the calling function. If no return
value is necessary, a method can return void. Classes can have any number
of methods.
A field is a part of a class that holds information. When you create an object
of type JavaClass, the fields in JavaClass hold the state unique to that object.

Class constructors
You create an object by invoking a class constructor. Constructors are
methods that have the following properties:
♦ A constructor method has the same name as the class, and has no
declared return type. For example, a simple constructor for the Product
class would be declared as follows:
Product () {
...constructor code here...
}
♦ If you include no constructor in your class definition, a default
constructor, provided by the Java base class, is used.
♦ You can supply more than one constructor for each class, with different
numbers and types of arguments. When a constructor is invoked, the one
with the proper number and type of arguments is used.
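The following standalone sketch (hypothetical code, not from this manual) expands the Product class mentioned above with two constructors; the number and types of the arguments supplied with new determine which one runs:

```java
public class Product {
    public String name;
    public double price;

    // Constructor with no arguments: supplies default values.
    public Product() {
        name = "unnamed";
        price = 0.0;
    }

    // Constructor with arguments: invoked when a String and a double
    // are supplied, matching this parameter list.
    public Product(String name, double price) {
        this.name = name;
        this.price = price;
    }

    public static void main(String[] args) {
        Product p = new Product("widget", 9.95);
        System.out.println(p.name); // widget
    }
}
```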

525

Understanding fields
There are two categories of Java fields:
♦ Instance fields Each object has its own set of instance fields, created
when the object was created. They hold information specific to that
instance. For example, a lineItem1Description field in the Invoice class
holds the description for a line item on a particular invoice. You can
access instance fields only through an object reference.
♦ Class fields A class field holds information that is independent of any
particular instance. A class field is created when the class is first loaded,
and only one copy exists no matter how many objects are created. Class
fields can be accessed either through the class name or an object
reference.
To declare a field in a class, state its type, then its name, followed by a
semicolon. To declare a class field, use the static Java keyword in the
declaration. You declare fields in the body of the class and not within a
method; declaring a variable within a method makes it a part of the method,
not of the class.
Examples The following declaration of the class Invoice has four fields, corresponding
to information that might be contained on two line items on an invoice.
public class Invoice {

    // Fields of an invoice contain the invoice data
    public String lineItem1Description;
    public double lineItem1Cost;

    public String lineItem2Description;
    public double lineItem2Cost;
}

Understanding methods
There are two categories of Java methods:
♦ Instance methods A totalSum method in the Invoice class could
calculate and add the tax, and return the sum of all costs, but would only
be useful if it is called in conjunction with an Invoice object, one that
had values for its line item costs. The calculation can only be performed
for an object, since the object (not the class) contains the line items of
the invoice.

526

♦ Class methods Class methods (also called static methods) can be
invoked without first creating an object. Only the name of the class and
method is necessary to invoke a class method.
Similar to instance methods, class methods accept arguments and return
values. Typically, class methods perform some sort of utility or
information function related to the overall functionality of the class.
Class methods cannot access instance fields.
To declare a method, you state its return type, its name and any parameters it
takes. Like a class declaration, the method uses an opening and closing brace
to identify the body of the method where the code goes.
public class Invoice {

    // Fields
    public String lineItem1Description;
    public double lineItem1Cost;

    public String lineItem2Description;
    public double lineItem2Cost;

    // A method
    public double totalSum() {
        double runningsum;

        runningsum = lineItem1Cost + lineItem2Cost;
        runningsum = runningsum * 1.15;

        return runningsum;
    }
}
Within the body of the totalSum method, a variable named runningsum is
declared. First, it holds the subtotal of the first and second line item costs.
This subtotal is then increased by 15 per cent (the rate of taxation) to
determine the total sum.
The local variable (as it is known within the method body) is then returned to
the calling function. When you invoke the totalSum method, it returns the
sum of the two line item cost fields plus the tax on those two items.
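As a sketch of how the method might be exercised from a standalone Java program (the wrapper class and the values are hypothetical; inside the database you would instead invoke the method from SQL):

```java
public class InvoiceDemo {
    // A minimal copy of the Invoice class from the text, just enough
    // to exercise the totalSum method.
    static class Invoice {
        public double lineItem1Cost;
        public double lineItem2Cost;

        public double totalSum() {
            double runningsum = lineItem1Cost + lineItem2Cost;
            runningsum = runningsum * 1.15; // add 15 per cent tax
            return runningsum;
        }
    }

    public static void main(String[] args) {
        Invoice inv = new Invoice();
        inv.lineItem1Cost = 100.0;
        inv.lineItem2Cost = 100.0;
        // (100 + 100) * 1.15: approximately 230.0
        System.out.println(inv.totalSum());
    }
}
```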
Example The parseInt method of the java.lang.Integer class, which is supplied with
Adaptive Server Anywhere, is one example of a class method. When given a
string argument, the parseInt method returns the integer version of the
string.
For example given the string value "1", the parseInt method returns 1, the
integer value, without requiring an instance of the java.lang.Integer class to
first be created, as illustrated by this Java code fragment:

527

String num = "1";
int i = java.lang.Integer.parseInt( num );

Example The following version of the Invoice class now includes both an instance
method and a class method. The class method named rateOfTaxation
returns the rate of taxation used by the class to calculate the total sum of the
invoice.
The advantage of making the rateOfTaxation method a class method (as
opposed to an instance method or field) is that other classes and procedures
can use the value returned by this method without having to create an
instance of the class first. Only the name of the class and method is required
to return the rate of taxation used by this class.
Making rateOfTaxation a method, as opposed to a field, allows the
application developer to change how the rate is calculated without adversely
affecting any objects, applications or procedures that use its return value.
Future versions of Invoice could make the return value of the
rateOfTaxation class method based on a more complicated calculation
without affecting other methods that use its return value.
public class Invoice {

    // Fields
    public String lineItem1Description;
    public double lineItem1Cost;
    public String lineItem2Description;
    public double lineItem2Cost;

    // An instance method
    public double totalSum() {
        double runningsum;
        double taxfactor = 1 + Invoice.rateOfTaxation();

        runningsum = lineItem1Cost + lineItem2Cost;
        runningsum = runningsum * taxfactor;

        return runningsum;
    }

    // A class method
    public static double rateOfTaxation() {
        double rate;
        rate = .15;

        return rate;
    }
}

528

Object oriented and procedural languages


If you are more familiar with procedural languages such as C, or the SQL
stored procedure language, than object-oriented languages, this section
explains some of the key similarities and differences between procedural and
object-oriented languages.
Java is based on classes The main structural unit of code in Java is a class.
A Java class could be looked at as just a collection of procedures and
variables that have been grouped together because they all relate to a
specific, identifiable category.
However, the manner in which a class is used sets object-oriented
languages apart from procedural languages. When an application written in a
procedural language is executed, it is typically loaded into memory once and
takes the user down a pre-defined course of execution.
In object-oriented languages such as Java, a class is used like a template: a
definition of potential program execution. Multiple copies of the class can be
created and loaded dynamically, as needed, with each instance of the class
capable of containing its own data, values and course of execution. Each
loaded class could be acted on or executed independently of any other class
loaded into memory.
A class that is loaded into memory for execution is said to have been
instantiated. An instantiated class is called an object: it is an application
derived from the class that is prepared to hold unique values or have its
methods executed in a manner independent of other class instances.

A Java glossary
The following items outline some of the details regarding Java classes. It is
by no means an exhaustive source of knowledge about the Java language but
may aid in the use of Java classes in Adaptive Server Anywhere.
$ For a thorough examination of the Java language, see the online book
Thinking in Java, by Bruce Eckel, included with Adaptive Server Anywhere
in the file jxmp\Tjava.pdf.
Packages A package is a grouping of classes that share a common purpose or
category. Members of a package have special privileges to access data and
methods in other members of the same package.
A package is the Java equivalent of a library. It is a collection of classes,
which can be made available using the import statement. The following Java
statement imports the utility library from the Java API:

529

import java.util.*;
Packages are typically held in JAR files, which have the extension .jar or .zip.
Public versus private An access modifier determines the visibility of a field, method or
class to other Java objects: essentially, the public, private or protected
keyword used in front of a declaration.
♦ A public class, method, or field is visible everywhere.
♦ A private class, method, or field is visible only in methods defined
within that class.
♦ A protected method or field is visible to methods defined within that
class, within subclasses of the class, or within other classes in the same
package.
♦ The default visibility, known as package, means that the method or field
is visible within the class and to other classes in the same package.
Constructors A constructor is a special method of a Java class that is called when an
instance of the class is created.
Classes can define their own constructors, including multiple, overloaded
constructors. The constructor whose number, type and order of arguments
match those supplied when the object is created is the one that is used.
Garbage collection Garbage collection automatically removes any object with no references to
it, with the exception of objects stored as values in a table.
There is no such thing as a destructor method in Java (as there is in C++).
Java classes can define their own finalize method for clean up operations
when an object is discarded during garbage collection.
Interfaces Java classes can inherit from only one class. Java uses interfaces instead of
multiple inheritance. A class can implement multiple interfaces; each
interface defines a set of methods and method profiles that must be
implemented by the class for the class to be compiled.
An interface defines what methods and static fields the class must declare.
The implementation of the methods and fields declared in an interface is
located within the class that uses the interface: the interface defines what the
class must declare, it is up to the class to determine how it is implemented.
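A minimal sketch, using a hypothetical Priced interface and Book class (the 10% discount is an arbitrary assumption):

```java
// The interface declares what must be implemented, not how.
interface Priced {
    double price();
}

// The class supplies the implementation; it could also implement
// further interfaces in a comma-separated list.
public class Book implements Priced {
    public double listPrice;

    public Book(double listPrice) {
        this.listPrice = listPrice;
    }

    // The class decides how the interface method is implemented.
    public double price() {
        return listPrice * 0.9;   // assumed 10% discount, for illustration
    }
}
```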

Java error handling


Java error handling code is separate from the code for normal processing.
Errors generate an exception object representing the error. This is called
throwing an exception. A thrown exception terminates a Java program
unless it is caught and handled properly at some level of the application.
Both Java API classes and custom-created classes can throw exceptions. In
fact, users can define their own exception classes and throw those
exceptions from their own classes.
If there is no exception handler in the body of the method where the
exception occurred, then the search for an exception handler continues up the
call stack. If the top of the call stack is reached and no exception handler has
been found, the default exception handler of the Java interpreter running the
application is called and the program terminates.
In Adaptive Server Anywhere, if a SQL statement calls a Java method, and
an unhandled exception is thrown, a SQL error is generated.
Error types in Java All errors in Java come from two types of error classes: Exception and
Error. Usually, Exception-based errors are handled by error handling code
in your method body. Error type errors are specifically for internal errors and
resource exhaustion errors inside the Java run-time system.
Exception class errors are thrown and caught. Exception handling code is
characterized by try, catch and finally code blocks.
A try block executes code that may generate an error. A catch block is code
that executes if the execution of a try block generates (or throws) an error.
A finally block defines a block of code that executes regardless of whether
an error was generated and caught. It is typically used for cleanup
operations: code that must run under all circumstances.
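These blocks fit together as in the following sketch; SafeDivide and its log field are hypothetical, used only to show that the finally block runs on both paths:

```java
public class SafeDivide {
    public static String log = "";

    // Returns a / b, or a fallback value if b is zero.
    public static int divide(int a, int b, int fallback) {
        try {
            return a / b;              // may throw ArithmeticException
        } catch (ArithmeticException e) {
            return fallback;           // runs only when the try block threw
        } finally {
            log += "cleaned up;";      // runs in every case
        }
    }
}
```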
There are two types of exception class errors: those that are runtime
exceptions and those that are not runtime exceptions.
Errors generated by the runtime system are known as implicit exceptions, in
that they do not have to be explicitly handled as part of every class or method
declaration.
For example, an array out of bounds exception can occur whenever an array
is used, but the error does not have to be part of the declaration of the class
or method that uses the array.
All other exceptions are explicit. If the method being invoked can throw
such an error, the calling code must either catch the exception explicitly, or
declare that it throws the exception itself by naming the exception in its
method declaration. Essentially, explicit exceptions must be dealt with
explicitly: a method must declare all the explicit exceptions it throws, or
catch all the explicit exceptions that may potentially be thrown.
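In code, the two options look like this; QuotaExceededException, Uploader, and the 1000-byte limit are hypothetical:

```java
// An explicit (checked) exception: callers must handle or declare it.
class QuotaExceededException extends Exception {
    public QuotaExceededException(String message) {
        super(message);
    }
}

public class Uploader {
    // Option 1: declare the exception in the method declaration.
    public static void store(int bytes) throws QuotaExceededException {
        if (bytes > 1000) {
            throw new QuotaExceededException("too large: " + bytes);
        }
    }

    // Option 2: catch the exception where the method is used.
    public static boolean tryStore(int bytes) {
        try {
            store(bytes);
            return true;
        } catch (QuotaExceededException e) {
            return false;
        }
    }
}
```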

Non-runtime exceptions are checked at compile time. Runtime exceptions
are usually caused by errors in programming. Java catches many such errors
during compilation, before running the code.
Every Java method is given an alternative path of execution, so that every
Java method completes even if it cannot complete normally. If the type of
error thrown is not caught, it is passed up to the next code block or method
on the call stack.

The runtime environment for Java in the database
This section describes the Sybase runtime environment for Java, and how it
differs from a standard Java runtime environment.

Java version
The Sybase Java VM executes a subset of JDK version 1.1.8.
Between release 1.0 of the Java Developer’s Kit (JDK) and release 1.1,
several new APIs were introduced, and a number were deprecated: the use
of certain APIs is no longer recommended, and support for them may be
dropped in future releases.
A Java class file using deprecated APIs generates a warning when compiled,
but does still execute on a Java virtual machine built to release 1.1 standards,
such as the Sybase VM.
The internal JDBC driver supports JDBC version 1.1.
$ For more information on the JDK 1.1 APIs that are supported, please
see "Supported Java packages" on page 288 of the book ASA Reference.

The runtime Java classes


The runtime Java classes are the low-level classes that are made available to
a database when it is created or Java-enabled. These classes include a subset
of the Java API.
The runtime classes provide basic functionality on which to build
applications. The runtime classes are always available to classes in the
database.
You can incorporate the runtime Java classes in your own user-created
classes: either inheriting their functionality or using it within a calculation or
operation in a method.
Examples The runtime Java classes include the following Java API classes:
♦ Primitive Java data types All primitive (native) data types in Java
have a corresponding class. In addition to being able to create objects of
these types, the classes have additional, often useful functionality.
The Java int data type has a corresponding class in java.lang.Integer.

♦ The utility package The package java.util.* contains a number of
very helpful classes whose functionality has no parallel in the SQL
functions available in Adaptive Server Anywhere.
Some of the classes include:
♦ Hashtable which maps keys to values.
♦ StringTokenizer which breaks a String down into individual
words.
♦ Vector which holds an array of objects whose size can change
dynamically.
♦ Stack which holds a last-in, first-out stack of objects.
♦ JDBC for SQL operations The package java.sql.* contains the
classes needed by Java objects to extract data from the database using
SQL statements.
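A short sketch exercising these utility classes, written against the JDK 1.1-era APIs described above (UtilDemo and its sample data are hypothetical):

```java
import java.util.Hashtable;
import java.util.Stack;
import java.util.StringTokenizer;
import java.util.Vector;

public class UtilDemo {
    // StringTokenizer breaks a String down into individual words.
    public static int countWords(String sentence) {
        StringTokenizer tokens = new StringTokenizer(sentence);
        int count = 0;
        while (tokens.hasMoreTokens()) {
            tokens.nextToken();
            count++;
        }
        return count;
    }

    public static String demo() {
        // Hashtable maps keys to values.
        Hashtable ages = new Hashtable();
        ages.put("Ann", new Integer(29));

        // Vector holds objects in an array that grows dynamically.
        Vector items = new Vector();
        items.addElement("boots");
        items.addElement("fork");

        // Stack is last-in, first-out.
        Stack plates = new Stack();
        plates.push("bottom");
        plates.push("top");

        return ages.get("Ann") + "/" + items.size() + "/" + plates.pop();
    }
}
```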
Unlike user-defined classes, the runtime classes are not stored in the
database. Instead, they are stored in files in the java subdirectory of the
Adaptive Server Anywhere installation directory.

User-defined classes
User-defined classes are installed into a database using the INSTALL
statement. Once installed, they become available to other classes in the
database. If they are public classes, they are available from SQL as domains.
$ For information on installing classes, see "Installing Java classes into a
database" on page 558.

Identifying Java methods and fields


The dot in SQL In SQL statements, the dot identifies columns of tables, as in the following
query:
SELECT employee.emp_id
FROM employee
The dot also indicates object ownership in qualified object names:
SELECT emp_id
FROM dba.employee

The dot in Java In Java, the dot is an operator that invokes the methods or accesses the
fields of a Java class or object. It is also part of an identifier, used to qualify
class names, as in the fully qualified class name java.util.Hashtable.

In the following Java code fragment, the dot is part of an identifier on the
first line of code. On the second line of code, it is an operator.
java.util.Random rnd = new java.util.Random();
int i = rnd.nextInt();

Invoking Java methods from SQL In SQL, the dot operator can be replaced with the
double right angle bracket (>>). The dot operator is more Java-like, but can
lead to ambiguity with respect to existing SQL names. The use of >>
removes this ambiguity.

>> in SQL is not the same as >> in Java


You can only use the double right angle bracket operator in SQL
statements where a Java dot operator is otherwise expected. Within a Java
class, the double right angle bracket is not a replacement for the dot
operator and has a completely different meaning in its role as the right bit
shift operator.

For example, the following batch of SQL statements is valid:


CREATE VARIABLE rnd java.util.Random;
SET rnd = NEW java.util.Random();
SELECT rnd>>nextInt();
The result of the SELECT statement is a randomly generated integer.
Using the variable created in the previous SQL code example, the following
SQL statement illustrates the correct use of a class method.
SELECT java.lang.Math>>abs( rnd>>nextInt() );

Java is case sensitive


Java syntax works as you would expect it to, and SQL syntax is unaltered by
the presence of Java classes. This is true even if the same SQL statement
contains both Java and SQL syntax. It is a simple statement, but one with
far-reaching implications.
Java is case sensitive. The Java class FindOut is a completely different class
from the class Findout. SQL is case insensitive with respect to keywords and
identifiers.
Java case sensitivity is preserved even when embedded in a SQL statement
that is case insensitive. The Java parts of the statement must be case
sensitive, even though the parts previous to and following the Java syntax
can be in either upper or lower case.
For example, the following SQL statement executes successfully because
the case of Java objects, classes, and operators is respected, even though the
case of the remaining SQL parts of the statement varies.

SeLeCt java.lang.Math.random();

Data types When you use a Java class as a data type for a column, it is a user-defined
SQL data type. However, it is still case sensitive. This convention prevents
ambiguities with Java classes that differ only in case.

Strings in Java and SQL


A set of double quotes identifies string literals in Java, as in the following
Java code fragment:
String str = "This is a string";
In SQL, however, single quotes mark strings, and double quotes indicate an
identifier, as illustrated by the following SQL statement:
INSERT INTO DBA.t1
VALUES( 'Hello' )
You should always use the double quote in Java source code, and single
quotes in SQL statements.
For example, the following SQL statements are valid.
CREATE VARIABLE str char(20);
SET str = NEW java.lang.String( 'Brand new object' )
The following Java code fragment is also valid, if used within a Java class.
String str = new java.lang.String(
"Brand new object" );

Printing to the command line


Printing to the standard output is a quick way of checking variable values
and execution results at various points of code execution. When the method
in the second line of the following Java code fragment is encountered, the
string argument it accepts prints out to standard output.
String str = "Hello world";
System.out.println( str );
In Adaptive Server Anywhere, standard output is the server window, so the
string appears there. Executing the above Java code within the database is the
equivalent of the following SQL statement.
MESSAGE 'Hello world'

Using the main method


When a class contains a main method matching the following declaration,
most Java runtime environments, such as the Sun Java interpreter, execute it
automatically. Normally, this static method executes only when its class is
the one being invoked by the Java interpreter.
public static void main( String args[ ] ) { }
The main method is useful for testing the functionality of Java objects: you
are guaranteed this method is called first when the Sun Java runtime
system starts.
In Adaptive Server Anywhere the Java runtime system is always available.
The functionality of objects and methods can be tested in an ad hoc, dynamic
manner using SQL statements. In many ways this is far more flexible for
testing Java class functionality.
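A sketch of the pattern, with a hypothetical MainDemo class: run standalone, its main method executes first; inside Adaptive Server Anywhere the same method can instead be exercised directly from SQL.

```java
public class MainDemo {
    // The functionality under test.
    public static int doubleIt(int n) {
        return 2 * n;
    }

    // Executed automatically by standalone interpreters (for example,
    // "java MainDemo"); a quick self-test for the class.
    public static void main(String[] args) {
        System.out.println("doubleIt(21) = " + doubleIt(21));
    }
}
```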

Scope and persistence


SQL variables are persistent only for the duration of the connection. This is
unchanged from previous versions of Adaptive Server Anywhere, and is
unaffected by whether the variable is a Java class or a native SQL data type.
The persistence of Java classes is analogous to tables in a database: Tables
exist in the database until you drop them, regardless of whether they hold
data or even whether they are ever used. Java classes installed to a database
are similar: they are available for use until you explicitly remove them with a
REMOVE statement.
$ For more information on removing classes, see "REMOVE statement"
on page 589 of the book ASA Reference.
A class method in an installed Java class can be called at any time from a
SQL statement. You can execute the following statement anywhere you can
execute SQL statements.
SELECT java.lang.Math.abs(-342)
A Java object is available in only two forms: as the value of a variable, or as
a value in a table.

Java escape characters in SQL statements


In Java code, you can use escape characters to insert certain special
characters into strings. Consider the following code, which inserts a new line
and tab in front of a sentence containing an apostrophe.

String str = "\n\tThis is an object\'s string literal";


Adaptive Server Anywhere permits Java escape characters only within Java
code. From within SQL, however, you must follow the rules that apply to
strings in SQL.
For example, to pass a string value to a field using a SQL statement, you
could use the following statement, but you could not use the Java escape
characters.
SET obj.str = '\nThis is the object''s string field';

$ For more information on SQL string handling rules, see "Strings" on


page 224 of the book ASA Reference.

Keyword conflicts
SQL keywords can conflict with the names of Java classes, including API
classes. This occurs when the name of a class, such as the Date class in the
java.util package, is referenced. SQL reserves the word Date for use as a
keyword, even though it is also the name of a Java class.
When such ambiguities appear, you can use double quotes to identify that
you are not using the word in question as the SQL reserved word. For
example, the following SQL statement causes an error because Date is a
keyword and SQL reserves its use.
-- This statement is incorrect
CREATE VARIABLE dt java.util.Date
However the following two statements work correctly because the word Date
is within quotation marks.
CREATE VARIABLE dt java.util."Date";
SET dt = NEW java.util."Date"(1997, 11, 22, 16, 11, 01)
The variable dt now contains the date: November 22, 1997, 4:11 p.m.

Use of import statements


It is common in a Java class declaration to include an import statement to
access classes in another package. You can reference imported classes using
unqualified class names.
For example, you can reference the Stack class of the java.util package in
two ways:
♦ explicitly using the name java.util.Stack, or
♦ using the name Stack, and including the following import statement:
import java.util.*;

Classes further up in the hierarchy must also be installed A class referenced by
another class, either explicitly with a fully qualified name or implicitly
using an import statement, must also be installed in the database.
The import statement works as intended within compiled classes. However,
within the Adaptive Server Anywhere runtime environment, no equivalent to
the import statement exists. All class names used in SQL statements or stored
procedures must be fully qualified. For example, to create a variable of type
String, you would reference the class using the fully qualified name:
java.lang.String.

Using the CLASSPATH variable


Sun’s Java runtime environment and the Sun JDK Java compiler use the
CLASSPATH environment variable to locate classes referenced within Java
code. A CLASSPATH variable provides the link between Java code and the
actual file path or URL location of the classes being referenced. For
example...
import java.io.*;
... allows all the classes in the java.io package to be referenced without a
fully qualified name: only the class name is required in Java code that
follows the import statement. The CLASSPATH environment variable on
the system where the Java class declaration is compiled must include the
location of the java directory, the root of the java.io package.
CLASSPATH ignored at runtime The CLASSPATH environment variable does not
affect the Adaptive Server Anywhere runtime environment for Java during
the execution of Java operations, because the classes are stored in the
database instead of in external files or archives.
CLASSPATH used to install classes The CLASSPATH variable can, however, be
used to locate a file during the installation of classes. For example, the
following statement installs a user-created Java class to a database, but
specifies only the name of the file, not its full path and name. (Note that this
statement involves no Java operations.)
INSTALL JAVA NEW
FROM FILE 'Invoice.class'
If the specified file is in a directory or zip file listed in the CLASSPATH
environment variable, Adaptive Server Anywhere successfully locates the
file and installs the class.

Public fields
It is a common practice in object-oriented programming to define class
fields as private and make their values available only through public
methods. Many of the examples in this documentation make fields public to
keep the examples compact and easy to read. In Adaptive Server Anywhere,
accessing public fields also offers a performance advantage over invoking
public methods.
The general convention followed in this documentation is that a user-created
Java class designed for use in Adaptive Server Anywhere exposes its main
values in its fields. Methods contain computational automation and logic that
may act on these fields.

A Java in the database exercise


This section is a primer for invoking Java operations on Java classes and
objects using SQL statements. The examples use the Invoice class created in
"A Java seminar" on page 523.

Case sensitivity
Java is case sensitive, so the portions of the examples in this section
pertaining to Java syntax are written using the correct case. SQL syntax is
rendered in upper case.
A sample Java class


The examples in this section use the following class declaration.

Compiled code available


Adaptive Server Anywhere includes source code and compiled versions of
all Java classes outlined in the documentation. You can compile and
install the file Invoice.java into a database.

public class Invoice {
    // Fields
    public String lineItem1Description;
    public double lineItem1Cost;
    public String lineItem2Description;
    public double lineItem2Cost;

    // An instance method
    public double totalSum() {
        double runningsum;
        double taxfactor = 1 + Invoice.rateOfTaxation();

        runningsum = lineItem1Cost + lineItem2Cost;
        runningsum = runningsum * taxfactor;

        return runningsum;
    }

    // A class method
    public static double rateOfTaxation() {
        double rate;
        rate = .15;

        return rate;
    }
}

Caution: use a Java-enabled database


The following section assumes you are connected to a Java-enabled
database. For more information, see "Java-enabling a database" on
page 553.

Installing Java classes


Any Java class must be installed to a database before it can be used. You can
install classes from Sybase Central or Interactive SQL.

v To install the Invoice class to the sample database from Sybase Central:
1 Start Sybase Central and choose Tools➤Connect.
2 On the Identification tab of the Connect dialog, select the ODBC Data
Source Name option, and choose the ASA 7.0 Sample data source from
the dropdown list.
3 Click OK to connect to the sample database.
4 Open the Java Objects folder and double-click Add Java Class or JAR.
The Install a New Java Object wizard appears.
5 In the wizard, select the Java Class File option and click Next.
6 Use the Browse button to locate Invoice.class (in the jxmp subdirectory
of your Adaptive Server Anywhere installation directory).
7 Click Open to select the class file, and click Finish to exit the wizard.

v To install the Invoice class to the sample database from Interactive SQL:
1 Start Interactive SQL.
2 On the Identification tab of the Connect dialog, select the ODBC Data
Source Name option and choose the ASA 7.0 Sample data source from
the dropdown list.
3 Click OK to connect to the sample database.
4 In the SQL Statements pane of the main Interactive SQL viewer, type
the following:

INSTALL JAVA NEW
FROM FILE 'path\jxmp\Invoice.class';
where path is your Adaptive Server Anywhere installation directory.

Notes ♦ At this point no Java operations have taken place. The class has been
installed into the database and is ready for use as the data type of a
variable or column in a table.
♦ Changes made to the class file from now on are not automatically
reflected in the copy of the class in the database. You must re-install the
classes if you want the changes reflected.
$ For more information on installing classes, and for information on
updating an installed class, see "Installing Java classes into a database" on
page 558.

Creating SQL variables of Java class type


The following statement creates a SQL variable named Inv of type Invoice,
where Invoice is the Java class you installed to a database.
CREATE VARIABLE Inv Invoice;
Once you create a variable, it can be assigned a value only if the value’s
data type is identical to the variable’s declared data type, or is a subclass of
the declared data type. In this case, the variable Inv can contain only a
reference to an object of type Invoice or a subclass of Invoice.
Initially, the variable Inv is NULL because no value has been passed to it.
You can use the following statement to identify the current value of the
variable Inv.
SELECT IFNULL(Inv,
'No object referenced',
'Variable not null: contains object reference')
To assign a value to Inv, you must create an instance of the Invoice class.
The NEW keyword indicates a constructor is being invoked and an object
reference is being returned.
SET Inv = NEW Invoice();
The Inv variable now has a reference to a Java object. To verify this, you can
execute a number of select statements using the variable.
The Inv variable should contain a reference to a Java object of type Invoice.
Using this reference, you can access any of the object’s fields or invoke any
of its methods.

Invoking Java operations


If a variable (or column value in a table) contains a reference to a Java
object, then the fields of the object can be passed values and its methods can
be invoked.
For example, a variable of type Invoice (a user-created class) that contains a
reference to an Invoice object will have four fields, the value of which can be
set using SQL statements.
Passing values to fields The following SQL statements set the field values for just
such a variable.
SET Inv.lineItem1Description = 'Work boots';
SET Inv.lineItem1Cost = '79.99';
SET Inv.lineItem2Description = 'Hay fork';
SET Inv.lineItem2Cost = '37.49';
Each line in the SQL statements above passes a value to a field in the Java
object referenced by Inv. You can see this by performing a select statement
against the variable. Any of the following SQL statements return the current
value of a field in the Java object referenced by Inv.
SELECT Inv.lineItem1Description;
SELECT Inv.lineItem1Cost;
SELECT Inv.lineItem2Description;
SELECT Inv.lineItem2Cost;
You can now use each of the above lines as an expression in other SQL
statements. For example, you can execute the following SQL statement if
you are currently connected to the sample database, asademo.db, and have
executed the above SQL statements.
SELECT * FROM PRODUCT
WHERE unit_price < Inv.lineItem2Cost;

Invoking methods The Invoice class has one instance method, which you can
invoke once you create an Invoice object.
The following SQL statement invokes the totalSum() method of the object
referenced by the variable Inv. It returns the sum of the two cost fields plus
the tax charged on this sum.
SELECT Inv.totalSum();

Calling methods versus referencing fields Method names are always followed by
parentheses, even when they take no arguments. Field names are not
followed by parentheses.
The totalSum() method takes no arguments but returns a value. The
parentheses are used because a Java operation is being invoked, even
though the method takes no arguments.

Field access is faster than method invocation. Accessing a field does not
require the Java VM to be invoked, while invoking a method requires the
VM to execute the method.
As indicated by the Invoice class definition outlined at the beginning of this
section, the totalSum instance method makes use of the class method
rateOfTaxation.
You can access this class method directly from a SQL statement.
SELECT Invoice.rateOfTaxation();
Notice the name of the class is used, not the name of a variable containing a
reference to an Invoice object. This is consistent with the way Java handles
class methods, even though it is being used in a SQL statement. A class
method can be invoked even if no object based on that class has been
instantiated.
Class methods do not require an instance of the class to work properly, but
they can still be invoked on an object. The following SQL statement yields
the same results as the previously executed SQL statement.
SELECT Inv.rateOfTaxation();
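As a cross-check outside the database, the same field assignments and method calls can be exercised in plain Java. InvoiceDemo below is a hypothetical standalone copy of the Invoice class from the beginning of this section, with a run() helper that mirrors the SQL session above:

```java
public class InvoiceDemo {
    public String lineItem1Description;
    public double lineItem1Cost;
    public String lineItem2Description;
    public double lineItem2Cost;

    // Instance method: invoked through an object reference.
    public double totalSum() {
        double taxfactor = 1 + InvoiceDemo.rateOfTaxation();
        return (lineItem1Cost + lineItem2Cost) * taxfactor;
    }

    // Class method: invoked through the class name (or an instance).
    public static double rateOfTaxation() {
        return .15;
    }

    // Mirrors the SQL session: set the fields, then invoke the method.
    public static double run() {
        InvoiceDemo inv = new InvoiceDemo();
        inv.lineItem1Description = "Work boots";
        inv.lineItem1Cost = 79.99;
        inv.lineItem2Description = "Hay fork";
        inv.lineItem2Cost = 37.49;
        return inv.totalSum();   // (79.99 + 37.49) * 1.15
    }
}
```

The result, (79.99 + 37.49) * 1.15 = 135.102, matches what SELECT Inv.totalSum() returns for the same field values.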

Saving Java objects in tables


When you install a class in a database, it is available as a new data type.
Columns in a table can be of type Javaclass where Javaclass is the name of
an installed public Java class.
For example, using the Invoice class installed at the beginning of this
section, you can execute the following SQL statement.
CREATE TABLE T1 (
ID int,
JCol Invoice
);
The column named JCol only accepts objects of type Invoice or one of its
subclasses.
There are at least two methods for creating a Java object and adding it to a
table as the value of a column. The first method, creating a variable, was
outlined in a previous section "Creating SQL variables of Java class type" on
page 543.
Assuming the variable Inv contains a reference to a Java object of type
Invoice, the following SQL statement adds a row to the table T1.
INSERT INTO T1
VALUES( 1, Inv );

Once an object has been added to the table T1, you can issue select
statements involving the fields and methods of the objects in the table.
For example the following SQL statement returns the value of the field
lineItem1Description for all the objects in the table T1 (right now, there
should only be one object in the table).
SELECT ID, JCol.lineItem1Description
FROM T1;
You can execute similar select statements involving other fields and methods
of the object.
A second method for creating a Java object and adding it to a table involves
the following expression, which always creates a Java object and returns a
reference to it:
NEW Javaclassname()
You can use this expression in a number of ways. For example, the following
SQL statement creates a Java object and inserts it into the table T1.
INSERT INTO T1
VALUES ( 2, NEW Invoice() );
The following SQL statement verifies that these two objects have been saved
as values of column JCol in the table T1.
SELECT ID, JCol.totalSum()
FROM t1
The totalSum() value for the second row returned by the above statement
should be 0, because the fields of that object hold their default values and
the totalSum method is a calculation over those fields.

Returning an object using a query


You can also retrieve an object from a table column whose type is a Java
class. The following series of statements creates a new variable and assigns
it a value (it can contain only an object reference where the object is of type
Invoice). The object reference assigned to the variable is retrieved from the
table T1.
CREATE VARIABLE Inv2 Invoice;
SET Inv2 = (select JCol from T1 where ID = 2);
SET Inv2.lineItem1Description = 'Sweet feed';
SET Inv2.lineItem2Description = 'Drive belt';
Note that the values of the lineItem1Description and lineItem2Description
fields have been changed in the variable Inv2, but not in the table that was
the source for the value of this variable.

This is consistent with the way SQL variables are currently handled: the
variable Inv2 contains a reference to a Java object. The value in the table
that was the source of the variable’s reference is not altered until an
UPDATE statement is executed.

CHAPTER 18

Using Java in the Database

About this chapter This chapter describes how to add Java classes and objects to your database,
and how to use these objects in a relational database.
Contents
Topic Page
Overview of using Java 550
Java-enabling a database 553
Installing Java classes into a database 558
Creating columns to hold Java objects 563
Inserting, updating, and deleting Java objects 565
Querying Java objects 570
Comparing Java fields and objects 572
Special features of Java classes in the database 575
How Java objects are stored 580
Java database design 583
Using computed columns with Java classes 586
Configuring memory for Java 589

Before you begin To run the examples in this chapter, first run the file jdemo.sql, included in
the jxmp subdirectory of your installation directory.
$ For full instructions, see "Setting up the Java examples" on page 550.

Overview of using Java


This chapter describes how to accomplish tasks using Java in the database,
including the following:
♦ How to Java-enable a database You need to follow certain steps to
enable your database to use Java.
♦ Installing Java classes You need to install Java classes in a database
to make them available for use in the server.
♦ Properties of Java columns This section describes how columns with
Java class data types fit into the relational model.
♦ Java database design This section provides tips for designing
databases that use Java classes.

Setting up the Java examples


Many of the examples in this chapter require you to use a set of classes and
tables added to the sample database. The tables hold the same information as
tables of the same name in the sample database, but the user ID named jdba
owns them. They use Java class data types instead of simple relational types
to hold the information.

Sample tables designed for tutorial use only


The sample tables illustrate different Java features. They are not a
recommendation for how to redesign your database. You should consider
your own situation in evaluating where to incorporate Java data types and
other features.

v To add Java classes and tables to the sample database using Interactive SQL:
1 Ensure that the database server has 8 Mb of cache available.
2 Start Interactive SQL.
3 On the Identification tab of the Connect dialog, select the ODBC Data
Source Name option, and choose ASA 7.0 Sample from the dropdown
list.
4 Click OK when finished to connect to the sample database.
5 In the SQL Statements pane of the main Interactive SQL viewer, type
the following statement:
READ "path\jxmp\jdemo.sql"

where path is your Adaptive Server Anywhere installation directory.


This runs the instructions in the jdemo.sql command file. The
instructions may take some time to complete.
You can view the jdemo.sql script using a text editor. It executes the
following steps:
1 Installs the JDBCExamples class.
2 Creates a user ID named JDBA with password SQL and DBA authority,
and sets the current user to be JDBA.
3 Installs a JAR file named asademo.jar. This file contains the class
definitions used in the tables.
4 Creates the following tables under the JDBA user ID:
♦ product
♦ contact
♦ customer
♦ employee
♦ sales_order
♦ sales_order_items
This is a subset of the tables in the sample database.
5 Adds the data from the standard tables of the same names into the Java
tables, using INSERT from SELECT statements. This step may take some
time.
6 Creates some indexes and foreign keys to add integrity constraints to the
schema.

Tip
You can also start Interactive SQL and connect to the ASA 7.0 Sample
data source from the command line:
dbisql -c "dsn=ASA 7.0 Sample"

Managing the runtime environment for Java


The runtime environment for Java consists of:
♦ The Sybase Java Virtual Machine Running within the database
server, the Sybase Java Virtual Machine interprets and executes the
compiled Java class files.


♦ The runtime Java classes When you create a database, a set of Java
classes becomes available to the database. Java applications in the
database require these runtime classes to work properly.

Management tasks for Java
To provide a runtime environment for Java, you need to carry out the following tasks:
♦ Java-enable your database This task involves ensuring the
availability of built-in classes and the upgrading of the database to
Version 7 standards.
$ For more information, see "Java-enabling a database" on page 553.
♦ Install other classes your users need This task involves ensuring
that classes other than the runtime classes are installed and up to date.
$ For more information, see "Installing Java classes into a database"
on page 558.
♦ Configure your server This task involves making the
necessary memory available to run Java tasks.
$ For more information, see "Configuring memory for Java" on
page 589.

Tools for managing Java
You can carry out all these tasks from Sybase Central or from Interactive SQL.


Java-enabling a database
The Adaptive Server Anywhere Runtime environment for Java requires a
Java VM and the Sybase runtime Java classes. The Java VM is always
available as part of the database server, but you need to Java-enable a
database for it to be able to use the runtime Java classes.

New databases are Java-enabled by default


By default, databases created with Adaptive Server Anywhere are Java-
enabled. You can also Java-enable a database when you upgrade.

Java is a single-hierarchy language, meaning that all classes you create or use
eventually inherit from one class. This means the low-level classes (classes
further up in the hierarchy) must be present before you can use higher-level
classes. The base set of classes required to run Java applications are the
runtime Java classes, or the Java API.
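This single-rooted hierarchy can be seen directly in Java: even a class declared with no explicit superclass inherits from java.lang.Object. The Leaf class below is a hypothetical illustration, not one of the runtime classes:

```java
// Every Java class ultimately inherits from java.lang.Object,
// whether or not an "extends" clause is written.
public class Leaf {
    // No explicit superclass: the implicit superclass is Object.
}
```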
When not to Java-enable a database
Java-enabling a database adds many entries into the system tables. This adds
to the size of the database and, more significantly, adds about 200K to the
memory requirements for running the database, even if you do not use any
Java functionality.
If you are not going to use Java, and if you are running in a limited-memory
environment, you may wish to not Java-enable your database.

The Sybase runtime Java classes


The following system zip files contain the Sybase runtime Java classes. The
system zip files are in the Java subdirectory of your Adaptive Server
Anywhere installation directory:
♦ classes.zip This file, licensed from Sun Microsystems, contains the
Sun Microsystem Java runtime classes.
♦ asajdbc.zip This file contains Sybase internal JDBC driver classes.
♦ jdbcdrv.zip This file contains Sybase external JDBC driver classes.

Where the runtime classes are held
The Sybase runtime Java classes are held on disk rather than stored in a database like other classes.
When you Java-enable a database, you also update the system tables with a
list of available classes from the system JAR files. You can then browse the
class hierarchy from Sybase Central, but the classes themselves are not
present in the database.
JAR files
The database stores runtime class names under the following JAR files:


♦ ASAJDBC Class names from asajdbc.zip are held here.


♦ ASAJDBCDRV Class names from jdbcdrv.zip are held here.
♦ ASASystem Class names from classes.zip are held here.

Installed packages
These runtime classes include the following packages:


♦ java Packages stored here include the supported Java runtime classes
from Sun Microsystems. For a list of the supported Java runtime classes,
see "Supported Java packages" on page 288 of the book ASA Reference.
♦ com.sybase Packages stored here provide server-side JDBC support.
♦ sun Sun Microsystems provides the packages stored here.
♦ sybase.sql Packages stored here are part of the Sybase server-side
JDBC support.

Caution: do not install classes from another version of Sun’s JDK


Classes in Sun’s JDK share names with the Sybase runtime Java classes
that must be installed in any database intended to execute Java
operations.
You must not replace the classes.zip file included with Adaptive Server
Anywhere. Using another version of these classes could cause
compatibility problems with the Sybase Java Virtual Machine.
You must only Java-enable a database using the methods outlined in this
section.

Ways of Java-enabling a database


You can Java-enable databases when you create them, when you upgrade
them, or at a later time.
Creating databases
You can create a Java-enabled database using:
♦ the CREATE DATABASE statement. For details of the syntax, see
"CREATE DATABASE statement" on page 427 of the book ASA
Reference.
♦ the dbinit command-line utility. For details, see "The dbinit command-
line utility" on page 99 of the book ASA Reference.
♦ Sybase Central. For details, see "Creating a database" on page 115.
Upgrading databases
You can upgrade a Version 5 database to a Java-enabled Version 7 database using:


♦ the ALTER DATABASE statement. For details of the syntax, see


"ALTER DATABASE statement" on page 383 of the book ASA
Reference.
♦ the dbupgrad.exe upgrade utility. For details, see "The dbupgrad
command-line utility" on page 145 of the book ASA Reference.
If you choose not to install the Sybase runtime Java classes in a database, all
database operations not involving Java operations remain fully functional
and work as expected.

New databases and Java


By default, Adaptive Server Anywhere installs Sybase runtime Java classes
each time you create a database, thereby making all new databases Java-
enabled. The installation of these classes, however, is optional, and
controlled by the method you use to create the database.
CREATE DATABASE options
The CREATE DATABASE SQL statement has an option called JAVA. To
Java-enable a database, set the option to ON. To disable Java, set the
option to OFF. This option is set to ON by default.
For example, the following statement creates a Java-enabled database file
named temp.db:
CREATE DATABASE ’c:\\sybase\\asa7\\temp’
The following statement creates a database file named temp2.db, which does
not support Java.
CREATE DATABASE ’c:\\sybase\\asa7\\temp2’
JAVA OFF

Database initialization utility
You can create databases using the dbinit.exe command-line database
initialization utility. The utility has a –j switch that controls whether or not
to install the Sybase runtime Java classes in the newly-created database.
Using the –j switch prevents the Sybase runtime Java classes from being
installed. Not using the switch installs the Java classes by default.
The same option is available when creating databases using Sybase Central.

Upgrading databases and Java

You can upgrade existing databases created with Sybase SQL Anywhere
Version 5 or earlier using the command-line database upgrade utility or from
Sybase Central.


Database upgrade utility
You can upgrade databases to Adaptive Server Anywhere Version 7
standards using the dbupgrad.exe command-line utility. Using the –j switch
prevents the installation of Sybase runtime Java classes. Not using the switch
installs the Java classes by default.

Java-enabling a Version 7 database


If you have created a Version 7 database, or upgraded a database to
Version 7 standards, but have chosen not to Java-enable the database, you
can add the necessary Java classes at a later date, using either Sybase Central
or Interactive SQL.

v To add the Java runtime classes to a database (Sybase Central):


1 Connect to the database from Sybase Central as a user with DBA
authority.
2 Open the Java Objects folder.
3 Double-click Add Base Java Classes. If you cannot see this icon, it
means that your database already has the Java runtime classes installed.
4 Follow the instructions in the wizard.

v To add the Java runtime classes to a database (SQL):


1 Connect to the database from Interactive SQL as a user with DBA
authority.
2 Run the script instjava.sql from the scripts directory:
read "path/scripts/instjava.sql"
where path is the name of your Adaptive Server Anywhere installation
directory.

Using Sybase Central to Java-enable a database


You can use Sybase Central to create and upgrade databases using wizards.
During the creation or upgrade of a database, the wizard prompts you to
choose whether or not to install the Sybase runtime Java classes. By
default, this option Java-enables the database.
Using Sybase Central, you can create or upgrade a database by choosing:
♦ Create database from the Utilities folder, or


♦ Upgrade database from the Utilities folder to upgrade a Version 5 or


Version 4 database to a Version 7 database with Java capabilities.


Installing Java classes into a database


Before you install a Java class into a database, you must compile it. You can
install Java classes into a database as:
♦ A single class You can install a single class into a database from a
compiled class file. Class files typically have extension .class.
♦ A jar You can install a set of classes all at once if they are in either a
compressed or uncompressed jar file. JAR files typically have the
extension .jar or .zip. Adaptive Server Anywhere supports all
compressed jar files created with the Sun jar utility, and some other jar
compression schemes as well.
This section describes how to install Java classes once you have compiled
them. You must have DBA authority to install a class or jar.

Creating a class
Although the details of each step may differ depending on whether you are
using a Java development tool such as Sybase PowerJ, the steps involved in
creating your own class generally include the following:

v To create a class:
1 Define your class Write the Java code that defines your class. If you
are using the Sun Java SDK then you can use a text editor. If you are
using a development tool such as Sybase PowerJ, the development tool
provides instructions.

Use only supported classes


If your class uses any runtime Java classes, make certain they are
among the list of supported classes as listed in "Supported Java
packages" on page 288 of the book ASA Reference.
User classes must be 100% Java. Native methods are not allowed.

2 Name and save your class Save your class declaration (Java code) in
a file with the extension .java. Make certain the name of the file is the
same as the name of the class and that the case of both names is
identical.
For example, a class called Utility should be saved in a file called
Utility.java.
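For instance, a minimal Utility class might look like the sketch below. The plus method is purely illustrative and not part of the sample classes:

```java
// Utility.java -- the file name must match the public class name,
// including case.
public class Utility {
    // A simple static method of the kind you might later call
    // from SQL once the class is installed in a database.
    public static int plus( int a, int b ) {
        return a + b;
    }
}
```

Compiling this file with the Java compiler produces Utility.class, which you can then install as described later in this chapter.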


3 Compile your class This step turns your class declaration containing
Java code into a new, separate file containing byte code. The name of
the new file is the same as the Java code file but has an extension of
.class. You can run a compiled Java class in a Java runtime
environment, regardless of the platform you compiled it on or the
operating system of the runtime environment.
The Sun JDK contains a Java compiler, Javac.exe.

Java-enabled databases only


You can install any compiled Java class file in a database. However,
Java operations using an installed class can only take place if the
database has been Java-enabled as described in "Java-enabling a
database" on page 553.

Installing a class
To make your Java class available within the database, you install the class
into the database either from Sybase Central, or using the INSTALL
statement from Interactive SQL or other application. You must know the
path and file name of the class you wish to install.
You require DBA authority to install a class.

v To install a class (Sybase Central):


1 Connect to a database with DBA authority.
2 Open the Java Objects folder for the database.
3 Double-click Add Java Class or JAR.
4 Follow the instructions in the wizard.

v To install a class (SQL):


1 Connect to a database with DBA authority.
2 Execute the following statement:
INSTALL JAVA NEW
FROM FILE ’path\\ClassName.class’
where path is the directory where the class file is, and ClassName.class
is the name of the class file.
The double backslash ensures that the backslash is not treated as an
escape character.


For example, to install a class in a file named Utility.class, held in the


directory c:\source, you would enter the following statement:
INSTALL JAVA NEW
FROM FILE ’c:\\source\\Utility.class’
If you use a relative path, it must be relative to the current working directory
of the database server.
$ For more information, see "INSTALL statement" on page 556 of the
book ASA Reference, and "Deleting Java objects, classes, and JAR files" on
page 569.

Installing a JAR
It is useful and common practice to collect sets of related classes together in
packages, and to store one or more packages in a JAR file. For information
on JAR files and packages, see the accompanying online book, Thinking in
Java, or another book on programming in Java.
You install a JAR file the same way as you install a class file. A JAR file can
have the extension JAR or ZIP. Each JAR file must have a name in the
database. Usually, you use the same name as the JAR file, without the
extension. For example, if you install a JAR file named myjar.zip, you would
generally give it a JAR name of myjar.

v To install a JAR (Sybase Central):


1 Connect to a database with DBA authority.
2 Open the Java Objects folder for the database.
3 Double-click Add Java Class or JAR.
4 Follow the instructions in the wizard.

v To install a JAR (SQL):


1 Connect to a database with DBA authority.
2 Enter the following statement:
INSTALL JAVA NEW
JAR ’jarname’
FROM FILE ’path\\JarName.jar’

$ For more information, see "INSTALL statement" on page 556 of the


book ASA Reference, and "Deleting Java objects, classes, and JAR files" on
page 569.


Updating classes and JARs


You can update classes and JAR files using Sybase Central or by entering an
INSTALL statement in Interactive SQL or some other client application.
To update a class or JAR, you must have DBA authority and a newer version
of the compiled class file or JAR file available in a file on disk.
Existing Java objects and updated classes
You may have instances of a Java class stored as Java objects in your
database, or as values in a column that uses the class as its data type.
Despite updating the class, these old values will still be available, even if the
fields and methods stored in the tables are incompatible with the new class
definition.
Any new rows you insert, however, need to be compatible with the new
definition.
When updated classes take effect
Only new connections established after installing the class, or connections
that use the class for the first time after installing it, use the new definition. Once
the Virtual Machine loads a class definition, it stays in memory until the
connection closes.
If you have been using a Java class or objects based on a class in the current
connection, you need to disconnect and reconnect to use the new class
definition.
$ To understand why the updated classes take effect in this manner, you
need to know a little about how the VM works. For information, see
"Configuring memory for Java" on page 589.
Objects stored in serialized form
Java objects can use the updated class definition because they are stored in
serialized form. The serialization format, designed specifically for the
database, is not the Sun Microsystems serialization format. The internal
Sybase VM carries out all serialization and deserialization, so there are no
compatibility issues.

v To update a class or JAR (Sybase Central):


1 Connect to a database with DBA authority.
2 Open the Java Objects folder.
3 Locate the class or JAR file you wish to update.
4 Right-click the class or JAR file and choose Update from the popup
menu.
5 In the resulting dialog, specify the name and location of the class or JAR
file to be updated. You can click Browse to search for it.


Tip
You can also update a Java class or JAR file by clicking Update Now on
the General tab of its property sheet.

v To update a class or JAR (SQL):


1 Connect to a database with DBA authority.
2 Execute the following statement:
INSTALL JAVA UPDATE
[ JAR ’jarname’ ]
FROM FILE ’filename’
If you are updating a JAR, you must enter the name by which the JAR is
known in the database.
$ See also
♦ "INSTALL statement" on page 556 of the book ASA Reference
♦ "Update Java Class dialog" on page 1041
♦ "Update JAR dialog" on page 1041
♦ "Java Objects properties" on page 1089


Creating columns to hold Java objects


This section describes how columns of Java class data types fit into the
standard SQL framework.

Creating columns with Java data types


You can use any installed Java class as a data type. You must use the fully
qualified name for the data type.
For example, the following CREATE TABLE statement includes columns
with the Java class data types asademo.Name and asademo.ContactInfo.
Here, Name and ContactInfo are classes within the asademo package.
CREATE TABLE jdba.customer
(
id integer NOT NULL,
company_name CHAR(35) NOT NULL,
JName asademo.Name NOT NULL,
JContactInfo asademo.ContactInfo NOT NULL,
PRIMARY KEY (id)
)
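A class used as a column data type must be installed in the database and, to be stored, should be serializable. The asademo.Name definition is in the sample's asademo.jar; the SimpleName class below is only a hedged sketch of what such a class might look like, with hypothetical fields:

```java
// A hypothetical sketch of a class usable as a column data type.
// Like the sample classes, it implements java.io.Serializable so
// that instances can be stored in a column.
public class SimpleName implements java.io.Serializable {
    public String first;
    public String last;

    public String toString() {
        return first + " " + last;
    }
}
```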

Case sensitivity
Unlike other SQL data types, Java data types are case sensitive. You must
supply the proper case of all parts of the data type.

Using defaults and NULL on Java columns


You can use defaults on Java columns, and Java columns can hold NULL
entries.

Java columns and defaults You can use any function of the proper data
type (that is, of the same class as the column), or any preset default, as a
default value for a Java column.

Java columns and NULL Java columns can allow NULL. If a nullable
column with Java data type has no default value, the column contains NULL.
If a Java value is not set, it has a Java null value. This Java null maps onto
the SQL NULL, and you can use the IS NULL and IS NOT NULL search
conditions against the values. For example, if the description of a
Product Java object in a column named JProd was not set, you can query all
products with unset (NULL) descriptions as follows:
SELECT *


FROM product
WHERE JProd>>description IS NULL


Inserting, updating, and deleting Java objects


This section describes how the standard SQL data manipulation statements
apply to Java columns.
Throughout the section, concrete examples based on the Product table of the
sample database and a class named Product illustrate the points. You should
first look at the file Product.java held in the jxmp\asademo subdirectory of
your installation directory.
Create the Java sample tables
The examples in this section assume that you have added the Java tables to
the sample database, and that you are connected as user ID JDBA with
password SQL.
$ For further instructions, see "Setting up the Java examples" on
page 550.

A sample class
This section describes a class that is used in examples throughout the
following sections.
The Product.java class definition, included in the jxmp\asademo directory
under your installation directory, is reproduced in part below:
package asademo;

public class Product implements java.io.Serializable {

// public fields
public String name ;
public String description ;
public String size ;
public String color;
public int quantity ;
public java.math.BigDecimal unit_price ;

// Default constructor
Product () {
unit_price = new java.math.BigDecimal( 10.00 );
name = "Unknown";
size = "One size fits all";
}

// Constructor using all available arguments


Product ( String inColor,
String inDescription,
String inName,
int inQuantity,


String inSize,
java.math.BigDecimal inUnit_price
) {
color = inColor;
description = inDescription;
name = inName;
quantity = inQuantity;
size = inSize;
unit_price=inUnit_price;
}

public String toString() {


return size + " " + name + ": " +
unit_price.toString();
}

Notes
♦ The Product class has several public fields that correspond to some of
the columns of the dba.Product table; the class collects these columns
together.
♦ The toString method is provided for convenience. When you include an
object name in a select-list, the toString method is executed and its
return string displayed.
♦ Some methods are provided to set and get the fields. It is common to use
such methods in object-oriented programming rather than to address the
fields directly. Here the fields are public for convenience in tutorials.

Inserting Java objects


When you INSERT a row in a table that has a Java column, you need to
insert a Java object into the Java column.
You can insert a Java object in two ways: from SQL or from other Java
classes running inside the database, using JDBC.

Inserting a Java object from SQL


You can insert a Java object using a constructor, or you can use SQL
variables to build up a Java object before inserting it.
Inserting an object using a constructor
When you insert a value into a column that has a Java class data type, you
are inserting a Java object. To insert an object with the proper set of
properties, the new object must have proper values for any public fields, and
you will want to call any methods that set private fields.


v To insert a Java object:


♦ INSERT a new instance of the Product class into the table product as
follows:
INSERT
INTO product ( ID, JProd )
VALUES ( 702, NEW asademo.Product() )
You can run this example against the sample database from the user ID
jdba once the jdemo.sql script has been run.
The NEW keyword invokes the default constructor for the Product class in
the asademo package.
Inserting an object from a SQL variable
You can also set the values of the fields of the object individually, as
opposed to through the constructor, in a SQL variable of the proper class.
v To insert a Java object using SQL variables:
1 Create a SQL variable of the Java class type:
CREATE VARIABLE ProductVar asademo.Product
2 Assign a new object to the variable, using the class constructor:
SET ProductVar = NEW asademo.Product()
3 Assign values to the fields of the object, where required:
SET ProductVar>>color = ’Black’;
SET ProductVar>>description = ’Steel tipped boots’;
SET ProductVar>>name = ’Work boots’;
SET ProductVar>>quantity = 40;
SET ProductVar>>size = ’Extra Large’;
SET ProductVar>>unit_price = 79.99;
4 Insert the variable into the table:
INSERT
INTO Product ( id, JProd )
VALUES ( 800, ProductVar )
5 Check that the value is inserted:
SELECT *
FROM product
WHERE id=800
6 Undo the changes you have made in this exercise:
ROLLBACK


The use of SQL variables is typical of stored procedures and other uses of
SQL to build programming logic into the database. Java provides a more
powerful way of accomplishing this task. You can use server-side Java
classes together with JDBC to insert objects into tables.

Inserting an object from Java


You can insert an object into a table using a JDBC prepared statement.
A prepared statement uses placeholders for variables. You can then use the
setObject method of the PreparedStatement object.
You can use prepared statements to insert objects from either client-side or
server-side JDBC.
$ For more information on using prepared statements to work with
objects, see "Inserting and retrieving objects" on page 610.

Updating Java objects


You may wish to update a Java column value in either of the following ways:
♦ Update the entire object.
♦ Update some of the fields of the object.

Updating the entire object
You can update the object in much the same way as you insert objects:
♦ From SQL, you can use a constructor to update the object to a new
object as the constructor creates it. You can then update individual fields
if you need to.
♦ From SQL, you can use a SQL variable to hold the object you need, and
then update the row to hold the variable.
♦ From JDBC, you can use a prepared statement and the
PreparedStatement.setObject method.

Updating fields of the object
Individual fields of an object have data types that correspond to SQL data
types, using the SQL to Java data type mapping described in "Java / SQL
data type conversion" on page 294 of the book ASA Reference.
You can update individual fields using a standard UPDATE statement:
UPDATE Product
SET JProd.unit_price = 16.00
WHERE ID = 302
In the initial release of Java in the database, it was necessary to use a special
function (EVALUATE) to carry out updates. This is no longer necessary.


To update a Java field, the Java data type of the field must map to a SQL
type, and the expression on the right-hand side of the SET clause must match
this type. You may need to use the CAST function to cast the data types
appropriately.
$ For information on data type mappings between Java and SQL, see
"Java / SQL data type conversion" on page 294 of the book ASA Reference.
Using set methods
It is common practice in Java programming not to address fields directly, but
to use methods to get and set the value. It is also common practice for these
methods to return void. You can use set methods in SQL to update a column:
UPDATE jdba.Product
SET JProd.setName( ’Tank Top’)
WHERE id=302
Using methods is slower than addressing the field directly, because the Java
VM must run.
$ For more information, see "Return value of methods returning void" on
page 576.
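The set and get methods referred to above follow the usual Java pattern. The NamedItem class below is a hypothetical sketch; only the setName call mirrors the UPDATE example, and getName is the conventional companion method:

```java
// Hypothetical class showing the set/get method pattern.
// Note that setName returns void, as is common for setters.
public class NamedItem {
    private String name = "Unknown";

    public void setName( String inName ) {
        name = inName;
    }

    public String getName() {
        return name;
    }
}
```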

Deleting Java objects, classes, and JAR files


Deleting rows containing Java objects is no different than deleting other
rows. The WHERE clause in the DELETE statement can include Java
objects or Java fields and methods. For more information, see "DELETE
statement" on page 496 of the book ASA Reference.
Using Sybase Central, you can also delete an entire Java class or JAR file.

v To delete a Java class or JAR file (Sybase Central):


1 Open the Java Objects folder.
2 Locate the class or JAR you would like to delete.
3 Right-click the class or JAR file and choose Delete from the popup
menu.
$ See also
♦ "Installing a class" on page 559
♦ "Installing a JAR" on page 560


Querying Java objects


You may wish to retrieve a Java column value in either of the following
ways:
♦ Retrieve the entire object.
♦ Retrieve some of the fields of the object.

Retrieving the entire object
From SQL, you can create a variable of the appropriate type, and select the
value from the object into that variable. However, the obvious place in which
you may wish to make use of the entire object is in a Java application.
You can retrieve an object into a server-side Java class using the getObject
method of the ResultSet of a query. You can also retrieve an object to a
client-side Java application.
$ For a description of retrieving objects using JDBC, see "Queries using
JDBC" on page 607.
Retrieving fields of the object
Individual fields of an object have data types that correspond to SQL data
types, using the SQL to Java data type mapping described in "Java / SQL
data type conversion" on page 294 of the book ASA Reference.
♦ You can retrieve individual fields by including them in the select-list of
a query, as in the following simple example:
SELECT JProd>>unit_price
FROM product
WHERE ID = 400
♦ If you use methods to set and get the values of your fields, as is common
in object oriented programming, you can include a getField method in
your query:
SELECT JProd>>getName()
FROM Product
WHERE ID = 401

$ For information on using objects in the WHERE clause and other issues
in comparing objects, see "Comparing Java fields and objects" on page 572.

Performance tip
Getting a field directly is faster than invoking a method that gets the field,
because method invocations require starting the Java VM.

The results of SELECT column-name
You can list the column name in a query select list, as in the following query:
SELECT JProd
FROM jdba.product


This query returns the Sun serialization of the object to the client application.
When you execute a query that retrieves an object in Interactive SQL, it
displays the return value of the object’s toString method. For the Product
class, the toString method lists, in one string, the size, name, and unit price
of the object. The results of the query are as follows:

JProd
Small Tee Shirt: 9.00
Medium Tee Shirt: 14.00
One size fits all Tee Shirt: 14.00
One size fits all Baseball Cap: 9.00
One size fits all Baseball Cap: 10.00
One size fits all Visor: 7.00
One size fits all Visor: 7.00
Large Sweatshirt: 24.00
Large Sweatshirt: 24.00
Medium Shorts: 15.00
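You can see how these strings are produced by tracing the toString method shown earlier. The ProductDemo class below is a trimmed, stand-alone copy of the relevant parts of Product, preloaded with field values matching the first row of the listing:

```java
import java.math.BigDecimal;

// Trimmed, stand-alone copy of the sample Product class's
// toString behavior, with values from the first row above.
public class ProductDemo {
    public String name = "Tee Shirt";
    public String size = "Small";
    public BigDecimal unit_price = new BigDecimal( "9.00" );

    public String toString() {
        return size + " " + name + ": " + unit_price.toString();
    }
}
```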


Comparing Java fields and objects


Public Java classes are domains with much more richness than traditional
SQL domains. This raises issues about how Java columns behave in a
relational database, compared to columns based on traditional SQL data
types.
In particular, the issue of how objects are compared has implications for the
following:
♦ Queries with an ORDER BY clause, a GROUP BY clause, a DISTINCT
keyword, or using an aggregate function.
♦ Statements that use equality or inequality comparison conditions
♦ Indexes and unique columns
♦ Primary and foreign key columns.

Ways of comparing Java objects
Sorting and ordering rows, whether in a query or in an index, implies a
comparison between values on each row. If you have a Java column, you can
carry out comparisons in the following ways:
♦ Compare on a public field You can compare on a public field in the
same way you compare on a regular row. For example, you could
execute the following query:
SELECT name, JProd.unit_price
FROM Product
ORDER BY JProd.unit_price
You can use this kind of comparison in queries, but not for indexes and
key columns.
♦ Compare using a compareTo method You can compare Java objects
that have implemented a compareTo method. The Product class on
which the JProd column is based has a compareTo method that
compares objects based on the unit_price field. This permits the
following query:
SELECT name, JProd.unit_price
FROM Product
ORDER BY JProd
The comparison needed for the ORDER BY clause is automatically
carried out based on the compareTo method.


Comparing Java objects


To compare two objects of the same type, you must implement a compareTo
method:
♦ For columns of Java data types to be used as primary keys, indexes, or
as unique columns, the column class must implement a compareTo
method.
♦ To use ORDER BY, GROUP BY, or DISTINCT clauses in a query, you
must be comparing the values of the column. The column class must
have a compareTo method for any of these clauses to be valid.
♦ Functions that employ comparisons, such as MAX and MIN, can only be used on Java classes with a compareTo method.

Requirements of the compareTo method The compareTo method must have the following properties:
♦ Scope The method must be visible externally, and so should be a public method.
♦ Arguments The method takes a single argument, which is an object of
the current type. The current object is compared to the supplied object.
For example, Product.compareTo has the following argument:
compareTo( Product anotherProduct )
The method compares the anotherProduct object, of type Product, to
the current object.
♦ Return values The compareTo method must return an int data type,
with the following meanings:
♦ Negative integer The current object is less than the supplied
object. It is recommended that you return -1 for this case, for
compatibility with compareTo methods in base Java classes.
♦ Zero The current object has the same value as the supplied object.
♦ Positive integer The current object is greater than the supplied
object. It is recommended that you return 1 for this case, for
compatibility with compareTo methods in base Java classes.

Example The Product class installed into the sample database with the example
classes has a compareTo method as follows:
public int compareTo( Product anotherProduct ) {
// Compare first on the basis of price
// and then on the basis of toString()
int lVal = unit_price.intValue();
int rVal = anotherProduct.unit_price.intValue();
if ( lVal > rVal ) {
return 1;


}
else if (lVal < rVal ) {
return -1;
}
else {
return toString().compareTo(
anotherProduct.toString() );
}
}
}
This method compares the unit price of each object. If the unit prices are the
same, then the names are compared (using Java string comparison, not the
database string comparison). Only if both the unit price and the name are the
same are the two objects considered the same when comparing.
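The contract above can be exercised outside the database with a simplified stand-in for the Product class. This is only a sketch: the class name PriceItem and its fields are hypothetical, but the comparison logic mirrors the example.

```java
// Hypothetical stand-in for asademo.Product; compares on price, then on toString().
public class PriceItem {
    String name;
    int unitPrice;

    PriceItem( String name, int unitPrice ) {
        this.name = name;
        this.unitPrice = unitPrice;
    }

    public String toString() {
        return name;
    }

    public int compareTo( PriceItem other ) {
        if ( unitPrice > other.unitPrice ) {
            return 1;
        }
        else if ( unitPrice < other.unitPrice ) {
            return -1;
        }
        // Same price: fall back to Java string comparison of the names
        return toString().compareTo( other.toString() );
    }

    public static void main( String[] args ) {
        PriceItem visor = new PriceItem( "Visor", 7 );
        PriceItem shirt = new PriceItem( "Tee Shirt", 14 );
        System.out.println( visor.compareTo( shirt ) );  // -1: Visor is cheaper
        System.out.println( visor.compareTo( visor ) );  // 0: same price, same name
    }
}
```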
Make toString and compareTo compatible When you include a Java column in the select list of a query, and execute it in Interactive SQL, the value of the toString method is displayed. When comparing columns, the compareTo method is used. If the toString and compareTo methods are not implemented consistently with each other, you can get inappropriate results such as DISTINCT queries that appear to return duplicate rows.
For example, suppose the Product class in the sample database had a
toString method that returned the product name, and a compareTo method
based on the price. Then the following query, executed in Interactive SQL,
would display duplicate values:
SELECT DISTINCT JProd
FROM product

JProd
Tee Shirt
Tee Shirt
Baseball Cap
Visor
Sweatshirt
Shorts

Here, the returned value being displayed is determined by toString. The DISTINCT keyword eliminates duplicates as determined by compareTo. As these have been implemented in ways that are not related to each other, duplicate rows appear to have been returned.


Special features of Java classes in the database


This section describes features of Java classes when used in the database.

Supported classes
You cannot use all classes from the JDK. The runtime Java classes available
for use in the database server belong to a subset of the Java API.
$ For a list of all supported packages, see "Supported Java packages" on
page 288 of the book ASA Reference.

Using threads in Java applications


You can use multiple threads in a Java application, by using features of the java.lang.Thread class. Each Java thread is an engine thread, and comes from the pool of threads permitted by the -gn database server command-line option.
You can synchronize, suspend, resume, interrupt, or stop threads in Java
applications.
$ For information on database server threads, see "–gn command-line
option" on page 28 of the book ASA Reference.
Serialization of JDBC calls All calls to the server-side JDBC driver are serialized, such that only one thread is actively executing JDBC at any one time.
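Thread use inside the server follows ordinary Java threading rules. The following sketch contains nothing ASA-specific (the class name is hypothetical): two threads update a shared counter, with synchronized access to the shared data and join to wait for completion.

```java
// Hypothetical example: two threads increment a shared counter.
public class ThreadDemo {
    public static int countWithThreads() throws InterruptedException {
        final int[] total = new int[1];
        Runnable work = new Runnable() {
            public void run() {
                // Synchronize on the shared array so updates do not interleave
                synchronized ( total ) {
                    total[0]++;
                }
            }
        };
        Thread t1 = new Thread( work );
        Thread t2 = new Thread( work );
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
        return total[0];
    }

    public static void main( String[] args ) throws InterruptedException {
        System.out.println( countWithThreads() );   // 2
    }
}
```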

Procedure Not Found error


If you supply an incorrect number of arguments when calling a Java method,
or if you use an incorrect data type, the server responds with a Procedure Not
Found error. You should check the number and type of arguments.
$ For a list of type conversions between SQL and Java, see "Java / SQL
data type conversion" on page 294 of the book ASA Reference.

The main method


You typically start Java applications (outside the database) by running the
Java VM on a class that has a main method.


For example, the JDBCExamples class in the jxmp subdirectory of the Adaptive Server Anywhere installation directory has a main method. When
you execute the class from the command line using a command such as the
following,
java JDBCExamples
it is the main method that executes.
$ For information on how to run the JDBCExamples class, see
"Establishing JDBC connections" on page 597.
The main method must be declared as follows:
public static void main( java.lang.String[] args ){
...
}
You can call the main method of classes installed into a database from SQL.
Each argument you provide becomes an element of the String array, and so
must be a CHAR or VARCHAR data type, or a literal string.
Example The following class contains a main method, which writes out the arguments
in reverse order:
public class ReverseWrite {
public static void main( String[] args ){
int i;
for( i = args.length; i > 0 ; i-- ){
System.out.print( args[ i-1 ] );
}
}
}
You can execute this method from SQL as follows:
call ReverseWrite.main( ' one', ' two', 'three' )
The database server window displays the output:
three two one

Return value of methods returning void


You can use Java methods in SQL statements wherever you can use an
expression. You must ensure that the Java method return data type maps to
the appropriate SQL data type.
$ For a list of Java/SQL data type mappings, see "Java / SQL data type
conversion" on page 294 of the book ASA Reference.


When a method returns void, however, the value this (the object itself) is returned to SQL. This feature only affects calls made from SQL, not from Java.
This feature is particularly useful in UPDATE statements, where set methods
commonly return void. You can use the following UPDATE statement in the
sample database:
update jdba.product
set JProd = JProd.setName('Tank Top')
where id=302
The setName method returns void, and so implicitly returns the product
object to SQL.

Returning result sets from Java methods


This section describes how to make result sets available from Java methods.
You must write a Java method that returns a result set to the calling environment, and wrap this method in a SQL stored procedure declared with EXTERNAL NAME and LANGUAGE JAVA.

v To return result sets from a Java method:


1 Ensure that the Java method is declared as public and static, in a public
class.
2 For each result set you expect the method to return, ensure that the
method has a parameter of type java.sql.ResultSet[]. These result-set
parameters must all occur at the end of the parameter list.
3 In the method, first create an instance of java.sql.ResultSet and then
assign it to one of the ResultSet[] parameters.
4 Create a SQL stored procedure of type EXTERNAL NAME
LANGUAGE JAVA. This type of procedure is a wrapper around a Java
method. You can use a cursor on the SQL procedure result set in the
same way as any other procedure that returns result sets.
$ For information on the syntax for stored procedures that are
wrappers for Java methods, see "CREATE PROCEDURE statement" on
page 453 of the book ASA Reference.

Example The following simple class has a single method, which executes a query and
passes the result set back to the calling environment.
import java.sql.*;

public class MyResultSet {


public static void return_rset( ResultSet[] rset1 )


throws SQLException {
Connection conn = DriverManager.getConnection(
"jdbc:default:connection" );
Statement stmt = conn.createStatement();
ResultSet rset =
stmt.executeQuery (
"SELECT CAST( JName.lastName " +
"AS CHAR( 50 ) )" +
"FROM jdba.contact " );
rset1[0] = rset;
}
}
You can expose the result set using a CREATE PROCEDURE statement that
indicates the number of result sets returned from the procedure and the
signature of the Java method.
A CREATE PROCEDURE statement indicating a result set could be defined
as follows:
CREATE PROCEDURE result_set()
DYNAMIC RESULT SETS 1
EXTERNAL NAME
'MyResultSet.return_rset ([Ljava/sql/ResultSet;)V'
LANGUAGE JAVA

You can open a cursor on this procedure just as you can with any ASA
procedure returning result sets.
The string ([Ljava/sql/ResultSet;)V is a Java method signature, which is a compact character representation of the number and type of the parameters and return value.
$ For more information about Java method signatures, see "CREATE
PROCEDURE statement" on page 453 of the book ASA Reference.

Returning values from Java via stored procedures


You can use stored procedures created using the EXTERNAL NAME
LANGUAGE JAVA as wrappers around Java methods. This section
describes how to write your Java method to exploit OUT or INOUT
parameters in the stored procedure.
Java does not have explicit support for INOUT or OUT parameters. Instead,
you can use an array of the parameter. For example, to use an integer OUT
parameter, create an array of exactly one integer:
public class TestClass {
public static boolean testOut( int[] param ){
param[0] = 123;
return true;


}
}
The following procedure uses the testOut method:
CREATE PROCEDURE sp_testOut ( OUT p INTEGER )
EXTERNAL NAME 'TestClass/testOut ([I)Z'
LANGUAGE JAVA
The string ([I)Z is a Java method signature, indicating that the method has a
single parameter, which is an array of integers, and returns a boolean. You
must define the method so that the method parameter you wish to use as an
OUT or INOUT parameter is an array of a Java data type that corresponds to
the SQL data type of the OUT or INOUT parameter.
$ For details of the syntax, including the method signature, see "CREATE
PROCEDURE statement" on page 453 of the book ASA Reference.
$ For more information, see "Java / SQL data type conversion" on
page 294 of the book ASA Reference.
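The array convention can be seen in plain Java, independent of the stored procedure wrapper. In this sketch (class and method names are hypothetical), the caller passes a one-element array and reads the OUT value back after the call.

```java
// Hypothetical demonstration of the one-element-array OUT parameter convention.
public class OutParamDemo {
    public static boolean testOut( int[] param ) {
        param[0] = 123;   // write the OUT value into the array
        return true;
    }

    public static void main( String[] args ) {
        int[] out = new int[1];
        boolean ok = testOut( out );
        System.out.println( ok + " " + out[0] );   // true 123
    }
}
```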


How Java objects are stored


Java values are stored in serialized form. This means that each row contains
the following information:
♦ A version identifier.
♦ An identifier for the class (or subclass) that is stored.
♦ The values of non-static, non-transient fields in the class.
♦ Other overhead information.
The class definition is not stored for each row. Instead, the identifier
provides a reference to the class definition, which is held only once.
You can use Java objects without knowing the details of how these pieces work, but the way these objects are stored does have some implications for performance, as described in the notes below.
Notes ♦ Disk space The overhead per row is 10 to 15 bytes. If the class has a
single variable, then the storage required for the overhead can be similar
to the amount needed for the variable itself. If the class has many
variables, the overhead is negligible.
♦ Performance Any time you insert or update a Java value, the Java VM
needs to serialize it. Any time a Java value is retrieved in a query, it
needs to be deserialized by the VM. This can amount to a significant
performance penalty.
You can avoid the performance penalty for queries using computed
columns.
♦ Indexing Indexes on Java columns will not be very selective, and will
not provide the performance benefits associated with indexes on simple
SQL data types.
♦ Serialization If a class has a readObject or writeObject method,
these are called when deserializing or serializing the instance. Using a
readObject or writeObject method can impact performance, because
the Java VM is being invoked.
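The serialization described here is standard JDK object serialization. The following sketch (a hypothetical class, not part of the sample database) round-trips an object through a byte stream; note that the transient field comes back with its default value, matching the non-transient-fields rule above.

```java
import java.io.*;

// Hypothetical class: one stored field, one transient field that is not serialized.
public class StoredValue implements Serializable {
    int unitPrice;
    transient int cachedTotal;   // skipped during serialization

    StoredValue( int p ) {
        unitPrice = p;
        cachedTotal = p * 100;
    }

    static StoredValue roundTrip( StoredValue v ) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream( buf );
        oos.writeObject( v );        // serialize
        oos.close();
        ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream( buf.toByteArray() ) );
        return (StoredValue) in.readObject();   // deserialize
    }

    public static void main( String[] args ) throws Exception {
        StoredValue back = roundTrip( new StoredValue( 24 ) );
        System.out.println( back.unitPrice + " " + back.cachedTotal );   // 24 0
    }
}
```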

Java objects and class versions


Java objects stored in the database are persistent; that is, they exist even
when no code is running. This means that you could carry out the following
sequence of actions:
1 Install a class.


2 Create a table using that class as the data type for a column.
3 Insert rows into the table.
4 Install a new version of the class.
How will the existing rows work with the new version of the class?
Accessing rows when a class is updated Adaptive Server Anywhere provides a form of class versioning to allow the new class to work with the old rows. The rules for accessing these older values are as follows:
♦ If a serializable field is in the old version of the class, but is either
missing or not serializable in the new version, the field is ignored.
♦ If a serializable field is in the new version of the class, but was either
missing or not serializable in the old version, the field is initialized to a
default value. The default value is 0 for primitive types, false for
Boolean values, and NULL for object references.
♦ If there was a superclass of the old version that is not a superclass of the
new version, the data for that superclass is ignored.
♦ If there is a superclass of the new version that was not a superclass of
the old version, the data for that superclass is initialized to default
values.
♦ If a serializable field changes type between the older version and the
newer version, the field is initialized to a default value. Type
conversions are not supported; this is consistent with Sun Microsystems
serialization.
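The default values listed above (0 for primitives, false for Boolean values, NULL for object references) are simply Java's own field defaults, as this small sketch shows (the class name is hypothetical).

```java
// Hypothetical demonstration: uninitialized fields take Java's default values.
public class Defaults {
    int n;        // defaults to 0
    boolean b;    // defaults to false
    Object o;     // defaults to null

    public static void main( String[] args ) {
        Defaults d = new Defaults();
        System.out.println( d.n + " " + d.b + " " + ( d.o == null ) );   // 0 false true
    }
}
```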

When objects are inaccessible A serialized object is inaccessible if the class of the object or any of its superclasses has been removed from the database, at any time. This behavior
is consistent with Sun Microsystems serialization.
Moving objects across databases These changes make cross-database transfer of objects possible even when the versions of classes differ. Cross-database transfer can occur as follows:
♦ Objects are replicated to a remote database.
♦ A table of objects is unloaded and reloaded into another database.
♦ A log file containing objects is translated and applied against another
database.

When the new class is used Each connection’s VM loads the class definition for each class the first time that class is used.
When you INSTALL a class, the VM on your connection is implicitly
restarted. Therefore, you have immediate access to the new class.


For connections other than the one that carries out the INSTALL, the new
class loads the next time a VM accesses the class for the first time. If the
class is already loaded by a VM, that connection does not see the new class
until the VM is restarted for that connection (for example, with a STOP
JAVA and START JAVA).


Java database design


There is a large body of theory and practical experience available to help you design a relational database. You can find descriptions of Entity-Relationship design and other approaches not only in introductory form (see "Designing Your Database" on page 333) but also in more advanced books.
No comparable body of theory and practice to develop object-relational
databases exists, and this certainly applies to Java-relational databases. Here
we offer some suggestions for how to use Java to enhance the practical
usefulness of relational databases.

Entities and attributes in relational and object-oriented data


In relational database design, each table describes an entity. For example, in
the sample database there are tables named Employee, Customer,
Sales_order, and Department. The attributes of these entities become the
columns of the tables: employee addresses, customer identification numbers,
sales order dates, and so on. Each row of the table may be considered as a
separate instance of the entitya specific employee, sales order, or
department.
In object-oriented programming, each class describes an entity, and the
methods and fields of that class describe the attributes of the entity. Each
instance of the class (each object) holds a separate instance of the entity.
It may seem unnatural, therefore, for relational columns to be based on Java
classes. A more natural correspondence may seem to be between table and
class.

Entities and attributes in the real world


The distinction between entity and attribute may sound clear, but a little
reflection shows that it is commonly not at all clear in practice:
♦ An address may be seen as an attribute of a customer, but an address is
also an entity, with its own attributes of street, city, and so on.
♦ A price may be seen as an attribute of a product, but may also be seen as
an entity, with attributes of amount and currency.
The utility of the object-relational database lies precisely in the fact that there
are two ways of expressing entities. You can express some entities as tables,
and some entities as classes in a table. The next section describes an
example.


Relational database limitations


Consider an insurance company wishing to keep track of its customers. A
customer may be considered as an entity, so it is natural to construct a single
table to hold all customers of the company.
However, insurance companies handle several kinds of customer. They
handle policy holders, policy beneficiaries, and people who are responsible
for paying policy premiums. For each of these customer types, the insurance
company needs different information. For a beneficiary, little is needed
beyond an address. For a policy holder, health information is required. For
the customer paying the premiums, information may be needed for tax
purposes.
Is it best to handle the separate kinds of customers as separate entities, or to
handle the customer type as an attribute of the customer? There are
limitations to both approaches:
♦ Building separate tables for each type of customer can lead to a very
complex database design, and to multi-table queries when information
relating to all customers is required.
♦ It is difficult, if using a single customer table, to ensure that the information for each customer is correct. Making nullable those columns that are required for some customers but not for others permits the entry of correct data, but does not enforce it. There is no simple way in relational databases to tie default behavior to an attribute of the new entry.

Using classes to overcome relational database limitations


You can use a single customer table, with Java class columns for some of the
information, to overcome the limitations of relational databases.
For example, suppose different contact information is necessary for policy
holders than for beneficiaries. You could approach this by defining a column
based on a ContactInformation class. Then define classes named
HolderContactInformation and BeneficiaryContactInformation, which
are subclasses of the ContactInformation class. By entering new customers
according to their type, you can be sure that the information is correct.
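A sketch of the class hierarchy described above (these classes are illustrative; they are not in the sample database):

```java
// Hypothetical contact-information hierarchy for an insurance customer table.
public class ContactInformation implements java.io.Serializable {
    public String street;
    public String city;
}

class HolderContactInformation extends ContactInformation {
    public String healthPlanPhone;   // extra detail needed for policy holders
}

class BeneficiaryContactInformation extends ContactInformation {
    // beneficiaries need only the base address fields
}
```

A column declared with the ContactInformation type could then hold an instance of either subclass, so each customer row carries exactly the fields appropriate to its type.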

Levels of abstraction for relational data


Data in a relational database can be categorized by its purpose. Which of this
data belongs in a Java class, and which is best kept in simple data type
columns?

584
Chapter 18 Using Java in the Database

♦ Referential integrity columns Primary key columns and foreign key


columns commonly hold identification numbers. These identification
numbers may be called referential data; since they primarily define the
structure of the database and the relationships between tables.
Referential data does not generally belong in Java classes. Although you
can make a Java class column a primary key column, integers and other
simple data types are more efficient for this purpose.
♦ Indexed data Columns that are commonly indexed may also belong
outside a Java class. However, the dividing line between data that needs
to be indexed and data that is not to be indexed is vaguely defined.
With computed columns you can selectively index on a Java field or
method (or, in fact, some other expression). If you define a Java class
column and then find that it would be useful to index on a field or
method of that column, you can use computed columns to make a
separate column from that field or method.
$ For more information, see "Using computed columns with Java
classes" on page 586.
♦ Descriptive data It is common for some of the data in each row to be
descriptive. It is not used for referential integrity purposes, and is
possibly not frequently indexed, but it is data commonly used in queries.
For an employee table, this may include information such as start date,
address, benefit information, salary, and so on. This data can often
benefit from being combined into fewer columns of Java class data
types.
Java classes are useful for abstracting at a level between that of the single
relational column and the relational table.


Using computed columns with Java classes


Computed columns are a feature designed to make Java database design
easier, to make it easier to take advantage of Java features for existing
databases, and to improve performance of Java data types.
A computed column is a column whose values are obtained from other
columns. You cannot INSERT or UPDATE values in computed columns.
However, any update that attempts to modify the computed column does fire
triggers associated with the column.
Uses of computed columns There are two main uses of computed columns with Java classes:
♦ Exploding a Java column If you create a column using a Java class
data type, computed columns enable you to index one of the fields of the
class. You can add a computed column that holds the value of the field,
and create an index on that field.
♦ Adding a Java column to a relational table If you wish to use some
of the features of Java classes while disturbing an existing database as
little as possible, you can add a Java column as a computed column,
collecting its values from other columns in the table.

Defining computed columns


Computed columns are declared in the CREATE TABLE or ALTER
TABLE statement.
Creating tables with computed columns The following CREATE TABLE statement is used to create the product table in the Java sample tables:
CREATE TABLE product
(
id INTEGER NOT NULL,
JProd asademo.Product NOT NULL,
name CHAR(15) COMPUTE ( JProd.name ),
PRIMARY KEY ("id")
)

Adding computed columns to tables The following statement alters the product table by adding another computed column:
ALTER TABLE product
ADD inventory_Value INTEGER
COMPUTE ( JProd.quantity * JProd.unit_price )

Modifying the expression for computed columns You can change the expression used in a computed column using the ALTER TABLE statement. The following statement changes the expression that a computed column is based on.


ALTER TABLE table_name
ALTER column-name SET COMPUTE ( expression )
The column is recalculated when this statement is executed. If the new
expression is invalid, the ALTER TABLE statement fails.
The following statement stops a column from being a computed column.
ALTER TABLE table_name
ALTER column-name DROP COMPUTE
The values in the column are not changed when this statement is executed.

Inserting and updating computed columns


Computed columns have some impact on valid INSERT and UPDATE
statements. The jdba.product table in the Java sample tables has a computed
column (name) which we use to illustrate the issues. The table definition is
as follows:
CREATE TABLE "jdba"."product"
(
"id" INTEGER NOT NULL,
"JProd" asademo.Product NOT NULL,
"name" CHAR(15) COMPUTE( JProd.name ),
PRIMARY KEY ("id")
)
♦ No direct inserts or updates You cannot insert a value directly into a
computed column. The following statement fails with a Duplicate Insert
Column error:
-- Incorrect statement
INSERT INTO PRODUCT (id, name)
VALUES( 3006, 'bad insert statement' )
Similarly, no UPDATE statement can directly update a computed
column.
♦ Listing column names You must always specify column names in
INSERT statements on tables with computed columns. The following
statement fails with a Wrong Number of Values for Insert error:
-- Incorrect statement
INSERT INTO PRODUCT
VALUES( 3007,new asademo.Product() )
Instead, you must list the columns, as follows:
INSERT INTO PRODUCT( id, JProd )
VALUES( 3007,new asademo.Product() )

587
Using computed columns with Java classes

♦ Triggers You can define triggers on a computed column, and any INSERT or UPDATE statement that affects the column fires the trigger.

When computed columns are recalculated


Recalculating computed columns occurs when:
♦ Any column is deleted, added, or renamed
♦ The table is renamed
♦ Any column’s data type or COMPUTE clause is modified
♦ A row is inserted.
♦ A row is updated.
Computed columns are not recalculated when queried. If you use a time-dependent expression, or one which depends on the state of the database in some other way, then the computed column may not give a proper result.


Configuring memory for Java


This section describes the memory requirements for running Java in the
database and how to set up your server to meet those requirements.
The Java VM requires a significant amount of cache memory. For
information on tuning the cache, see "Using the cache to improve
performance" on page 807.
Database and connection-level requirements The Java VM uses memory on both a per-database and a per-connection basis.
♦ The per database requirements are not relocatable: they cannot be paged
out to disk. They must fit into the server cache. This type of memory is
not for the server; it is for each database. When estimating cache
requirements, you must sum the requirements for each database you run
on the server.
♦ The per-connection requirements are relocatable, but only as a unit. The
requirements for one connection are either all in cache, or all in the
temporary file.

How memory is used


Java in the database requires memory for several purposes:
♦ When Java is first used while a server is running, the VM is loaded, requiring over 1.5 Mb of memory. This is part of the database-wide requirements. An additional VM is loaded for each database that uses Java.
♦ For each connection that uses Java, a new instance of the VM loads for
that connection. The new instance requires about 200K per connection.
♦ Each class definition that is used in a Java application is loaded into
memory. This is held in database-wide memory: separate copies are not
required for individual connections.
♦ Each connection requires a working set of Java variables and application
stack space (used for method arguments and so on).

Managing memory You can control memory use in the following ways:
♦ Set the overall cache size You must use a cache size sufficient to
meet all the requirements for non-relocatable memory.
The cache size is set when the server is started using the -c command-line switch.


In many cases, a cache size of 8 Mb is sufficient.


♦ Set the namespace size The Java namespace size is the maximum
size, in bytes, of the per database memory allocation.
You can set this using the JAVA_NAMESPACE_SIZE option. The
option is global, and can only be set by a user with DBA authority.
♦ Set the heap size The JAVA_HEAP_SIZE option sets the maximum
size, in bytes, of per connection memory.
This option can be set for individual connections, but as it affects the
memory available for other users it can be set only by a user with DBA
authority.

Starting and stopping the VM In addition to setting memory parameters for Java, you can unload the VM when Java is not in use using the STOP JAVA statement. Only a user with DBA authority can execute this statement. The syntax is simply:
STOP JAVA
The VM loads whenever a Java operation is carried out. If you wish to
explicitly load it in readiness for carrying out Java operations, you can do so
by executing the following statement:
START JAVA

CHAPTER 19

Data Access Using JDBC

About this chapter This chapter describes how to use JDBC to access data.
JDBC can be used both from client applications and inside the database. Java
classes using JDBC provide a more powerful alternative to SQL stored
procedures for incorporating programming logic in the database.
Contents
Topic Page
JDBC overview 592
Establishing JDBC connections 597
Using JDBC to access data 604
Using the Sybase jConnect JDBC driver 612
Creating distributed applications 616


JDBC overview
JDBC provides a SQL interface for Java applications: if you want to access
relational data from Java, you do so using JDBC calls.
Rather than a thorough guide to the JDBC database interface, this chapter
provides some simple examples to introduce JDBC and illustrates how you
can use it inside and outside the server. As well, this chapter provides more
details on the server-side use of JDBC, running inside the database server.
$ The examples illustrate the distinctive features of using JDBC in
Adaptive Server Anywhere. For more information about JDBC
programming, see any JDBC programming book.
JDBC and Adaptive Server Anywhere You can use JDBC with Adaptive Server Anywhere in the following ways:
♦ JDBC on the client Java client applications can make JDBC calls to Adaptive Server Anywhere. The connection takes place through the Sybase jConnect JDBC driver or through the JDBC-ODBC bridge.
In this chapter, the phrase client application applies both to applications
running on a user’s machine and to logic running on a middle-tier
application server.
♦ JDBC in the server Java classes installed into a database can make
JDBC calls to access and modify data in the database, using an internal
JDBC driver.
The focus in this chapter is on server-side JDBC.
JDBC resources ♦ Required software You need TCP/IP to use the Sybase jConnect
driver.
The Sybase jConnect driver may already be available, depending on
your installation of Adaptive Server Anywhere.
$ For more information about the jConnect driver and its location,
see "The jConnect driver files" on page 612.
♦ Example source code You can find source code for the examples in
this chapter in the file JDBCExamples.java in the jxmp subdirectory
under your Adaptive Server Anywhere installation directory.
$ For instructions on how to set up the Java examples, including the
JDBCExamples class, see "Setting up the Java examples" on page 550.

JDBC program structure


The following sequence of events typically occurs in JDBC applications:

1 Create a Connection object Calling a getConnection class method of the DriverManager class creates a Connection object, and establishes a connection with a database.
2 Generate a Statement object The Connection object generates a
Statement object.
3 Pass a SQL statement A SQL statement to be executed within the database environment is passed to the Statement object. If the statement is a query, this action returns a ResultSet object.
The ResultSet object contains the data returned from the SQL
statement, but exposes it one row at a time (similar to the way a cursor
works).
4 Loop over the rows of the result set The next method of the
ResultSet object performs two actions:
♦ The current row (the row in the result set exposed through the
ResultSet object) advances one row.
♦ A Boolean value (true/false) returns to indicate whether there is, in
fact, a row to advance to.
5 For each row, retrieve the values Values are retrieved for each column in the ResultSet object by identifying either the name or position of the column. You can use a get method, such as getString or getDate, to get the value from a column on the current row.
Java objects can use JDBC objects to interact with a database and get data
for their own use, for example to manipulate or for use in other queries.
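The steps above can be exercised in a self-contained way. The following sketch drives the canonical next()/getInt() loop from steps 4 and 5 against a stand-in ResultSet built with java.lang.reflect.Proxy; the stub, its data, and the class and method names are illustrative only and are not part of the JDBCExamples class:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.ResultSet;

public class LoopSketch {
    // Build a stand-in ResultSet over fixed rows so the loop can run
    // without a database connection. Only next() and getInt() are stubbed.
    static ResultSet fakeResults(final int[][] rows) {
        return (ResultSet) Proxy.newProxyInstance(
            ResultSet.class.getClassLoader(),
            new Class<?>[] { ResultSet.class },
            new InvocationHandler() {
                int current = -1; // starts before the first row, as JDBC specifies
                public Object invoke(Object p, Method m, Object[] a)
                        throws Throwable {
                    if (m.getName().equals("next")) {
                        current++;
                        return current < rows.length; // false: no row to advance to
                    }
                    if (m.getName().equals("getInt")) {
                        int col = (Integer) a[0];     // columns are numbered from 1
                        return rows[current][col - 1];
                    }
                    throw new UnsupportedOperationException(m.getName());
                }
            });
    }

    // Steps 4 and 5: advance row by row and read a column value.
    public static int maxOfColumn2(ResultSet result) throws Exception {
        int max = 0;
        while (result.next()) {
            int price = result.getInt(2);
            if (price > max) max = price;
        }
        return max;
    }

    public static void main(String[] args) throws Exception {
        int[][] rows = { {300, 9}, {301, 24}, {302, 14} };
        System.out.println(maxOfColumn2(fakeResults(rows))); // prints 24
    }
}
```

Against a real database, the same loop body runs unchanged; only the origin of the ResultSet differs (stmt.executeQuery rather than a stub).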

Server-side JDBC features


JDBC 1.2 is part of JDK 1.1. JDBC 2.0 is part of Java 2 (JDK 1.2).
Java in the database supplies a subset of the JDK version 1.1, so the internal
JDBC driver supports JDBC version 1.2.
The internal JDBC driver (asajdbc) makes some features of JDBC 2.0
available from server-side Java applications, but does not provide full JDBC
2.0 support.
The JDBC classes in the java.sql package that is part of the Java in the
database support are at level 1.2. Server-side features that are part of JDBC
2.0 are implemented in the sybase.sql.ASA package. To use JDBC 2.0
features you must cast your JDBC objects into the corresponding classes in
the sybase.sql.ASA package, rather than the java.sql package. Classes that
are declared as java.sql are restricted to JDBC 1.2 functionality only.


The classes in sybase.sql.ASA are as follows:

JDBC class                   Sybase internal driver class
java.sql.Connection          sybase.sql.ASA.SAConnection
java.sql.Statement           sybase.sql.ASA.SAStatement
java.sql.PreparedStatement   sybase.sql.ASA.SAPreparedStatement
java.sql.CallableStatement   sybase.sql.ASA.SACallableStatement
java.sql.ResultSetMetaData   sybase.sql.ASA.SAResultSetMetaData
java.sql.ResultSet           sybase.sql.ASA.SAResultSet
java.sql.DatabaseMetaData    sybase.sql.ASA.SADatabaseMetaData

The following function provides a ResultSetMetaData object for a prepared
statement without requiring a ResultSet or executing the statement. This
function is not part of the JDBC standard.
ResultSetMetaData sybase.sql.ASA.SAPreparedStatement.describe()

JDBC 2.0 restrictions
The following classes are part of the JDBC 2.0 core interface, but are not
available in the sybase.sql.ASA package:
♦ java.sql.Blob
♦ java.sql.Clob
♦ java.sql.Ref
♦ java.sql.Struct
♦ java.sql.Array
♦ java.sql.Map

The following JDBC 2.0 core functions are not available in the
sybase.sql.ASA package:


Class in sybase.sql.ASA   Missing functions

SAConnection              java.util.Map getTypeMap()
                          void setTypeMap( java.util.Map map )

SAPreparedStatement       void setRef( int pidx, java.sql.Ref r )
                          void setBlob( int pidx, java.sql.Blob b )
                          void setClob( int pidx, java.sql.Clob c )
                          void setArray( int pidx, java.sql.Array a )

SACallableStatement       Object getObject( int pidx, java.util.Map map )
                          java.sql.Ref getRef( int pidx )
                          java.sql.Blob getBlob( int pidx )
                          java.sql.Clob getClob( int pidx )
                          java.sql.Array getArray( int pidx )

SAResultSet               Object getObject( int cidx, java.util.Map map )
                          java.sql.Ref getRef( int cidx )
                          java.sql.Blob getBlob( int cidx )
                          java.sql.Clob getClob( int cidx )
                          java.sql.Array getArray( int cidx )
                          Object getObject( String cName, java.util.Map map )
                          java.sql.Ref getRef( String cName )
                          java.sql.Blob getBlob( String cName )
                          java.sql.Clob getClob( String cName )
                          java.sql.Array getArray( String cName )

Differences between client- and server-side JDBC connections


A difference between JDBC on the client and in the database server lies in
establishing a connection with the database environment.
♦ Client side In client-side JDBC, establishing a connection requires the
Sybase jConnect JDBC driver. Passing arguments to the
DriverManager.getConnection method establishes the connection. The
database environment is an external application from the perspective of
the client application.


jConnect required
Depending on the package you received Adaptive Server Anywhere
in, Sybase jConnect may or may not be included. You must have
jConnect to use JDBC from external applications. You can use
internal JDBC without jConnect.

♦ Server side When using JDBC within the database server, a
connection already exists. The value jdbc:default:connection is passed to
DriverManager.getConnection, which provides the JDBC application
with the ability to work within the current user connection. This is a
quick, efficient, and safe operation because the client application has
already passed the database security checks to establish the connection.
The user ID and password, having been provided once, do not need to be
provided again. The asajdbc driver can only connect to the database of
the current connection.
You can write JDBC classes in such a way that they can run both at the client
and at the server by employing a single conditional statement for
constructing the URL. An external connection requires the machine name
and port number, while the internal connection requires
jdbc:default:connection.
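A minimal sketch of that conditional follows; the helper class, method name, and boolean flag are illustrative, while the URL forms and the default ASA port 2638 are those used elsewhere in this chapter:

```java
public class ConnectionUrl {
    // Hypothetical helper: choose the JDBC URL based on where the code runs.
    static String buildUrl(boolean insideServer, String machineName) {
        if (insideServer) {
            // Internal connection: reuse the current database connection.
            return "jdbc:default:connection";
        }
        // External connection via jConnect: machine name plus the default ASA port.
        return "jdbc:sybase:Tds:" + machineName + ":2638";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl(true, null));          // jdbc:default:connection
        System.out.println(buildUrl(false, "localhost"));  // jdbc:sybase:Tds:localhost:2638
    }
}
```

The resulting string is what a class would pass to DriverManager.getConnection in either environment.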


Establishing JDBC connections


This section presents classes that establish a JDBC database connection from
a Java application.

Connecting from a JDBC client application using jConnect


If you wish to access database system tables (database metadata) from a
JDBC application, you must add a set of jConnect system objects to your
database. Asajdbc shares the same stored procedures for database metadata
support with jConnect. These procedures are installed to all databases by
default. A dbinit switch "-I" prevents this installation.
$ For information about adding the jConnect system objects to a
database, see "Using the Sybase jConnect JDBC driver" on page 612.
The following complete Java application is a command-line application that
connects to a running database, prints a set of information to your command
line, and terminates.
Establishing a connection is the first step any JDBC application must take
when working with database data.
$ This example illustrates an external connection, which is a regular
client/server connection. For information on how to create an internal
connection, from Java classes running inside the database server, see
"Establishing a connection from a server-side JDBC class" on page 601.

External connection example code


The following is the source code for the methods used to make a connection.
The source code can be found in the main method and the ASAConnect
method of the file JDBCExamples.java in the jxmp directory under your
Adaptive Server Anywhere installation directory:
// Import the necessary classes
import java.sql.*;             // JDBC
import com.sybase.jdbc.*;      // Sybase jConnect
import java.util.Properties;   // Properties
import sybase.sql.*;           // Sybase utilities
import asademo.*;              // Example classes

private static Connection conn;

public static void main( String args[] ) {

  conn = null;

  String machineName;
  if ( args.length != 1 ) {
    machineName = "localhost";
  } else {
    machineName = new String( args[0] );
  }

  ASAConnect( "dba", "sql", machineName );

  if ( conn != null ) {
    System.out.println( "Connection successful" );
  } else {
    System.out.println( "Connection failed" );
  }

  try {
    serializeVariable();
    serializeColumn();
    serializeColumnCastClass();
  }
  catch( Exception e ) {
    System.out.println( "Error: " + e.getMessage() );
    e.printStackTrace();
  }
}

private static void ASAConnect( String UserID,
                                String Password,
                                String Machinename ) {
  // uses global Connection variable

  String _coninfo = new String( Machinename );

  Properties _props = new Properties();
  _props.put( "user", UserID );
  _props.put( "password", Password );

  // Load the Sybase Driver
  try {
    Class.forName( "com.sybase.jdbc.SybDriver" ).newInstance();

    StringBuffer temp = new StringBuffer();

    // Use the Sybase jConnect driver...
    temp.append( "jdbc:sybase:Tds:" );
    // to connect to the supplied machine name...
    temp.append( _coninfo );
    // on the default port number for ASA...
    temp.append( ":2638" );
    // and connect.
    System.out.println( temp.toString() );

    conn = DriverManager.getConnection( temp.toString(), _props );
  }
  catch ( Exception e ) {
    System.out.println( "Error: " + e.getMessage() );
    e.printStackTrace();
  }
}

How the external connection example works


The external connection example is a Java command-line application.
Importing packages
The application requires several libraries, which are imported in the first
lines of JDBCExamples.java:
♦ The java.sql package contains the Sun Microsystems JDBC classes,
which are required for all JDBC applications. You’ll find it in the
classes.zip file in your Java subdirectory.
♦ Imported from com.sybase.jdbc, the Sybase jConnect JDBC driver is
required for all applications that connect using jConnect. You’ll find it in
the jdbcdrv.zip file in your Java subdirectory.
♦ The application uses a property list. The java.util.Properties class is
required to handle property lists. You’ll find it in the classes.zip file in
your Java subdirectory.
♦ The sybase.sql package contains utilities used for serialization. You’ll
find it in the asajdbc.zip file in your Java subdirectory.
♦ The asademo package contains example classes used in some examples.
You’ll find it in the asademo.jar file in your jxmp subdirectory.

The main method
Each Java application requires a class with a method named main, which is
the method invoked when the program starts. In this simple example,
JDBCExamples.main is the only method in the application.
The JDBCExamples.main method carries out the following tasks:
1 Processes the command-line argument, using the machine name if
supplied. By default, the machine name is localhost, which is
appropriate for the personal database server.
2 Calls the ASAConnect method to establish a connection.
3 Executes several methods that scroll data to your command line.

The ASAConnect method
The JDBCExamples.ASAConnect method carries out the following tasks:
1 Connects to the default running database using Sybase jConnect.


♦ Class.forName loads jConnect. Using the newInstance method
works around issues in some browsers.
♦ The StringBuffer statements build up a connection string from the
literal strings and the supplied machine name provided on the
command line.
♦ DriverManager.getConnection establishes a connection using the
connection string.
2 Returns control to the calling method.

Running the external connection example


This section describes how to run the external connection example.

v To create and execute the external connection example application:


1 From a system command prompt, change to the Adaptive Server
Anywhere installation directory.
2 Change to the jxmp subdirectory.
3 Ensure that your CLASSPATH environment variable includes the
current directory (.) and the imported zip files. For example, from a
command prompt (the following should be entered all on one line):
set classpath=..\java\jdbcdrv.zip;.;..\java\asajdbc.zip;asademo.jar
The default zip file name for Java is classes.zip. For classes in any file
named classes.zip, you only need the directory name in the
CLASSPATH variable, not the zip-file name itself. For classes in files
with other names, you must supply the zip file name.
You need the current directory in the CLASSPATH to run the example.
4 Ensure the database is loaded onto a database server running TCP/IP.
You can start such a server on your local machine using the following
command (from the jxmp subdirectory):
start dbeng7 -c 8M ..\asademo
5 Enter the following at the command prompt to run the example:
java JDBCExamples
If you wish to try this against a server running on another machine, you
must enter the correct name of that machine. The default is localhost,
which is an alias for the current machine name.


6 Confirm that a list of people and products appears at your command
prompt.
If the attempt to connect fails, an error message appears instead.
Confirm that you have executed all the steps as required. Check that
your CLASSPATH is correct. An incorrect CLASSPATH results in a
failure to locate a class.
$ For more information about using jConnect, see "Using the Sybase
jConnect JDBC driver" on page 612, and see the online documentation for
jConnect.

Establishing a connection from a server-side JDBC class


SQL statements in JDBC are executed using a Statement object, which is
built using the createStatement method of a Connection object. Even
classes running inside the server need to establish a connection to create a
Connection object.
Establishing a connection from a server-side JDBC class is more
straightforward than establishing an external connection. Because a user
already connected executes the server-side class, the class simply uses the
current connection.

Server-side connection example code


The following is the source code for the example. You can find the source
code in the InternalConnect method of JDBCExamples.java in the jxmp
directory under your Adaptive Server Anywhere installation directory:
public static void InternalConnect() {
  try {
    conn =
      DriverManager.getConnection("jdbc:default:connection");
    System.out.println("Hello World");
  }
  catch ( Exception e ) {
    System.out.println("Error: " + e.getMessage());
    e.printStackTrace();
  }
}

How the server-side connection example works


In this simple example, InternalConnect() is the only method used in the
application.


The application requires only one of the libraries (JDBC) imported in the
first line of the JDBCExamples.java class. The others are for external
connections. The package named java.sql contains the JDBC classes.
The InternalConnect() method carries out the following tasks:
1 Connects to the default running database using the current connection:
♦ DriverManager.getConnection establishes a connection using a
connection string of jdbc:default:connection.
2 Prints Hello World to the current standard output, which is the server
window. System.out.println carries out the printing.
3 If there is an error in the attempt to connect, an error message appears in
the server window, together with the place where the error occurred.
The try and catch instructions provide the framework for the error
handling.
4 The class terminates.

Running the server-side connection example


This section describes how to run the server-side connection example.

v To create and execute the internal connection example application:


1 If you have not already done so, compile the JDBCExamples.java file. If
you are using the JDK, you can do the following in the jxmp directory
from a command prompt:
javac JDBCExamples.java
2 Start a database server using the sample database. You can start such a
server on your local machine using the following command (from the
jxmp subdirectory):
start dbeng7 ..\asademo
The TCP/IP network protocol is not necessary in this case, since you are
not using jConnect. However, you must have at least 8 Mb of cache
available to use Java classes in the database.
3 Install the class into the sample database. Once connected to the sample
database, you can do this from Interactive SQL using the following
command:
INSTALL JAVA NEW
FROM FILE 'path\jxmp\JDBCExamples.class'
where path is the path to your installation directory.


You can also install the class using Sybase Central. While connected to
the sample database, open the Java Objects folder and double-click Add
Class. Then follow the instructions in the wizard.
4 You can now call the InternalConnect method of this class just as you
would a stored procedure:
CALL JDBCExamples>>InternalConnect()
The first time a Java class is called in a session, the internal Java virtual
machine must be loaded. This can take a few seconds.
5 Confirm that the message Hello World prints on the server screen.

Notes on JDBC connections


♦ Autocommit behavior The JDBC specification requires that, by
default, a COMMIT is performed after each data modification statement.
Currently, the server-side JDBC behavior is to commit. You can control
this behavior using a statement such as the following:
conn.setAutoCommit( false ) ;
where conn is the current connection object.
♦ Connection defaults From server-side JDBC, only the first call to
getConnection( "jdbc:default:connection" ) creates a new connection
with the default values. Subsequent calls return a wrapper of the current
connection with all connection properties unchanged. If you set
AutoCommit to OFF in your initial connection, any subsequent
getConnection calls within the same Java code return a connection with
AutoCommit set to OFF.
You may wish to ensure that closing a connection resets connection
properties to their default values, so subsequent connections are obtained
with standard JDBC values. The following type of code achieves this:
Connection conn = DriverManager.getConnection("");
boolean oldAutoCommit = conn.getAutoCommit();
try {
// do code here
}
finally {
conn.setAutoCommit( oldAutoCommit );
}
This discussion applies not only to AutoCommit, but also to other
connection properties such as TransactionIsolation and isReadOnly.
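The save-and-restore pattern shown above can be packaged as a small helper. The sketch below is illustrative (the class and method names are assumptions, and a reflective stand-in Connection replaces a real one so the pattern can run without a database):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;

public class AutoCommitGuard {
    // Run some work with AutoCommit off, then restore the previous setting,
    // so subsequent getConnection calls see the connection in its original state.
    public static void withAutoCommitOff(Connection conn, Runnable work)
            throws Exception {
        boolean old = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            work.run();
        } finally {
            conn.setAutoCommit(old);
        }
    }

    // Stand-in Connection that only tracks the AutoCommit flag.
    public static Connection fakeConnection() {
        final boolean[] auto = { true };
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] a)
                        throws Throwable {
                    if (m.getName().equals("getAutoCommit")) return auto[0];
                    if (m.getName().equals("setAutoCommit")) {
                        auto[0] = (Boolean) a[0];
                        return null;
                    }
                    throw new UnsupportedOperationException(m.getName());
                }
            });
    }

    public static void main(String[] args) throws Exception {
        Connection conn = fakeConnection();
        withAutoCommitOff(conn, new Runnable() {
            public void run() { /* data modification statements here */ }
        });
        System.out.println(conn.getAutoCommit()); // prints true: setting restored
    }
}
```

The same pattern applies to TransactionIsolation and isReadOnly: read the old value, change it, and restore it in a finally block.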


Using JDBC to access data


Java applications that hold some or all classes in the database have
significant advantages over traditional SQL stored procedures. At an
introductory level, however, it may be helpful to use the parallels with SQL
stored procedures to demonstrate the capabilities of JDBC. In the following
examples, we write Java classes that insert a row into the Department table.
As with other interfaces, SQL statements in JDBC can be either static or
dynamic. Static SQL statements are constructed in the Java application, and
sent to the database. The database server parses the statement, selects an
execution plan, and executes the statement. Together, parsing and selecting
an execution plan are referred to as preparing the statement.
If a similar statement has to be executed many times (many inserts into one
table, for example), there can be significant overhead in static SQL because
the preparation step has to be executed each time.
In contrast, a dynamic SQL statement contains placeholders. The statement,
prepared once using these placeholders, can be executed many times without
the additional expense of preparing.
In this section we use static SQL. Dynamic SQL is discussed in a later
section.
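The contrast shows up in the SQL text itself. The following illustrative fragment (not part of the JDBCExamples class) makes the point: the static form embeds new values on every call, so each variant must be prepared separately, while the placeholder form is a single constant string that is prepared once:

```java
public class StatementText {
    // Static SQL: the values are embedded, so the text differs on every call
    // and the server must prepare each variant separately.
    static String staticInsert(int id, String name) {
        return "INSERT INTO Department ( dept_id, dept_name ) VALUES ( "
            + id + ", '" + name + "' )";
    }

    // Dynamic SQL: the text is constant; values are bound to the ? placeholders
    // at execution time, so one prepare serves every execution.
    static final String DYNAMIC_INSERT =
        "INSERT INTO Department ( dept_id, dept_name ) VALUES ( ?, ? )";

    public static void main(String[] args) {
        System.out.println(staticInsert(201, "Eastern Sales"));
        System.out.println(staticInsert(202, "Western Sales"));
        System.out.println(DYNAMIC_INSERT);
    }
}
```

With the dynamic form, the PreparedStatement examples later in this chapter call setInt and setString to supply values for the placeholders.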

Preparing for the examples


This section describes how to prepare for the examples in the remainder of
the chapter.
Sample code The code fragments in this section are taken from the complete class
JDBCExamples.java, included in the jxmp directory under your installation
directory.

v To install the JDBCExamples class:


1 If you have not already done so, install the JDBCExamples.class file
into the sample database. Once connected to the sample database from
Interactive SQL, enter the following command in the SQL Statements
pane:
INSTALL JAVA NEW
FROM FILE 'path\jxmp\JDBCExamples.class'
where path is the path to your installation directory.


You can also install the class using Sybase Central. While connected to
the sample database, open the Java Objects folder and double-click Add
Java Class or JAR. Then follow the instructions in the wizard.

Inserts, updates, and deletes using JDBC


The Statement object executes static SQL statements. You execute SQL
statements such as INSERT, UPDATE, and DELETE, which do not return
result sets, using the executeUpdate method of the Statement object.
Statements such as CREATE TABLE and other data definition statements
can also be executed using executeUpdate.
The following code fragment illustrates how JDBC carries out INSERT
statements. It uses an internal connection held in the Connection object
named conn. The code for inserting values from an external application
using JDBC would need to use a different connection, but otherwise would
be unchanged.
public static void InsertFixed() {
  // returns current connection
  conn =
    DriverManager.getConnection("jdbc:default:connection");

  // Disable autocommit
  conn.setAutoCommit( false );

  Statement stmt = conn.createStatement();

  Integer IRows = new Integer( stmt.executeUpdate(
    "INSERT INTO Department ( dept_id, dept_name )"
    + " VALUES ( 201, 'Eastern Sales' )"
  ) );

  // Print the number of rows updated
  System.out.println( IRows.toString() + " row inserted" );
}

Source code available


This code fragment is part of the InsertFixed method of the
JDBCExamples class, included in the jxmp subdirectory of your
installation directory.

Notes
♦ The setAutoCommit method turns off the AutoCommit behavior, so
changes are only committed if you execute an explicit COMMIT
instruction.
♦ The executeUpdate method returns an integer, which reflects the
number of rows affected by the operation. In this case, a successful
INSERT would return a value of one (1).

♦ The integer return type is converted to an Integer object. The Integer
class is a wrapper around the basic int data type, providing some useful
methods such as toString().
♦ The Integer IRows is converted to a string to be printed. The output
goes to the server window.

v To run the JDBC Insert example:


1 Using Interactive SQL, connect to the sample database as user ID dba.
2 Ensure the JDBCExamples class has been installed. It is installed
together with the other Java examples classes.
$ For instructions on installing the Java examples classes, see
"Setting up the Java examples" on page 550.
3 Call the method as follows:
CALL JDBCExamples>>InsertFixed()
4 Confirm that a row has been added to the department table.
SELECT *
FROM department
The row with ID 201 is not committed. You can execute a ROLLBACK
statement to remove the row.
In this example, you have seen how to create a very simple JDBC class.
Subsequent examples expand on this.

Passing arguments to Java methods


We can expand the InsertFixed method to illustrate how arguments are
passed to Java methods.
The following method uses arguments passed in the call to the method as the
values to insert:
public static void InsertArguments(
      String id, String name ) {
  try {
    conn = DriverManager.getConnection(
      "jdbc:default:connection" );

    String sqlStr = "INSERT INTO Department "
      + " ( dept_id, dept_name )"
      + " VALUES (" + id + ", '" + name + "')";

    // Execute the statement
    Statement stmt = conn.createStatement();
    Integer IRows = new Integer( stmt.executeUpdate( sqlStr ) );

    // Print the number of rows updated
    System.out.println( IRows.toString() + " row inserted" );
  }
  catch ( Exception e ) {
    System.out.println("Error: " + e.getMessage());
    e.printStackTrace();
  }
}

Notes
♦ The two arguments are the department id (an integer) and the
department name (a string). Here, both arguments pass to the method as
strings, because they are part of the SQL statement string.
♦ The INSERT is a static statement and takes no parameters other than the
SQL itself.
♦ If you supply the wrong number or type of arguments, you receive the
Procedure Not Found error.

v To use a Java method with arguments:


1 If you have not already installed the JDBCExamples.class file into the
sample database, do so.
2 Connect to the sample database from Interactive SQL, and enter the
following command:
call JDBCExamples>>InsertArguments( '203', 'Northern Sales' )
3 Verify that an additional row has been added to the Department table:
SELECT *
FROM Department
4 Roll back the changes to leave the database unchanged:
ROLLBACK

Queries using JDBC


The Statement object executes static queries, as well as statements that do
not return result sets. For queries, you use the executeQuery method of the
Statement object. This returns the result set in a ResultSet object.


The following code fragment illustrates how queries can be handled within
JDBC. The code fragment retrieves the id and unit price of each row of the
product table, and returns the maximum price found. This example is
available as the Query method of the JDBCExamples class.
The example assumes an internal or external connection has been obtained
and is held in the Connection object named conn.
public static int Query() {
  int max_price = 0;
  try {
    conn = DriverManager.getConnection(
      "jdbc:default:connection" );

    // Build the query
    String sqlStr = "SELECT id, unit_price "
      + "FROM product" ;

    // Execute the statement
    Statement stmt = conn.createStatement();
    ResultSet result = stmt.executeQuery( sqlStr );

    while( result.next() ) {
      int price = result.getInt(2);
      System.out.println( "Price is " + price );
      if( price > max_price ) {
        max_price = price ;
      }
    }
  }
  catch( Exception e ) {
    System.out.println("Error: " + e.getMessage());
    e.printStackTrace();
  }
  return max_price;
}

Running the example
Once you have installed the JDBCExamples class into the sample database,
you can execute this method using the following statement in Interactive
SQL:
select JDBCExamples>>Query()

Notes
♦ The query selects the id and unit price of each row of the product
table. These results are returned into the ResultSet object named
result.
♦ There is a loop over each of the rows of the result set. The loop uses the
next method.


♦ For each row, the value of each column is retrieved into an integer
variable using the getInt method. ResultSet also has methods for other
data types, such as getString, getDate, and getBinaryString.
The argument for the getInt method is an index number for the column,
starting from 1.
Data type conversion from SQL to Java is carried out according to the
information in "SQL-to-Java data type conversion" on page 295 of the
book ASA Reference.
♦ Adaptive Server Anywhere supports bidirectional scrolling cursors.
However, JDBC provides only the next method, which corresponds to
scrolling forward through the result set.
♦ The method returns the value of max_price to the calling environment,
and Interactive SQL displays it in the Results pane.

Using prepared statements for more efficient access


If you use the Statement interface, each statement you send to the
database is parsed, an access plan is generated, and the statement is
executed. The steps prior to actual execution are called preparing the
statement.
You can achieve performance benefits if you use the PreparedStatement
interface. This allows you to prepare a statement using placeholders, and
then assign values to the placeholders when executing the statement.
Using prepared statements is particularly useful when carrying out many
similar actions, such as inserting many rows.
$ For a general introduction to prepared statements, see "Preparing
statements" on page 266.
Example The following example illustrates how to use the PreparedStatement
interface, although inserting a single row is not a good use of prepared
statements.
The following method of the JDBCExamples class carries out a prepared
statement:
public static void InsertPrepared( int id, String name ) {
  try {
    conn = DriverManager.getConnection(
      "jdbc:default:connection");

    // Build the INSERT statement
    // ? is a placeholder character
    String sqlStr = "INSERT INTO Department "
      + "( dept_id, dept_name ) "
      + "VALUES ( ? , ? )" ;

    // Prepare the statement
    PreparedStatement stmt = conn.prepareStatement( sqlStr );

    stmt.setInt(1, id);
    stmt.setString(2, name );
    Integer IRows = new Integer(
      stmt.executeUpdate() );

    // Print the number of rows updated
    System.out.println( IRows.toString() + " row inserted" );
  }
  catch ( Exception e ) {
    System.out.println("Error: " + e.getMessage());
    e.printStackTrace();
  }
}

Running the example
Once you have installed the JDBCExamples class into the sample database,
you can execute this example by entering the following statement:
call JDBCExamples>>InsertPrepared( 202, 'Eastern Sales' )
The string argument is enclosed in single quotes, which is appropriate for
SQL. If you invoked this method from a Java application, use double quotes
to delimit the string.

Inserting and retrieving objects


As an interface to relational databases, JDBC is designed to retrieve and
manipulate traditional SQL data types. Adaptive Server Anywhere also
provides abstract data types in the form of Java classes. The way you access
these Java classes using JDBC depends on whether you want to insert or
retrieve the objects.
$ For more information on getting and setting entire objects, see
"Creating distributed applications" on page 616.

Retrieving objects
You can retrieve objects and their fields and methods by:


♦ Accessing methods and fields Java methods and fields can be
included in the select-list of a query. A method or field then appears as a
column in the result set, and can be accessed using one of the standard
ResultSet methods, such as getInt, or getString.
♦ Retrieving an object If you include a column with a Java class data
type in a query select list, you can use the ResultSet getObject method
to retrieve the object into a Java class. You can then access the methods
and fields of that object within the Java class.

Inserting objects
From a server-side Java class, you can use the JDBC setObject method to
insert an object into a column with Java class data type.
You can insert objects using a prepared statement. For example, the
following code fragment inserts an object of type MyJavaClass into a column
of table T:
java.sql.PreparedStatement ps =
conn.prepareStatement("insert T values( ? )" );
ps.setObject( 1, new MyJavaClass() );
ps.executeUpdate();
An alternative is to set up a SQL variable that holds the object and then to
insert the SQL variable into the table.

Miscellaneous JDBC notes


♦ Access permissions Like all Java classes in the database, classes
containing JDBC statements can be accessed by any user. There is no
equivalent of the GRANT EXECUTE statement that grants permission
to execute procedures, and there is no need to qualify the name of a class
with the name of its owner.
♦ Execution permissions Java classes are executed with the
permissions of the connection executing them. This behavior is different
from that of stored procedures, which execute with the permissions of the
owner.


Using the Sybase jConnect JDBC driver


If you wish to use JDBC from a client application or applet, you must have
Sybase jConnect to connect to Adaptive Server Anywhere databases.
Depending on the package you received Adaptive Server Anywhere in,
Sybase jConnect may or may not be included. You must have jConnect in
order to use JDBC from client applications. You can use server-side JDBC
without jConnect.
$ For a full description of jConnect and its use with Adaptive Server
Anywhere, see the jConnect documentation available in the online Books or
from the jConnect Web site.

Versions of jConnect supplied with Adaptive Server Anywhere


Adaptive Server Anywhere provides the following versions of the Sybase
jConnect JDBC driver:
♦ Full version If you choose to install jConnect, a jConnect subdirectory
is added to your Adaptive Server Anywhere installation. This holds a
directory tree with all jConnect files.
♦ Zip file The Remote Data Access features, and the Java debugger, both
use jConnect to connect to the database. A zip file of the basic jConnect
classes is provided to enable jConnect use even without the full
development version of the driver.

The jConnect driver files


The Sybase jConnect driver is installed into a set of directories under the
jConnect subdirectory of your Adaptive Server Anywhere installation. If you
have not installed jConnect, you can use the jdbcdrv.zip file installed into the
Java subdirectory.
Classpath setting for jConnect
For your application to use jConnect, the jConnect classes must be in your
CLASSPATH environment variable at compile time and run time, so the
Java compiler and Java runtime can locate the necessary files.
For example, the following command adds the jConnect driver classes to an
existing CLASSPATH environment variable, where path is your Adaptive
Server Anywhere installation directory:
set classpath=%classpath%;path\jConnect\classes
The following alternative command adds the jdbcdrv.zip file to your
CLASSPATH.
Chapter 19 Data Access Using JDBC

set classpath=%classpath%;path\java\jdbcdrv.zip

Importing the jConnect classes
The classes in jConnect are all in the com.sybase package. The client
application needs to access classes in com.sybase.jdbc. For your application
to use jConnect, you must import these classes at the beginning of each
source file:
import com.sybase.jdbc.*;

Installing jConnect system objects into a database


If you wish to use jConnect to access system table information (database
metadata), you must add the jConnect system objects to your database.
By default, the jConnect system objects are added to a database for any
database created using Version 6, and to any database upgraded to Version 6.
You can choose to add the jConnect objects to the database either when
creating or upgrading, or at a later time.
You can install the jConnect system objects from Sybase Central or from
Interactive SQL.

v To add the jConnect system objects to a Version 6 database from
Sybase Central:
1 Connect to the database from Sybase Central as a user with DBA
authority.
2 In the left pane of the main Sybase Central viewer, right-click the
database icon and choose Re-Install jConnect Meta-data Support from
the popup menu.

v To add the jConnect system objects to a Version 6 database from
Interactive SQL:
♦ Connect to the database from Interactive SQL as a user with DBA
authority, and enter the following command in the SQL Statements
pane:
read path\scripts\jcatalog.sql
where path is your Adaptive Server Anywhere installation directory.


Tip
You can also use a command prompt to add the jConnect system objects
to a Version 6 database. At the command prompt, type:
dbisql -c "uid=user;pwd=pwd" path\scripts\jcatalog.sql
where user and pwd identify a user with DBA authority, and path is your
Adaptive Server Anywhere installation directory.

Loading the driver


Before you can use jConnect in your application, load the driver by entering
the following statement:
Class.forName("com.sybase.jdbc.SybDriver").newInstance();
Using the newInstance method works around issues in some browsers.

Supplying a URL for the server


To connect to a database via jConnect, you need to supply a Uniform
Resource Locator (URL) for the database. An example given in the section
"Connecting from a JDBC client application using jConnect" on page 597 is
as follows:
StringBuffer temp = new StringBuffer();
// Use the Sybase jConnect driver...
temp.append("jdbc:sybase:Tds:");
// to connect to the supplied machine name...
temp.append(_coninfo);
// on the default port number for ASA...
temp.append(":2638");
// and connect.
System.out.println(temp.toString());
conn = DriverManager.getConnection( temp.toString(), _props );
The URL is put together in the following way:
jdbc:sybase:Tds:machine-name:port-number
The individual components are:
♦ jdbc:sybase:Tds The Sybase jConnect JDBC driver, using the TDS
application protocol.


♦ machine-name The IP address or name of the machine on which the
server is running. If you are establishing a same-machine connection,
you can use localhost, which means the current machine.
♦ port-number The port number on which the database server listens.
The port number assigned to Adaptive Server Anywhere is 2638. Use
that number unless there are specific reasons not to do so.
The connection string must be less than 253 characters in length.
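Putting these pieces together, the URL assembly can be expressed as a small helper method. This is an illustrative sketch only; the buildUrl method is not part of jConnect, and the length check simply reflects the 253-character limit noted above.

```java
// Sketch of a connection URL builder for jConnect.
// buildUrl is a hypothetical helper, not part of the jConnect API.
public class JConnectUrl {
    static String buildUrl( String machine, int port, String dbn ) {
        StringBuffer url = new StringBuffer( "jdbc:sybase:Tds:" );
        url.append( machine ).append( ":" ).append( port );
        if ( dbn != null ) {
            // Optional ?ServiceName=DBN suffix selects a database
            url.append( "?ServiceName=" ).append( dbn );
        }
        String s = url.toString();
        if ( s.length() >= 253 ) {
            // jConnect connection strings must be under 253 characters
            throw new IllegalArgumentException( "URL too long" );
        }
        return s;
    }
    public static void main( String[] args ) {
        System.out.println( buildUrl( "localhost", 2638, null ) );
        System.out.println( buildUrl( "localhost", 2638, "asademo" ) );
    }
}
```

Running the sketch prints a plain server URL on the first line and a database-specific URL on the second.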

Specifying a database on a server


Each Adaptive Server Anywhere server may have one or more databases
loaded at a time. The URL as described above specifies a server, but does not
specify a database. The connection attempt is made to the default database on
the server.
You can specify a particular database by providing an extended form of the
URL in one of the following ways.
Using the ServiceName parameter
jdbc:sybase:Tds:machine-name:port-number?ServiceName=DBN
The question mark followed by a series of assignments is a standard way of
providing arguments to a URL. The case of servicename is not significant,
and there must be no spaces around the = sign. The DBN parameter is the
database name.

Using the RemotePWD parameter
A more general method allows you to provide additional connection
parameters such as the database name, or a database file, using the
RemotePWD field. You set RemotePWD as a Properties field using the
setRemotePassword() method.
Here is sample code that illustrates how to use the field.
sybDrvr = (SybDriver)Class.forName(
    "com.sybase.jdbc2.jdbc.SybDriver" ).newInstance();
props = new Properties();
props.put( "User", "DBA" );
props.put( "Password", "SQL" );
sybDrvr.setRemotePassword( null, "dbf=asademo.db", props );
Connection con = DriverManager.getConnection(
    "jdbc:sybase:Tds:localhost", props );
Using the database file parameter DBF, you can start a database on a server
using jConnect. By default, the database is started with autostop=YES. If you
specify a DBF or DBN of utility_db, then the utility database will
automatically be started.
$ For information on the utility database, see "Using the utility database"
on page 794.


Creating distributed applications


In a distributed application, parts of the application logic run on one
machine, and parts run on another machine. With Adaptive Server
Anywhere, you can create distributed Java applications, where part of the
logic runs in the database server, and part on the client machine.
Adaptive Server Anywhere is capable of exchanging Java objects with an
external, Java client.
Having the client application retrieve a Java object from a database is the key
task in a distributed application. This section describes how to accomplish
that task.
Related tasks
In other parts of this chapter, we described several tasks related to
retrieving objects, which should not be confused with retrieving the
object itself. For example:
♦ "Querying Java objects" on page 570 describes how to retrieve an object
into a SQL variable. This does not solve the problem of getting the
object into your Java application.
♦ "Querying Java objects" on page 570 also describes how to retrieve the
public fields and the return value of Java methods. Again, this is distinct
from retrieving an object into a Java application.
♦ "Inserting and retrieving objects" on page 610 describes how to retrieve
objects into server-side Java classes. Again, this is not the same as
retrieving them into a client application.

Requirements for distributed applications
There are several tasks in building a distributed application.

v To build a distributed application:
1 Any class running in the server must implement the Serializable
interface. This is very simple.
2 The client-side application must import the class, so the object can be
reconstructed on the client side.
3 Use the sybase.sql.ASAUtils.toByteArray method on the server side to
serialize the object. This is only necessary for Adaptive Server
Anywhere version 6.0.1 and earlier.
4 Use the sybase.sql.ASAUtils.fromByteArray method on the client side
to reconstruct the object. This is only necessary for Adaptive Server
Anywhere version 6.0.1 and earlier.
These tasks are described in the following sections.
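To picture what the serialization steps involve, here is a plain-JDK sketch of byte-array conversion helpers. These are hypothetical stand-ins written for illustration; they are not the actual sybase.sql.ASAUtils implementation, only an approximation of what such helpers must do internally.

```java
import java.io.*;

// Hypothetical stand-ins sketching what helpers such as
// sybase.sql.ASAUtils.toByteArray / fromByteArray must do internally.
public class SerialHelpers {
    static byte[] toByteArray( Object obj ) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream( bytes );
        out.writeObject( obj );   // serialize the object graph
        out.close();
        return bytes.toByteArray();
    }
    static Object fromByteArray( byte[] raw )
            throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream( raw ) );
        return in.readObject();   // reconstitute the object
    }
    public static void main( String[] args ) throws Exception {
        // Round-trip a simple Serializable object (a String)
        byte[] raw = toByteArray( "Hat" );
        System.out.println( fromByteArray( raw ) );
    }
}
```

The round trip reproduces the original object, which is the behavior the server-side and client-side conversion steps rely on.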


Implementing the Serializable interface


Objects pass from the server to a client application in serialized form. For an
object to be sent to a client application, it must implement the Serializable
interface. Fortunately, this is a very simple task.

v To implement the Serializable interface:


♦ Add the words implements java.io.Serializable to your class definition.
For example, the Product class in the jxmp\asademo subdirectory
implements the Serializable interface by virtue of the following
declaration:
public class Product implements java.io.Serializable
Implementing the Serializable interface amounts to simply declaring that
your class can be serialized.
The Serializable interface contains no methods and no variables. Serializing
an object converts it into a byte stream which allows it to be saved to disk or
sent to another Java application where it can be reconstituted, or
deserialized.
A serialized Java object in a database server, sent to a client application and
deserialized, is identical in every way to its original state. Some variables in
an object, however, either don’t need to be or, for security reasons, should
not be serialized. Those variables are declared using the keyword transient,
as in the following variable declaration.
transient String password;
When an object with this variable is deserialized, the variable always
contains its default value, null.
Custom serialization can be accomplished by adding writeObject() and
readObject() methods to your class.
$ For more information about serialization, see the documentation for
Sun Microsystems’ Java Development Kit (JDK).
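The effect of the transient keyword can be demonstrated in a short, self-contained sketch. The ContactRecord class below is invented for illustration; it is not part of the asademo package or the sample database.

```java
import java.io.*;

// Illustrative class (not part of asademo): the password field is
// transient, so it is dropped during serialization.
class ContactRecord implements java.io.Serializable {
    String name;
    transient String password;
    ContactRecord( String name, String password ) {
        this.name = name;
        this.password = password;
    }
}

public class TransientDemo {
    public static void main( String[] args ) throws Exception {
        ContactRecord before = new ContactRecord( "Ann", "secret" );
        // Serialize to a byte stream...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream( bytes );
        out.writeObject( before );
        out.close();
        // ...then deserialize it again
        ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream( bytes.toByteArray() ) );
        ContactRecord after = (ContactRecord)in.readObject();
        System.out.println( "name=" + after.name );
        System.out.println( "password=" + after.password );
    }
}
```

The non-transient field survives the round trip, while the transient field comes back as its default value, null.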

Importing the class on the client side


On the client side, any class that retrieves an object has to have access to the
proper class definition to use the object. To use the Product class, which is
part of the asademo package, you must include the following line in your
application:
import asademo.*


The asademo.jar file must be included in your CLASSPATH for this
package to be located.

A sample distributed application


The JDBCExamples.java class contains three methods that illustrate
distributed Java computing. They are all called from the main method,
which is invoked in the connection example described in "Connecting from a
JDBC client application using jConnect" on page 597, and is an example of a
distributed application.
Here is the getObjectColumn method from the JDBCExamples class.
private static void getObjectColumn() throws Exception {
    // Return a result set from a column containing
    // Java objects
    asademo.ContactInfo ci;
    String name;
    String sComment;

    if ( conn != null ) {
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT JContactInfo FROM jdba.contact"
        );
        while ( rs.next() ) {
            ci = ( asademo.ContactInfo )rs.getObject(1);
            System.out.println( "\n\tStreet: " + ci.street +
                "\n\tCity: " + ci.city +
                "\n\tState: " + ci.state +
                "\n\tPhone: " + ci.phone +
                "\n" );
        }
    }
}
The getObject method is used in the same way as in the internal Java case.

Older method

getObject and setObject recommended
The getObject and setObject methods remove the need for the explicit
serialization and deserialization that was needed in earlier versions of
the software. The current section describes that older method, for users
who are maintaining code that uses these techniques.


In this section we describe how one of these examples works. You can study
the code for the other examples.
Serializing and deserializing query results
Here is the serializeColumn method of an old version of the
JDBCExamples class.

private static void serializeColumn() throws Exception {
    Statement stmt;
    ResultSet rs;
    byte arrayb[];
    asademo.ContactInfo ci;
    String name;

    if ( conn != null ) {
        stmt = conn.createStatement();
        rs = stmt.executeQuery(
            "SELECT " +
            "sybase.sql.ASAUtils.toByteArray( JName.getName() ) " +
            "AS Name, " +
            "sybase.sql.ASAUtils.toByteArray( " +
            "jdba.contact.JContactInfo ) " +
            "FROM jdba.contact" );

        while ( rs.next() ) {
            arrayb = rs.getBytes( "Name" );
            name = (String)sybase.sql.ASAUtils.fromByteArray( arrayb );
            arrayb = rs.getBytes( 2 );
            ci = (asademo.ContactInfo)
                sybase.sql.ASAUtils.fromByteArray( arrayb );
            System.out.println( "Name: " + name +
                "\n\tStreet: " + ci.street +
                "\n\tCity: " + ci.city +
                "\n\tState: " + ci.state +
                "\n\tPhone: " + ci.phone +
                "\n" );
        }
        System.out.println( "\n\n" );
    }
}
Here is how the method works:
1 A connection already exists when the method is called. The connection
object is checked, and as long as it exists, the code executes.
2 A SQL query is constructed and executed. The query is as follows:
SELECT
    sybase.sql.ASAUtils.toByteArray( JName.getName() ) AS Name,
    sybase.sql.ASAUtils.toByteArray( jdba.contact.JContactInfo )
FROM jdba.contact
This statement queries the jdba.contact table. It gets information from
the JName and the JContactInfo columns. Instead of just retrieving the
column itself, or a method of the column, the
sybase.sql.ASAUtils.toByteArray function converts the values to a
byte stream so that they can be serialized.
3 The client loops over the rows of the result set. For each row, the value
of each column is deserialized into an object.
4 The output (System.out.println) shows that the fields and methods of
the object can be used as they could in their original state.

Other features of distributed applications


There are two other methods in JDBCExamples.java that use distributed
computing:
♦ serializeVariable This method creates a native Java object referenced
by a SQL variable on the database server and passes it back to the client
application.
♦ serializeColumnCastClass This method is like the serializeColumn
method, but demonstrates how to reconstruct subclasses. The column
that is queried (JProd from the product table) is of data type
asademo.Product. Some of the rows are asademo.Hat, which is a
subclass of the Product class. The proper class is reconstructed on the
client side.

CHAPTER 20

Debugging Logic in the Database

About this chapter
This chapter describes how to use the Sybase debugger to assist in
developing Java classes, SQL stored procedures, triggers, and event handlers.

Contents
Topic                                          Page
Introduction to debugging in the database       622
Tutorial 1: Connecting to a database            624
Tutorial 2: Debugging a stored procedure        627
Tutorial 3: Debugging a Java class              630
Common debugger tasks                           635
Writing debugger scripts                        637

621
Introduction to debugging in the database

Introduction to debugging in the database


With Java in the database, you can add complex classes into your database.
SQL stored procedures and triggers are other ways of adding logic to your
database. The debugger lets you test these classes and procedures, and detect
and fix problems with them.
This chapter describes how to set up and use the debugger.

Debugger features
You can carry out many tasks with the Sybase debugger, including the
following:
♦ Debug Java classes You can debug Java classes that are stored in the
database.
♦ Debug procedures and triggers You can debug SQL stored
procedures and triggers.
♦ Debug event handlers Event handlers are an extension of SQL stored
procedures. The material in this chapter about debugging stored
procedures applies equally to debugging event handlers.
♦ Browse classes and stored procedures You can browse through the
source code of installed classes and SQL procedures.
♦ Trace execution Step line by line through the code of a Java class or
stored procedure running in the database. You can also look up and
down the stack of functions that have been called.
♦ Set breakpoints Run the code until you hit a breakpoint, and stop at
that point in the code.
♦ Set break conditions Breakpoints are set on lines of code, but you can
also specify conditions under which the code is to break. For example, you can
stop at a line the tenth time it is executed, or only if a variable has a
particular value. You can also stop whenever a particular exception is
thrown in a Java application.
♦ Inspect and modify local variables When execution is stopped at a
breakpoint, you can inspect the values of local variables and alter their
values.
♦ Inspect and break on expressions When execution is stopped at a
breakpoint, you can inspect the value of a wide variety of expressions.
♦ Inspect and modify row variables Row variables are the OLD and
NEW values of row-level triggers. You can inspect and set these values.

♦ Execute queries When execution is stopped at a breakpoint in a SQL


procedure, you can execute queries. This permits you to look at
intermediate results held in temporary tables, as well as checking values
in base tables.

Requirements for using the debugger


The debugger runs on a client machine, and connects to the database using
the Sybase jConnect JDBC driver.
You need the following in order to use the debugger:
♦ Permissions In order to use the debugger, you must either have DBA
authority or be granted permissions in the SA_DEBUG group. This
group is added to all databases when the database is created.
♦ Source code The source code for your application must be available
to the debugger. For Java classes, the source code is held in a directory
on your hard disk. For stored procedures, the source code is held in the
database.
♦ Compilation options To debug Java classes, they must be compiled
so that they contain debugging information. For example, if you are
using the Sun Microsystems JDK compiler javac.exe, they must be
compiled using the -g command-line option.
♦ jConnect Support The database that you want to connect to must
have jConnect support installed.

Debugger information
This chapter contains three tutorials to help you get started using the
debugger. Task-based help and information about each window is available
in the debugger online Help.
$ For more information on the debugger windows, see "Debugger
windows" on page 1053.


Tutorial 1: Connecting to a database


This tutorial shows you how to start the debugger, connect to a database, and
attach to a connection for debugging.

Start the Java debugger


The debugger runs on your client machine.
You can start the debugger from the command line or from Sybase Central.

v To start the Java debugger (Sybase Central):


1 For the purposes of this tutorial, close down the sample database and all
other Adaptive Server Anywhere applications.
2 Start Sybase Central. Choose Tools➤Adaptive Server Anywhere
7➤Open Database Object Debugger.

v To start the Java debugger (command line):


1 For the purposes of this tutorial, close down the sample database and all
other Adaptive Server Anywhere applications.
2 From a system command prompt, change directory to the directory
holding the Adaptive Server Anywhere software for your operating
system.
3 Enter the following command to start the debugger:
dbprdbg
The Connect window appears:


You can now connect to a database from the debugger.

Connect to the sample database


To connect to a database, you need to supply a user ID and a password. This
section describes by example how to connect.

v To connect to the sample database from the debugger:


1 Start a personal database server on the sample database. You can do this
from the Start menu or by entering the following command at a system
prompt:
dbeng7 -c 8M path\asademo.db
where path is your Adaptive Server Anywhere installation directory.
2 In the debugger Login window, enter the following information:
♦ User Enter the user ID DBA.
♦ Password Enter the password SQL.
♦ User to debug Enter an asterisk (*) to indicate that you wish to
debug connections for all users.


♦ Host Leave at localhost.
♦ Port Leave at 2638.
3 Click OK to connect to the database. The debugger interface appears,
displaying a list of stored procedures and triggers, and a list of Java
classes installed in the database.

$ For more information on valid connection parameters for the
debugger, see "Supplying a URL for the server" on page 614.


Tutorial 2: Debugging a stored procedure


This tutorial describes a sample session for debugging a stored procedure. It
is a continuation of "Tutorial 1: Connecting to a database" on page 624.
In this tutorial, you call the stored procedure sp_customer_products, which is
part of the sample database.
The sp_customer_products procedure is defined as follows:
CREATE PROCEDURE dba.sp_customer_products(
INOUT customer_id INTEGER )
RESULT(id integer,quantity_ordered integer)
BEGIN
SELECT product.id,sum(sales_order_items.quantity)
FROM product,sales_order_items,sales_order
WHERE sales_order.cust_id = customer_id
AND sales_order.id = sales_order_items.id
AND sales_order_items.prod_id = product.id
GROUP BY product.id
END
It takes a customer ID as input, and returns as a result set a list of product
IDs and the number ordered by that customer.

Display stored procedure source code in the debugger


The stored procedure source code is stored in the database. You can display
the stored procedure source code in the Source window.

v To display stored procedure source code in the debugger:


♦ From the debugger interface, double-click the sp_customer_products
stored procedure. The source code for the procedure appears in the
Source window.


Set a breakpoint
You can set a breakpoint in the body of the sp_customer_products
procedure. When the procedure is executed, execution stops at the
breakpoint.

v To set a breakpoint in a stored procedure:


1 In the Source Code window, locate the line with the query:
select product.id,…
2 Click the green indicator to the left of the line, until it is red.
Repeatedly clicking the indicator toggles its status.

Run the procedure


You can call the stored procedure from Interactive SQL, and see its
execution interrupted at the breakpoint.

v To call the procedure from Interactive SQL:


1 Start Interactive SQL. Connect to the sample database with a user ID of
DBA and a password of SQL.
The connection appears in the debugger Connections window list.
2 Enter the following command in Interactive SQL to call the procedure
using the customer with ID 122:
CALL sp_customer_products( 122 )


The query does not complete. Instead, execution is stopped in the
debugger at the breakpoint. In Interactive SQL, the Interrupt the SQL
Statement button is active. In the debugger Source window, the red
arrow indicates the current line.
3 Step to the next line by choosing Run➤Step Over. You can also press
F7.
$ For longer procedures, you can use other methods of stepping
through code. For more examples, see "Tutorial 3: Debugging a Java
class" on page 630.
For the next lesson, leave the debugger stopped at the SELECT line.

Inspect and modify variables


You can inspect the values of variables in the debugger.
Inspecting local variables
You can inspect the values of local variables in a procedure as you step
through the code, to better understand what is happening.

v To inspect and modify the value of a variable:


1 If the Local Variables window is not displayed, choose Window➤Local
Variables to display it.
The Local Variables window shows that there are two local variables;
the stored procedure itself (which does not have a return value, and so is
listed as NULL) and the customer_id passed in to the procedure.
2 In the Local Variables window, double-click the Value column entry for
customer_id, and type in 125 to change the customer ID value used in
the query.
3 In the Source window, press F5 to complete the execution of the query
and finish the tutorial.
The Interactive SQL Results window displays the list of product IDs and
quantities for customer 125:

Id     sum(sales_order....
301    60
700    48
...    ...

Inspecting trigger row variables
In addition to local variables, you can display other variables, such as
row-level trigger OLD and NEW values, in the debugger Row Variables window.


Tutorial 3: Debugging a Java class


This tutorial describes a sample session for debugging a Java class. It
is a continuation of "Tutorial 1: Connecting to a database" on page 624.
In this tutorial, you call JDBCExamples.Query() from Interactive SQL,
interrupt the execution in the debugger, and trace through the source code for
this method.
The JDBCExamples.Query() method executes the following query against
the sample database:
SELECT id, unit_price
FROM product
It then loops through all the rows of the result set, and returns the one with
the highest unit price.
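The row-scanning logic that Query() applies can be sketched without a database connection. In this sketch the prices array stands in for the fetched result set; the values are made up and do not come from the product table, although the result, 24, matches the value the tutorial reports.

```java
// Sketch of the max-finding loop in JDBCExamples.Query(), using an
// array in place of a JDBC result set. The prices are sample values.
public class MaxPriceDemo {
    public static void main( String[] args ) {
        int[] prices = { 10, 15, 24, 9 };   // made-up unit prices
        int max_price = 0;
        for ( int i = 0; i < prices.length; i++ ) {
            // The same comparison Query() applies to each fetched row
            if ( prices[i] > max_price ) {
                max_price = prices[i];
            }
        }
        System.out.println( max_price );
    }
}
```

Stepping through this loop in any debugger mirrors what you will do with the real Query() method below.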
Compiling Java classes for debugging
You must compile classes with the javac -g option in order to debug them.
The sample classes are compiled for debugging.

Prepare the database


If you intend to run Java examples, such as "Tutorial 3: Debugging a Java
class" on page 630, you need to install the Java example classes into the
sample database.
$ For more information about how to install the Java examples, see
"Setting up the Java examples" on page 550.
$ For more information about the JDBCExamples class and its methods,
see "Data Access Using JDBC" on page 591.

Display Java source code in the debugger


The debugger looks in a set of locations for source code files (with .java
extension). You need to add the jxmp subdirectory of your installation
directory to the list of locations, so that the code for the class currently being
executed in the database is available to the debugger.

v To display Java source code in the debugger:


1 From the debugger interface, select File➤Edit Source Path. The Source
path window appears.


2 Enter the path to the jxmp subdirectory of your Adaptive Server
Anywhere installation directory. For example, if you installed Adaptive
Server Anywhere in c:\asa7, you would enter the following:
c:\asa7\jxmp
3 Click Apply, and close the window.
4 In the Classes window, double-click JDBCExamples. The source code
for the JDBCExamples class appears in the Source window.

Notes on locating Java source code
The Source Path window holds a list of directories in which the debugger
looks for Java source code. Java rules for finding packages apply. The
debugger also searches the current CLASSPATH for source code.
For example, if you add the paths c:\asa7\jxmp and c:\Java\src to the
source path, and the debugger is trying to find a class called
asademo.Product, it looks for the source code in
c:\asa7\jxmp\asademo\Product.java and c:\Java\src\asademo\Product.java.


Set a breakpoint
You can set a breakpoint at the beginning of the Query() method. When the
method is invoked, execution stops at the breakpoint.

v To set a breakpoint in a Java class:


1 In the Source Code window, page down until you see the beginning of
the Query() method. This method is near the end of the class, and starts
with the following line:
public static int Query() {
2 Click the green indicator to the left of the first line of the method, until it
is red. The first line of the method is:
int max_price = 0;
Repeatedly clicking the indicator toggles its status.

Run the method


You can invoke the Query() method from Interactive SQL, and see its
execution interrupted at the breakpoint.

v To invoke the method from Interactive SQL:


1 Start Interactive SQL. Connect to the sample database with a user ID of
DBA and a password of SQL.
The connection appears in the debugger Connections window list.
2 Enter the following command in Interactive SQL to invoke the method:
SELECT JDBCExamples.Query()
The query does not complete. Instead, execution is stopped in the
debugger at the breakpoint. In Interactive SQL, the Stop button is active.
In the debugger Source window, the red arrow indicates the current line.
You can now step through source code and carry out debugging activities in
the debugger.

Step through source code


In this section we illustrate some of the ways you can step through code in
the debugger.


Following the previous section, the debugger should have stopped execution
of JDBCExamples.Query() at the first statement in the method.
Examples
Here are some example steps you can try:
1 Step to the next line Choose Run➤Step Over, or press F7 to step to
the next line in the current method. Try this two or three times.
2 Run to a selected line Select the following line using the mouse, and
choose Run➤Run To Selected, or press F6 to run to that line and break:
max_price = price;
The red arrow moves to the line.
3 Set a breakpoint and execute to it Select the following line (line
292) and press F9 to set a breakpoint on that line:
return max_price;
An asterisk appears in the left hand column to mark the breakpoint.
Press F5 to execute to that breakpoint.
4 Experiment Try different methods of stepping through the code. End
with F5 to complete the execution.
When you have completed the execution, the Interactive SQL Data
window displays the value 24.

Options
The complete set of options for stepping through source code is displayed
on the Run menu. You can find more information in the debugger online
Help.

Inspect and modify variables


You can inspect the values of both local variables (declared in a method) and
class static variables in the debugger.
Inspecting local variables
You can inspect the values of local variables in a method as you step
through the code, to better understand what is happening. You must have
compiled the class with the javac -g option to do this.

v To inspect and modify the value of a variable:


1 Set a breakpoint at the first line of the JDBCExamples.Query method.
This line is as follows:
int max_price = 0;
2 In Interactive SQL, enter the following statement again to execute the
method:


SELECT JDBCExamples.Query()
The query executes only as far as the breakpoint.
3 Press F7 to step to the next line. The max_price variable has now been
declared and initialized to zero.
4 If the Local Variables window is not displayed, choose Window➤Local
Variables to display it.
The Local Variables window shows that there are several local
variables. The max_price variable has a value of zero. All others are
listed as variable not in scope, which means they are not yet initialized.
5 In the Local Variables window, double-click the Value column entry for
max_price, and type in 45 to change the value of max_price to 45.
The value 45 is larger than any other price. Instead of returning 24, the
query will now return 45 as the maximum price.
6 In the Source window, press F7 repeatedly to step through the code. As
you do so, the values of the variables appear in the Local Variables
window. Step through until the stmt and result variables have values.
7 Expand the result object by clicking the icon next to it, or setting the
cursor on the line and pressing ENTER. This displays the values of the
fields in the object.
8 When you have experimented with inspecting and modifying variables,
press F5 to complete the execution of the query and finish the tutorial.

Inspecting static variables
In addition to local variables, you can display class-level variables (static
variables) in the debugger Statics window, and inspect their values in the
Inspection window. For more information, see the debugger online Help.


Common debugger tasks


If you want to…  Then choose…
Add a directory to the Java source file search path: File➤Add Source Path
Display callee's context: Stack➤Down
Display caller's context: Stack➤Up
Enable capturing of connections by the debugger: Connection➤Enable capture
Exit the debugger: File➤Exit
Find a string in the selected window: Search➤Find
Ignore the case of a word when searching: Search➤Ignore Case
Load the procedure debugger's settings from a file: Settings➤Load From File
Log in to the database as a new connection: Connection➤Login
Log out of the database: Connection➤Logout
Restart a program's execution: Run➤Restart
Run a debug script: File➤Run Script ($ For more information, see "Writing debugger scripts" on page 637.)
Run a program, line by line, going through procedures and triggers line by line as well: Run➤Step Into
Run a program, line by line, switching between a Java program and stored procedures in other environments, when applicable: Run➤Step Through
Run a program, line by line, without going through procedures and triggers line by line (any breakpoints in those procedures and triggers are ignored): Run➤Step Over
Run a program, or resume a program's execution from a breakpoint: Run➤Go


Run a program until the current procedure/method returns: Run➤Step Out
Save breakpoints within procedures: Settings➤Remember Breakpoints
Save the procedure debugger's settings: Settings➤Save
Save the procedure debugger's settings to a file: Settings➤Save to File
Save the procedure debugger's window positions, fonts, etc. automatically upon exiting the debugger: Settings➤Save on Exit
Save the procedure debugger's window positions, fonts, etc. with settings: Settings➤Remember Windows Attributes
Specify the path containing the Java source file: File➤Edit Source Path
View the source code for a procedure or class: File➤Open


Writing debugger scripts


The debugger allows you to write scripts in the Java programming language.
A script is a Java class that extends the DebugScript class; see "sybase.asa.procdebug.DebugScript class" on page 637.
When the debugger runs a script, it loads the class and calls its run method. The first parameter of the run method is a pointer to an instance of the IDebugAPI interface; see "sybase.asa.procdebug.IDebugAPI interface" on page 638. This interface lets you interact with and control the debugger.
A debugger window is represented by the IDebugWindow interface; see "sybase.asa.procdebug.IDebugWindow interface" on page 641.
You can compile scripts with a command such as the following:
javac -classpath "c:\Program Files\Sybase\SQL Anywhere 7\java\ProcDebug.jar;%classpath%" myScript.java
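For illustration, here is a minimal script of the kind this section describes. This is a sketch, not code from the product documentation: it assumes ProcDebug.jar is on the classpath, and it uses only methods and constants shown in the interface listings that follow.

```java
// A minimal debugger script sketch. It writes a line to the debugger's
// Output window when run, and registers itself so that OnEvent is called
// when debug events occur.
import sybase.asa.procdebug.*;

public class myScript extends DebugScript
{
    IDebugAPI debugger;

    public void run( IDebugAPI db, String args[] )
    {
        debugger = db;
        db.OutputClear();
        db.OutputLine( "myScript started" );
        // Without this call, OnEvent is never invoked
        db.AddEventHandler( this );
    }

    public void OnEvent( int event ) throws DebugError
    {
        if ( event == IDebugAPI.EventBreak )
            debugger.OutputLine( "Stopped at a breakpoint" );
    }
}
```

The script cannot run outside the debugger; compile it as shown above and load it with File➤Run Script.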

sybase.asa.procdebug.DebugScript class
You can write scripts to control debugger behavior. Scripts are classes that
extend the DebugScript class. For more information on scripts, see "Writing
debugger scripts" on page 637.
The DebugScript class is as follows:
// All debug scripts must inherit from this class

package sybase.asa.procdebug;

abstract public class DebugScript
{
    abstract public void run( IDebugAPI db, String args[] );
    /*
    The run method is called by the debugger
    - args will contain command line arguments
    */

    /*
    - Override the following method to process debug events
    - NOTE: this method will not be called unless you call
      DebugAPI.AddEventHandler( this );
    */
    public void OnEvent( int event ) throws DebugError {}
}


sybase.asa.procdebug.IDebugAPI interface
You can write scripts to control debugger behavior. Scripts are Java classes
that use the IDebugAPI interface to control the debugger. For more
information on scripts, see "Writing debugger scripts" on page 637.
The IDebugAPI interface is as follows:
package sybase.asa.procdebug;
import java.util.*;
public interface IDebugAPI

{
// Simulate Menu Items

IDebugWindow MenuOpenSourceWindow() throws DebugError;


IDebugWindow MenuOpenCallsWindow() throws DebugError;
IDebugWindow MenuOpenClassesWindow() throws DebugError;
IDebugWindow MenuOpenClassListWindow() throws DebugError;
IDebugWindow MenuOpenMethodsWindow() throws DebugError;
IDebugWindow MenuOpenStaticsWindow() throws DebugError;
IDebugWindow MenuOpenCatchWindow() throws DebugError;
IDebugWindow MenuOpenProcWindow() throws DebugError;
IDebugWindow MenuOpenOutputWindow() throws DebugError;
IDebugWindow MenuOpenBreakWindow() throws DebugError;
IDebugWindow MenuOpenLocalsWindow() throws DebugError;
IDebugWindow MenuOpenInspectWindow() throws DebugError;
IDebugWindow MenuOpenRowVarWindow() throws DebugError;
IDebugWindow MenuOpenQueryWindow() throws DebugError;
IDebugWindow MenuOpenEvaluateWindow() throws DebugError;
IDebugWindow MenuOpenGlobalsWindow() throws DebugError;
IDebugWindow MenuOpenConnectionWindow() throws DebugError;
IDebugWindow MenuOpenThreadsWindow() throws DebugError;
IDebugWindow GetWindow( String name ) throws DebugError;


void MenuRunRestart() throws DebugError;


void MenuRunHome() throws DebugError;
void MenuRunGo() throws DebugError;
void MenuRunToCursor() throws DebugError;
void MenuRunInterrupt() throws DebugError;
void MenuRunOver() throws DebugError;
void MenuRunInto() throws DebugError;
void MenuRunIntoSpecial() throws DebugError;
void MenuRunOut() throws DebugError;
void MenuStackUp() throws DebugError;
void MenuStackDown() throws DebugError;
void MenuStackBottom() throws DebugError;
void MenuFileExit() throws DebugError;
void MenuFileOpen( String name ) throws DebugError;
void MenuFileAddSourcePath( String what ) throws DebugError;
void MenuSettingsLoadState( String file ) throws DebugError;
void MenuSettingsSaveState( String file ) throws DebugError;
void MenuWindowTile() throws DebugError;
void MenuWindowCascade() throws DebugError;
void MenuWindowRefresh() throws DebugError;
void MenuHelpWindow() throws DebugError;
void MenuHelpContents() throws DebugError;
void MenuHelpIndex() throws DebugError;
void MenuHelpAbout() throws DebugError;
void MenuBreakAtCursor() throws DebugError;
void MenuBreakClearAll() throws DebugError;
void MenuBreakEnableAll() throws DebugError;
void MenuBreakDisableAll() throws DebugError;
void MenuSearchFind( IDebugWindow w, String what ) throws DebugError;
void MenuSearchNext( IDebugWindow w ) throws DebugError;
void MenuSearchPrev( IDebugWindow w ) throws DebugError;
void MenuConnectionLogin() throws DebugError;
void MenuConnectionReleaseSelected() throws DebugError;

// output window
void OutputClear();
void OutputLine( String line );
void OutputLineNoUpdate( String line );
void OutputUpdate();

// Java source search path

void SetSourcePath( String path ) throws DebugError;


String GetSourcePath() throws DebugError;

// Catch java exceptions


Vector GetCatching();
void Catch( boolean on, String name ) throws DebugError;


// Database connections
int ConnectionCount();
void ConnectionRelease( int index );
void ConnectionAttach( int index );
String ConnectionName( int index );
void ConnectionSelect( int index );

// Login to database
boolean LoggedIn();
void Login( String url, String userId, String password, String
userToDebug ) throws DebugError;
void Logout();

// Simulate keyboard/mouse actions


void DeleteItemAt( IDebugWindow w, int row ) throws DebugError;
void DoubleClickOn( IDebugWindow w, int row ) throws DebugError;

// Breakpoints
Object BreakSet( String where ) throws DebugError;
void BreakClear( Object b ) throws DebugError;
void BreakEnable( Object b, boolean enabled ) throws DebugError;
void BreakSetCount( Object b, int count ) throws DebugError;
int BreakGetCount( Object b ) throws DebugError;
void BreakSetCondition( Object b, String condition ) throws
DebugError;
String BreakGetCondition( Object b ) throws DebugError;
Vector GetBreaks() throws DebugError;

// Scripting
void RunScript( String args[] ) throws DebugError;
void AddEventHandler( DebugScript s );
void RemoveEventHandler( DebugScript s );


// Miscellaneous
void EvalRun( String expr ) throws DebugError;
void QueryRun( String query ) throws DebugError;
void QueryMoreRows() throws DebugError;
Vector GetClassNames();
Vector GetProcedureNames();
Vector WindowContents( IDebugWindow window ) throws DebugError;
boolean AtBreak();
boolean IsRunning();
boolean AtStackTop();
boolean AtStackBottom();
void SetStatusText( String msg );
String GetStatusText();
void WaitCursor();
void OldCursor();
void Error( Exception x );
void Error( String msg );
void Warning( String msg );
String Ask( String title );
boolean MenuIsChecked( String cmd );
void MenuSetChecked( String cmd, boolean on );
void AddInspectItem( String s ) throws DebugError;

// Constants for DebugScript.OnEvent parameter


public static final int EventBreak = 0;
public static final int EventTerminate = 1;
public static final int EventStep = 2;
public static final int EventInterrupt = 3;
public static final int EventException = 4;
public static final int EventConnect = 5;
};

sybase.asa.procdebug.IDebugWindow interface
You can write scripts to control debugger behavior. In scripts, the debugger
window is represented by the IDebugWindow interface. For more
information on scripts, see "Writing debugger scripts" on page 637.
The IDebugWindow interface is as follows:
// this interface represents a debugger window
package sybase.asa.procdebug;
public interface IDebugWindow
{
public int GetSelected();
/*
get the currently selected row, or -1 if no selection
*/

public boolean SetSelected( int i );


/*
set the currently selected row. Ignored if i < 0 or i > #rows
*/


public String StringAt( int row );


/*
get the String representation of the Nth row of the window.
Returns null if row > # rows
*/

public java.awt.Rectangle GetPosition();


public void SetPosition( java.awt.Rectangle r );
/*
get/set the window's position within the frame
*/

public void Close();


/*
Close (destroy) the window
*/
}

PART FIVE

Database Administration and Advanced Use

This part covers administrative tasks such as backups, managing users in a multi-user environment, application deployment, and network communications. It also covers features suitable for advanced users, including performance concerns, query optimization, and remote data access.
CHAPTER 21

Backup and Data Recovery

About this chapter   This chapter describes how to protect your data against operating system crashes, file corruption, disk failures, and total machine failure. The chapter describes how to make backups of your database, how to restore data from a backup, and how to run your server so that performance and data protection concerns are addressed.
Contents
Topic Page
Introduction to backup and recovery 646
Understanding backups 651
Designing backup procedures 654
Configuring your database for data protection 663
Backup and recovery internals 667
Backup and recovery tasks 674


Introduction to backup and recovery


A backup is a copy of the information in a database, held in some physically
separate location from your database. If the database becomes unavailable,
perhaps because of damage to a disk drive, you can restore it from the
backup. Depending on the nature of the damage, it is often possible to restore
from backups all committed changes to the database up to the time it became
unavailable.
Restoring databases from a backup is one aspect of database recovery. The
other aspect is recovery from operating system or database server crashes,
and improper shutdowns. The database server checks on database startup
whether the database was shut down cleanly at the end of the previous
session. If it was not, the server executes an automatic recovery process to
restore information. This mechanism recovers all changes up to the most recently committed transaction.
Chapter contents This chapter contains the following material:
♦ An introduction to backup and recovery (this section).
♦ Concepts and background information to help you design and use an
appropriate backup strategy:
♦ "Understanding backups" on page 651.
♦ Information to help you decide the type and frequency of backups you
use, and the way you run your database server so that your data is well
protected:
♦ "Designing backup procedures" on page 654.
♦ "Configuring your database for data protection" on page 663.
♦ Information for advanced users, describing Adaptive Server Anywhere
internal operations related to backup and recovery:
♦ "Backup and recovery internals" on page 667.
♦ Step by step instructions for how to carry out backup and recovery tasks.
♦ "Backup and recovery tasks" on page 674


Questions and answers

What is a backup? See "Introduction to backup and recovery" on page 646.
What is recovery? See "Introduction to backup and recovery" on page 646.
What is a transaction log? See "The transaction log" on page 651.
What are media and system failure? See "Protecting your data against failure" on page 648.
From what kinds of failure do backups protect my data? See "Protecting your data against failure" on page 648.
What tools are available for backups? See "Ways of making backups" on page 649.
What types of backup are available? See "Types of backup" on page 654.
What type of backup should I use? See "Designing backup procedures" on page 654.
If my database file or transaction log becomes corrupt, what data may be lost? See "Protecting your data against media failure" on page 653.
How are backups executed? See "Understanding backups" on page 651.
How often do I carry out backups? See "Scheduling backups" on page 655.
Can I schedule automatic backups? See "Scheduling backups" on page 655.
My database is involved in replication. How does this affect my backup strategy? See "A backup scheme for databases involved in replication" on page 658, and "Backup methods for remote databases in replication installations" on page 660.
How can I back up to tape? See "Backing up a database directly to tape" on page 683.
How do I plan a backup schedule? See "Designing a backup and recovery plan" on page 661.
Can I automate backups? See "Automating Tasks Using Schedules and Events" on page 495.
How can I be sure my database file is not corrupt? See "Ensuring your database is valid" on page 662, and "Validating a database" on page 676.

How can I be sure my transaction log is not corrupt? See "Validating the transaction log on database startup" on page 672, and "Validating a transaction log" on page 677.
How can I run my database for maximum protection against failures? See "Configuring your database for data protection" on page 663.
How can I ensure high availability and machine redundancy? See "Protecting against total machine failure" on page 664, and "Making a live backup" on page 685.
How do I carry out a backup? See "Making a full backup" on page 674.
How do I restore data from backups when a failure occurs? See "Recovering from media failure on the database file" on page 685, "Recovering from media failure on an unmirrored transaction log" on page 686, and "Recovering from media failure on a mirrored transaction log" on page 687.

Protecting your data against failure


If your database has become unusable, you have experienced a database
failure. Adaptive Server Anywhere provides protection against the following
categories of failure:

Media failure The database file and/or the transaction log become
unusable. This may occur because the file system or the device storing the
database file becomes unusable, or it may be because of file corruption.
For example:
♦ The disk drive holding the database file or the transaction log file
becomes unusable.
♦ The database file or the transaction log file becomes corrupted. This can
happen because of hardware problems or software problems.
Backups protect your data against media failure.
$ For more information, see "Understanding backups" on page 651.


System failure A system failure occurs when the computer or operating


system goes down while there are partially completed transactions. This
could occur when the computer is inappropriately turned off or rebooted,
when another application causes the operating system to crash, or because of
a power failure.
For example:
♦ The computer or operating system becomes temporarily unavailable
while there are partially completed transactions, perhaps because of a
power failure or operating system crash, or because the computer is
inappropriately rebooted.
After a system failure occurs, the database server recovers automatically
when you next start the database. The results of each transaction committed
before the system error are intact. All changes by transactions that were not
committed before the system failure are canceled.
$ For details of the recovery mechanism, see "Backup and recovery
internals" on page 667.
$ It is possible to recover uncommitted changes manually. For
information, see "Recovering uncommitted operations" on page 689.
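The committed-versus-uncommitted rule described above can be illustrated with a small sketch. This is a simplified model, not the server's actual recovery algorithm: log records are "transaction:operation" strings, and only operations belonging to transactions with a COMMIT record survive recovery.

```java
// Conceptual sketch of crash recovery: replaying a log, keep operations
// from committed transactions and cancel everything else.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RecoverySketch {
    // Each record is "txn:operation"; "txn:COMMIT" marks a commit.
    static List<String> recover(List<String> log) {
        Set<String> committed = new HashSet<>();
        for (String r : log) {
            String[] parts = r.split(":", 2);
            if (parts[1].equals("COMMIT")) committed.add(parts[0]);
        }
        List<String> surviving = new ArrayList<>();
        for (String r : log) {
            String[] parts = r.split(":", 2);
            // Keep only data operations whose transaction committed
            if (!parts[1].equals("COMMIT") && committed.contains(parts[0]))
                surviving.add(r);
        }
        return surviving;
    }

    public static void main(String[] args) {
        List<String> log = List.of(
            "1:insert row A",
            "2:update row B",   // transaction 2 never commits
            "1:COMMIT");
        // Only transaction 1's insert survives recovery
        System.out.println(recover(log));
    }
}
```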

Ways of making backups


There are several distinct ways of making backups. This section introduces
each of the major approaches, but does not address any issues of appropriate
options.
You can make backups in the following ways:
♦ Sybase Central You can use the Backup wizard in Sybase Central to
make a backup. You can access the wizard by selecting a database and
clicking Backup in the File menu (or the popup menu).
♦ Command-line utility The dbbackup command-line utility makes
backups. For example, executing the following command at the system
command prompt makes backup copies of the database and transaction
log in the directory c:\backup on the client machine:
dbbackup -c "connection-string" c:\backup
♦ SQL Statement You can use a SQL statement to make the database
server execute a backup operation. For example, the following statement
places backup copies of the database file and transaction log into the
directory c:\backup on the server machine.
BACKUP DATABASE
DIRECTORY 'c:\\backup'


♦ Offline backup The above examples are all online backups, executed
against a running database. You can make offline backups by copying
the database files when the database is not running.

Notes You must have DBA authority or REMOTE DBA authority to make backups
of a database.


Understanding backups
To understand what files you need to back up, and how you restore databases
from backups, you need to understand how the changes made to the database
are stored on disk.

The database file


When a database is shut down, the database file holds a complete and current
copy of all the data in the database. When a database is running, however,
the database file is generally not current.
The only time a database file is guaranteed to hold a complete and current
copy of all data is when a checkpoint takes place. At a checkpoint, all the
contents of the database cache are written out to the disk.
The database server checkpoints a database under the following conditions:
♦ As part of the database shutdown operations.
♦ When the amount of time since the last checkpoint exceeds the database
option CHECKPOINT_TIME
♦ When the estimated time to do a recovery operation exceeds the
database option RECOVERY_TIME
♦ When the database server is idle long enough to write all dirty pages
♦ When a connection issues a CHECKPOINT statement
♦ When the database server is running without a transaction log, and a
transaction is committed
Between checkpoints, you need both the database file and another file, called
the transaction log, to ensure that you have a complete copy of all committed
transactions.
$ For more details on checkpoints, see "Checkpoints and the checkpoint
log" on page 668, and "How the database server decides when to checkpoint"
on page 671.

The transaction log


The transaction log is a separate file from the database file. It stores all
changes to the database. Inserts, updates, deletes, commits, rollbacks, and
database schema changes are all logged. The transaction log is also called the
forward log or the redo log.


The transaction log is a key component of backup and recovery, and is also
essential for data replication using SQL Remote or the Replication Agent.
By default, all databases use transaction logs. Using a transaction log is
optional, but you should always use a transaction log unless you have a
specific reason not to. Running a database with a transaction log provides
much greater protection against failure, better performance, and the ability to
replicate data.
$ For information on how to use a transaction log to protect against
media failure, see "Protecting against media failure on the database file" on
page 663.
When changes are forced to disk   Like the database file, the transaction log is organized into pages: fixed-size areas of memory. When a change is recorded in the transaction log, it is made to a page in memory. The change is forced to disk when the earlier of the following happens:
♦ The page is full.
♦ A COMMIT is executed.
In this way, completed transactions are guaranteed to be stored on disk,
while performance is improved by avoiding a write to the disk on every
operation.
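The two flush conditions can be sketched in a few lines. This is a conceptual model only, not the server's implementation: the page size is an arbitrary illustrative value, and "disk" is simulated by a list.

```java
// Conceptual sketch: a log page accumulates changes in memory and is
// written out when it fills up or when a COMMIT is recorded.
import java.util.ArrayList;
import java.util.List;

public class LogPageSketch {
    static final int PAGE_SIZE = 4;               // entries per page (illustrative)
    private final List<String> page = new ArrayList<>();
    public final List<String> disk = new ArrayList<>();  // simulated disk writes

    public void log(String operation) {
        page.add(operation);
        // Flush on whichever comes first: a full page, or a COMMIT
        if (page.size() == PAGE_SIZE || operation.equals("COMMIT")) {
            flush();
        }
    }

    private void flush() {
        disk.addAll(page);                        // force the page to disk
        page.clear();
    }

    public static void main(String[] args) {
        LogPageSketch log = new LogPageSketch();
        log.log("INSERT");
        log.log("UPDATE");
        System.out.println(log.disk.size());      // 0: page not full, no commit yet
        log.log("COMMIT");                        // commit forces the partial page out
        System.out.println(log.disk.size());      // 3
    }
}
```

This is why completed transactions are guaranteed to be on disk while routine operations avoid a write per change.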
$ Configuration options are available to allow advanced users to tune the
precise behavior of the transaction log. For more information, see
"COOPERATIVE_COMMITS option" on page 180 of the book ASA
Reference, and "DELAYED_COMMITS option" on page 184 of the book
ASA Reference.
Transaction log mirrors   A transaction log mirror is an identical copy of the transaction log, maintained at the same time as the transaction log. If a database has a mirrored transaction log, every database change is written to both the transaction log and the transaction log mirror. By default, databases do not have transaction log mirrors.
A transaction log mirror provides extra protection for critical data. It enables
complete data recovery in the case of media failure on the transaction log. A
mirrored transaction log also enables a database server to carry out automatic
validation of the transaction log on database startup.
$ For more information, see "Protecting against media failure on the
transaction log" on page 663.


Protecting your data against media failure


Backups protect your data against media failure, as part of the data protection
mechanisms Adaptive Server Anywhere provides.
$ For an overview of data protection mechanisms, see "Protecting your
data against failure" on page 648.
The practical aspects of recovery from media failure depend on whether the
media failure is on the database file or the transaction log file.

Media failure on the database file If your database file is not usable, but
your transaction log is still usable, you can recover all committed changes to
the database as long as you have a proper backup procedure in place. All
information since the last backed up copy of the database file is held in
backed up transaction logs, or in the online transaction log.
$ For information on how to configure your database system, see
"Protecting against media failure on the database file" on page 663.

Media failure on the transaction log file Unless you use a mirrored
transaction log, you cannot recover information entered between the last
database checkpoint and a media failure on the transaction log. For this
reason, it is recommended that you use a mirrored transaction log in setups
such as SQL Remote consolidated databases, where loss of the transaction
log can lead to loss of key information, or the breakdown of a replication
system.
$ For more information, see "Protecting against media failure on the
transaction log" on page 663.


Designing backup procedures


When you make a backup, you have a set of choices about how to manage
transaction logs. The choices you make depend on a set of factors including
the following:
♦ Is the database involved in replication?
In this chapter, replication means SQL Remote replication, or a
MobiLink database where dbsync.exe is running, or a database using the
Replication Agent. Each of these replication methods requires access to
the transaction log, and potentially to old transaction logs.
♦ How fast is the transaction log file growing relative to your available
disk space? If the transaction log is growing quickly, you may not be
able to afford to keep transaction logs available.

Types of backup
This section assumes you are familiar with basic concepts related to backups.
$ For more information about concepts related to backups, see
"Introduction to backup and recovery" on page 646, and "Understanding
backups" on page 651.
Backups can be categorized in several ways:
♦ Full backup and incremental backup A full backup is a backup of
both the database file and of the transaction log. An incremental
backup is a backup of the transaction log only. Typically, full backups
are interspersed with several incremental backups.
$ For information on making backups, see "Making a full backup"
on page 674, and "Making an incremental backup" on page 675.
♦ Server-side backup and client-side backup You can execute an
online backup from a client machine using the Backup utility. To
execute a server side backup you execute the BACKUP statement; the
database server then carries out the backup.
You can easily build server-side backup into applications because it is a
SQL statement. Also, server-side backup is generally faster because the
data does not have to be transported across the client/server
communications system.
Instructions for server-side and client-side backups are given together
for each backup procedure.


♦ Archive backup and image backup An archive backup copies the


database file and the transaction log into a single archive file, typically
on a tape drive. An image backup makes a copy of the database file
and/or the transaction log, each as separate files. You can only carry out
archive backups as server-side backups, and you can only make full
backups.
You should use an archive backup if you are backing up directly to tape.
Otherwise, image backup has more flexibility for transaction log file
management.
Archive backups are supported on Windows NT and UNIX platforms
only.
$ For information on archive backups, see "Backing up a database
directly to tape" on page 683.
♦ Online and offline backup Backing up a running database provides a
snapshot of a consistent database, even though the database is being
modified by other users. An offline backup consists simply of copying
the files. You should only carry out an offline backup when the database
is not running, and when the database server was shut down properly.
The information in this chapter focuses on online backup.
♦ Live backup A live backup is a continuous backup of the database that
helps protect against total machine failure.
$ For information on when to use live backups, see "Protecting
against total machine failure" on page 664.
$ For information on how to make a live backup, see "Making a live
backup" on page 685.

Scheduling backups
Most backup schedules involve periodic full backups interspersed with
incremental backups of the transaction log. There is no simple rule for
deciding how often to make backups of your data. The frequency with which
you make backups depends on the importance of your data, how often it is
changing, and other factors.
Most backup strategies involve occasional full backups, interspersed by
several incremental backups. A common starting point for backups is to
carry out a weekly full backup, with daily incremental backups of the
transaction log. Both full and incremental backups can be carried out online
(while the database is running) or offline, server side or client side. Archive
backups are always full backups.
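The starting point suggested above can be sketched as a simple rule that picks a backup type by day of week. The choice of Sunday for the weekly full backup is an arbitrary assumption for illustration.

```java
// Sketch of a weekly-full / daily-incremental backup schedule.
import java.time.DayOfWeek;
import java.time.LocalDate;

public class BackupSchedule {
    // Full backup once a week (here, Sunday); log-only backup on other days
    static String backupTypeFor(LocalDate date) {
        return date.getDayOfWeek() == DayOfWeek.SUNDAY ? "full" : "incremental";
    }

    public static void main(String[] args) {
        System.out.println(backupTypeFor(LocalDate.of(2000, 11, 5)));  // a Sunday
        System.out.println(backupTypeFor(LocalDate.of(2000, 11, 6)));  // a Monday
    }
}
```

In practice you would attach such a rule to a scheduled event, as described in "Automating Tasks Using Schedules and Events" on page 495.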


The kinds of failure against which a backup schedule protects you depends
not only on how often you make backups, but on how you operate your
database server.
$ For more information, see "Configuring your database for data
protection" on page 663.
You should always keep more than one full backup. If you make a backup on
top of a previous backup, a media failure in the middle of the backup leaves
you with no backup at all. You should also keep some of your full backups
offsite to protect against fire, flood, earthquake, theft, or vandalism.
You can use the event scheduling features of Adaptive Server Anywhere to
perform online backups automatically at scheduled times.
$ For information on scheduling operations such as backups, see
"Automating Tasks Using Schedules and Events" on page 495.

A backup scheme for when disk space is plentiful


If disk space is not a problem on your production machine (where the
database server is running) then you do not need to worry about choosing
special options to manage the transaction log file. In this case, you can use a
simple form of backup that makes copies of the database file and transaction
log, and leaves the transaction log in place. All backups leave the database
file in place.
A full backup of this kind is illustrated in the figure below. In an incremental
backup, only the transaction log is backed up.


[Figure: a full backup leaving the transaction log in place. Before the backup, db_name.db and db_name.log are in the database and log directories. After the backup, copies of db_name.db and db_name.log are also in the backup directory, and the original files remain in place.]

$ For information on how to carry out backups of this type, see "Making
a backup, continuing to use the original transaction log" on page 678.

A backup scheme for databases not involved in replication


In many circumstances, disk space limitations make it impractical to let the
transaction log grow indefinitely. In this case, you can choose to delete the
contents of the transaction log when the backup is complete, freeing the disk
space. You should not choose this option if the database is involved in
replication, as replication requires access to the transaction log.
A full backup, truncating the log file, is illustrated in the figure below. In an
incremental backup, only the transaction log is backed up.


[Figure: a full backup that truncates the transaction log. Copies of db_name.db and db_name.log are placed in the backup directory; the original transaction log is truncated after the backup, and the original database file remains in place.]

Deleting the transaction log after each incremental backup makes recovery
from a media failure on the database file a more complex task, as there may
then be several different transaction logs since the last full backup. Each
transaction log needs to be applied in sequence to bring the database up to
date.
You can use this kind of backup at a database that is operating as a MobiLink
consolidated database, as MobiLink does not rely on the transaction log. If
you are running SQL Remote or the MobiLink dbsync.exe application, you
must use a scheme suitable for preserving old transaction logs, as in the
following section.
$ For information on how to carry out a backup of this type, see "Making
a backup, deleting the original transaction log" on page 679.

A backup scheme for databases involved in replication


If your database is part of a SQL Remote installation, the Message Agent
needs access to old transactions. If it is a consolidated database, it holds the
master copy of the entire SQL Remote installation, and thorough backup
procedures are essential.
If your database is a primary site in a Replication Server installation, the
Replication Agent requires access to old transactions. However, disk space
limitations often make it impractical to let the transaction log grow
indefinitely.


If your database is participating in a MobiLink setup, using the dbsync.exe


executable, the same considerations apply. However, if your database is a
MobiLink consolidated database, you do not need old transaction logs and
can use a scheme for databases not involved in replication, as in the previous
section.
In these cases, you can choose backup options to rename and restart the
transaction log. This kind of backup prevents open-ended growth of the
transaction log, while maintaining information about the old transactions for
the Message Agent and the Replication Agent.
This kind of backup is illustrated in the figure below.

[Figure: a full backup that renames the transaction log. Copies of db_name.db and db_name.log are placed in the backup directory; the online transaction log is renamed to YYMMDDnn.log, and a new transaction log named db_name.log is started.]

$ For information on how to carry out a backup of this kind, see "Making
a backup, renaming the original transaction log" on page 680.
Offline transaction logs   In addition to backing up the transaction log, the
backup operation renames the online transaction log to a filename of the form
YYMMDDnn.log. This file is no longer used by the database server, but is
available for the Message Agent and the Replication Agent. It is called an
offline transaction log. A new online transaction log is started with the same
name as the old online transaction log.

659
Designing backup procedures

There is no Year 2000 issue with the two-digit year in the YYMMDDnn.log
filenames. The names are used for distinguishability only, not for ordering.
For example, the renamed log file from the first backup on December 10,
2000, is named 00121000.log. The first two digits indicate the year, the
second two digits indicate the month, the third two digits indicate the day of
the month, and the final two digits distinguish among different backups made
on the same day.
The Message Agent and the Replication Agent can use the offline copies to
provide the old transactions as needed. If you set the DELETE_OLD_LOGS
database option to ON, then the Message Agent and Replication Agent delete
the offline files when they are no longer needed, saving disk space.
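For example, this option could be set for all users as follows (a sketch only;
setting it at the PUBLIC level is one possible choice, and requires DBA
authority):

```sql
-- Let the Message Agent and Replication Agent delete
-- offline transaction logs once they are no longer needed.
SET OPTION PUBLIC.DELETE_OLD_LOGS = 'ON';
```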

Backup methods for remote databases in replication installations


Backup procedures are not as crucial at remote databases as at the
consolidated database. You may choose to rely on replication to the
consolidated database as a data backup method. In the event of a media
failure, the remote database would have to be re-extracted from the
consolidated database, and any operations that have not been replicated
would be lost. (You could use the log translation utility to attempt to recover
lost operations.)
Even if you do choose to rely on replication to protect remote database data,
backups may still need to be done periodically at remote databases to prevent
the transaction log from growing too large. You should use the same option
(rename and restart the log) as at the consolidated database, running the
Message Agent so that it has access to the renamed log files. If you set the
DELETE_OLD_LOGS option to ON at the remote database, the old log files
will be deleted automatically by the Message Agent when they are no longer
needed.
Automatic transaction log renaming   You can use the -x Message Agent
command-line switch to eliminate the need to rename the transaction log on
the remote computer when the database server is shut down. The -x option
renames the transaction log after it has been scanned for outgoing messages.
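For example, a remote site might run the SQL Remote Message Agent
(dbremote) with this switch as follows (a sketch; the connection parameters
are placeholders):

```
dbremote -c "uid=DBA;pwd=SQL;dbn=field_db" -x
```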

Backing up a database to a tape drive


All the types of backup described above are image backups. The backup
copy of each file is also a file. To make backups to tape using an image
backup, you have to take each backup copy and put it on tape using a disk
backup utility.


You can carry out direct backup to a tape drive using an archive backup.
Archive backups are always full backups. An archive backup makes copies
of both the database file and the transaction log, but these copies are placed
into a single file.
$ You can make archive backups using the BACKUP statement. For
information, see "Backing up a database directly to tape" on page 683, and
"BACKUP statement" on page 401 of the book ASA Reference.
$ You can restore the backup using the RESTORE statement. For
information, see "Restoring an archive backup" on page 688, and
"RESTORE statement" on page 591 of the book ASA Reference.

Designing a backup and recovery plan


For reliable protection of your data, you should develop and implement a
backup schedule. You should also ensure that you have a set of tested
recovery instructions.
Typical schedules call for occasional full backups interspersed with several
incremental backups. The frequency of each depends on the nature of the
data you are protecting.
If you use internal backups, you can use the scheduling features in Adaptive
Server Anywhere to automate the task. Once you specify a schedule, the
backups are carried out automatically by the database server.
$ For information on automating backups, see "Automating Tasks Using
Schedules and Events" on page 495.
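As an illustration only, a scheduled nightly internal backup might be defined
with an event of the following form (a sketch; the event name, schedule, and
directory are hypothetical, and the exact syntax is described in the chapter
referenced above):

```sql
-- Run a full image backup every night at 1 AM,
-- renaming and restarting the transaction log.
CREATE EVENT NightlyBackup
SCHEDULE sched_backup
    START TIME '1:00AM' EVERY 24 HOURS
HANDLER
BEGIN
    BACKUP DATABASE DIRECTORY 'e:\backup'
    TRANSACTION LOG RENAME;
END;
```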
The length of time your organization can function without access to the data
in your database imposes a maximum recovery time, and you should develop
and test a backup and recovery plan that meets this requirement.
You should verify that you have the protection you need against media
failure on the database file and on the transaction log file. If you are running
in a replication environment, you should consider using a mirrored
transaction log.
$ For background information on media failure, see "Protecting your data
against media failure" on page 653.
Factors that affect recovery time   External factors such as available
hardware, the size of database files, recovery medium, disk space, and
unexpected errors can affect your recovery time. When planning a backup
strategy, you should allow additional recovery time for miscellaneous tasks
that must be performed, such as entering recovery commands or retrieving
and loading tapes.


Adding more files to the recovery scenario increases the number of places
where recovery can fail. As your backup and recovery strategy develops, you
should retest your recovery plan.
$ For information on how to implement a backup and recovery plan, see
"Implementing a backup and recovery plan" on page 674.

Ensuring your database is valid


Database file corruption may not be apparent until applications try to access
the affected part of the database. As part of your data protection plan, you
should periodically check that your database has no errors. You can do this
by validating the database. This task requires DBA authority.
Database validation includes a scan of every row in every table and a look-up
of each row in each index on the table. Validation requires exclusive access
to each table in turn. For this reason, it is best done when there is no other
activity on the database. Database validation does not validate data,
continued row references, or foreign key relationships.
Backup copies of the database and transaction log must not be changed in
any way. If there were no transactions in progress during the backup, you can
check the validity of the backup database using read-only mode. However, if
transactions were in progress the database server must carry out recovery on
the database when you start it. Recovery modifies the backup copy, which is
not desirable.
If you can be sure that no transactions are in progress when the backup is
being made, the database server does not need to carry out the recovery
steps. In this case, you can carry out a validity check on the backup using the
read-only database option.

Tip
You can ensure that no transactions are in progress when you make a
backup by using the BACKUP statement with the WAIT BEFORE
START clause.
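For example, the clause can be added to a backup statement as follows (a
sketch; the directory name is a placeholder):

```sql
-- Wait until there are no active transactions before starting,
-- so the backup copy can later be validated in read-only mode.
BACKUP DATABASE DIRECTORY 'e:\backup'
WAIT BEFORE START;
```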

If a base table in the database file is corrupt, you should treat the situation as
a media failure, and recover from your previous backup. If an index is
corrupt, you may want to unload the database without indexes, and reload.
$ For instructions, see "Validating a database" on page 676, and
"Validating a transaction log" on page 677.
$ For information on read-only databases, see "–r command-line option"
on page 33 of the book ASA Reference.


Configuring your database for data protection


There are several ways in which you can configure your database and the
database server to provide protection against media failure while maintaining
performance.

Protecting against media failure on the database file


When you create a database, by default the transaction log is put on the same
device and in the same directory as the database. This arrangement does not
protect against all kinds of media failure, and you should consider placing
the transaction log in another location for production use.
For comprehensive protection against media failure, you should keep the
transaction log on a different device from the database file. Some machines
that appear to have two or more hard drives in fact have only one physical
disk drive divided into several logical drives or partitions: if you want
reliable protection against media failure, make sure that you have a machine
with at least two physical storage devices.
Placing the transaction log on a separate device can also result in improved
performance by eliminating the need for disk head movement between the
transaction log and the main database file.
You should not place the transaction log on a network directory. Reading and
writing pages over a network gives poor performance and may result in file
corruption.
$ For information on creating databases, see "Creating a database" on
page 115.
$ For information on how to change the location of a transaction log, see
"Changing the location of a transaction log" on page 690.

Protecting against media failure on the transaction log


It is recommended that you use a transaction log mirror when running high-
volume or extremely critical applications. For example, at a consolidated
database in a SQL Remote setup, replication relies on the transaction log,
and if the transaction log is damaged or becomes corrupt, data replication can
fail.


If you are using a mirrored transaction log, and an error occurs while trying
to write to one of the logs (for example, if the disk is full), the database
server stops. The purpose of a transaction log mirror is to ensure complete
recoverability in the case of media failure on either log device; this purpose
would be lost if the server continued with a single log.
Where to store the transaction log mirror   There is a performance penalty
for using a mirrored log, as each database log write operation must be carried
out twice. The performance penalty depends on the nature and volume of
database traffic and on the physical configuration of the database and logs.
A transaction log mirror should be kept on a separate device from the
transaction log. This improves performance. Also, if either device fails, the
other copy of the log keeps the data safe for recovery.
Alternatives to a transaction log mirror   Alternatives to a mirrored
transaction log are to use a disk controller that provides hardware mirroring,
or operating-system level software mirroring, as provided by Windows NT
and NetWare. Generally, hardware mirroring is more expensive, but provides
better performance.
For more information   Live backups provide additional protection that has
some similarities to transaction log mirroring. For more information, see
"Differences between live backups and transaction log mirrors" on page 665.
For information on creating a database with a mirrored transaction log, see
"The Initialization utility" on page 98 of the book ASA Reference.
For information on changing an existing database to use a mirrored
transaction log, see "The Transaction Log utility" on page 132 of the book
ASA Reference.
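For example, a new database with its transaction log and log mirror on
separate drives might be created with the Initialization utility (a sketch; the
file names are placeholders):

```
dbinit -t d:\logs\mydb.log -m e:\mirror\mydb.mlg c:\data\mydb.db
```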

Protecting against total machine failure


You can use a live backup to provide a redundant copy of the transaction log
available for restart of your system on a secondary machine, in case the
machine running the database server becomes unusable.
A live backup runs continuously, terminating only if the server shuts down.
If you suffer a system failure, the backed up transaction log can be used for a
rapid restart of the system. However, depending on the load that the server is
processing, the live backup may lag behind and may not contain all
committed transactions.
$ For information on making a live backup, see "Making a live backup"
on page 685. For information on restarting a database using a live backup,
see "Recovering from a live backup" on page 687.


Differences between live backups and transaction log mirrors


Both a live backup and a transaction log mirror appear to provide a
secondary copy of the transaction log. However, there are several differences
between using a live backup and using a transaction log mirror:
♦ In general, a live backup is made to a different machine   Running a
transaction log mirror on a separate machine is not recommended. It can
lead to performance and data corruption problems, and will stop the
database server if the connection between the machines goes down.
By running the backup utility on a separate machine, the database server
does not do the writing of the backed up log file, and the data transfer is
done by the Adaptive Server Anywhere client/server communications
system. Therefore, the performance impact is less and reliability is
greater.
♦ A live backup provides protection against a machine becoming
unusable   Even if a transaction log mirror is kept on a separate device,
it does not provide immediate recovery if the whole machine becomes
unusable. You could consider an arrangement where two machines share
access to a set of disks.
♦ A live backup may lag behind the database server   A mirrored
transaction log contains all the information required for complete
recovery of committed transactions. Depending on the load that the
server is processing, the live backup may lag behind and may not
contain all the committed transactions.

Live backups and regular backups   The live backup of the transaction log
is always the same length or shorter than the active transaction log. When a
live backup is running, and another backup restarts the transaction log
(dbbackup -r or dbbackup -x), the live backup automatically truncates the
live backup log and restarts the live backup at the beginning of the new
transaction log.
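For example, a backup that renames and restarts the transaction log, and so
truncates a running live backup, might look like this (a sketch; the connection
string and backup directory are placeholders):

```
dbbackup -c "connection_string" -r e:\backup
```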
$ For information on how to make a live backup, see "Making a live
backup" on page 685.

Controlling transaction log size


The size of the transaction log can determine what kind of backup is right for
you, and can also affect recovery times.


You can control how fast the transaction log file grows by ensuring that all
your tables have compact primary keys. If you carry out updates or deletes
on tables that do not have a primary key or a unique index not allowing
NULL, the entire contents of the affected rows is entered in the transaction
log. If a primary key is defined, the database server needs to store only the
primary key column values to uniquely identify a row. If the table contains
many columns or wide columns, the transaction log pages fill up much faster
if no primary key is defined. In addition to taking up disk space, this extra
writing of data affects performance.
If a primary key does not exist, the engine looks for a UNIQUE NOT NULL
index on the table (or a UNIQUE constraint). A UNIQUE index that allows
NULL is not sufficient.
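For example, declaring a compact primary key keeps log entries small even
for wide tables (an illustrative table definition only; the table and column
names are hypothetical):

```sql
CREATE TABLE customer (
    id      INTEGER NOT NULL PRIMARY KEY, -- compact row identifier in the log
    name    CHAR( 80 ),
    address CHAR( 200 )
);
```

With this key defined, an UPDATE or DELETE is logged using only the id
value, rather than the entire contents of the affected row.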


Backup and recovery internals


This section describes the internal mechanisms used during backup and
during the automatic recovery process that follows a system failure.

Backup internals
When you issue a backup instruction, the database may be in use by many
people. If you later need to use your backup to restore your database, you
need to know what information has been backed up, and what has not.
The database server carries out a backup as follows:
1 Issue a checkpoint. Further checkpoints are disallowed until the backup
is complete. While the backup is taking place, any pages modified by
other connections are saved before modification in the temporary file,
instead of the database file, so that the backup image is made as of the
checkpoint.
2 Make a backup of the database file, if the backup instruction is for a full
backup.
3 Make a backup of the transaction log.
The backup includes all operations recorded in the transaction log before
the final page of the log is read. This may include instructions issued
after the backup instruction was issued.
The backup copy of the transaction log is generally smaller than the
online transaction log. The database server allocates space to the online
transaction logs in multiples of 64K, so the transaction log file size
generally includes empty pages. However, only the non-empty pages are
backed up.
4 If the backup instruction requires the transaction log to be truncated or
renamed, then wait until there are no uncommitted transactions before
truncating or renaming the log file.
If the database is busy, this wait may be significant.
$ For information on renaming and truncating the transaction log,
see "Designing backup procedures" on page 654.
5 Mark the backup image of the database to indicate that recovery is
needed. This causes any operations that happened since the start of the
backup to be applied. It also causes operations that were incomplete at
the checkpoint to be undone, if they were not committed.


Restrictions during backup and recovery


The database server prevents the following operations from being executed
while a backup is in progress:
♦ Another backup, with the exception of a live backup.
♦ A checkpoint, other than the one issued by the backup instruction itself.
♦ Any statement that causes a checkpoint. This includes data definition
statements as well as LOAD TABLE and TRUNCATE TABLE.
During recovery, including restoring backups, no action is permitted by other
users of the database.

Checkpoints and the checkpoint log


The checkpoint log is a part of the database file. The database file is
composed of pages: fixed size portions of hard disk. Before any page is
updated (made dirty), the database server carries out the following
operations:
♦ It makes a copy of the original page. These copied pages are the
checkpoint log.
♦ It reads the page into memory, where it is held in the database cache.

[Figure: Before page A is changed, the database server writes a copy of the
page to the checkpoint log in the database file and reads the page into the
cache.]

Changes made to the page are applied to the copy in the cache. For
performance reasons they are not written immediately to the database file on
disk.


[Figure: The change is applied to the copy in the cache (A becomes B) and
recorded in the transaction log; the database file and the checkpoint log still
hold the original page A.]

When the cache is full, the changed page may get written out to disk. The
copy in the checkpoint log remains unchanged.

[Figure: The changed page B is written to the database file; the checkpoint
log still holds the original copy of page A.]

A checkpoint is a point at which all dirty pages are written to disk.


Following a checkpoint, the checkpoint log is deleted. The pages in the
checkpoint log are added to a list of free pages in the database file for use
later on. The database file size does not shrink when the checkpoint log is
deleted.


At a checkpoint, all the data in the database is held on disk in the database
file. The information in the database file matches that in the transaction log.
The checkpoint represents a known state of the database on disk. During
recovery, the database is first recovered to the most recent checkpoint, and
then changes since that checkpoint are applied.
$ For more information, see "How the database server decides when to
checkpoint" on page 671.

Transactions and the rollback log


As changes are made to the contents of a database, a rollback log is kept for
the purpose of canceling changes if a transaction is rolled back or if a
transaction is uncommitted when a system failure occurs. There is a separate
rollback log for each connection. When a transaction is committed or rolled
back, the rollback log contents for that connection are deleted. The rollback
logs are stored in the database, and rollback log pages are copied into the
checkpoint log along with other pages that are changed.
The rollback log is also called the undo log.
$ For information on transaction processing, see "Using Transactions and
Isolation Levels" on page 381.

The automatic recovery process


When a database is shut down during normal operation, the database server
carries out a checkpoint so that all the information in the database is held in
the database file. This is a clean shutdown.
Each time you start a database, the database server checks whether the last
shutdown was clean or the result of a system failure. If the database was not
shut down cleanly, it automatically takes the following steps to recover from
a system failure:
1 Recover to the most recent checkpoint   All pages are restored to
their state at the most recent checkpoint, by copying the checkpoint log
pages over the changes made since the checkpoint.


[Figure: The checkpoint log copy of the page overwrites the dirty page in the
database file, restoring the page to its state at the checkpoint; changes made
since then remain in the transaction log.]

2 Apply changes made since the checkpoint   Changes made between
the checkpoint and the system failure, which are held in the transaction
log, are applied.
3 Roll back uncommitted transactions   Any uncommitted transactions
are rolled back, using the rollback logs.

How the database server decides when to checkpoint


The priority of writing dirty pages to the disk increases as the time and the
amount of work since the last checkpoint grows. The priority is determined
by the following factors:
♦ Checkpoint Urgency   The time that has elapsed since the last
checkpoint, as a percentage of the checkpoint time setting of the
database. The server -gc command-line option controls the maximum
desired time, in minutes, between checkpoints. You can also set the
desired time using the CHECKPOINT_TIME option.
$ For more information, see "–gc command-line option" on page 25
of the book ASA Reference.
♦ Recovery Urgency   A heuristic to estimate the amount of time
required to recover the database if it fails right now. The server -gr
command-line option controls the maximum desired time, in minutes,
for recovery in the event of system failure. You can also set the desired
time using the RECOVERY_TIME option.
$ For more information, see "–gr command-line option" on page 29
of the book ASA Reference.
The checkpoint and recovery urgencies are important only if the server does
not have enough idle time to write dirty pages.
Frequent checkpoints make recovery quicker, but also create work for the
server writing out dirty pages.

There are two database options that allow you to control the frequency of
checkpoints. CHECKPOINT_TIME controls the maximum desired time
between checkpoints and RECOVERY_TIME controls the maximum desired
time for recovery in the event of system failure.
The writing of dirty pages to disk is carried out by a task within the server
called the idle I/O task. This task shares processing time with other database
tasks.
There is a threshold for the number of dirty pages, below which writing of
database pages does not take place.
When the database is busy, the urgency is low, and the cache only has a few
dirty pages, the idle I/O task runs at a very low priority and no writing of
dirty pages takes place.
Once the urgency exceeds 30%, the priority of the idle I/O task is increased.
At intervals, the priority is increased again. As the urgency becomes high,
the engine shifts its primary focus to writing dirty pages until the number
gets below the threshold again. However, the engine only writes out pages
during the idle I/O task if the number of dirty pages is greater than the
threshold.
If, because of other activity in the database, the number of dirty pages falls to
zero, and if the urgency is 50% or more, then a checkpoint takes place
automatically, since it is a convenient time.
Both the checkpoint urgency and recovery urgency values increase until a
checkpoint occurs, at which point they drop to zero. They do not decrease
otherwise.

Validating the transaction log on database startup


When a database using a transaction log mirror starts up, the database server
carries out a series of checks and automatic recovery operations to confirm
that the transaction log and its mirror are not corrupted, and to correct some
problems if corruption is detected.
On startup, the server checks that the transaction log and its mirror are
identical by carrying out a full comparison of the two files; if they are
identical, the database starts as usual. The comparison of log and mirror adds
to database startup time.
If the database stopped because of a system failure, it is possible that some
operations were written into the transaction log but not into the mirror. If the
server finds that the transaction log and the mirror are identical up to the end
of the shorter of the two files, the remainder of the longer file is copied into
the shorter file. This produces an identical log and mirror. After this
automatic recovery step, the server starts as usual.

If the check finds that the log and the mirror are different in the body of the
shorter of the two, one of the two files is corrupt. In this case, the database
does not start, and an error message is generated saying that the transaction
log or its mirror is invalid.


Backup and recovery tasks


This section collects together instructions for tasks related to backup and
recovery.

Implementing a backup and recovery plan


Regular backups and tested recovery commands are part of a comprehensive
backup and recovery plan.
$ For background information, see "Designing a backup and recovery
plan" on page 661.

v To implement a backup and recovery plan:


♦ Create and verify your backup and recovery commands.
♦ Measure the time it takes to execute backup and recovery commands.
♦ Document the backup commands, and create written procedures
outlining where your backups are kept, the naming conventions used,
and the kinds of backups performed.
♦ Set up your backup procedures on the production server.
♦ Monitor backup procedures to avoid unexpected errors. Make sure any
changes in the process are reflected in your documentation.
$ For information on carrying out backups, see "Making a full backup"
on page 674, and "Making an incremental backup" on page 675.

Making a full backup


A full backup is a backup of the database file and the transaction log file.
$ For information on the difference between a full backup and an
incremental backup, see "Types of backup" on page 654.

v To make a full backup (overview):


1 Ensure that you have DBA authority on the database.
2 Perform a validity check on your database to ensure it is not corrupt.
You can use the Validate utility or the sa_validate stored procedure.
$ For more information, see "Validating a database" on page 676.


3 Make a backup of your database file and transaction log.


$ For information on how to carry out the backup operation, see the
following:
♦ "Making a backup, continuing to use the original transaction log" on
page 678.
♦ "Making a backup, deleting the original transaction log" on
page 679.
♦ "Making a backup, renaming the original transaction log" on
page 680.
Notes Validity checking requires exclusive access to entire tables on your database.
For more information and alternative approaches, see "Ensuring your
database is valid" on page 662.
If you validate your backup copy of the database, make sure you do so in
read-only mode. Start the database server with the –r command-line option
to use read-only mode.

Making an incremental backup


An incremental backup is a backup of the transaction log file only. Typically,
you should make several incremental backups between each full backup.
$ For information on the difference between a full backup and an
incremental backup, see "Types of backup" on page 654.

v To make an incremental backup (overview):


1 Ensure that you have DBA authority on the database.
2 Make a backup of your transaction log only, not your database file.
$ For information on how to carry out the backup operation, see the
following:
♦ "Making a backup, continuing to use the original transaction log" on
page 678.
♦ "Making a backup, deleting the original transaction log" on
page 679.
♦ "Making a backup, renaming the original transaction log" on
page 680.


Notes The backup copies of the database file and transaction log file have the same
names as the online versions of these files. For example, if you make a
backup of the sample database, the backup copies are called asademo.db and
asademo.log. When you repeat the backup statement, choose a new backup
directory to avoid overwriting the backup copies.
$ For information on how to make a repeatable incremental backup
command, by renaming the backup copy of the transaction log, see
"Renaming the backup copy of the transaction log during backup" on
page 683.

Validating a database
Validating a database is a key part of the backup operation. For information,
see "Ensuring your database is valid" on page 662.
For an overview of the backup operation, see "Making a full backup" on
page 674.

v To check the validity of an entire database (Sybase Central):


1 Connect to the database as a user with DBA authority.
2 Right-click the database and choose Validate from the popup menu.
A message box indicates whether the database is valid or not.

v To check the validity of an entire database (SQL):


1 Connect to the database as a user with DBA authority.
2 Execute the sa_validate stored procedure:
call sa_validate
The procedure returns a single column, named msg. If all tables are
valid, the column contains No errors detected.
$ For more information, see "sa_validate system procedure" on page 973
of the book ASA Reference.

v To check the validity of an entire database (Command line):


1 Connect to the database as a user with DBA authority.
2 Run the dbvalid command-line utility:
dbvalid -c "connection_string"


$ For more information, see "The Validation utility" on page 148 of the
book ASA Reference.
Notes If you are checking the validity of a backup copy, you should run the
database in read-only mode so that it is not modified in any way. You can
only do this when there were no transactions in progress during the backup.
$ For information on running databases in read-only mode, see "–r
command-line option" on page 33 of the book ASA Reference.

Validating a single table


You can check the validity of a single table either from Sybase Central or
using a SQL statement. You must have DBA authority or be the owner of a
table to check its validity.

v To check the validity of a table (Sybase Central):


1 Open the Tables folder.
2 Right-click the table and choose Validate from the popup menu.
A message box indicates whether the table is valid or not.

v To check the validity of a table (SQL):


♦ Execute the VALIDATE TABLE statement:
VALIDATE TABLE table_name

Notes If you do have errors reported, you can drop all of the indexes and keys on a
table and recreate them. Any foreign keys to the table will also need to be
recreated. Another solution to errors reported by VALIDATE TABLE is to
unload and reload your entire database. You should use the -u option of
dbunload so that it does not try to use a possibly corrupt index to order the
data.
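For example (a sketch; the connection string and unload directory are
placeholders):

```
dbunload -u -c "connection_string" c:\unload
```

The -u option unloads the data unordered, avoiding any use of a possibly
corrupt index.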

Validating a transaction log


The process for validating a transaction log file depends on whether it is in
use on a production database (online) or is not in use (offline, or a backup
copy).


v To validate an online transaction log:


♦ Run your database with a mirrored transaction log. The database server
automatically validates the transaction log each time the database is
started.

v To validate an offline or backed up transaction log:


♦ Run the Log Translation utility (dbtran) against the log file. If the Log
Translation utility can successfully read the log file, it is valid.
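For example, using the offline log naming convention described earlier (a
sketch; the file names are illustrative):

```
dbtran 00121000.log 00121000.sql
```

If dbtran can read the log and produce the SQL translation file, the log file
is valid.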

Making a backup, continuing to use the original transaction log


This task describes the simplest kind of backup, which leaves the transaction
log untouched.
$ For information on when to use this type of backup, see "A backup
scheme for when disk space is plentiful" on page 656.

v To make a backup, continuing to use the original transaction log (Sybase Central):
1 Start Sybase Central. Connect to the database as a user with DBA
authority.
2 Right-click the database and choose Backup from the popup menu. The
Backup wizard appears.
3 On the first page, choose Image Backup.
4 On the next page, enter the name of a directory to hold the backup
copies, and choose whether to carry out a complete backup (all database
files) or an incremental backup (transaction log file only).
5 On the next page, choose the option Continue to use the original
transaction log.
6 On the final page, click Finish to start the backup.
The procedure describes a client-side backup. There are more options
available for this kind of backup.
If you choose a server-side backup, and the server is running on a different
machine from Sybase Central, you cannot use the Browse button to locate a
directory in which to place the backups. The Browse button browses the
client machine, while the backup directory is relative to the server.


v To make a backup, continuing to use the original transaction log (SQL):
♦ If you are using the BACKUP statement, use the following clauses only:
BACKUP DATABASE
DIRECTORY directory_name
[ TRANSACTION LOG ONLY ]
Include the TRANSACTION LOG ONLY clause if you are making an
incremental backup.
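For example, the following statement (the directory name is chosen for illustration) makes a full backup to c:\backup, leaving the online transaction log untouched:

```sql
BACKUP DATABASE
DIRECTORY 'c:\\backup'
```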

v To make a backup, continuing to use the original transaction log (Command line):
♦ If you are using the dbbackup utility, use the following syntax:
dbbackup -c "connection_string" [ -t ] backup_directory

Include the -t option only if you are making an incremental backup.
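For example, the following command (using connection parameters for the sample database) makes an incremental backup to the directory c:\backup:

```
dbbackup -c "uid=DBA;pwd=SQL;dbn=asademo" -t c:\backup
```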

Making a backup, deleting the original transaction log


If your database is not involved in replication, and if you have limited disk
space on your online machine, you can delete the contents of the online
transaction log (truncate the log) when you make a backup. In this case, you
need to use every backup copy made since the last full backup during
recovery from media failure on the database file.
$ For information on when to use this type of backup, see "A backup
scheme for databases not involved in replication" on page 657.

v To make a backup, deleting the transaction log (Sybase Central):


1 Start Sybase Central. Connect to the database as a user with DBA
authority.
2 Right-click the database and choose Backup from the popup menu. The
Backup wizard appears.
3 On the first page, choose Image Backup.
4 On the next page, enter the name of a directory to hold the backup
copies, and choose whether to carry out a complete backup (all database
files) or an incremental backup (transaction log file only).
5 On the next page, choose the option Truncate the original log.
6 On the final page, click Finish to start the backup.


v To make a backup, deleting the transaction log (SQL):


♦ Use the BACKUP statement with the following clauses:
BACKUP DATABASE
DIRECTORY backup_directory
[ TRANSACTION LOG ONLY ]
TRANSACTION LOG TRUNCATE
Include the TRANSACTION LOG ONLY clause only if you are making
an incremental backup.
The backup copies of the transaction log and database file are placed in
backup_directory. If you enter a path, it is relative to the working
directory of the database server, not your client application.
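For example, the following statement (the directory name is chosen for illustration) makes a full backup to c:\backup and truncates the online transaction log:

```sql
BACKUP DATABASE
DIRECTORY 'c:\\backup'
TRANSACTION LOG TRUNCATE
```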

v To make a backup, deleting the transaction log (Command line):


♦ From a system command prompt, enter the following command:
dbbackup -c "connection_string" -x [ -t ] backup_directory

Include the -t option only if you are making an incremental backup.


The backup copies of the transaction log and database file are placed in
backup_directory. If you enter a path, it is relative to the directory from
which you run the command.
Notes Before the online transaction log can be erased, all outstanding transactions
must terminate. If there are outstanding transactions, your backup cannot
complete. For information, see "Backup internals" on page 667.
You can determine which connection has an outstanding transaction using
the sa_conn_info system procedure. If necessary, you can disconnect the user
with a DROP CONNECTION statement.
$ For more information, see "Determining which connection has an
outstanding transaction" on page 682.

Making a backup, renaming the original transaction log


This set of backup options is typically used for databases involved in
replication. In addition to making backup copies of the database file and
transaction log, the transaction log at backup time is renamed to an offline
log, and a new transaction log is started, with the same name as the log in use
at backup time.
$ For information on when to use this set of backup options, see "A
backup scheme for databases involved in replication" on page 658.


v To make a backup, renaming the transaction log (Sybase Central):


1 Start Sybase Central. Connect to the database as a user with DBA
authority.
2 Right-click the database and choose Backup from the popup menu. The
Backup wizard appears.
3 On the first page, choose Image Backup.
4 On the next page, enter the name of a directory to hold the backup
copies, and choose whether to carry out a complete backup (all database
files) or an incremental backup (transaction log file only).
5 On the next page, choose the option Rename the transaction log.
6 On the final page, click Finish to start the backup.

v To make a backup, renaming the transaction log (SQL):


♦ Use the BACKUP statement, with the following clauses:
BACKUP DATABASE
DIRECTORY backup_directory
[ TRANSACTION LOG ONLY ]
TRANSACTION LOG RENAME
Include the TRANSACTION LOG ONLY clause only if you are making
an incremental backup.
The backup copies of the transaction log and database file are placed in
backup_directory. If you enter a path, it is relative to the working
directory of the database server, not your client application.

v To make a backup, renaming the transaction log (Command line):


♦ From a system command prompt, enter the following command. You
must enter the command on a single line:
dbbackup -c "connection_string" -r [ -t ] backup_directory

Include the -t option only if you are making an incremental backup.


The backup copies of the transaction log and database file are placed in
backup_directory. If you enter a path, it is relative to the directory from
which you run the command.
Notes Before the online transaction log can be renamed, all outstanding
transactions must terminate. If there are outstanding transactions, your
backup will not complete. For information, see "Backup internals" on
page 667.


You can determine which connection has an outstanding transaction using
the sa_conn_info system procedure. If necessary, you can disconnect the user
with a DROP CONNECTION statement.
$ For more information, see "Determining which connection has an
outstanding transaction" on page 682.

Determining which connection has an outstanding transaction


If you are carrying out a backup that renames or deletes the transaction log,
and there are outstanding transactions, the backup must wait until those
transactions are complete before it can finish.

v To determine which connection has an outstanding transaction:


1 Connect to the database from Interactive SQL or another application that
can call stored procedures.
2 Execute the sa_conn_info system procedure:
call sa_conn_info
3 Inspect the UncmtOps column to see which connection has
uncommitted operations.
$ For more information, see "sa_conn_info system procedure" on
page 964 of the book ASA Reference.
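For example, if the sa_conn_info output showed uncommitted operations on connection number 3 (a hypothetical value), you could disconnect that connection as follows:

```sql
DROP CONNECTION 3
```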
If there are not too many connections, you can also use the Console utility to
determine which connection has outstanding transactions.

v To determine which connection has an outstanding transaction (Console utility) (NetWare):
1 Connect to the database from the Console utility. For example, the
following command connects to the default database using user ID DBA
and password SQL:
LOAD dbconsol -c "uid=DBA;pwd=SQL"
Note: This utility is only available for NetWare and Unix operating
systems. On UNIX, the utility is named dbconsole.
$ For more information, see "The Console utility" on page 87 of the
book ASA Reference.
2 Double-click each connection, and inspect the Uncommitted Ops entry
to see which users have uncommitted operations. If necessary, you can
disconnect the user to enable the backup to finish.


Renaming the backup copy of the transaction log during backup


By default, the backup copy of the transaction log file has the same name as
the online file. For each backup operation, you must assign a different name
or location for the backup copy, or you must move the backup copy before
the next backup is done.
You can make a repeatable incremental backup command by renaming the
backup copy of the transaction log.

v To rename the backup copy of the transaction log (SQL):


♦ Use the MATCH keyword in the BACKUP statement. For example, the
following statement makes an incremental backup of the asademo
database to the directory c:\backup. The backup copy of the transaction
log is called YYMMDDnn.log, where YYMMDD is the date and nn is a
counter, starting from 00.
BACKUP DATABASE
DIRECTORY 'c:\\backup'
TRANSACTION LOG ONLY
TRANSACTION LOG RENAME MATCH

v To rename the backup copy of the transaction log (Command line):


♦ Supply the -n command-line switch to dbbackup. For example, the
following command makes an incremental backup of the sample
database, renaming the backup copy of the transaction log.
dbbackup -c "uid=DBA;pwd=SQL;dbn=asademo" -r -t -n c:\backup

Notes The backup copy of the transaction log is named YYMMDDnn.log, where YY
is the year, MM is the month, DD is the day of the month, and nn increments
if there is more than one backup per day. There is no Year 2000 issue with
the two-digit year in the YYMMDDnn.log filenames: the names are used only
to distinguish files, not to order them.

Backing up a database directly to tape


An archive backup makes a copy of the database file and transaction log file
in a single archive destination. Only server-side, full backups can be made in
this manner.
$ For more information, see "Types of backup" on page 654.


v To make an archive backup to tape (Sybase Central):


1 Start Sybase Central. Connect to the database as a user with DBA
authority.
2 Right-click the database and choose Backup from the popup menu. The
Backup wizard appears.
3 On the first page of the wizard, select Archive backup.
4 Follow the instructions in the wizard to start the backup.

v To make an archive backup to tape (SQL):


♦ Use the BACKUP statement, with the following clauses:
BACKUP DATABASE
TO archive_root
[ ATTENDED { ON | OFF } ]
[ WITH COMMENT comment_string ]
If you set the ATTENDED option to OFF, the backup fails if it runs out
of tape or disk space. If ATTENDED is set to ON, you are prompted to
take an action, such as replacing the tape, when there is no more space
on the backup archive device.
Notes The BACKUP statement makes an entry in the text file backup.syb, in the
same directory as the server executable.
$ For information on restoring from an archive backup, see "Restoring an
archive backup" on page 688.
Example The following statement makes a backup to the first tape drive on a Windows
NT machine:
BACKUP DATABASE
TO '\\\\.\\tape0'
ATTENDED OFF
WITH COMMENT 'May 6 backup'
The first tape drive on Windows NT is \\.\tape0. As the backslash is an
escape character in SQL strings, each backslash is preceded by another.
The following statement makes an archive file on disk named
c:\backup\archive.1.
BACKUP DATABASE
TO 'c:\\backup\\archive'


Making a live backup


You carry out a live backup of the transaction log by using the dbbackup
command line utility with the -l command-line option.
$ For information on live backups, see "Protecting against total machine
failure" on page 664.

v To make a live backup:


1 Set up a secondary machine from which you can run the database if the
online machine fails. For example, ensure that you have Adaptive Server
Anywhere installed on the secondary machine.
2 Periodically, carry out a full backup to the secondary machine.
3 Run a live backup of the transaction log to the secondary machine.
dbbackup -l path\filename.log -c "connection_string"
You should normally run the dbbackup utility from the secondary
machine.
If the primary machine becomes unusable, you can restart your database
using the secondary machine. The database file and the transaction log hold
the information needed to restart.
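For example, the following command (the path and server name are illustrative) runs a live backup of the sample database's transaction log to a directory on the secondary machine:

```
dbbackup -l e:\livebackup\asademo.log -c "uid=DBA;pwd=SQL;eng=myserver"
```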

Recovering from media failure on the database file


The recovery process depends on whether your backup process leaves the
transaction log untouched on incremental backup. If your backup
operation deletes or renames the transaction log, you may have to apply
changes from several transaction logs. If your backup operation leaves the
transaction log untouched, you need only the online transaction log for
recovery.
$ For information on the backup types discussed here, see "Designing
backup procedures" on page 654.

v To recover from media failure on the database file:


1 Make an extra backup copy of the current transaction log. The database
file is gone, and the only record of changes since the last backup is in the
transaction log.
2 Create a recovery directory to hold the files you use during recovery.
3 Copy the database file from the last full backup to the recovery
directory.


4 Apply the transactions held in the backed up transaction logs to the
recovery database.
For each log file, in chronological order,
♦ Copy the log file into the recovery directory.
♦ Start the database server with the apply transaction log (-a) switch,
to apply the transaction log:
dbeng7 db_name.db -a log_name.log
The database server shuts down automatically once the transactions
have been applied.
5 Copy the online transaction log into the recovery directory. Apply the
transactions from the online transaction log to the recovery database.
dbeng7 db_name.db -a db_name.log
6 Perform validity checks on the recovery database.
7 Make a post-recovery backup.
8 Move the database file to the production directory.
9 Allow user access to the production database.
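As an illustration, a recovery that applies two renamed logs and then the online log (all file names are hypothetical) might use the following sequence of commands, run from the recovery directory:

```
dbeng7 asademo.db -a 00110100.log
dbeng7 asademo.db -a 00110400.log
dbeng7 asademo.db -a asademo.log
```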

Recovering from media failure on an unmirrored transaction log


If your database is a primary site in a Replication Server installation, or a
consolidated database in a SQL Remote installation, you should use a
mirrored transaction log, or hardware equivalent.
$ For more information, see "Protecting against media failure on the
transaction log" on page 663.

v To recover from media failure on an unmirrored transaction log (partial recovery):
1 Make an extra backup copy of the database file immediately. With the
transaction log gone, the only record of the changes between the last
backup and the most recent checkpoint is in the database file.
2 Delete or rename the transaction log file.
3 Restart the database with the -f switch.
dbeng7 asademo.db -f


Caution
This command should only be used when the database is not
participating in a SQL Remote or Replication Server replication
system. If your database is a consolidated database in a SQL
Remote replication system, you may have to re-extract the remote
databases.

Without the -f switch, the server reports the lack of a transaction log as
an error. With the switch, the server restores the database to the most
recent checkpoint and then rolls back any transactions that were not
committed at the time of the checkpoint. A new transaction log is then
created.

Recovering from media failure on a mirrored transaction log

v To recover from media failure on a mirrored transaction log:


1 Make an extra copy of the backup of your database file taken at the time
the transaction log was started.
2 Identify which of the two files is corrupt. Run the Log Translation utility
on the transaction log and on its mirror, to see which one generates an
error message. The log translation utility is accessible from Sybase
Central or as the dbtran command-line utility.
The following command translates a transaction log named
asademo.log, placing the translated output into asademo.sql:
dbtran asademo.log
The translation utility properly translates the intact file, and reports an
error while translating the corrupt file.
3 Copy the correct file over the corrupt file so that you have two identical
files again.
4 Restart the server.

Recovering from a live backup


A live backup is made to a separate machine from the primary machine that
runs your production database. To restart a database from a live
backup, you must have Adaptive Server Anywhere installed on the
secondary machine.


$ For information on live backups, see "Protecting against total machine
failure" on page 664.

v To restart a database using a live backup:


1 Start the database server on the secondary machine with the apply
transaction log (-a) switch, to apply the transaction log and bring the
database up to date:
dbeng7 asademo.db -a filename.log
The database server shuts down automatically once the transaction log is
applied.
2 Start the database server in the normal way, allowing user access. Any
new activity is appended to the current transaction log.

Restoring an archive backup


If you use an archive backup (typically to tape), you use the RESTORE
statement to recover your data.
$ For information on making archive backups, see "Backing up a
database directly to tape" on page 683.

v To restore a database from an archive backup (Sybase Central):


1 In Sybase Central, connect to a database with DBA authority.
2 Open the Utilities folder (located within the server folder).
3 In the right pane, double-click Restore Database.
4 Follow the instructions of the wizard.

v To restore a database from an archive backup (Interactive SQL):


1 Start a personal database server. Use a command such as the following,
which starts a server named restore:
dbeng7 -n restore
2 Start Interactive SQL. On the Identification tab of the Connect dialog,
enter a user ID of DBA and a password of SQL. Leave all other fields on
this tab blank.
3 Click the Database tab and enter a database name of utility_db. Leave
all other fields on this tab blank.
4 Click OK to connect.


5 Execute the RESTORE statement, specifying the archive root. At this
time, you can choose to restore an archived database to its original
location (default), or to a different machine with different device names
using the RENAME clause.
$ For more information, see "RESTORE statement" on page 591 of the
book ASA Reference.
Example The following statement restores a database from a tape archive to the
database file c:\newdb\newdb.db.
RESTORE DATABASE 'c:\\newdb\\newdb.db'
FROM '\\\\.\\tape0'
The following statement restores a database from an archive backup in file
c:\backup\archive.1 to the database file c:\newdb\newdb.db. The transaction
log name and location are specified in the database.
RESTORE DATABASE 'c:\\newdb\\newdb.db'
FROM 'c:\\backup\\archive'
$ For more information, see "RESTORE statement" on page 591 of the
book ASA Reference.

Recovering uncommitted operations


When recovering from media failure on the database file, the transaction log
is intact. Recovery reapplies all committed transactions to the database. In
some circumstances, you may wish to find information about transactions
that were incomplete at the time of the failure.

v To recover uncommitted operations from a transaction log (Sybase Central):
1 Do one of the following:
♦ If you are connected to a database, open the server for that database
and then open the Utilities folder. In the right pane, double-click
Translate Log.
♦ If you are not connected to a database, click Tools➤Adaptive
Server Anywhere➤Translate Log.
2 Follow the instructions in the wizard.
3 Edit the translated log (SQL command file) in a text editor and identify
the instructions you need.


v To recover uncommitted operations from a transaction log (Command line):
1 Run dbtran to convert the transaction log into a SQL command file,
using the -a command-line switch to include uncommitted transactions.
For example, the following command uses dbtran to convert a
transaction log:
dbtran -a sample.log changes.sql
2 Edit the translated log (SQL command file) in a text editor and identify
the instructions you need.
$ For more information on the log translation utility, see "The Log
Translation utility" on page 117 of the book ASA Reference.
Note The transaction log may or may not contain changes right up to the point
where a failure occurred. It does contain any changes made before the end of
the most recently committed transaction that made changes to the database.

Changing the location of a transaction log


The database must not be running when you change the location of a
transaction log.
$ For information on how to choose the location of a transaction log, see
"Protecting against media failure on the database file" on page 663.

v To change the location of a transaction log (Sybase Central):


1 Do one of the following:
♦ If you are already connected to the database associated with the log
file, open the server for that database and then open the Utilities
folder. In the right pane, double-click Change Log File Information.
♦ If you are not connected to the database, click Tools➤Adaptive
Server Anywhere➤Change Log File Information.
2 Follow the instructions in the wizard.

v To change the location of a transaction log (Command line):
1 Ensure the database is not running.
2 Enter the following command at a system command prompt:
dblog -t new-log-file database-file
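For example, the following command (the paths are illustrative) moves the transaction log for a database to a different drive:

```
dblog -t e:\newlogdir\asademo.log c:\db_dir\asademo.db
```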


$ For a full description of dblog command-line options, see "Transaction
log utility options" on page 134 of the book ASA Reference.

Creating a database with a transaction log mirror


You can choose to maintain a transaction log mirror when you create a
database. This option is available either from the CREATE DATABASE
statement, from Sybase Central or from the dbinit command-line utility.
$ For information on why you may wish to use a transaction log mirror,
see "Protecting against media failure on the transaction log" on page 663.

v To create a database that uses a transaction log mirror (Sybase Central):
1 Do one of the following:
♦ If you are connected to a database, open the server for that database
and then open the Utilities folder. In the right pane, double-click
Create Database.
♦ If you are not connected to a database, click Tools➤Adaptive
Server Anywhere➤Create Database.
2 Follow the instructions in the wizard.

v To create a database that uses a transaction log mirror (SQL):


♦ Use the CREATE DATABASE statement, with the TRANSACTION
LOG clause.
$ For more information, see "CREATE DATABASE statement" on
page 427 of the book ASA Reference.
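For example, a statement of the following form (the file names match the dbinit example in this section and are illustrative) creates a database with its transaction log and mirror on separate devices:

```sql
CREATE DATABASE 'c:\\db_dir\\company.db'
TRANSACTION LOG ON 'd:\\log_dir\\company.log'
MIRROR 'e:\\mirr_dir\\company.mlg'
```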

v To create a database that uses a transaction log mirror (Command line):
♦ Use the dbinit command with the -m option. For example, the following
command (which should be entered on one line) initializes a database
named company.db, with a transaction log kept on a different device and
a mirror on a third device.
dbinit -t d:\log_dir\company.log -m
e:\mirr_dir\company.mlg c:\db_dir\company.db

$ For a full description of initialization options, see "Initialization utility
options" on page 99 of the book ASA Reference.


Starting a transaction log mirror for an existing database


You can choose to maintain a transaction log mirror for an existing database
any time the database is not running, by using the transaction log utility. This
option is available from either Sybase Central or the dblog command-line
utility.
$ For information on why you may wish to use a transaction log mirror,
see "Protecting against media failure on the transaction log" on page 663.

v To start a transaction log mirror for an existing database (Sybase Central):
1 Do one of the following:
♦ If you are already connected to the database associated with the log
mirror, open the server for that database and then open the Utilities
folder. In the right pane, double-click Change Log File Information.
♦ If you are not connected to the database, click Tools➤Adaptive
Server Anywhere➤Change Log File Information.
2 Follow the instructions in the wizard.

v To start a transaction log mirror for an existing database (Command line):
1 Ensure the database is not running.
2 Enter the following command at a system command prompt:
dblog -m mirror-file database-file
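For example, the following command (the mirror file name is illustrative) starts a mirror for a database:

```
dblog -m e:\mirr_dir\asademo.mlg c:\db_dir\asademo.db
```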
You can also use the dblog utility and Sybase Central to stop a database from
using a transaction log mirror.
$ For a full description of dblog command-line options, see "Transaction
log utility options" on page 134 of the book ASA Reference.

CHAPTER 22

Importing and Exporting Data

About this chapter Transferring large amounts of data into and from your database may be
necessary in several situations. For example,
♦ Importing an initial set of data into a new database
♦ Exporting data from your database for use with other applications, such
as spreadsheets
♦ Building new copies of a database, perhaps with a modified structure
♦ Creating extractions of a database for replication or synchronization
This chapter describes how to import data to and export data from databases,
both in text form and in other formats.
Contents
Topic Page
Introduction to import and export 694
Understanding importing and exporting 696
Designing import procedures 701
Designing export procedures 705
Designing rebuild and extract procedures 709
Import and export internals 713
Import tasks 715
Export tasks 721
Rebuild tasks 729
Extract Tasks 734


Introduction to import and export


Copying data into and out of your database can be a useful and timesaving
feature.
Importing involves taking data from another program or file and depositing
it into a database. Although inputting data into a database can be as simple as
regularly inserting new rows into a table, importing is generally considered
an administrative action that happens less frequently and involves moving
larger amounts of data. When you import entire databases into Adaptive
Server Anywhere, it is called loading.
Exporting involves copying data from your database and placing it into
another program or file somewhere else. Although retrieving data from a
database can be as simple as querying with a SELECT statement, exporting
is generally considered an administrative action that happens less frequently
and involves copying larger amounts of data from your database. When you
export entire databases from Adaptive Server Anywhere, it is called
unloading.
Together, unloading and reloading a database is called rebuilding.
This chapter describes the Adaptive Server Anywhere tools and utilities that
help you achieve your importing and exporting goals, including SQL,
Interactive SQL, the dbunload utility and the Sybase Central wizard.
Chapter contents This chapter contains the following material:
♦ An introduction to import and export (this section).
♦ Concepts and background information
♦ Information to help you decide when to import, export, and rebuild your
database
♦ Tips for how to improve performance when importing and exporting.
♦ Step by step instructions for how to import data, export data, and rebuild
your databases using the SQL statements UNLOAD, LOAD TABLE
and UNLOAD TABLE, Sybase Central, dbunload, and Interactive SQL.
Questions and answers

To answer the question…                              Consider reading…
What is importing?                                   "Importing/Exporting" on page 696
What is exporting?                                   "Importing/Exporting" on page 696
What is loading?                                     "Loading/Unloading" on page 696
What is unloading?                                   "Loading/Unloading" on page 696
What is rebuilding?                                  "Rebuilding a database" on page 697
Does replicating affect importing and exporting?     "Rebuilding a database involved in replication" on page 697
What tools can I use to import and export?           "Tools" on page 697
What is a temporary table?                           "Tools" on page 697
What data file formats does ASA support for importing and exporting?   "Data formats" on page 699
What issues affect importing?                        "What import issues should I consider?" on page 701
How do I know which tool to use to import?           "What import tools are available?" on page 701
What can I do with the data I import?                "What is the scope of data I want to import?" on page 703
What if the table structures don’t match?            "What if the table structures don’t match?" on page 703
What issues affect exporting?                        "What export issues should I consider?" on page 705
How do I know which tool to use to export?           "What export tools are available?" on page 705
What kind of data can I export?                      "What is the scope of the data I want to export?" on page 707
How do I deal with null values in my output?         "Choosing how to output NULLs" on page 707
What are rebuild and extract procedures?             "What is rebuilding?" on page 709
What issues affect rebuilding?                       "What rebuild and extract issues should I consider?" on page 709
What’s the difference between rebuilding and importing/exporting?   "Introduction to import and export" on page 694
What tools can I use to rebuild a database?          "What rebuild tools are available?" on page 710
How can I get the most out of importing and exporting?   "Import and export internals" on page 713


Understanding importing and exporting


Understanding the terminology and your options is important in deciding
which tools to use to accomplish your importing and exporting goals.

Importing/Exporting
Importing and exporting are administrative tasks that involve reading data
into your database, or writing data out of your database. This data may be
coming from or destined for database systems or programs other than
Adaptive Server Anywhere.
You can import individual tables or portions of tables, from other database
file formats, or from ASCII files. Depending on the format of the data you
are inserting, there is some flexibility as to whether you create the table
before the import, or during the import. You may find importing a useful tool
if you need to add large amounts of data to your database at one time.
You can export individual tables and query results in ASCII format, or in a
variety of formats supported by other database programs. You may find
exporting a useful tool if you need to share large portions of your database,
or extract portions of your database according to particular criteria.
Although Adaptive Server Anywhere import and export procedures work on
one table at a time, you can create scripts that effectively automate the
import or export procedure, allowing you to import or export data for a
number of tables consecutively.

Loading/Unloading
Loading and unloading are very similar to importing and exporting in that
they involve copying data into and out of your database. They are different,
however, in that loading and unloading usually involve importing or
exporting the entire database, and are usually intended for reuse within an
Adaptive Server Anywhere database.
Loading and unloading are most useful for improving performance,
reclaiming fragmented space, or upgrading your database to a newer version
of Adaptive Server Anywhere.


Rebuilding a database

Rebuilding a database is a specific type of import and export involving
unloading and reloading your entire database. Rebuilding your database
takes all the information out of your database and puts it back in, in a
uniform fashion, thus filling space and improving performance much like
defragmenting your disk drive.
Rebuilding is different from exporting in that rebuilding exports and imports
table definitions and schema in addition to the data. The unload portion of
the rebuild process produces ASCII format data files and a reload.sql file
which contains table and other definitions. Running the reload.sql script
recreates the tables and loads the data into them.
You can carry out this operation from Sybase Central or using the dbunload
command-line utility.
Consider rebuilding your database if you want to upgrade your database,
reclaim disk space or improve performance. You might consider extracting
a database (creating a new database from an old database) if you are using
SQL Remote or MobiLink.

Rebuilding a database involved in replication


If a database is participating in replication, particular care needs to be taken
if you wish to rebuild the database.
Replication is based on the offsets in the transaction log. When you rebuild a
database, the offsets in the old transaction log are different from the offsets in
the new log, making the old log unavailable. For this reason, good backup
practices are especially important when participating in replication.
There are two ways of rebuilding a database involved in replication. The first
method uses the dbunload utility -ar option to make the unload and reload
occur in a way that does not interfere with replication. The second method is
a manual method of accomplishing the same task.

Tools
Import Tools The following tools are available for importing:
♦ Interactive SQL import wizard
♦ INPUT statements


♦ LOAD TABLE statements


♦ INSERT statement
♦ Sybase Central
Export Tools The following tools are available for exporting:
♦ Interactive SQL export wizard
♦ OUTPUT statements
♦ UNLOAD TABLE statement
♦ UNLOAD statement
♦ SELECT statement with output redirection
♦ dbunload command-line utility
♦ Sybase Central

Temporary Tables
Temporary tables, whether local or global, serve the same purpose:
temporary storage of data. The difference between the two, and the
advantage of each, lies in how long each table exists.
A local temporary table exists only for the duration of a connection or, if
defined inside a compound statement, for the duration of the compound
statement. It is useful when you need to load a set of data only once.
The definition of the global temporary table remains in the database
permanently, but the rows exist only within a given connection. When you
close the database connection, the data in the global temporary table
disappears. However, the table definition remains with the database for you
to access when you open your database next time. Global temporary tables
are useful when you need to load a set of data repeatedly, or when you need
to merge tables with different structures.
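The two kinds of temporary table can be sketched as follows; the table and column names here are illustrative only, not objects from this guide:

```sql
-- Global: the definition persists in the database, but rows are
-- private to each connection and vanish when the connection closes.
CREATE GLOBAL TEMPORARY TABLE staging_employee (
    emp_id   INTEGER NOT NULL PRIMARY KEY,
    emp_name VARCHAR(40)
) ON COMMIT PRESERVE ROWS;

-- Local: both the definition and the rows last only for the current
-- connection (or the enclosing compound statement).
DECLARE LOCAL TEMPORARY TABLE load_once (
    emp_id   INTEGER,
    emp_name VARCHAR(40)
) ON COMMIT PRESERVE ROWS;
```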


Internal/external
The Interactive SQL INPUT and OUTPUT commands are external to the
database (client-side). If ISQL is being run on a different machine than the
database server, paths to files being read or written are relative to the client.
An INPUT is recorded in the transaction log as a separate INSERT statement
for each row read. As a result, INPUT is considerably slower than LOAD
TABLE . This also means that ON INSERT triggers will fire during an
INPUT. Missing values are inserted as NULL on nullable columns, as 0
(zero) on non-nullable numeric columns, and as an empty string on
non-nullable non-numeric columns.
compatibility is an issue since it can write out the result set of a SELECT
statement to any one of a number of file formats.
The LOAD TABLE, UNLOAD TABLE and UNLOAD statements, on the
other hand, are internal to the database (server-side). Paths to files being
written or read are relative to the database server. Only the command travels
to the database server, where all processing happens. A LOAD TABLE
statement is recorded in the transaction log as a single command. The data
file must contain the same number of columns as the table to be loaded.
Missing values on columns with a default value will be inserted as NULL,
zero or an empty string if the DEFAULTS option is set to OFF (default), or
as the default value if the DEFAULTS value is set to ON. Internal importing
and exporting only provides access to text and BCP formats, but it is a faster
method.
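The client-side/server-side distinction can be sketched as follows; the table and file names are hypothetical:

```sql
-- Client-side: the path is relative to the machine running
-- Interactive SQL; each row is logged as a separate INSERT,
-- and ON INSERT triggers fire.
INPUT INTO employee FROM 'c:\clientdata\employee.txt' FORMAT ASCII;

-- Server-side: the path is relative to the database server;
-- the statement is logged as a single command, so it is faster.
LOAD TABLE employee FROM 'c:\serverdata\employee.txt';
```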

Data formats
Interactive SQL supports the following import and export file formats:

File Format      Description                                         Available for   Available for
                                                                     Importing       Exporting
ASCII            A text file, one row per line, with values          Yes             Yes
                 separated by a delimiter. String values
                 optionally appear enclosed in apostrophes
                 (single quotes). This is the same as the
                 format used by LOAD TABLE and UNLOAD TABLE.
DBASEII          DBASE II format                                     Yes             Yes
DBASEIII         DBASE III format                                    Yes             Yes
Excel 2.1        Excel format 2.1                                    Yes             Yes
FIXED            Data records appear in fixed format, with the       Yes             Yes
                 width of each column either the same as defined
                 by the column's type or specified as a parameter.
FOXPRO           FoxPro format                                       Yes             Yes
HTML             HTML (Hypertext Markup Language) format             No              Yes
LOTUS            Lotus workspace format                              Yes             Yes
SQL              The SQL statement format. This format can be        Using the       Yes
Statements       used as an argument in a READ statement.            READ
                                                                     statement only

Adaptive Server Enterprise compatibility


You can import and export files between Adaptive Server Anywhere and
Adaptive Server Enterprise using the BCP FORMAT clause. Simply make
sure the BCP output is in delimited ASCII format. If you are exporting
BLOB data from Adaptive Server Anywhere for use in Adaptive Server
Enterprise, use the BCP format clause with the UNLOAD TABLE statement.
For more information about BCP and the FORMAT clause, see "LOAD
TABLE statement" on page 560 of the book ASA Reference or "UNLOAD
TABLE statement" on page 635 of the book ASA Reference.
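A sketch of such an export, with hypothetical table and file names; see the UNLOAD TABLE statement reference for the exact clause syntax:

```sql
-- FORMAT BCP writes delimited ASCII suitable for loading into
-- Adaptive Server Enterprise with bcp.
UNLOAD TABLE employee TO 'c:\serverdata\employee.bcp' FORMAT BCP;
```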


Designing import procedures


Before you begin importing data into your database, take time to consider
what resources you have and exactly what you want to accomplish by
importing data into your database.
For a discussion about loading databases, see "Designing rebuild and extract
procedures" on page 709.

What import issues should I consider?


Some basic issues to consider when importing include:
♦ tools and performance
♦ scope of data you want to import
♦ whether the structure of your data matches that of the destination table

What import tools are available?


There are a variety of tools available to help you import your data.
Interactive SQL import wizard You can access the import wizard by choosing
Data➤Import from the Interactive SQL menu. The wizard provides an
interface to allow you to
choose a file to import, a file format, and a destination table to place the data
in. You can choose to import this data into an existing table, or you can use
the wizard to create and configure a completely new table.
Choose the Interactive SQL import wizard when you prefer using a graphical
interface to import data in a format other than text, or when you want to
create a table at the same time you import the data.
INPUT statement You execute the INPUT statement from the SQL Statements pane of the
Interactive SQL window. The INPUT statement allows you to import data in
a variety of file formats into one or more tables. You can choose a default
input format, or you can specify the file format on each INPUT statement.
Interactive SQL can execute a command file containing multiple INPUT
statements.
If a data file is in DBASE, DBASEII, DBASEIII, FOXPRO, or LOTUS
format and the table does not exist, it will be created. There are performance
impacts associated with importing large amounts of data with the INPUT
statement, since the INPUT statement writes everything to the transaction
log.


Choose the Interactive SQL INPUT statement when you want to import data
into one or more tables, when you want to automate the import process using
a command file, or when you want to import data in a format other than text.
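For example, assuming a hypothetical data file:

```sql
-- Import a dBASE III file; the table is created if it does not exist.
INPUT INTO sales_history FROM 'c:\clientdata\sales.dbf' FORMAT DBASEIII;
```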
LOAD TABLE statement You execute the LOAD TABLE statement from the
SQL Statements pane of the Interactive SQL window. It allows you to import
data only, into a table, in an efficient manner, in text/ASCII/FIXED formats.
The table must exist
and have the same number of columns as the input file has fields, defined on
compatible data types. The LOAD TABLE statement imports with one row
per line, and values separated by a delimiter.
To use the LOAD TABLE statement, the user must have ALTER permission
on the table. For more information about controlling who can use the LOAD
TABLE statement, see "–gl command-line option" on page 27 of the book
ASA Reference.
Choose the LOAD TABLE statement when you want to import data in text
format. If you have a choice between using the INPUT statement or the
LOAD TABLE statement, choose the LOAD TABLE statement for better
performance.
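A minimal sketch, assuming a hypothetical comma-delimited file whose fields match the target table:

```sql
-- The path is relative to the database server machine.
LOAD TABLE employee FROM 'c:\serverdata\employee.txt'
    DELIMITED BY ',';
```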
INSERT statement You execute the INSERT statement from the SQL Statements pane of the
Interactive SQL window. Since you include the data you want to place in
your table directly in the INSERT statement, it is considered interactive
input. File formats are not an issue. You can also use the INSERT statement
with remote data access to import data from another database rather than a
file.
Choose the INSERT statement when you want to import small amounts of
data into a single table.
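For example, to add a single row (the table and values are illustrative only):

```sql
INSERT INTO department (dept_id, dept_name)
VALUES (601, 'Facilities');
```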
Sybase Central Sybase Central does not provide for importing data. It does provide wizards
for rebuilding (loading or unloading entire databases) or extracting
databases, which are specialized cases of importing and exporting.
Choose Sybase Central when you want to use a wizard to rebuild or extract a
database.
Proxy Tables You can import data directly from another database. Using the Adaptive
Server Anywhere remote data access feature, you can create a proxy table,
which represents a table from the remote database, and then use an INSERT
statement with a SELECT clause to insert data from the remote database into
a permanent table in your database.
$ For more information about remote data access, see "Accessing Remote
Data" on page 893.


What is the scope of data I want to import?


File formats You can import data in text, ASCII, FIXED, DBASE, DBASEII,
DBASEIII, Excel 2.1, FoxPro, LOTUS, and BCP formats. Since not all tools
support all file
formats, investigate which tool suits your purposes best before choosing.
There are trade-offs between availability of file formats, and speed of import.
What can I do with the data I import? You can insert (append) data into
tables, and you can replace data in tables. In some cases, you can also create
new tables at the same time as you import
the data. If you are trying to create a whole new database, however, consider
loading the data instead of importing it, for performance reasons.
For more information about loading data, see "Designing rebuild and extract
procedures" on page 709, or "Importing data" on page 717

What if the table structures don’t match?


The structure of the data you want to load into a table does not always match
the structure of the destination table itself, which presents problems during
importing. For example, the column data types may be different or in a
different order, or there may be extra values in the import data that do not
match columns in the destination table.
Rearranging the table or data If you know that the structure of the data you
want to import does not match the structure of the destination table, you
have several options. You can
rearrange the columns in your table using the LOAD TABLE statement; you
can rearrange the import data to fit the table using a variation of the INSERT
statement and a global temporary table; or you can use the INPUT statement
to specify a specific set or order of columns.
Allowing columns to contain NULLs
If the file you are importing contains data for a subset of the columns in a
table, or if the columns are in a different order, you can also use the LOAD
TABLE statement DEFAULTS option to fill in the blanks and merge
non-matching table structures.
If DEFAULTS is OFF, any column not present in the column list is assigned
NULL. If DEFAULTS is OFF and a non-nullable column is omitted from the
column list, the database server attempts to convert the empty string to the
column’s type. If DEFAULTS is ON and the column has a default value, that
value is used.
For more information, see "Merging different table structures" on page 719.
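A sketch of loading a subset of columns, with hypothetical table, column, and file names:

```sql
-- With DEFAULTS ON, omitted columns that have default values
-- receive those defaults instead of NULL.
LOAD TABLE employee (emp_id, emp_lname)
FROM 'c:\serverdata\partial.txt'
DEFAULTS ON;
```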


Handling conversion errors


When you load data from external sources, there may be errors in the data.
For example, there may be dates that are not valid dates and numbers that are
not valid numbers. The CONVERSION_ERROR database option allows you
to ignore conversion errors by converting them to NULL values.
$ For information on setting Interactive SQL database options, see "SET
OPTION statement" on page 612 of the book ASA Reference, or
"CONVERSION_ERROR option" on page 180 of the book ASA Reference.


Designing export procedures


Before you begin exporting data, take time to consider what resources you
have and exactly what type of information you want to export from your
database.
For a discussion about unloading databases, see "Designing rebuild and
extract procedures" on page 709.

What export issues should I consider?


Some basic issues to consider when exporting include:
♦ tools and performance issues
♦ scope of data you want to export
♦ handling null values

What export tools are available?


There are a variety of tools available to help you export your data.
Interactive SQL export wizard You can access the export wizard by choosing
Data➤Export from the Interactive SQL menu. The wizard provides an
interface to allow you to
export query results in a format other than text.
Choose the Interactive SQL export wizard when you want to export query
results in a format other than text.
OUTPUT statement You can export query results, tables, or views from your
database using the Interactive SQL OUTPUT statement. The Interactive SQL
OUTPUT
statement supports several different file formats. You can either specify the
default output format, or you can specify the file format on each OUTPUT
statement. Interactive SQL can execute a command file containing multiple
OUTPUT statements.
There are performance impacts associated with exporting large amounts of
data with the OUTPUT statement. As well, you should use the OUTPUT
statement on the same machine as the server if possible to avoid sending
large amounts of data across the network.
Choose the Interactive SQL OUTPUT statement when you want to export all
or part of a table or view in a format other than text, or when you want to
automate the export process using a command file.
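A sketch of a query followed by an OUTPUT statement, with a hypothetical file name; OUTPUT writes the result of the most recent query:

```sql
SELECT * FROM employee;
OUTPUT TO 'c:\clientdata\employee.dbf' FORMAT DBASEIII;
```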


UNLOAD TABLE statement You execute the UNLOAD TABLE statement
from the SQL Statements pane of the Interactive SQL window. It allows you
to export data only, in an efficient manner, in text/ASCII/FIXED formats.
The UNLOAD TABLE
statement exports with one row per line, and values separated by a comma
delimiter. The data exports in order by primary key values to make reloading
quicker.
To use the UNLOAD TABLE statement, the user must have ALTER or
SELECT permission on the table. For more information about controlling
who can use the UNLOAD TABLE statement, see "–gl command-line
option" on page 27 of the book ASA Reference.
Choose the UNLOAD TABLE statement when you want to export entire
tables in text format. If you have a choice between using the OUTPUT
statement, UNLOAD statement, or UNLOAD TABLE statement, choose the
UNLOAD TABLE statement for performance reasons.
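For example, with a hypothetical file name:

```sql
-- Exports the whole table, one row per line, comma-delimited,
-- ordered by primary key; the path is relative to the server.
UNLOAD TABLE department TO 'c:\serverdata\department.txt';
```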
UNLOAD statement The UNLOAD statement is similar to the OUTPUT
statement in that they both export query results to a file. As well, you
execute both statements from
the SQL Statements pane of the Interactive SQL window. The UNLOAD
statement, however, allows you to export data in a more efficient manner and
in text/ASCII/FIXED formats only. The UNLOAD statement exports with
one row per line, and values separated by a comma delimiter.
To use the UNLOAD statement, the user must have ALTER or SELECT
permission on the table. For more information about controlling who can use
the UNLOAD statement, see "–gl command-line option" on page 27 of the
book ASA Reference.
Choose the UNLOAD statement when you want to export query results if
performance is an issue, and if output in text format is acceptable. The
UNLOAD statement is also a good choice when you want to embed an
export command in an application.
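A sketch of exporting a query result, with hypothetical table, column, and file names:

```sql
UNLOAD
SELECT emp_id, emp_lname FROM employee WHERE state = 'CA'
TO 'c:\serverdata\ca_employees.txt';
```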
Dbunload utility The dbunload utility and Sybase Central are graphically
different but functionally equivalent. You can use either one
interchangeably to produce
the same results. These tools are different from Interactive SQL statements in
that they can operate on several tables at once. And in addition to exporting
table data, both tools can also export table schema.
If you want to rearrange your tables in the database, you can use dbunload to
create the necessary command files and modify them as needed. Sybase
Central provides wizards and a GUI interface for unloading one, many or all
of the tables in a database. Tables can be unloaded with structure only, data
only or both structure and data. To unload fewer than all of the tables in a
database, a connection must be established beforehand.
You can also extract one or many tables with or without command files.
These files can be used to create identical tables in different databases.


Choose Sybase Central or the dbunload utility when you want to export in
text format, when you need to process large amounts of data quickly, when
your file format requirements are flexible, or when your database needs to be
rebuilt or extracted.
For more information about exporting entire databases, rebuilding databases
or creating extractions from databases, see "Designing rebuild and extract
procedures" on page 709, or "Export tasks" on page 721.

What is the scope of the data I want to export?


File formats You can export data to files in text, ASCII, FIXED, DBASE,
DBASEII, DBASEIII, Excel 2.1, FoxPro, HTML, LOTUS, SQL Statements,
and BCP formats. Since different tools support different file formats,
investigate which
tool suits your purposes best before choosing. There are trade-offs between
availability of file formats, and speed of export.
Type of data You can export query results, table data, or table schema. If you are trying to
export a whole database, however, consider unloading the data instead of
exporting it, for performance reasons.
$ For more information about exporting query results, see "Exporting
query results" on page 725.
$ For information about unloading complete databases, see "Designing
rebuild and extract procedures" on page 709.
$ For information about additional command-line switches for the
dbunload utility, see "The dbunload command-line utility" on page 139 of
the book ASA Reference.

Choosing how to output NULLs


Most commonly, users want to extract data for use in other software
products.
Since the other software package may not understand NULL values, there
are two ways of specifying how NULL values are output. You can use either
the Interactive SQL (NULLS) option, or the IFNULL function. Both options
allow you to output a specific value in place of a NULL value.
Use the Interactive SQL (NULLS) option to set the default behavior, or to
change the output value for a particular session. Use the IFNULL function to
apply the output value to a particular instance or query.
$ For information about outputting NULL values, see "Choosing NULL
value output" on page 727


$ For information on setting Interactive SQL options, see "SET OPTION


statement" on page 612 of the book ASA Reference.


Designing rebuild and extract procedures


The rebuild (load/unload) and extract procedures are used to rebuild
databases and to create new databases from part of an old one.
For a complete discussion of extract procedures, see "The Database
Extraction utility" on page 601 of the book Replication and Synchronization
Guide or "Using the extraction utility" on page 489 of the book Replication
and Synchronization Guide.

What rebuild and extract issues should I consider?


Some basic issues to consider when rebuilding include:
♦ what is rebuilding
♦ tools and performance issues
♦ scope of the procedure

What is rebuilding?
With importing and exporting, the destination of the data is either into your
database or out of your database. Importing reads data into your database.
Exporting writes data out of your database. Often the information is either
coming from or going to another non-Adaptive Server Anywhere database.
Rebuilding, however, combines two functions: loading and unloading.
Unloading takes data and schema out of an Adaptive Server Anywhere
database, and loading places that data and schema back into an Adaptive
Server Anywhere database. The unloading procedure produces fixed format
data files and a reload.sql file, which contains the table definitions required
to recreate the tables exactly. Running the reload.sql script recreates the
tables and loads the data back into them.
Rebuilding a database can be a time consuming operation, and can require a
large amount of disk space. As well, the database is unavailable for use while
being unloaded and reloaded. For these reasons, rebuilding a database is not
advised in a production environment unless you have a definite goal in mind.


Rebuilding a database involved in replication


The procedure for rebuilding a database depends on whether the database
is involved in replication or not. If the database is involved in replication,
you must preserve the transaction log offsets across the operation, as the
Message Agent and Replication Agent require this information. If the
database is not involved in replication, the process is simpler.

What is extracting?
Extracting removes a remote Adaptive Server Anywhere database from a
consolidated Adaptive Server Enterprise or Adaptive Server Anywhere
database.
You can use the Sybase Central Extraction wizard, or the extraction utility to
extract databases. The extraction utility is the recommended way of creating
and synchronizing remote databases from a consolidated database.
$ For more information about extraction tools and how to perform
extractions, see "The Database Extraction utility" on page 601 of the book
Replication and Synchronization Guide or "Using the extraction utility" on
page 489 of the book Replication and Synchronization Guide.

What rebuild tools are available?


LOAD/UNLOAD TABLE statement You execute the UNLOAD TABLE
statement from the SQL Statements pane of the Interactive SQL window. It
allows you to export data only, in an efficient manner, in text/ASCII/FIXED
formats. The UNLOAD TABLE
statement exports with one row per line, and values separated by a comma
delimiter. The data exports in order by primary key values to make reloading
quicker.
To use the UNLOAD TABLE statement, the user must have ALTER or
SELECT permission on the table.
Choose the UNLOAD TABLE statement when you want to export data in
text format or when performance is an issue.
Dbunload/Dbisql utility and Sybase Central The dbunload/dbisql utility and
Sybase Central are graphically different but functionally equivalent. You
can use either one interchangeably to produce the same results.


You can use the Sybase Central Unload wizard or the dbunload utility to
unload an entire database in ASCII comma-delimited format and to create
the necessary Interactive SQL command files to completely recreate your
database. This may be useful for creating extractions, creating a backup of
your database, or building new copies of your database with the same or a
slightly modified structure. The dbunload utility and Sybase Central are
useful for exporting Adaptive Server Anywhere files intended for reuse
within Adaptive Server Anywhere.
Choose Sybase Central or the dbunload utility when you want to rebuild or
extract from your database, when you want to export in text format, when
you need to
process large amounts of data quickly, or when your file format requirements
are flexible.

What is the scope of rebuilding?


From one ASA database to another Rebuilding generally takes data out of an
Adaptive Server Anywhere database and then places that data back into an
Adaptive Server Anywhere database. The unloading and reloading are
closely tied together, since you usually perform both tasks, rather than just
one or the other.
Rebuilding a database You might rebuild your database if you wanted to:
♦ Upgrade your database Most new features are made available by
applying the Upgrade utility. The New Features documentation will state
if an unload and reload is required to obtain a particular feature. In
general, only features that require a change in database file format
require you to rebuild.
♦ Reclaim disk space Databases do not shrink if you delete data.
Instead, any empty pages are simply marked as free so they can be used
again. They are not removed from the database unless you rebuild it.
Rebuilding a database can reclaim disk space if you have deleted a large
amount of data from your database and do not anticipate adding more.
♦ Improve performance Rebuilding databases can improve
performance for the following reasons:
♦ If data on pages within the database is fragmented, unloading and
reloading can eliminate the fragmentation.
♦ Since the data can be unloaded and reloaded in order by primary
keys, access to related information can be faster, as related rows
may appear on the same or adjacent pages.


Upgrading a database New versions of the ASA database server can be used
without upgrading your database. If you want to use features of the new
version that require access to new system tables or database options, you
must use the Upgrade utility to upgrade your database. The Upgrade utility
does not unload or
reload any data.
If you want to use features of the new version that rely on changes in the
database file format, you must unload and reload your database. You should
back up your database after rebuilding the database.
To upgrade your database file, use the new version of Adaptive Server
Anywhere.
For more information about upgrading your database, see "The Upgrade
utility" on page 145 of the book ASA Reference.


Import and export internals


This section describes tips to improve performance when importing and
exporting.

Performance tips
Although loading large volumes of data into a database can be very time
consuming, there are a few things you can do to save time:
♦ If you use the LOAD TABLE statement, then bulk mode is not
necessary.
♦ If you are using the INPUT command, run Interactive SQL or the client
application on the same machine as the server. Loading data over the
network adds extra communication overhead. This might mean loading
new data during off hours.
♦ Place data files on a separate physical disk drive from the database. This
could avoid excessive disk head movement during the load.
♦ If you are using the INPUT command, start the server with the -b switch
for bulk operations mode. In this mode, the server does not keep a
rollback log or a transaction log, it does not perform an automatic
COMMIT before data definition commands, and it does not lock any
records.
Without a rollback log, you cannot use savepoints and aborting a
command always causes transactions to roll back. Without automatic
COMMIT, a ROLLBACK undoes everything since the last explicit
COMMIT.
Without a transaction log, there is no log of the changes. You should
back up the database before and after using bulk operations mode
because, in this mode, your database is not protected against media
failure. For more information, see "Backup and Data Recovery" on
page 645.
The server allows only one connection when you use the -b switch.
If you have data that requires many commits, running with the -b option
may slow database operation. At each COMMIT, the server carries out a
checkpoint; this frequent checkpointing can slow the server.


♦ Extend the size of the database, as described in "ALTER DBSPACE


statement" on page 385 of the book ASA Reference. This command
allows a database to be extended in large amounts before the space is
required, rather than the normal 32 pages at a time when the space is
needed. As well as improving performance for loading large amounts of
data, it also serves to keep the database more contiguous within the file
system.
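For example, to pre-extend the main dbspace before a large load (the page count is illustrative):

```sql
-- Adds 1000 pages to the SYSTEM dbspace in a single operation,
-- rather than growing 32 pages at a time during the load.
ALTER DBSPACE SYSTEM ADD 1000;
```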


Import tasks
This section collects together instructions for how to import a variety of data
types using a variety of tools. All tasks assume the database is up and
running unless otherwise specified.

Importing a database
You can use either the Interactive SQL wizard or the INPUT statement to
create a database by importing one table at a time. You can also create a
script that automates this process. However, for more efficient results,
consider reloading a database whenever possible.
$ For more information about importing a database that was previously
unloaded, see "Reloading a Database" on page 729.

Importing remote databases


You can import remote Oracle, DB2, Microsoft SQL Server, Sybase
Adaptive Server Enterprise, Sybase Adaptive Server Anywhere, and
Microsoft Access databases into Adaptive Server Anywhere using the
sa_migrate set of stored procedures.
If you do not want to modify the tables in any way, you can use the quick
method. Alternatively, if you would like to remove tables or foreign key
mappings, you can use the extended method.

v To import remote databases (single step):


1 From Interactive SQL, connect to the target database.
2 From the Interactive SQL statements pane, enter the following stored
procedure:
dbo.sa_migrate(
IN local_table_owner VARCHAR(128),
IN server_name VARCHAR(128),
IN table_name VARCHAR(128) DEFAULT NULL,
IN owner_name VARCHAR(128) DEFAULT NULL,
IN database_name VARCHAR(128) DEFAULT NULL,
IN migrate_data BIT DEFAULT 1,
IN drop_proxy_tables BIT DEFAULT 1)
This procedure calls several procedures in turn and migrates all
remote tables using the specified criteria.


$ For more information about accessing remote data, see "Accessing


Remote Data" on page 893.

v To import remote databases (with modifications):


1 From the Interactive SQL statements pane, enter the following stored
procedure:
dbo.sa_migrate_create_remote_table_list(
IN server_name,
IN table_name,
IN owner_name )
All three arguments are VARCHAR (128). In addition, table_name and
owner_name have a default value of NULL.
This populates the dbo.migrate_remote_table_list table with a list of
remote tables to migrate. You can delete rows from this table for remote
tables you do not wish to migrate.
2 Enter the following stored procedure:
dbo.sa_migrate_create_tables(IN local_table_owner)

IN local_table_owner is VARCHAR (128).


This procedure takes the list of remote tables from
dbo.migrate_remote_table_list and creates a proxy table and a base table
for each remote table listed. This procedure also creates all primary key
indexes.
3 Enter the following stored procedure:
dbo.sa_migrate_data(IN local_table_owner)

IN local_table_owner is VARCHAR (128).


This procedure migrates the data from each remote table into the base
table created by dbo.sa_migrate_create_tables.
4 Enter the following stored procedure:
dbo.sa_migrate_create_remote_fks_list( IN server_name )

IN server_name is VARCHAR (128).


This procedure populates the table dbo.migrate_remote_fks_list with the
list of foreign keys associated with each of the remote tables listed in
dbo.migrate_remote_table_list.
You can remove any foreign key mappings you do not want to recreate
on the local base tables.
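For example, you can delete the rows for any mappings you do not want
before proceeding. The column name used here is an assumption for
illustration only; check the actual columns of the list table with a
SELECT first:
DELETE FROM dbo.migrate_remote_fks_list
WHERE foreign_key_name = 'fk_obsolete'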
5 Enter the following stored procedure:

716
Chapter 22 Importing and Exporting Data

dbo.sa_migrate_create_fks(IN local_table_owner)

IN local_table_owner is VARCHAR (128).


This procedure creates the foreign key mappings defined in
dbo.migrate_remote_fks_list on the base tables.
6 Enter the following stored procedure:
dbo.sa_migrate_drop_proxy_tables( IN local_table_owner )

IN local_table_owner is VARCHAR (128).


This procedure drops all proxy tables created for migration purposes and
completes the migration process.

Importing data

v To import data (Interactive SQL Data Menu):


1 From the Interactive SQL window, choose Data➤Import.
The Open dialog appears.
2 Locate the file you want to import and click Open.
You can import data in text, DBASEII, Excel 2.1, FOXPRO, and Lotus
formats.
The Import wizard appears.
3 Click the Use an existing table option and then enter the name and
location of the existing table.
You can click the Browse button and locate the table you want to import
the data into.
4 Click Finish to import your data.
In this case, importing appends the new data to the existing table. If the
import is successful, the Messages pane displays the amount of time it
took to import the data, and what execution plan was used. If the import
is unsuccessful, a message appears indicating the import was
unsuccessful.

v To import data (Interactive SQL INSERT statement):


1 Ensure that the table you want to place the data in exists.
2 Enter the following INSERT statement in the SQL Statements pane of
the Interactive SQL window.


INSERT INTO T1
VALUES ( ... )
3 Execute the statement.
Inserting values appends the new data to the existing table.
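For example, assuming T1 has an integer column and a character column, a
complete statement might look like this:
INSERT INTO T1
VALUES ( 1, 'Smith' )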

v To import data (Interactive SQL INPUT statement):


1 Ensure that the table you want to place the data in exists.
2 Enter the following INPUT statement in the SQL Statements pane of the
Interactive SQL window.
INPUT INTO t1
FROM file1
FORMAT ASCII;
Where t1 is the name of the table you want to place the data in, and file1
is the name of the file that holds the data you want to import.
3 Execute the statement.
If the import is successful, the Messages pane displays the amount of
time it took to import the data, and what execution plan was used. If
the import is unsuccessful, a message appears indicating the import was
unsuccessful.

Importing a table

v To import a table (Interactive SQL Data Menu):


1 Ensure that the table you want to place the data in exists.
2 From the Interactive SQL window, choose Data➤Import.
The Open dialog appears.
3 Locate the file you want to import and click Open.
You can import data in text, DBASEII, Excel 2.1, FOXPRO, and Lotus
formats.
The Import wizard appears.
4 Click the Create a new table with the following name option and enter a
name for the new table in the field.
5 Click Finish to import your data.


If the import is successful, the Messages pane displays the amount of
time it took to import the data, and what execution plan was used. If
the import is unsuccessful, a message appears indicating the import was
unsuccessful.

v To import a table (Interactive SQL):


1 In the SQL statements pane of the Interactive SQL window, create the
table you want to load data into.
You can use the CREATE TABLE statement to create the table.
2 Enter the LOAD TABLE statement in the following format:
LOAD TABLE department
FROM ’dept.txt’
3 Execute the statement.
The LOAD TABLE statement appends the contents of the file to the
existing rows of the table; it does not replace the existing rows in the
table. You can use the TRUNCATE TABLE statement to remove all the
rows from a table.
Neither the TRUNCATE TABLE statement nor the LOAD TABLE
statement fires triggers, including referential integrity actions such as
cascaded deletes.
The LOAD TABLE statement has the additional STRIP clause. The
default setting (STRIP ON) strips trailing blanks from values before
inserting them. To keep trailing blanks, use the STRIP OFF clause in
your LOAD TABLE statement.
$ For a full description of the LOAD TABLE statement syntax, see
"LOAD TABLE statement" on page 560 of the book ASA Reference.

Merging different table structures

v To load data with a different structure using a global temporary


table:
1 In the SQL statements pane of the Interactive SQL window, create a
global temporary table with a structure matching that of the input file.
You can use the CREATE TABLE statement to create the global
temporary table.
2 Use the LOAD TABLE statement to load your data into the global
temporary table.


When you close the database connection, the data in the global
temporary table disappears. However, the table definition remains with
the database for you to access when you open your database next time.
3 Use the INSERT statement with a FROM SELECT clause to extract and
summarize data from the temporary table and put it into one or more of
the permanent database tables.
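As a sketch of the whole procedure, assuming the input file holds only
last and first names while the permanent employee table has additional
columns:
CREATE GLOBAL TEMPORARY TABLE temp_employee (
emp_lname CHAR(20),
emp_fname CHAR(20)
) ON COMMIT PRESERVE ROWS;
LOAD TABLE temp_employee
FROM ’new_employees.txt’;
INSERT INTO employee ( emp_lname, emp_fname )
SELECT emp_lname, emp_fname
FROM temp_employee;
The ON COMMIT PRESERVE ROWS clause keeps the loaded rows available
across the commit between the load and the insert.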

v To load data with a different structure using the DEFAULTS option:


1 In the SQL statements pane of the Interactive SQL window, enter the
LOAD TABLE statement.
2 Specify whether you want the DEFAULTS option ON or OFF.
If DEFAULTS is OFF, any column not present in the column list is
assigned NULL. If DEFAULTS is OFF and a non-nullable column is
omitted from the column list, the database server attempts to convert the
empty string to the column’s type. If DEFAULTS is ON and the column
has a default value, that value is used.
For example, to load two columns into the employee table, and set the
remaining column values to the default values if there are any, the
LOAD TABLE statement should look like this:
LOAD TABLE employee (emp_lname, emp_fname)
FROM ‘new_employees.txt’
DEFAULTS ON
3 Execute the LOAD TABLE statement.


Export tasks
This section collects together instructions for tasks related to exporting. All
tasks assume that you have your database up and running.
$ For information about additional command-line switches you can apply
to the dbunload utility, see "The dbunload command-line utility" on
page 139 of the book ASA Reference.
$ For more information about connection parameter switches that allow
you to export from a non-running database, see "Connection and
Communication Parameters" on page 45 of the book ASA Reference.

Exporting a database

v To unload all or part of a database (Sybase Central):


1 From the Sybase Central window, click Utilities in the left pane.
All of the functions you can perform on a database appear in the right
pane.
2 Double-click Unload Database in the right pane.
The Unload an Adaptive Server Anywhere Database wizard appears.
You can also open the Unload an Adaptive Server Anywhere Database
wizard by right clicking on the database name in the left pane, and
choosing Unload Database from the popup menu, or by choosing the
Tools➤Adaptive Server Anywhere 7➤Unload Database command.
3 Click Next to proceed with the unload wizard.
4 Click the Use the following connection option.
5 Click on the connection you want to use in the box, and then click Next.
6 Specify the path and location for the unloaded database command file
and click Next.
You can use the Browse button to locate the directory you want to save
the command file to. The command file has the .sql extension and is
necessary to rebuild your database from the data files you unload.
7 Choose the Do not reload the data option, and then click Next.
$ For more information about the reload and replace options, see
"Rebuild tasks" on page 729.


8 Specify what portion of the database you want to unload, and then click
Next.
You can choose:
♦ Extract structure and data
♦ Extract structure only
♦ Extract data only
Choose the Extract structure and data option to unload the entire
database.
9 Specify whether you want to do an Internal unload or an External
unload, and click Next.
10 Specify the number of levels of view dependency.
Specifying levels of view dependency allows you to recreate views
based upon other views. For example, if you have one view based upon
existing tables, you would enter the number 1 in this field. View 1 is
independent and can be recreated from the tables alone. If, however, you
have a second view based upon the first view, you would enter the number 2
in this field. View 2 is dependent on view 1, and cannot be created until
view 1 has been created.
11 Specify whether you want to order the data.
Exporting the data in an ordered format means that the data will be
reloaded in an ordered format. This is useful if you want to improve
performance of your database, or bypass a corrupted index.
12 Specify the path and location for the unloaded data and click Next.
You can use the Browse button to locate the directory you want to save
the database to. If the directory you choose does not exist, you will be
prompted to confirm creation of the directory.
13 Confirm that the information you specified in the wizard is correct and
click Finish to unload the database.
The Unload/Extract Database Message Window replaces the Unload an
Adaptive Server Anywhere Database wizard, and displays messages
about which files contain which tables in your database.
14 Close the Unload/Extract Database Message Window.

v To unload all or part of a database (command-line):


1 At the command prompt, enter the dbunload command and specify the
connection parameters, using the –c switch.


For example, the following command unloads the entire database to
c:\temp:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" c:\temp

2 If you want to export data only, add the –d switch.


For example, if you want to export data only, your final command would
look like this:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -d c:\temp

3 If you want to export schema only, add the –n switch instead.


For example, if you want to export schema only, your final command
would look like this:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -n c:\temp
4 Press Enter to execute the command.

Exporting tables
In addition to the methods described below, you can also export a table by
selecting all the data in a table and exporting the query results. For more
information, see "Exporting query results" on page 725.

Tip
You can export views just as you would export tables.

v To export a table (command-line):


1 At the command prompt, enter the following dbunload command and
press Enter:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -t employee c:\temp
where –c specifies the database connection parameters and –t specifies
the name of the table(s) you want to export. This dbunload command
unloads the data from the sample database (assumed to be running on
the default database server with the default database name) into a set of
files in the c:\temp directory. A command file to rebuild the database
from the data files is created with the default name reload.sql in the
current directory.
You can unload more than one table by separating the table names with
a comma (,) delimiter.
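For example, to unload both the employee and department tables in a
single run:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -t employee,department c:\temp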


v To export a table (Interactive SQL):


1 In the SQL Statements pane of Interactive SQL, enter the UNLOAD
TABLE statement:
UNLOAD TABLE department
TO ’dept.txt’
This statement unloads the department table from the sample database
into the file dept.txt in the server’s current working directory. If you are
running against a network server, the command unloads the data into a
file on the server machine, not the client machine. Also, the file name
passes to the server as a string. Using escape backslash characters in the
file name prevents misinterpretation if a directory or file name begins
with an n (\n is a newline character) or any other special characters.
Each row of the table is output on a single line of the output file, and no
column names are exported. The columns are separated, or delimited, by
a comma. The delimiter character can be changed using the
DELIMITED BY clause. The fields are not fixed-width fields. Only the
characters in each entry are exported, not the full width of the column.
$ For a full description of the UNLOAD TABLE statement syntax,
see "UNLOAD TABLE statement" on page 635 of the book ASA
Reference.
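For example, to unload the same table using a semicolon instead of a
comma as the delimiter:
UNLOAD TABLE department
TO ’dept.txt’
DELIMITED BY ’;’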

Exporting table data or table schema

v To export table data or table schema (command-line):


1 At the command prompt, enter the dbunload command and specify the
connection parameters, using the –c switch.
2 Specify the table(s) you want to export data or schema for, using the –t
switch.
If you want to export the employee table, the command would look like
this:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -t employee c:\temp
You can unload more than one table by separating the table names with
a comma (,) delimiter.
3 If you want to export data only, add the –d switch.
For example, if you want to export data only, your final command would
look like this:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -d -t employee c:\temp


4 If you want to export schema only, add the –n switch instead.


For example, if you want to export schema only, your final command
would look like this:
dbunload -c "dbn=asademo;uid=dba;pwd=sql" -n -t employee c:\temp
5 Press Enter to execute the command.
The dbunload commands in these examples unload the data or schema
from the sample database table (assumed to be running on the default
database server with the default database name) into a file in the c:\temp
directory. A command file to rebuild the database from the data files is
created with the default name reload.sql in the current directory.

Exporting query results


You can export queries (including queries on views) to a file from
Interactive SQL using the Data menu or the OUTPUT statement.

v To export query results (Interactive SQL data menu):


1 In the SQL Statements pane of the Interactive SQL window, enter your
query.
2 Click Execute SQL statement(s) to display the result set.
3 Choose Data➤Export.
The Save As dialog box appears.
4 Specify a name and location for the exported data.
5 Specify the file format and click Save.
If the export is successful, the Messages pane displays the amount of
time it took to export the query result set, the filename and path of
the exported data, and the number of rows written.
If the export is unsuccessful, a message appears indicating the export
was unsuccessful.

v To export query results (Interactive SQL OUTPUT statement):


1 In the SQL Statements pane of the Interactive SQL window, enter your
query.
2 At the end of the query, type OUTPUT TO ’c:\filename’.


For example, to export the entire employee table to the file
employee.dbf, enter the following query:
SELECT *
FROM employee;
OUTPUT TO ’c:\employee.dbf’
3 If you want to export query results and append the results to another file,
add the APPEND statement to the end of the OUTPUT statement.
For example:
SELECT *
FROM employee;
OUTPUT TO ’c:\employee.dbf’ APPEND
4 If you want to export query results and include messages, add the
VERBOSE statement to the end of the OUTPUT statement.
For example:
SELECT *
FROM employee;
OUTPUT TO ’c:\employee.dbf’ VERBOSE
5 If you want to specify a format other than ASCII, add a FORMAT
clause to the end of the query.
For example:
SELECT *
FROM employee;
OUTPUT TO ’c:\employee.dbf’
FORMAT dbaseiii;
where c:\employee.dbf is the path, name, and extension of the new file
and dbaseiii is the file format for this file. You can enclose the string in
single or double quotation marks, but they are only required if the path
contains embedded spaces.
If you leave the FORMAT clause out, the file type defaults to ASCII.
6 Execute the statement.
If the export is successful, the Messages pane displays the amount of
time it took to export the query result set, the filename and path of
the exported data, and the number of rows written. If the export is
unsuccessful, a message appears indicating the export was unsuccessful.


Tips
You can combine the APPEND and VERBOSE statements to append both
results and messages to an existing file. For example, type OUTPUT TO
’c:\filename.sql’ APPEND VERBOSE. For more information about
APPEND and VERBOSE, see the "OUTPUT statement" on page 573 of
the book ASA Reference.
The OUTPUT TO, APPEND, and VERBOSE statements are equivalent to
the >#, >>#, >&, and >>& operators of earlier versions of the software.
You can still use these operators to redirect data, but the new
Interactive SQL statements allow for more precise output and
easier-to-read code.

v To export query results (UNLOAD statement):


1 In the SQL Statements pane of the Interactive SQL window, enter the
UNLOAD statement.
For example:
UNLOAD
SELECT *
FROM employee
TO ’c:\employee.dbf’;
2 Execute the statement.
If the export is successful, the Messages pane displays the amount of
time it to took to export the query results set, the filename and path of
the exported data, and the number of rows written. If the export is
unsuccessful, a message appears indicating the export was unsuccessful.

Choosing NULL value output


Specifying how NULL values are output provides for greater compatibility
with other software packages.

v To specify NULL value output (Interactive SQL):


1 From the Interactive SQL window, choose Tools➤Options to display
the Options dialog.
2 Click the Appearance tab.
3 In the Display null values as field, type the value you want to replace
null values with.


4 Click Make Permanent if you want the changes to become the default, or
click OK if you want the changes to be in effect only for this session.


Rebuild tasks
Rebuilding a database involves unloading and reloading your entire database.
You can carry out this operation from Sybase Central or using the command-
line utilities.
There are additional switches available for the dbunload utility that allow
you to tune the unload, as well as connection parameter switches that allow
you to specify a running or non-running database and database parameters.
It is good practice to make backups of your database before rebuilding.

Reloading a database

v To reload a database (Command line):


1 From the command-line, execute the reload.sql script.
For example, the following command runs the reload.sql script in the
current directory:
dbisql -c "dbn=asademo;uid=dba;pwd=sql" reload.sql

Rebuilding a database not involved in replication


The following procedures should be used only if your database is not
involved in replication.

v To rebuild a database not involved in replication (Sybase Central):


1 In the left pane of the Sybase Central window, click Utilities.
All of the functions you can perform on a database appear in the right
pane.
2 Double-click Unload Database in the right pane.
The Unload an Adaptive Server Anywhere Database wizard appears.
You can also open the Unload an Adaptive Server Anywhere Database
wizard by right clicking on the database name in the left pane, and
choosing Unload Database from the popup menu, or by choosing the
Tools➤Adaptive Server Anywhere 7➤Unload Database command.
3 Click Next to proceed with the unload wizard.
4 Click the Use the following connection option.


5 Click on the connection you want to use in the box, and then click Next.
6 Specify the path and location for the unloaded database command file
and click Next.
You can use the Browse button to locate the directory you want to save
the command file to. The command file has the .sql extension and is
necessary to rebuild your database from the data files you unload.
7 Specify how you would like to rebuild your database, and then click
Next.
You can choose to:
♦ Reload into a new database. If you choose this option, you must
also specify the name and location of the new database.
♦ Reload into an existing database. If you choose this option, the
Connect dialog box appears. You must specify a database and
connection parameters to continue with the rebuild.
♦ Replace the original database. If you choose this option, you must
also specify where to place the old log file.
$ For more information about the Do not reload the data option, see
"Exporting a database" on page 721.
8 Specify what portion of the database you want to rebuild, and then click
Next.
You can choose:
♦ Extract structure and data
♦ Extract structure only
♦ Extract data only
9 Specify whether you want to do an Internal unload or an External
unload, and click Next.
10 Specify the number of levels of view dependency.
Specifying levels of view dependency allows you to recreate views
based upon other views. For example, if you have one view based upon
existing tables, you would enter the number 1 in this field. View 1 is
independent and can be recreated from the tables alone. If, however, you
have a second view based upon the first view, you would enter the number 2
in this field. View 2 is dependent on view 1, and cannot be created until
view 1 has been created.
11 Specify whether you want to order the data.


Exporting the data in an ordered format means that the data will be
reloaded in an ordered format. This is useful if you want to improve
performance of your database, or bypass a corrupted index.
12 Specify the path and location for the unloaded data and click Next.
You can use the Browse button to locate the directory you want to save
the database to. If you choose a location that does not exist, you will be
prompted to confirm creation of the new directory.
13 Confirm that the information you specified in the wizard is correct and
click Finish to unload the database.
The Adaptive Server Anywhere Tools Console replaces the Unload an
Adaptive Server Anywhere Database wizard, and displays messages
about which files contain which tables in your database. When the
unload finishes, the message ’Complete’ appears.
14 Close the Unload/Extract Database Message Window.

v To rebuild a database not involved in replication (Command-line):


1 Choose Start➤Programs➤Command Prompt to open the command
prompt.
2 Execute the dbunload command-line utility using one of the following
switches:
♦ The –an switch rebuilds to a new database.
dbunload -c "dbf=asademo.db;uid=dba;pwd=sql" -an
asacopy.db

♦ The –ac switch reloads to an existing database


dbunload -c "dbf=asademo.db;uid=dba;pwd=sql" -ac
"uid=dba;pwd=sql;dbf=newdemo.db"

♦ The –ar switch replaces the existing database


dbunload -c "dbf=asademo.db;uid=dba;pwd=sql" -ar
"uid=dba;pwd=sql;dbf=newdemo.db"
If you use one of these options, no interim copy of the data is created on
disk, so you do not specify an unload directory on the command line.
This provides greater security for your data, but at some cost for
performance.
3 Shut down the database and archive the transaction log, before using the
reloaded database.
Notes
The -an and -ar switches only apply to connections to a personal server, or
connections to a network server over shared memory.


Rebuilding a database involved in replication

v To rebuild a database involved in replication:


1 Shut down the database.
2 Perform a full off-line backup by copying the database and transaction
log files to a secure location.
3 Rebuild the database using the following command line:
dbunload -c connection_string -ar directory
where connection_string is a connection with DBA authority, directory
is the directory used in your replication environment for old transaction
logs, and there are no other connections to the database.
$ For more information, see "Unload utility options" on page 140 of
the book ASA Reference.
4 Shut down the new database. Perform validity checks that you would
usually perform after restoring a database.
5 Start the database using any production switches you need. You can now
allow user access to the reloaded database.
If the above procedure does not meet your needs, you can manually adjust
the transaction log offsets. The following procedure describes how to carry
out that operation.
Notes
The -ar switch only applies to connections to a personal server, or
connections to a network server over shared memory.

v To rebuild a database involved in replication, with manual


intervention:
1 Shut down the database.
2 Perform a full off-line backup by copying the database and transaction
log files to a secure location.
3 Run the dbtran utility to display the starting offset and ending offset of
the database’s current transaction log file. Note the ending offset for later
use.
4 Rename the current transaction log file so that it is not modified during
the unload process, and place this file in the off-line directory.
5 Rebuild the database.
$ For information on this step, see "Rebuild tasks" on page 729.
6 Shut down the new database.


7 Erase the current transaction log file for the new database.
8 Use dblog on the new database with the ending offset noted in step 3 as
the -z parameter, and also set the relative offset to zero.
dblog -x 0 -z 137829 database-name.db
9 When you run the Message Agent, provide it with the location of the
original off-line directory on its command-line.
10 Start the database. You can now allow user access to the reloaded
database.


Extract tasks
For more information on how to perform database extractions, see
♦ "The Database Extraction utility" on page 601 of the book Replication
and Synchronization Guide
♦ "Extraction utility options" on page 604 of the book Replication and
Synchronization Guide
♦ "Extracting groups" on page 493 of the book Replication and
Synchronization Guide
♦ "Extracting the remote databases" on page 186 of the book Replication
and Synchronization Guide
♦ "Extracting a remote database in Sybase Central" on page 601 of the
book Replication and Synchronization Guide

Chapter 23

Managing User IDs and Permissions

About this chapter Each user of a database must have a name they type when connecting to the
database, called a user ID. This chapter describes how to manage user IDs.
Contents
Topic Page
Database permissions overview 736
Setting user and group options 740
Managing individual user IDs and permissions 741
Managing connected users 752
Managing groups 753
Database object names and prefixes 760
Using views and procedures for extra security 762
Changing Ownership on Nested Objects 765
How user permissions are assessed 767
Managing the resources connections use 768
Users and permissions in the system tables 769


Database permissions overview


Proper management of user IDs and permissions lets users of a database
carry out their jobs effectively, while maintaining the security and privacy of
information within the database.
You use SQL statements for assigning user IDs to new users of a database,
granting and revoking permissions for database users, and finding out the
current permissions of users.
Database permissions are assigned to user IDs. Throughout this chapter, the
term user is used as a synonym for user ID. Remember, however, that you
grant and revoke permissions for each user ID.
Setting up individual user IDs
Even if there are no security concerns regarding a multi-user database, there
are good reasons for setting up an individual user ID for each user. The
administrative overhead is very low if a group with the appropriate
permissions is set up. This chapter discusses groups of users.
You may want to use individual user IDs, since:
♦ The log translation utility can selectively extract the changes made by
individual users from a transaction log. This is very useful when
troubleshooting or piecing together what happened if data is incorrect.
♦ Sybase Central displays much more useful information with individual
user IDs, as you can tell which connections are which users.
♦ Row locking messages (with the BLOCKING option set to OFF) are
more informative.

DBA authority overview


When you create a database, you also create a single usable user ID. This
first user ID is DBA, and the password is initially SQL. The DBA user ID
automatically has DBA authority within the database. This level of
permission enables DBA users to carry out any activity in the database. They
can create tables, change table structures, create new user IDs, revoke
permissions from users, and so on.
Users with DBA authority
A user with DBA authority becomes the database administrator. In this
chapter, references made to the database administrator, or DBA, include any
user or users with DBA authority.


Although DBA authority may be granted or transferred to other user IDs, this
chapter assumes that the DBA user ID is the database administrator, and that
the abbreviation DBA means both the DBA user ID and any user ID with
DBA authority.
Adding new users
The DBA has the authority to add new users to the database. As the DBA
adds users, they are also granted permissions to carry out tasks on the
database. Some users may need to simply look at the database information
using SQL queries, others may need to add information to the database, and
others may need to modify the structure of the database itself. Although
some of the responsibilities of the DBA may be handed over to other user
IDs, the DBA is responsible for the overall management of the database by
virtue of the DBA authority.
The DBA has authority to create database objects and assign ownership of
these objects to other user IDs.
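For example, the following statement, executed by the DBA, adds a new
user (the user ID and password shown are placeholders):
GRANT CONNECT TO M_Haneef
IDENTIFIED BY welcome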

RESOURCE authority overview


RESOURCE authority is the permission to create database objects, such as
tables, views, stored procedures, and triggers. RESOURCE authority may be
granted only by the DBA.
To create a trigger, a user needs both RESOURCE authority and ALTER
permissions on the table to which the trigger applies.
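For example, the following statements give a user (the name is a
placeholder) the permissions needed to create a trigger on the employee
table:
GRANT RESOURCE TO M_Haneef;
GRANT ALTER ON employee TO M_Haneef;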

Ownership permissions overview


The creator of a database object becomes the owner of that object.
Ownership of a database object carries with it permissions to carry out
actions on that object. These permissions are not assigned to users in the
same way that other permissions in this chapter are assigned.
Owners
A user who creates a new object within the database is called the owner of
that object, and automatically has permission to carry out any operation on
that object. The owner of a table may modify the structure of that table, for
instance, or may grant permissions to other database users to update the
information within the table.


The DBA has permission to modify any component within the database, and
so could delete a table created by another user. The DBA has all the
permissions regarding database objects that the owners of each object have.
As well, the DBA can also create database objects for other users. In this
case the owner of an object is not the user ID that executed the CREATE
statement. A use for this ability is discussed in "Groups without passwords"
on page 757. Despite this possibility, this chapter refers interchangeably to
the owner and creator of database objects as the same person.

Table and view permissions overview


There are several distinct permissions you may grant to user IDs concerning
tables and views:

Permission Description
ALTER Permission to alter the structure of a table or create a trigger on
a table
DELETE Permission to delete rows from a table or view
INSERT Permission to insert rows into a table or view
REFERENCES Permission to create indexes on a table, and to create foreign
keys that reference a table
SELECT Permission to look at information in a table or view
UPDATE Permission to update rows in a table or view. This may be
granted on a set of columns in a table only
ALL All the above permissions

Group permissions overview


Setting permissions individually for each user of a database can be a time-
consuming and error-prone process. For most databases, permission
management based on groups, rather than on individual user IDs, is a much
more efficient approach.
You can assign permissions to a group in exactly the same way as to an
individual user. You can then assign membership in appropriate groups to
each new user of the database, and they gain a set of permissions by virtue of
their group membership.

Chapter 23 Managing User IDs and Permissions

Example For example, you may create groups for different departments in a company
database (sales, marketing, and so on) and assign these groups permissions.
Each salesperson becomes a member of the sales group, and automatically
gains access to the appropriate areas of the database.
Each user ID can be a member of multiple groups, and they inherit all
permissions from each of the groups.


Setting user and group options


In Sybase Central, configurable options for users and groups are located in
the Set Options dialog (the same dialog as for setting database options). In
Interactive SQL, you can specify an option in a SET OPTION statement.

v To set options for a user or group (Sybase Central):


1 Connect to a database and open the Users & Groups folder.
2 Right-click the desired user or group and choose Set Options from the
popup menu.
3 Edit the desired values.

v To set the options for a user or group (SQL):


♦ Specify the desired properties within a SET OPTION statement.
$ See also
♦ "Set Options dialog" on page 1036
♦ "Setting properties for database objects" on page 120
♦ "Database Options" on page 155 of the book ASA Reference
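In Interactive SQL, the statement takes the user or group name as a prefix on the option name. A sketch, assuming a user M_Haneef and using the BLOCKING option purely as an illustration:

```sql
-- Set the BLOCKING option for user M_Haneef only;
-- other users keep the database-wide setting
SET OPTION M_Haneef.BLOCKING = 'OFF';
```

The option name and value here are illustrative; see "Database Options" in the book ASA Reference for the options that can be set per user.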


Managing individual user IDs and permissions


This section describes how to create new users and grant permissions to
them. For most databases, the bulk of permission management should be
carried out using groups, rather than by assigning permissions to individual
users one at a time. However, as a group is simply a user ID with special
properties, read and understand this section before moving on to the
discussion of managing groups.

Creating new users


In both Sybase Central and Interactive SQL, you can create new users. In
Sybase Central, you can manage users or groups in the Users & Groups
folder. In Interactive SQL, you can add a new user using the GRANT
CONNECT statement. For both tools, you need DBA authority to create new
users.
All new users are automatically added to the PUBLIC group. Once you have
created a new user, you can:
♦ add it to other groups
♦ set its permissions on tables, views, and procedures
♦ set it as the publisher or as a remote user of the database

Initial permissions for new users
By default, new users are not assigned any permissions beyond connecting to the database and viewing the system tables. To access tables in the database, they need to be assigned permissions.
The DBA can set the permissions granted automatically to new users by
assigning permissions to the special PUBLIC user group, as discussed in
"Special groups" on page 758.

v To create a new user (Sybase Central):


1 Open the Users & Groups folder.
2 In the right pane, double-click Add User or Group.
3 Follow the instructions of the wizard.

v To create a new user (SQL):


1 Connect to a database with DBA authority.
2 Execute a GRANT CONNECT TO statement.


Example Add a new user to the database with the user ID of M_Haneef and a
password of welcome.
GRANT CONNECT TO M_Haneef
IDENTIFIED BY welcome

$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference

Changing a password
Changing a user’s password
Using the GRANT statement, you can change your own password or, if you have DBA authority, that of another user. For example, the following command changes the password for user ID M_Haneef to new_password:
GRANT CONNECT TO M_Haneef
IDENTIFIED BY new_password

Changing the DBA password
The default password for the DBA user ID for all databases is SQL. You should change this password to prevent unauthorized access to your database. The following command changes the password for user ID DBA to new_password:
GRANT CONNECT TO DBA
IDENTIFIED BY new_password

Granting DBA and RESOURCE authority


You can grant DBA and RESOURCE authority in the same manner.

v To grant RESOURCE permissions to a user ID:


1 Connect to the database as a user with DBA authority.
2 Type and execute the SQL statement:
GRANT RESOURCE TO userid

For DBA authority, the appropriate SQL statement is:


GRANT DBA TO userid

Notes ♦ Only the DBA may grant DBA or RESOURCE authority to database
users.


♦ DBA authority is very powerful: anyone with this authority can carry out any action on the database, and has access to all the information in it. It is wise to grant DBA authority to only a few people.
♦ Consider giving users with DBA authority two user IDs, one with DBA
authority and one without, so they connect as DBA only when
necessary.
♦ RESOURCE authority allows the user to create new database objects,
such as tables, views, indexes, procedures, or triggers.
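The two-user-ID practice recommended above can be set up directly. A sketch, with hypothetical user IDs and passwords:

```sql
-- Everyday user ID, with no special authority
GRANT CONNECT TO jsmith IDENTIFIED BY everyday_pwd;

-- Separate administrative user ID, used only when needed
GRANT CONNECT TO jsmith_admin IDENTIFIED BY admin_pwd;
GRANT DBA TO jsmith_admin;
```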

Granting permissions on tables


You can assign a set of permissions on individual tables and grant users
combinations of these permissions to define their access to a table.
You can use either Sybase Central or Interactive SQL to set permissions. In
Interactive SQL, you can use the following SQL statements to grant
permissions on tables.
♦ The ALTER permission allows a user to alter the structure of a table or
to create triggers on a table. The REFERENCES permission allows a
user to create indexes on a table, and to create foreign keys. These
permissions grant the authority to modify the database schema, and so
will not be assigned to most users. These permissions do not apply to
views.
♦ The DELETE, INSERT, and UPDATE permissions grant the authority
to modify the data in a table. Of these, the UPDATE permission may be
restricted to a set of columns in the table or view.
♦ The SELECT permission grants authority to look at data in a table, but
does not give permission to alter it.
♦ ALL permission grants all the above permissions.

v To grant permissions on tables or columns (Sybase Central):


1 Connect to the database.
2 Open the Tables folder for that database.
3 Right-click a table and choose Properties from the popup menu.
4 On the Permissions tab of the Properties dialog, configure the
permissions for the table:
♦ Click Grant to select users or groups to which to grant full
permissions.


♦ Click in the fields beside the user or group to set specific permissions. Permissions are indicated by a check mark, and grant options are indicated by a check mark with two ’+’ signs.
♦ Select a user and click the button beside References, Select, or
Update to set that type of permission on individual columns.
♦ Select a user or group in the list and click Revoke to revoke all
permissions.

Tips
Legend for the columns on the Permissions page: A=Alter, D=Delete,
I=Insert, R=Reference, S=Select, U=Update
You can also assign permissions from the user/group property sheet. To
assign permissions to many users and groups at once, use the table’s
property sheet. To assign permissions to many tables at once, use the
user’s property sheet.

v To grant permissions on tables or columns (SQL):


1 Connect to the database with DBA authority or as the owner of the table.
2 Execute a GRANT statement to assign the permission.
$ For more information, see "GRANT statement" on page 540 of the
book ASA Reference.

Example 1 All table permissions are granted in a very similar fashion. You can grant
permission to M_Haneef to delete rows from the table named sample_table
as follows:
1 Connect to the database as a user with DBA authority, or as the owner of
sample_table.
2 Type and execute the following SQL statement:
GRANT DELETE
ON sample_table
TO M_Haneef

Example 2 You can grant permission to M_Haneef to update the column_1 and
column_2 columns only in the table named sample_table as follows:
1 Connect to the database as a user with DBA authority, or as the owner of
sample_table.
2 Type and execute the following SQL statement:


GRANT UPDATE (column_1, column_2)


ON sample_table
TO M_Haneef

Table and view permissions are limited in that they apply to all the data in a table (except for the UPDATE permission, which may be restricted to a set of columns). You can fine-tune user permissions by creating procedures that carry out actions on tables, and then granting users permission to execute those procedures.
$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference

Granting permissions on views


Setting permissions on views is similar to setting them on tables; for more
information about the SQL statements involved, see "Granting permissions
on tables" on page 743.
A user may perform an operation through a view if one or more of the
following are true:
♦ The appropriate permission(s) on the view for the operation has been
granted to the user by a DBA.
♦ The user has the appropriate permission(s) on all the base table(s) for the
operation.
♦ The user was granted appropriate permission(s) for the operation on the
view by a non-DBA user. This user must be either the owner of the view
or have WITH GRANT OPTION of the appropriate permission(s) on the
view. The owner of the view must be either:
♦ a DBA, or
♦ a non-DBA, but also the owner of all the base table(s) referred to by
the view, or
♦ a non-DBA, and not the owner of some or all of the base table(s)
referred to by the view, but the view owner has SELECT
permission WITH GRANT OPTION on the base table(s) not owned
and any other required permission(s) WITH GRANT OPTION on
the base table(s) not owned for the operation.
Instead of the owner having permission(s) WITH GRANT OPTION
on the base table(s), permission(s) may have been granted to
PUBLIC. This includes SELECT permission on system tables.
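For instance, an owner of both a view and its base table can grant access to the view alone. A sketch, assuming a hypothetical view over the sample database’s employee table:

```sql
-- A view exposing only non-sensitive columns
CREATE VIEW emp_names AS
   SELECT emp_id, emp_fname, emp_lname
   FROM employee;

-- M_Haneef can query the view without holding any
-- permission on the underlying employee table
GRANT SELECT ON emp_names TO M_Haneef;
```

The view name and column list are illustrative only.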


UPDATE permission can be granted only on an entire view. Unlike tables, UPDATE permission cannot be granted on individual columns within a view.

v To grant permissions on views (Sybase Central):


1 Connect to the database.
2 Open the Views folder for that database.
3 Right-click a view and choose Properties from the popup menu.
4 On the Permissions tab of the Properties dialog, configure the
permissions for the table:
♦ Click Grant to select users or groups to which to grant full
permissions.
♦ Click in the fields beside the user or group to set specific
permissions. Permissions are indicated by a check mark, and grant
options are indicated by a check mark with two ’+’ signs.
♦ Select a user or group in the list and click Revoke to revoke all
permissions.

Tip
You can also assign permissions from the user/group property sheet. To
assign permissions to many users and groups at once, use the view’s
property sheet. To assign permissions to many views at once, use the
user’s property sheet.

Behavior change
Version 5 of the software changed the permission requirements for views. Previously, permissions on the underlying tables were required before permissions could be granted on a view.

$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference


Granting users the right to grant permissions


You can assign each of the table and view permissions described in
"Granting permissions on tables" on page 743 with the WITH GRANT
OPTION. This option gives the right to pass on the permission to other users.
In the context of groups, you can read about this feature in section
"Permissions of groups" on page 756.
In Sybase Central, you can specify a grant option by showing the property
sheet of a user, group, or table, clicking the Permissions tab, and double-
clicking in the fields provided so that a check mark with two ’+’ signs
appears.
Example You can grant permission to M_Haneef to delete rows from the table named
sample_table, and the right to pass on this permission to other users, as
follows:
1 Connect to the database as a user with DBA authority, or as the owner of
sample_table:
2 Type and execute the SQL statement:
GRANT DELETE ON sample_table
TO M_Haneef
WITH GRANT OPTION

$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference

Granting permissions on procedures


The DBA or the owner of the procedure (the user ID that created the
procedure) may grant permission to execute stored procedures. The
EXECUTE permission is the only permission that may be granted on a
procedure. This permission executes (or calls) the procedure.
The method for granting permissions to execute a procedure is similar to that
for granting permissions on tables and views, discussed in "Granting
permissions on tables" on page 743. However, the WITH GRANT option
clause of the GRANT statement does not apply to the granting of
permissions on procedures.
You can use either Sybase Central or Interactive SQL to set permissions.

v To grant permissions on procedures (Sybase Central):


1 Connect to the database.
2 Open the Procedures & Functions folder for that database.

3 Right-click a procedure and choose Properties from the popup menu.


4 On the Permissions tab of the Properties dialog, configure the
permissions for the procedure:
♦ Click Grant to select users or groups to which to grant full
permissions.
♦ Click beside users in the Execute column to toggle between
granting or not granting permission.
♦ Select a user or group in the list and click Revoke to revoke all
permissions.

Tip
You can also assign permissions from the user/group property sheet. To assign permissions to many users and groups at once, use the procedure’s property sheet. To assign permissions to many procedures at once, use the user’s property sheet.

v To grant permissions on procedures (SQL):


1 Connect to the database with DBA authority or as the owner of the
procedure.
2 Execute a GRANT EXECUTE ON statement.
Example You can grant M_Haneef permission to execute a procedure named
my_procedure, as follows:
1 Connect to the database as a user with DBA authority or as owner of
my_procedure procedure.
2 Execute the SQL statement:
GRANT EXECUTE
ON my_procedure
TO M_Haneef

Execution permissions of procedures
Procedures execute with the permissions of their owner. Any procedure that updates information on a table will execute successfully only if the owner of the procedure has UPDATE permissions on the table.
As long as the procedure owner has the proper permissions, the procedure
executes successfully when called by any user assigned permission to
execute it, whether or not they have permissions on the underlying table.
You can use procedures to allow users to carry out well-defined activities on
a table, without having any general permissions on the table.
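As a sketch of this pattern, assuming a hypothetical raise_price procedure created by the owner of sample_table (the column names are illustrative):

```sql
-- The procedure runs with its owner's permissions, so callers
-- need no UPDATE permission on sample_table themselves
CREATE PROCEDURE raise_price (IN item_id INTEGER)
BEGIN
   UPDATE sample_table
   SET price = price * 1.05
   WHERE id = item_id;
END;

-- The only permission the caller needs
GRANT EXECUTE ON raise_price TO M_Haneef;
```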
$ See also


♦ "GRANT statement" on page 540 of the book ASA Reference

Execution permissions of triggers


The server executes triggers in response to a user action. Triggers do not require permissions to be executed. When a trigger executes, it does so with the permissions of the creator of the table with which it is associated.
$ For more information on trigger permissions, see "Trigger execution
permissions" on page 455.

Granting and revoking remote permissions


In Sybase Central, you can manage the remote permissions of both users and
groups. Remote permissions allow normal users and groups to become
remote users in a SQL Remote replication setup in order to exchange
replication messages with the publishing database.
Granting remote permissions
You cannot grant remote permissions to a user or group until you define at least one message type in the database.
While you can grant remote permissions to a group, those permissions do not automatically apply to users in the group (unlike table permissions, for example). To give group members remote permissions, you must grant them explicitly to each user. Otherwise, remote groups behave exactly like remote users (and are categorized as remote users).
Revoking remote permissions
Revoking remote permissions reverts a remote user or group to a normal user or group. Revoking these permissions also automatically unsubscribes that user or group from all publications.
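In Interactive SQL, the corresponding statements are GRANT REMOTE and REVOKE REMOTE. A sketch, assuming a file message type has already been defined in the database (the address string is hypothetical):

```sql
-- Make M_Haneef a remote user who exchanges replication
-- messages through the file message system
GRANT REMOTE TO M_Haneef
TYPE file
ADDRESS 'm_haneef';

-- Revert M_Haneef to a normal user
-- (also unsubscribes from all publications)
REVOKE REMOTE FROM M_Haneef;
```

See "SQL Remote Concepts" in the Replication and Synchronization Guide for the full syntax.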

v To grant remote permissions to users and groups (Sybase Central):


1 Connect to a database.
2 Open the Users & Groups folder.
3 Right-click the desired user or group and choose Set Remote from the
popup menu.
4 In the resulting dialog, enter the desired values.
Once you have granted remote permissions to a user or group, you can
subscribe it to publications:


v To revoke remote permissions from remote users:


1 Open either the Users & Groups folder or the Remote Users folder
(located within the SQL Remote folder).
2 Right-click the desired remote user and choose Revoke Remote from the
popup menu.
$ See also
♦ "SQL Remote Concepts" on page 291 of the book Replication and
Synchronization Guide
♦ "Change User to a Remote User dialog" on page 1040
♦ "Change Group to a Remote Group dialog" on page 1039

Revoking user permissions


Any user’s permissions are a combination of those that have been granted
and those that have been revoked. By revoking and granting permissions,
you can manage the pattern of user permissions on a database.
The REVOKE statement is the exact converse of the GRANT statement. To
disallow M_Haneef from executing my_procedure, the command is:
REVOKE EXECUTE
ON my_procedure
FROM M_Haneef
The DBA or the owner of the procedure must issue this command.
Permission to delete rows from sample_table may be revoked by issuing the
command:
REVOKE DELETE
ON sample_table
FROM M_Haneef

$ For information on using Sybase Central to grant or revoke permission,


see the following sections:
♦ "Granting permissions on tables" on page 743
♦ "Granting permissions on views" on page 745
♦ "Granting permissions on procedures" on page 747


Deleting users from the database


You can delete a user from the database using both Sybase Central and
Interactive SQL. The user being removed cannot be connected to the
database during this procedure.
Deleting a user also deletes all database objects (such as tables) that they
own.
Only the DBA can delete a user.

v To delete a user from the database (Sybase Central):


1 Open the Users & Groups folder.
2 Right-click the desired user and choose Delete from the popup menu.

Tip
You cannot delete users when you select them within a group container.

v To delete a user from the database (SQL):


1 Connect to a database.
2 Execute a REVOKE CONNECT FROM statement.

Example Remove the user M_Haneef from the database.


REVOKE CONNECT FROM M_Haneef
$ See also
♦ "REVOKE statement" on page 595 of the book ASA Reference
♦ "Revoking user permissions" on page 750
♦ "Deleting groups from the database" on page 758


Managing connected users


If you are working with Sybase Central, you can keep track of all users
connected to the database. You can view properties of these connected users,
and you can disconnect them if you want.

v To show all users connected to a database:


1 In Sybase Central, connect to a database.
2 Open the Connected Users folder for that database.
This folder shows all other users currently connected to a given database
(not including yourself), regardless of the client that they used to
connect (Sybase Central, Interactive SQL, a custom client application,
etc.).

v To inspect the properties of a user’s connection to a database:


1 In Sybase Central, connect to a database.
2 Open the Connected Users folder for that database.
3 Right-click the desired user and choose Properties from the popup menu.
4 Inspect the desired properties.

v To disconnect users from a database:


1 In Sybase Central, connect to a database.
2 Open the Connected Users folder for that database.
3 Right-click the desired user and choose Disconnect from the popup
menu.
$ See also
♦ "Connected Users properties" on page 1100


Managing groups
Once you understand how to manage permissions for individual users (as
described in the previous section), working with groups is straightforward. A
user ID identifies a group, just like it does a single user. However, a group
user ID has the permission to have members.
DBA, RESOURCE, and GROUP permissions
When you grant permissions to a group, or revoke permissions from a group, for tables, views, and procedures, all members of the group inherit those changes. The DBA, RESOURCE, and GROUP permissions are not inherited: you must assign them individually to each user ID requiring them.
A group is simply a user ID with special permissions. You grant permissions
to a group and revoke permissions from a group in exactly the same manner
as any other user, using the commands described in "Managing individual
user IDs and permissions" on page 741.
You can construct a hierarchy of groups, where each group inherits
permissions from its parent group. That means that a group can also be a
member of a group. As well, each user ID may belong to more than one
group, so the user-to-group relationship is many-to-many.
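A small hierarchy can be built with the same GRANT statements used for individual users. A sketch, with hypothetical group names:

```sql
-- A parent group and a child group, created without
-- passwords so that nobody can connect as them
GRANT CONNECT TO company_all;
GRANT GROUP TO company_all;

GRANT CONNECT TO sales;
GRANT GROUP TO sales;

-- sales inherits permissions granted to company_all
GRANT MEMBERSHIP IN GROUP company_all TO sales;

-- M_Haneef inherits from both sales and company_all
GRANT MEMBERSHIP IN GROUP sales TO M_Haneef;
```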
The ability to create a group without a password enables you to prevent
anybody from signing on using the group user ID. For more information
about this security feature, see "Groups without passwords" on page 757.
$ For information on altering database object properties, see "Setting
properties for database objects" on page 120.
$ For information about granting remote permissions for groups, see
"Granting and revoking remote permissions" on page 749.

Creating groups
You can create a new group in both Sybase Central and Interactive SQL.
You need DBA authority to create a new group.
The GROUP permission, which gives the user ID the ability to have
members, is not inherited by members of a group. Otherwise, every user ID
would automatically be a group as a consequence of its membership in the
special PUBLIC group.

v To create a new group (Sybase Central):


1 Open the Users & Groups folder.


2 In the right pane, double-click Add User or Group.


3 Follow the instructions of the wizard.

v To create a new group (SQL):


1 Connect to a database.
2 Execute a GRANT GROUP TO statement. If the user ID you cite in this
statement has not already been created, you need to create it first.

Example Create the user ID personnel.


GRANT CONNECT
TO personnel
IDENTIFIED BY group_password
Make the user ID personnel a group.
GRANT GROUP TO personnel

$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference
♦ "Creating new users" on page 741

Granting group membership to existing users or groups


You can add existing users to groups or add groups to other groups in both
Sybase Central and Interactive SQL. In Sybase Central, you can control
group membership on the property sheets of users or groups. In Interactive
SQL, you can make a user a member of a group with the GRANT statement.
When you assign a user membership in a group, they inherit all the
permissions on tables, views, and procedures associated with that group.
Only the DBA can grant membership in a group.

v To add a user or group to another group (Sybase Central):


1 Open the Users & Groups folder.
2 Right-click the user/group that you want to add to another group and
choose Properties from the popup menu.
3 Click the Membership tab of the property sheet.
4 Click Join Group to open the Join Groups dialog.
5 Select the desired group and click Join Group. The user or group
associated with the property sheet is then added to this desired group.


v To add a user or group to another group (SQL):


1 Connect to a database.
2 Execute a GRANT MEMBERSHIP IN GROUP statement, specifying
the desired group and the users involved.

Example Grant the user M_Haneef membership in the personnel group:


GRANT MEMBERSHIP
IN GROUP personnel
TO M_Haneef

$ See also
♦ "GRANT statement" on page 540 of the book ASA Reference
♦ "Creating new users" on page 741
♦ "Users and Groups properties" on page 1085

Revoking group membership


You can remove users or groups from a group in both Sybase Central and
Interactive SQL.
Removing a user or group from a group does not delete them from the
database (or from other groups). To do this, you must delete the user/group
itself.
Only the DBA can revoke membership in a group.

v To remove a user or group from another group (Sybase Central):


1 Open the Users & Groups folder.
2 Right-click the desired user/group and choose Properties from the popup
menu.
3 Click the Membership tab of the property sheet.
4 Select the desired group to leave and click Leave Group. The user or
group associated with the property sheet is then removed from this
desired group.

Tip
You can perform this action by opening the Users & Groups folder, right-
clicking the user or group that is currently a member of another group,
and choosing Leave Group.


v To remove a user or group from another group (SQL):


1 Connect to a database.
2 Execute a REVOKE MEMBERSHIP IN GROUP statement, specifying
the desired group and the users involved.

Example Remove the user M_Haneef from the personnel group:


REVOKE MEMBERSHIP
IN GROUP personnel
FROM M_Haneef

$ See also
♦ "REVOKE statement" on page 595 of the book ASA Reference
♦ "Creating new users" on page 741
♦ "Users and Groups properties" on page 1085
♦ "Deleting users from the database" on page 751
♦ "Deleting groups from the database" on page 758

Permissions of groups
You may grant permissions to groups in exactly the same way as to any other
user ID. Permissions on tables, views, and procedures are inherited by
members of the group, including other groups and their members. Group permissions have some complexities that database administrators need to keep in mind.
Notes
Members of a group do not inherit the DBA, RESOURCE, and GROUP permissions. Even if the personnel user ID has RESOURCE permissions, the members of personnel do not have RESOURCE permissions.
Ownership of database objects is associated with a single user ID and is not
inherited by group members. If the user ID personnel creates a table, then
the personnel user ID is the owner of that table and has the authority to
make any changes to the table, as well as to grant privileges concerning the
table to other users. Other user IDs who are members of personnel are not
the owners of this table, and do not have these rights. Only granted permissions are inherited: for example, if the DBA or the personnel user ID explicitly grants SELECT permission on the table to personnel, all group members have SELECT access to the table.
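To illustrate the last point: ownership is not inherited, so only an explicit grant makes the table available to group members. A sketch, assuming personnel owns a hypothetical employees table:

```sql
-- Granting to the group is what gives members of
-- personnel SELECT access; ownership alone does not
GRANT SELECT ON personnel.employees TO personnel;
```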


Referring to tables owned by groups


Groups are used for finding tables and procedures in the database. For
example, the query
SELECT * FROM SYSGROUPS
always finds the table SYSGROUPS, because all users belong to the
PUBLIC group, and PUBLIC belongs to the SYS group which owns the
SYSGROUPS table. (The SYSGROUPS table contains a list of group_name,
member_name pairs representing the group memberships in your database.)
If a table named employees is owned by the user ID personnel, and if
M_Haneef is a member of the personnel group, then M_Haneef can refer to
the employees table simply as employees in SQL statements. Users who are
not members of the personnel group need to use the qualified name
personnel.employees.

Creating a group to own the tables
A good practice that allows everyone to access the tables without qualifying names is to create a group whose only purpose is to own the tables. Do not grant any permissions to this group, but make all users members of it. You can then create permission groups and grant users membership in these permission groups as warranted. For an example, see the section "Database object names and prefixes" on page 760.
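A minimal sketch of this arrangement, with a hypothetical table-owning group:

```sql
-- A group that exists only to own tables:
-- no password, no permissions granted to it
GRANT CONNECT TO tab_owner;
GRANT GROUP TO tab_owner;

-- Every user joins, so unqualified table names resolve;
-- repeat for each user in the database
GRANT MEMBERSHIP IN GROUP tab_owner TO M_Haneef;
```

Permissions on the tables themselves are then granted through separate permission groups, not through tab_owner.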

Groups without passwords


Users connected to a group’s user ID have certain permissions. This user ID
can grant and revoke membership in the group. Also, this user would have
ownership permissions over any tables in the database created in the name of
the group’s user ID.
It is possible to set up a database so that only the DBA handles groups and
their database objects, rather than permitting other user IDs to make changes
to group membership. You can do this by disallowing connection as the
group’s user ID when creating the group. To do this, type the GRANT
CONNECT statement without a password. Thus:
GRANT CONNECT
TO personnel
creates a user ID personnel. This user ID can be granted group permissions,
and other user IDs can be granted membership in the group, inheriting any
permissions that have been given to personnel. However, nobody can
connect to the database using the personnel user ID, because it has no valid
password.


The user ID personnel can be an owner of database objects, even though no user can connect to the database using this user ID. The CREATE TABLE
statement, CREATE PROCEDURE statement, and CREATE VIEW
statement all allow the owner of the object to be specified as a user other
than that executing the statement. Only the DBA can carry out this
assignment of ownership.
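For instance, a DBA could create a table owned directly by personnel. A sketch, with a hypothetical table definition:

```sql
-- Executed by the DBA; personnel becomes the owner
CREATE TABLE personnel.review (
   emp_id   INTEGER NOT NULL,
   notes    VARCHAR(254)
);
```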

Special groups
When you create a database, the SYS and PUBLIC groups are also
automatically created. Neither of these groups has passwords, so it is not
possible to connect to the database as either SYS or as PUBLIC. However,
the two groups serve important functions in the database.
The SYS group
The SYS group owns the system tables and views for the database, which contain the full description of database structure, including all database objects and all user IDs.
$ For a description of the system tables and views, together with a
description of access to the tables, see the chapters "System Tables" on
page 991 of the book ASA Reference, and also "System Views" on page 1051
of the book ASA Reference.
The PUBLIC group
The PUBLIC group has CONNECT permission to the database and SELECT permission on the system tables. As well, the PUBLIC group is a member of the SYS group, and has read access to some of the system tables and views, so any user of the database can find out information about the database schema. If you wish to restrict this access, you can REVOKE PUBLIC’s membership in the SYS group.
Any new user ID is automatically a member of the PUBLIC group and
inherits any permissions specifically granted to that group by the DBA. You
can also REVOKE membership in PUBLIC for users if you wish.
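Both restrictions use the same REVOKE statement. A sketch (PUBLIC is a keyword, so quoting the name may be required):

```sql
-- Stop users from reading the system tables through PUBLIC
REVOKE MEMBERSHIP IN GROUP SYS FROM "PUBLIC";

-- Remove a single user from the PUBLIC group
REVOKE MEMBERSHIP IN GROUP "PUBLIC" FROM M_Haneef;
```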

Deleting groups from the database


You can delete a group from the database using both Sybase Central and
Interactive SQL.
Deleting users or groups from the database is different from removing them
from other groups. Deleting a group from the database does not delete its
members from the database, although they lose membership in the deleted
group.
Only the DBA can delete a group.

758
Chapter 23 Managing User IDs and Permissions

v To delete a group from the database (Sybase Central):


1 Open the Users & Groups folder.
2 Right-click the desired group and choose Delete from the popup menu.

v To delete a group from the database (SQL):


1 Connect to a database.
2 Execute a REVOKE CONNECT FROM statement.

Example Remove the group personnel from the database.


REVOKE CONNECT FROM personnel

$ See also
♦ "REVOKE statement" on page 595 of the book ASA Reference
♦ "Revoking user permissions" on page 750
♦ "Deleting users from the database" on page 751


Database object names and prefixes


The name of every database object is an identifier. The rules for valid
identifiers appear in "Identifiers" on page 223 of the book ASA Reference.
In queries and sample SQL statements throughout this guide, database
objects from the sample database are generally referred to using their simple
name. For example:
SELECT *
FROM employee
Tables, procedures, and views all have an owner. The owner of the tables in
the sample database is the user ID DBA. In some circumstances, you must
prefix the object name with the owner user ID, as in the following statement.
SELECT *
FROM "DBA".employee
The employee table reference is said to be qualified. In other circumstances
it is sufficient to give the object name. This section describes when you need
to use the owner prefix to identify tables, views and procedures, and when
you do not.
When referring to a database object, you require a prefix unless:
♦ You are the owner of the database object.
♦ The database object is owned by a group ID of which you are a member.

Example Consider the following example of a corporate database. The user ID
company created all the tables, and since this user ID belongs to the
database administrator, it has DBA authority.
GRANT CONNECT TO company
IDENTIFIED BY secret;
GRANT DBA TO company;
The company user ID created the tables in the database.
CONNECT USER company IDENTIFIED BY secret;
CREATE TABLE company.Customers ( ... );
CREATE TABLE company.Products ( ... );
CREATE TABLE company.Orders ( ... );
CREATE TABLE company.Invoices ( ... );
CREATE TABLE company.Employees ( ... );
CREATE TABLE company.Salaries ( ... );
Not everybody in the company should have access to all information.
Consider two user IDs in the sales department, Joe and Sally, who should
have access to the Customers, Products and Orders tables. To do this, you
create a Sales group.


GRANT CONNECT TO Sally IDENTIFIED BY xxxxx;


GRANT CONNECT TO Joe IDENTIFIED BY xxxxx;
GRANT CONNECT TO Sales IDENTIFIED BY xxxxx;
GRANT GROUP TO Sales;
GRANT ALL ON Customers TO Sales;
GRANT ALL ON Orders TO Sales;
GRANT SELECT ON Products TO Sales;
GRANT MEMBERSHIP IN GROUP Sales TO Sally;
GRANT MEMBERSHIP IN GROUP Sales TO Joe;
Now Joe and Sally have permission to use these tables, but they still have to
qualify their table references because the table owner is company, and Sally
and Joe are not members of the company group:
SELECT *
FROM company.customers
To rectify the situation, make the Sales group a member of the company
group.
GRANT GROUP TO company;
GRANT MEMBERSHIP IN GROUP company TO Sales;
Now Joe and Sally, being members of the Sales group, are indirectly
members of the company group, and can reference their tables without
qualifiers. The following command now works:
SELECT *
FROM Customers

Note Joe and Sally do not have any extra permissions because of their membership
in the company group. The company group has not been explicitly granted
any table permissions. (The company user ID has implicit permission to
look at tables like Salaries because it created the tables and has DBA
authority.) Thus, Joe and Sally still get an error executing either of these
commands:
SELECT *
FROM Salaries;
SELECT *
FROM company.Salaries
In either case, Joe and Sally do not have permission to look at the Salaries
table.


Using views and procedures for extra security


For databases that require a high level of security, defining permissions
directly on tables has limitations. Any permission granted to a user on a table
applies to the whole table. There are many cases when users’ permissions
need to be shaped more precisely than on a table-by-table basis. For
example:
♦ It is not desirable to give access to personal or sensitive information
stored in an employee table to users who need access to other parts of
the table.
♦ You may wish to give sales representatives update permissions on a
table containing descriptions of their sales calls, but limit such
permissions to their own calls.
In these cases, you can use views and stored procedures to tailor permissions
to suit the needs of your organization. This section describes some of the
uses of views and procedures for permission management.
$ For information on how to create views, see "Working with views" on
page 138.
$ For information on view permissions, see "Granting permissions on
views" on page 745.

Using views for tailored security


Views are computed tables that contain a selection of rows and columns
from base tables. Views are useful for security when it is appropriate to give
a user access to just one portion of a table. The portion can be defined in
terms of rows or in terms of columns. For example, you may wish to
disallow a group of users from seeing the salary column of an employee
table, or you may wish to limit a user to see only the rows of a table they
have created.
Example The Sales manager needs access to information in the database concerning
employees in the department. However, there is no reason for the manager to
have access to information about employees in other departments.
This example describes how to create a user ID for the sales manager, create
views that provide the information she needs, and grant the appropriate
permissions to the sales manager user ID.
1 Create the new user ID using the GRANT statement. While logged in as
a user ID with DBA authority, enter the following:


CONNECT "dba"
IDENTIFIED BY sql;

GRANT CONNECT
TO SalesManager
IDENTIFIED BY sales
2 Define a view which only looks at sales employees as follows:
CREATE VIEW emp_sales AS
SELECT emp_id, emp_fname, emp_lname
FROM "dba".employee
WHERE dept_id = 200
The table must be identified as "dba".employee, with the owner of the
table explicitly specified, so that the SalesManager user ID can use the
view. Otherwise, when SalesManager uses the view, the SELECT statement
refers to a table that user ID does not recognize.
3 Give SalesManager permission to look at the view:
GRANT SELECT
ON emp_sales
TO SalesManager
You use exactly the same command to grant permission on views and on
tables.

Example 2 The next example creates a view which allows the Sales Manager to look at a
summary of sales orders. This view requires information from more than one
table for its definition:
1 Create the view.
CREATE VIEW order_summary AS
SELECT order_date, region, sales_rep, company_name
FROM "dba".sales_order
KEY JOIN "dba".customer
2 Grant permission for the Sales Manager to examine this view.
GRANT SELECT
ON order_summary
TO SalesManager
3 To check that the process has worked properly, connect to the
SalesManager user ID and look at the views you created:
CONNECT SalesManager
IDENTIFIED BY sales ;
SELECT *
FROM "dba".emp_sales ;
SELECT *
FROM "dba".order_summary ;


No permissions have been granted to the Sales Manager to look at the
underlying tables. The following commands produce permission errors.
SELECT * FROM "dba".employee ;
SELECT * FROM "dba".sales_order

Other permissions on views The previous example shows how to use views to tailor SELECT
permissions. You can grant INSERT, DELETE, and UPDATE permissions
on views in the same way.
$ For information on allowing data modification on views, see "Using
views" on page 140.
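For example, to let the sales manager change employee names through the emp_sales view defined earlier, a DBA could grant UPDATE as well as SELECT. This is a sketch only; SalesManager still receives no permissions on the underlying employee table.

```sql
-- Permissions apply to the view only, not to "dba".employee.
GRANT SELECT, UPDATE
ON emp_sales
TO SalesManager;
```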

Using procedures for tailored security


While views restrict access on the basis of data, procedures restrict the
actions a user may take. As described in "Granting permissions on
procedures" on page 747, a user may have EXECUTE permission on a
procedure without having any permissions on the table or tables on which the
procedure acts.
Strict security For strict security, you can disallow all access to the underlying tables, and
grant permissions to users or groups of users to execute certain stored
procedures. This approach strictly defines the manner in which data in the
database can be modified.
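As a sketch of this approach (the procedure and group names are hypothetical), the payroll group receives no table permissions at all, only the right to execute one procedure created by a user who does have the necessary permissions:

```sql
-- Members of payroll can change salaries only through this
-- procedure; they cannot read or modify the Salaries table directly.
GRANT EXECUTE ON update_salary TO payroll;
```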


Changing Ownership on Nested Objects


Views and procedures can access underlying objects that are owned by
different users. For example, if usera, userb, userc, and userd were four
different users, userd.viewd could be based on userc.viewc, which could be
based on userb.viewb, which could be based on usera.table. Similarly for
procedures, userd.procd could call userc.procc, which could call userb.procb,
which could insert into usera.tablea.
The following Discretionary Access Control (DAC) rules apply to nested
views and tables:
♦ To create a view, the user must have SELECT permission on all of the
base objects (for example tables and views) in the view.
♦ To access a view, the view owner must have been granted the
appropriate permission on the underlying tables or views with the
GRANT OPTION and the user must have been granted the appropriate
permission on the view.
♦ Updating with a WHERE clause requires both SELECT and UPDATE
permission.
♦ If a user owns the tables in a view definition, the user can access the
tables through a view, even if the user is not the owner of the view and
has not been granted access on the view.
The following DAC rules apply to nested procedures:
♦ A user does not require any permissions on the underlying objects (for
example tables, views or procedures) to create a procedure.
♦ For a procedure to execute, the owner of the procedure needs the
appropriate permissions on the objects that the procedure references.
♦ Even if a user owns all the tables referenced by a procedure, the user
will not be able to execute the procedure to access the tables unless the
user has been granted EXECUTE permission on the procedure.

Following are some examples that describe this behavior.

Example 1: User1 creates table1, and user2 creates view2 on table1


♦ User1 can always access table1, since user1 is the owner.
♦ User1 can always access table1 through view2, since user1 is the owner
of the underlying table. This is true even if user2 does not grant
permission on view2 to user1.


♦ User2 can access table1 directly or through view2 if user1 grants


permission on table1 to user2.
♦ User3 can access table1 if user1 grants permission on table1 to user3.
♦ User3 can access table1 through view2 if user1 grants permission on
table1 to user2 with grant option AND user2 grants permission on view2
to user3.

Example 2: User2 creates procedure2 that accesses table1


♦ User1 can access table1 through procedure2 if user2 grants EXECUTE
permission on procedure2 to user1. Note that this is different from the
case of view2, where user1 did not need permission on view2.

Example 3: User1 creates table1, user2 creates table2, and user3 creates view3
joining table1 and table2
♦ User3 can access table1 and table2 through view3 if user1 grants
permission on table1 to user3 AND user2 grants permission on table2 to
user3.
♦ If user3 has permission on table1 but not on table2, then user3 cannot
use view3, even to access the subset of columns belonging to table1.
♦ User1 or user2 can use view3 if (a) user1 grants permission with grant
option on table1 to user3, (b) user2 grants permission with grant option
on table2 to user3, AND (c) user3 grants permission on view3 to that
user.
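The last case can be sketched as the following sequence of grants, using the user and object names from the example:

```sql
-- Executed by user1 and user2: let user3 pass access on through view3.
GRANT SELECT ON table1 TO user3 WITH GRANT OPTION;
GRANT SELECT ON table2 TO user3 WITH GRANT OPTION;

-- Executed by user3: let the owners of the base tables use the view.
GRANT SELECT ON view3 TO user1;
GRANT SELECT ON view3 TO user2;
```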


How user permissions are assessed


Groups introduce complexities into the permissions of individual users.
Suppose user M_Haneef has SELECT and UPDATE permissions on a
specific table individually, but is also a member of two groups. Suppose one
of these groups has no access to the table at all, and one has only SELECT
access. What are the permissions in effect for this user?
Adaptive Server Anywhere decides whether a user ID has permission to
carry out a specific action in the following manner:
1 If the user ID has DBA authority, the user ID can carry out any action in
the database.
2 Otherwise, permission depends on the permissions assigned to the
individual user. If the user ID has been granted permission to carry out
the action, then the action proceeds.
3 If no individual settings have been made for that user, permission
depends on the permissions of each of the groups to which the member
belongs. If any of these groups has permission to carry out the action,
the user ID has permission by virtue of membership in that group, and
the action proceeds.
This approach minimizes problems associated with the order in which
permissions are set.
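The M_Haneef example can be sketched as follows (the table and group names are hypothetical):

```sql
-- Individual permissions, granted by the table owner.
GRANT SELECT, UPDATE ON sales_calls TO M_Haneef;

-- Group memberships: clerks has no access to the table,
-- readers has SELECT only.
GRANT MEMBERSHIP IN GROUP clerks TO M_Haneef;
GRANT MEMBERSHIP IN GROUP readers TO M_Haneef;

-- M_Haneef can still update sales_calls: the individual grant
-- is checked before any group permissions are consulted.
```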


Managing the resources connections use


Building a set of users and groups allows you to manage permissions on a
database. Another aspect of database security and management is to limit the
resources an individual user can use.
For example, you may wish to prevent a single connection from taking too
much of the available memory or CPU resources, so you can avoid having
a connection slow down other users of the database.
Adaptive Server Anywhere provides a set of database options that the DBA
can use to control resources. These options are called resource governors.
Setting options You can set database options using the SET OPTION statement, with the
following syntax:
SET [ TEMPORARY ] OPTION
... [ userid. | PUBLIC. ]option-name = [ option-value ]
$ For reference information about options, see "Database Options" on
page 155 of the book ASA Reference. For information on the SET OPTION
statement, see "SET OPTION statement" on page 612 of the book ASA
Reference.
Resources that can be managed You can use the following options to manage resources:
♦ JAVA_HEAP_SIZE Sets the maximum size (in bytes) of the part of the
memory allocated to Java applications on a per connection basis.
♦ MAX_CURSOR_COUNT Limits the number of cursors for a
connection.
♦ MAX_STATEMENT_COUNT Limits the number of prepared
statements for a connection.
♦ BACKGROUND_PRIORITY Limits the impact requests on the current
connection have on the performance of other connections.
Database option settings are not inherited through the group structure.
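For example, a DBA might cap the cursors and prepared statements available to a particular user ID (the limits shown are arbitrary):

```sql
-- Each connection made by SalesManager is limited separately.
SET OPTION SalesManager.MAX_CURSOR_COUNT = 50;
SET OPTION SalesManager.MAX_STATEMENT_COUNT = 50;
```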


Users and permissions in the system tables


The database system tables and system views store information about the
current users of a database and about their permissions.
$ For a description of each of these tables, see "System Tables" on
page 991 of the book ASA Reference.
The special user ID SYS owns the system tables. You cannot connect to the
SYS user ID.
The DBA has SELECT access to all system tables, just as to any other tables
in the database. The access of other users to some of the tables is limited. For
example, only the DBA has access to the SYS.SYSUSERPERM table, which
contains all information about the permissions of users of the database, as
well as the encrypted passwords of each user ID. However,
SYS.SYSUSERPERMS is a view containing all information in
SYS.SYSUSERPERM except for the password, and by default all users have
SELECT access to this view. You can fully modify all permissions and
group memberships set up in a new database for SYS, PUBLIC, and DBA.
The following table summarizes the system tables containing information
about user IDs, groups, and permissions. The user ID SYS owns all tables
and views, and so their qualified names are SYS.SYSUSERPERM and so
on.
Appropriate SELECT queries on these tables generate all the user ID and
permission information stored in the database.

Table Default Contents


SYSUSERPERM DBA only Database-level permissions and
password for each user ID
SYSGROUP PUBLIC One row for each member of each group
SYSTABLEPERM PUBLIC All permissions on tables given by the
GRANT command
SYSCOLPERM PUBLIC All columns with UPDATE permission
given by the GRANT command
SYSDUMMY PUBLIC Dummy table, can be used to find the
current user ID
SYSPROCPERM PUBLIC Each row holds one user granted
permission to use one procedure

The following table summarizes the system views containing information


about user IDs, groups, and permissions:


Views Default Contents


SYSUSERAUTH DBA only All information in SYSUSERPERM
except for user numbers
SYSUSERPERMS PUBLIC All information in SYSUSERPERM
except for passwords
SYSUSERLIST PUBLIC All information in SYSUSERAUTH
except for passwords
SYSGROUPS PUBLIC Information from SYSGROUP in a
more readable format
SYSTABAUTH PUBLIC Information from
SYSTABLEPERM in a more
readable format
SYSCOLAUTH PUBLIC Information from SYSCOLPERM in
a more readable format
SYSPROCAUTH PUBLIC Information from SYSPROCPERM
in a more readable format

In addition to these, there are tables and views that contain information about
each object in the database.
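For example, any user can inspect group memberships and table permissions through the PUBLIC views:

```sql
-- Group memberships, in readable form.
SELECT * FROM SYS.SYSGROUPS;

-- Table permissions, in readable form.
SELECT * FROM SYS.SYSTABAUTH;
```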

CHAPTER 24

Keeping Your Data Secure

About this chapter This chapter describes Adaptive Server Anywhere features that help make
your database secure.
Many of these features are described in more detail elsewhere in the
documentation, and for such features, pointers to the relevant places are
provided.
Database administrators are responsible for data security. In this chapter,
unless otherwise noted, you require DBA authority to carry out the tasks
described.
$ User IDs and permissions are major security-related topics. For
information on these topics, see "Managing User IDs and Permissions" on
page 735.
Contents
Topic Page
Security features overview 772
Security tips 773
Controlling database access 775
Controlling the tasks users can perform 777
Auditing database activity 778
Running the database server in a secure fashion 782


Security features overview


Since databases may contain proprietary, confidential or private information,
ensuring that the database and the data in it are designed for security is very
important.
Adaptive Server Anywhere has several features to assist in building a secure
environment for your data:
♦ User identification and authentication These control who has access
to a database.
$ For information on these subjects, see "Creating new users" on
page 741.
♦ Discretionary access control features These features control the
actions a user can carry out while connected to a database.
$ For more information, see "Database permissions overview" on
page 736.
♦ Auditing The auditing feature helps you maintain a record of actions
on the database.
$ For more information, see "Auditing database activity" on
page 778.
♦ Database server options When you start the database server, you
control who can carry out operations (for example, loading databases).
$ For more information, see "Controlling permissions from the
command line" on page 11.
♦ Views and stored procedures Views and stored procedures allow
you to tune the data a user can access and the operations a user can
execute.
$ For more information, see "Using views and procedures for extra
security" on page 762.

This chapter describes auditing, and presents overviews of the other security
features, providing pointers to where you can find more detailed information.


Security tips
As database administrator, there are many actions you can take to improve
the security of your data. For example, you can:
♦ Change the default user ID and password The default user ID and
password for a newly created database is DBA and SQL. You should
change this password before deploying the database.
♦ Require long passwords You can set the
MIN_PASSWORD_LENGTH public option to disallow short (and
therefore easily guessed) passwords.
$ For information, see "MIN_PASSWORD_LENGTH option" on
page 199 of the book ASA Reference.
♦ Restrict DBA authority You should restrict DBA authority only to
users who absolutely require it since it is very powerful. Users with
DBA authority can see and do anything in the database.
You may consider giving users with DBA authority two user IDs: one
with DBA authority and one without, so they can connect as DBA only
when necessary.
♦ Drop external system functions The following external functions
present possible security risks: xp_cmdshell, xp_startmail, xp_sendmail,
and xp_stopmail.
The xp_cmdshell procedure allows users to execute operating system
commands or programs.
The e-mail commands allow users to have the server send e-mail
composed by the user. Malicious users could use either the e-mail or
command shell procedures to perform operating-system tasks with
authorities other than those they have been given by the operating
system. In a security-conscious environment, you should drop these
functions.
$ For information on dropping procedures, see "DROP statement" on
page 505 of the book ASA Reference.
♦ Protect your database files You should protect the database file, log
files, dbspace files, and write files from unauthorized access. Do not
store them within a shared directory or volume.
♦ Protect your database software You should similarly protect
Adaptive Server Anywhere software. Only give users access to the
applications, DLLs, and other resources they require.


♦ Run the database server as a service or a daemon To prevent


unauthorized users from shutting down or gaining access to the database
or log files, run the database server as an NT service on Windows NT.
On UNIX, running the server as a daemon serves a similar purpose.
$ For more information, see "Running the server outside the current
session" on page 18.


Controlling database access


By assigning user IDs and passwords, the database administrator controls
who has access to a database. By granting permissions to each user ID, the
database administrator controls what tasks each user can carry out when
connected. This section describes the features available for controlling
database access.
Permission scheme is based on user IDs When you log onto the database, you have access to all database objects that
meet any of the following criteria:
♦ objects you created.
♦ objects to which you received explicit permission.
♦ objects to which a group you belong to received explicit permission.
The user cannot access any database object that does not meet these criteria.
In short, users can access only the objects they own or objects to which they
explicitly received access permissions.
$ For more information, see the following:
♦ "Managing User IDs and Permissions" on page 735.
♦ "CONNECT statement" on page 423 of the book ASA Reference.
♦ "GRANT statement" on page 540 of the book ASA Reference.
♦ "REVOKE statement" on page 595 of the book ASA Reference.
Using integrated logins Integrated logins allow users to use a single login name and password to log
onto both the Windows NT operating system and onto a database. An
external login name is associated with a database user ID. When you attempt
an integrated login, you log onto the operating system by giving both a login
name and password. The operating system then tells the server who you are,
and the server logs you in as the associated database user ID. No additional
login name or password is required. There are some security implications of
integrated logins to consider.
$ For more information, see the following:
♦ "Using integrated logins" on page 77.
♦ "Security concerns: unrestricted database access" on page 81.
♦ "LOGIN_MODE option" on page 195 of the book ASA Reference.


Increasing password security


Passwords are an important part of any database security system. To be
secure, passwords must be difficult to guess, and they must not be easily
accessible on users’ hard drives or other locations.
Restricting password length By default, passwords can be any length. For greater security, you can
enforce a minimum length requirement on all new passwords. You do this by
setting the MIN_PASSWORD_LENGTH database option to a value greater
than zero. The following statement enforces passwords to be at least 8 bytes
long.
SET OPTION PUBLIC.MIN_PASSWORD_LENGTH = 8

$ For more information, see "MIN_PASSWORD_LENGTH option" on


page 199 of the book ASA Reference.
Encrypt the passwords Passwords are the key to accessing databases. They should not be easily
available to unauthorized people in a security-conscious environment.
When you create an ODBC data source, or a Sybase Central connection
profile, you can optionally include a password. Avoid including passwords
for greater security. If you do include a password in the data source, make
sure you encrypt it by checking the Encrypt Password box.
$ For information on creating ODBC data sources, see "Creating an
ODBC data source" on page 49.


Controlling the tasks users can perform


Users can access only those objects to which they have been granted access.
You grant permission on an object to another user with the GRANT
statement. You can also grant a user permission to pass on the permissions
on an object to other users.
The GRANT statement also gives more general permissions to users.
Granting CONNECT permissions to a user allows them to connect to the
database and change their passwords. Granting RESOURCE authority allows
the user to create tables, views, procedures, and so on. Granting DBA
authority to a user gives that user the ability to see and do anything in the
database. The DBA would also use the GRANT statement to create and
administer groups.
The REVOKE statement is the opposite of the GRANT statement—any
permission that GRANT has explicitly given, REVOKE can take away.
Revoking CONNECT from a user removes the user from the database,
including all objects owned by that user.
Negative permissions Adaptive Server Anywhere does not support negative permissions. This
means that you cannot revoke a permission that was not explicitly granted.
For example, suppose user bob is a member of a group called sales. If a user
grants DELETE permission on a table T to sales, then bob can delete rows
from T. If you want to prevent bob from deleting from T, you cannot simply
execute a REVOKE DELETE on T from bob, since the DELETE ON T
permission was never granted directly to bob. In this case, you would have to
revoke bob's membership in the sales group.
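In this example, the only way to take the permission away from bob is to remove him from the group:

```sql
-- REVOKE DELETE ON T FROM bob would have no effect: the permission
-- was granted to the sales group, never to bob directly.
REVOKE MEMBERSHIP IN GROUP sales FROM bob;
```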
$ For more information see:
♦ "GRANT statement" on page 540 of the book ASA Reference.
♦ "REVOKE statement" on page 595 of the book ASA Reference.

Designing database objects for security


Views and stored procedures provide alternative ways of tuning the data
users can access and the tasks they can perform.
$ For more information on these features, see:
♦ "Benefits of procedures and triggers" on page 438.
♦ "Using views and procedures for extra security" on page 762.


Auditing database activity


Auditing is a way of keeping track of the activity performed on a database.
The record of activities stays in the transaction log. By turning on auditing,
the DBA increases the amount of data saved in the transaction log to include
the following:
♦ All login attempts (successful and failed), including the terminal ID.
♦ Accurate timestamps of all events (to a resolution of milliseconds)
♦ All permissions checks (successful and failed), including the object on
which the permission was checked (if applicable)
♦ All actions that require DBA authority.
The transaction log Each database has an associated transaction log file. The transaction log is
used for database recovery. It is a record of transactions executed against a
database.
$ For information about the transaction log, see "The transaction log" on
page 651.
The transaction log stores all executed data definition statements, and the
user ID that executed them. It also stores all updates, deletes, and inserts and
which user executed those statements. However, this is insufficient for some
auditing purposes. By default, the transaction log does not contain the time
of the event, just the order in which events occurred. It contains neither
failed events nor SELECT statements.

Turning on auditing
The database administrator can turn on auditing to add security-related
information to the transaction log.
Auditing is off by default. To enable auditing on a database, the DBA must
set the value of the public option AUDITING to ON. Auditing then remains
enabled until explicitly disabled, by setting the value of the AUDITING
option to OFF. You must have DBA permissions to set this option.

v To turn on auditing:
1 Ensure that your database is upgraded to at least version 6.0.2.
2 If you had to upgrade your database, create a new transaction log.
3 Execute the following statement:
SET OPTION PUBLIC.AUDITING = ’ON’


$ For more information, see "AUDITING option" on page 172 of the


book ASA Reference.

Retrieving audit information


You can use the Log Translation utility to retrieve audit information. You
can access this utility from Sybase Central or from the command line. It
operates on a transaction log to produce a SQL script containing all of the
transactions, along with some information on what user executed each
command. By using the -g option, dbtran includes more comments
containing the auditing information.
To ensure a complete and readable audit record, the -g option automatically
sets the following switches:
♦ -d Display output in chronological order.
♦ -t Include trigger-generated operations in the output.
♦ -a Include rolled back transactions in the output.
You can run the Log Translation Utility against a running database server or
against a database log file.

v To retrieve auditing information from a running database server:


1 Make sure your user ID has DBA authority.
2 With the database server running, execute the following statement at a
system command prompt.
dbtran -g -c "uid=dba;pwd=sql;..." -n asademo.sql

$ For information about connection strings, see "Connection


parameters" on page 64.

v To retrieve auditing information from a transaction log file:


1 Shut down the database server to ensure the log file is available.
2 At a system command prompt, execute the following statement to place
the information from the file asademo.log and into the file asademo.sql.
dbtran -g asademo.log

The -g command-line option includes auditing information in the output


file.
$ For more information see "The Log Translation utility" on page 117 of
the book ASA Reference.


Adding audit comments


You can add comments to the audit trail using the sa_audit_string system
stored procedure. It takes a single argument, which is a string of up to 200
bytes. You must have DBA permissions to call this procedure.
For example:
call sa_audit_string( ’Started audit testing here.’ )
This comment is stored in the transaction log as an audit statement.

An auditing example
This example shows how the auditing feature records attempts to access
unauthorized information.
1 As database administrator, turn on auditing.
You can do this from Sybase Central as follows:
♦ Connect to the ASA 7.0 Sample data source. This connects you as
the DBA user.
♦ Right-click the asademo database icon and choose Set Options
from the popup menu.
♦ Select Auditing from the list of options, and enter the value ON in
the Public Setting box. Click Set Permanent Now to set the option
and Close to exit.
Alternatively, you can use Interactive SQL. Connect to the sample
database from Interactive SQL as user ID DBA with password SQL and
execute the following statement:
SET OPTION PUBLIC.AUDITING = 'ON'
2 Add a user to the sample database, named BadUser, with password
BadUser. You can do this from Sybase Central. Alternatively, you can
use Interactive SQL and enter the following statement:
GRANT CONNECT TO BadUser
IDENTIFIED BY 'BadUser'
3 Using Interactive SQL, connect to the sample database as BadUser and
attempt to access confidential information in the employee table with
the following query:
SELECT emp_lname, salary
FROM dba.employee

780
Chapter 24 Keeping Your Data Secure

You receive an error message indicating that you do not have permission
to select from employee.
4 From a command prompt, change directory to your Adaptive Server
Anywhere installation directory, which holds the sample database, and
execute the following command:
dbtran -g -c "dsn=ASA 7.0 Sample" -n asademo.sql
This command produces a file named asademo.sql, containing the
transaction log information and a set of comments holding audit
information. The lines indicating the unauthorized BadUser attempt to
access the employee table are included in the file as follows:
--AUDIT-1001-0000287812 -- 1999/02/11 13:59:58.765
Checking Select permission on employee - Failed
--AUDIT-1001-0000287847 -- 1999/02/11 13:59:58.765
Checking Select permission on employee(salary) -
Failed
5 Restore the sample database to its original state so other examples you
try in this documentation give the expected results.
Connect as the DBA user, and carry out the following operations:
♦ Revoke Connect privileges from the user ID BadUser.
♦ Set the PUBLIC.AUDITING option to 'OFF'.

Auditing actions outside the database server


Some database utilities act on the database file directly. In a secure
environment, only trusted users should have access to the database files.
To provide auditing of actions, under Windows NT only, any use of dbtran,
dbwrite, and dblog generates a text file in the same directory as the database
file, with the extension .alg. For example, for asademo.db, the file is called
asademo.alg. Records containing the tool name, NT user name, and date/time
are appended to this file. Records are only appended if the database being
accessed has auditing set to ON.


Running the database server in a secure fashion


There are several security features you can set either when starting the
database server or during server operation, including:
♦ Starting and stopping databases By default, any user can start an
extra database on a running server. The –gd option allows you to limit
access to this option to users with a certain level of permission in the
database to which they are already connected. The permissible values
include dba, all, or none.
$ For more information, see "–gd command-line option" on page 26
of the book ASA Reference.
♦ Creating and deleting databases By default, any user can use the
CREATE DATABASE statement to create a database file. The –gu
option allows you to limit access to this option to users with a certain
level of permission in the database to which they are connected. The
permissible values include dba, all, none, or utility_db.
$ For information, see "–gu command-line option" on page 30 of the
book ASA Reference.
♦ Stopping the server The dbstop command-line utility stops a
database server. It is useful in batch files, or in other cases where
interactive stopping of the server (by clicking Shutdown on the server
window) is impractical. By default, any user can run dbstop to shut
down a server. The –gk option allows you to limit access to this option
to users with a certain level of permission in the database. The
permissible values include dba, all, or none.
$ For more information, see "–gk command-line option" on page 27
of the book ASA Reference.
♦ Loading and unloading data The LOAD TABLE, UNLOAD
TABLE, and UNLOAD statements all access the file system on the
database server machine. If you are running the personal database
server, you already have access to the file system and this is not a
security issue. If you are running the network database server,
unwarranted file system access may be a security issue. The –gl
command-line option allows you to control the database permissions
required to carry out loading and unloading of data. The permissible
values are dba, all, or none.
$ For more information, see "–gl command-line option" on page 27
of the book ASA Reference.


♦ Encrypting client/server communications over the network For
greater security, you can force client/server network communications to
be encrypted as they pass over the network. For more information, see
"Encrypting client/server communications" on page 783.

Encrypting client/server communications


You can require encryption of client/server communications either when you
start the database server or, for an individual client, in that client's
connection parameters.

v To force encryption of client/server communications from the server:
♦ Start the database server using the -e command-line option. For
example:
dbsrv7 -e -x tcpip asademo.db

$ For more information, see "–e command-line option" on page 24 of the
book ASA Reference.

v To force encryption of client/server communications from a particular client:
♦ Add the Encryption connection parameter to your connection string.
...UID=dba;PWD=sql;ENC=YES;...
You can also set this parameter on the Network tab of the connection
dialog box and the ODBC data source dialog box.
$ For more information, see "Encryption connection parameter" on
page 58 of the book ASA Reference.


C H A P T E R 2 5

Working with Database Files

About this Chapter This chapter describes how to create and work with database and associated
files.
Contents
Topic Page
Overview of database files 786
Using additional dbspaces 788
Working with write files 792
Using the utility database 794


Overview of database files


Basic database files Each database has the following files associated with it.
♦ The database file This file holds the database information. It typically
has the extension .db.
$ For information on creating databases, see "Working with
databases" on page 115.
♦ The transaction log This file holds a record of the changes made to
the database, and is necessary for recovery and replication. It typically
has the extension .log.
$ For information on the transaction log, see "Backup and Data
Recovery" on page 645.
♦ The temporary file The database server uses the temporary file to hold
information needed during a database session. The database server
disregards this file once the database shuts down — even if the server
remains running. The file has a server-generated name with the
extension .tmp.
The temporary file is held in a location determined by an environment
variable. The following environment variables are checked, in order:
♦ ASTMP
♦ TMP
♦ TMPDIR
♦ TEMP
If none of these exist, it is held in the system temporary directory.
The server creates, maintains and removes the temporary file. You only
need to ensure that there is enough free space available for the
temporary file.
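Because ASTMP is checked first, you can direct the temporary file to a fast local drive before starting the server. This is a sketch only; the directory shown is a placeholder:

```shell
rem Point the temporary file at a fast drive, then start the server
set ASTMP=C:\fasttmp
dbeng7 asademo.db
```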

Additional files Other files can also become part of a database system, including:
♦ Additional database files You can spread your data over several
separate files. These additional files are called dbspaces.
$ For information on dbspaces, see "CREATE DBSPACE
statement" on page 431 of the book ASA Reference.
♦ Transaction log mirror files For additional security, you can create a
mirror copy of the transaction log. This file typically has the extension
.mlg.


$ For information on mirrored transaction logs, see "Transaction log
mirrors" on page 652.
♦ Write files If the database file you are working with is designated
read-only (for example, because it is distributed on CD-ROM), you can
use an additional write file to hold changes you make to the data.
However, note that you can only use write files with read-only database
files, not with a server running in read-only mode. Running the server in
read-only mode allows no changes to the database file whatsoever.
♦ Compressed database files You can compress a database file. The
resulting file is read only, but can be used in conjunction with a write
file. Compressed database files are used in place of the actual database
file.
$ For information on compression, see "The Compression utility" on
page 85 of the book ASA Reference.

Chapter goals This chapter describes how to create, name, and delete different types of files
found in a database system.


Using additional dbspaces


This section describes how to use additional database files, named dbspaces.

Typically needed for large databases


For most databases, a single database file is sufficient. However, for users
of large databases, additional database files are often necessary.
Additional database files are also convenient tools for clustering related
information in separate files.

When you initialize a database, it contains one database file. This first
database file is called the main file. All database objects and all data are
placed, by default, in the main file.
Each database file has a maximum allowable size of 256M database pages.
For example, a database file created with a database page size of 4 KB can
grow to a maximum size of one terabyte (256M*4KB). However, in practice,
the maximum file size allowed by the physical file system in which the file is
created affects the maximum allowable size significantly.
While many commonly employed file systems restrict file size to a
maximum of 2GB, some, such as the Windows NT file system, allow you to
exploit the full database file size. In scenarios where the amount of data
placed in the database exceeds the maximum file size, it is necessary to
divide the data into more than one database file. As well, you may wish to
create multiple dbspaces for reasons other than size limitations, for example
to cluster related objects.
Splitting existing databases If you wish to split existing database objects
among several dbspaces, you need to unload your database and modify the
generated command file for rebuilding the database. To do so, add IN
clauses to specify the dbspace for each table you do not wish to place in
the main file.
$ See also
♦ "UNLOAD TABLE statement" on page 635 of the book ASA Reference
♦ "Setting properties for database objects" on page 120

Creating a dbspace
You create a new database file, or dbspace, either from Sybase Central, or by
using the CREATE DBSPACE statement. The database file for a new
dbspace may be on the same disk drive as the main file or on another disk
drive. You must have DBA authority to create dbspaces.


For each database, you can create up to twelve dbspaces in addition to the
main dbspace.
Placing tables in dbspaces A newly created dbspace is empty. When you
create a new table you can place it in a specific dbspace with an IN clause
in the CREATE TABLE statement. If you don't specify an IN clause, the
table appears in the main dbspace.
Each table is entirely contained in the dbspace it is created in. By default,
indexes appear in the same dbspace as their table, but you can place them in
a separate dbspace by supplying an IN clause.

v To create a dbspace (Sybase Central):


1 Connect to the database.
2 Open the DB Spaces folder for that database.
3 Double-click Add DB Space.
4 Follow the instructions in the wizard.
The new dbspace then appears in the DB Spaces folder.

v To create a dbspace (SQL):


1 Connect to the database.
2 Execute a CREATE DBSPACE statement.

Example The following command creates a new dbspace called library in the file
library.db in the same directory as the main file:
CREATE DBSPACE library
AS 'library.db'
The following command creates a table LibraryBooks and places it in the
library dbspace.
CREATE TABLE LibraryBooks (
title char(100),
author char(50),
isbn char(30)
) IN library
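Since indexes accept an IN clause as well, an index can be placed in a dbspace other than its table's. The following is a sketch; the dbspace name libindex and the index name are invented for illustration:

```sql
-- Create a separate dbspace for index pages (hypothetical names)
CREATE DBSPACE libindex
AS 'libindex.db';

-- Place an index on the LibraryBooks table in that dbspace
CREATE INDEX books_by_author
ON LibraryBooks ( author )
IN libindex;
```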

$ See also
♦ "CREATE DBSPACE statement" on page 431 of the book ASA
Reference
♦ "Creating tables" on page 126
♦ "CREATE INDEX statement" on page 448 of the book ASA Reference


Deleting a dbspace
You can delete a dbspace using either Sybase Central or Interactive SQL.
Before you can delete a space, you must delete all tables that use the space.
You must have DBA authority to delete a dbspace.

v To delete a dbspace (Sybase Central):


1 Open the DB Spaces folder.
2 Right-click the desired dbspace and choose Delete from the popup
menu.

v To delete a dbspace (SQL):


1 Connect to a database.
2 Execute a DROP DBSPACE statement.
$ See also
♦ "Deleting tables" on page 130
♦ "DROP statement" on page 505 of the book ASA Reference

Pre-allocating space for database files


Adaptive Server Anywhere automatically grows database files as needed.
Rapidly changing database files can lead to excessive file fragmentation on
the disk, resulting in potential performance problems. Unless you are
working with a database with a high rate of change, you do not need to worry
about explicitly allocating space for database files. If you are working with a
database with a high rate of change, you may pre-allocate disk space for
dbspaces or for transaction logs using either Sybase Central or the ALTER
DBSPACE statement.
You must have DBA authority to alter the properties of a database file.

Performance Tip
Running a disk defragmentation utility after pre-allocating disk space
helps ensure that the database file is not fragmented over many disjoint
areas of the disk drive. Performance can suffer if there is excessive
fragmentation of database files.

v To pre-allocate space (Sybase Central):


1 Open the DB Spaces folder.


2 Right-click the desired dbspace and choose Properties from the popup
menu.
3 On the General tab of the property sheet, click Add Pages.
4 Enter the number of pages to add to the dbspace.

v To pre-allocate space (SQL):


1 Connect to a database.
2 Execute an ALTER DBSPACE statement.

Example Increase the size of the SYSTEM dbspace by 200 pages:
ALTER DBSPACE system
ADD 200
$ See also
♦ "Creating a dbspace" on page 788
♦ "Database Space properties" on page 1101
♦ "ALTER DBSPACE statement" on page 385 of the book ASA Reference


Working with write files


If you have a read-only database file (for example, if you distribute a
database on a CD-ROM), you can use a write file to make local changes to
the database.
You create a write file using the Write File utility or using the CREATE
WRITEFILE statement. In this section, the examples use the command-line
utilities.
$ For a description of the CREATE WRITEFILE statement, see
"CREATE WRITEFILE statement" on page 484 of the book ASA Reference.
$ For more information about opening a database as read-only to prevent
local changes to the database, see "–r command-line option" on page 33 of
the book ASA Reference.

v To use a write file:


1 Create the write file for your database.
For example, to create a write file for the demo database, execute the
following command in a directory containing a copy of the demo
database file asademo.db:
dbwrite -c asademo.db
This command creates a write file named asademo.wrt, with a
transaction log named asademo.wlg.
2 Start a database server and load the write file. By default, the server
locates files with the extension .wrt first, so the following command
starts the personal server running the demo database write file:
dbeng7 asademo
Messages on the server window indicate which file starts.
3 Connect to the database using Interactive SQL. You can use the user ID
DBA and the password SQL, as the demo database is the default.
4 Execute queries as usual. For example, the following query lists the
contents of the department table.
SELECT *
FROM department
The data for the department table is obtained from the database file
asademo.db.
5 Try inserting a row. The following statement inserts a row into the
department table:


INSERT
INTO department (dept_id, dept_name)
VALUES (202, ’Eastern Sales’)
If you committed this change, it would be written to the asademo.wlg
transaction log and, when the database next checkpointed, to the
asademo.wrt write file.
If you now query the department table, the results come from the write
file and the database file.
6 Try deleting a row. Set the WAIT_FOR_COMMIT option to avoid
referential integrity complaints here:
SET TEMPORARY OPTION wait_for_commit = 'on';
DELETE
FROM department
WHERE dept_id = 100
If you committed this change, the deletion would be marked in the write
file. No changes occur to the database file.
For some purposes, it is useful to use a write file with a shared database. If,
for example, you have a read-only database on a network server, each user
could have their own write file. In this way, they could add local
information, which would be stored on their own machine, without affecting
the shared database. This approach can also be useful for application
development.
Deleting a write file You can use the dberase utility to delete a write file and its associated
transaction log.


Using the utility database


The utility database is a phantom database with no physical representation.
The utility database has no database file, and therefore it cannot contain data.
The utility database feature allows you to execute database file
administration statements such as CREATE DATABASE, or ALTER
WRITEFILE without first connecting to an existing physical database.
For example, executing the following statement after having connected to the
utility database creates a database named new.db in the directory C:\temp.
CREATE DATABASE 'C:\\temp\\new.db'

$ For more information on the syntax of those statements, see "CREATE
DATABASE statement" on page 427 of the book ASA Reference.
You can also retrieve values of connection properties and database properties
using the utility database.
For example, executing the following statement against the utility database
returns the default collation sequence, which will be used when creating a
database:
SELECT property( 'DefaultCollation' )
$ For a list of database properties and connection properties, see
"Database properties" on page 1090 of the book ASA Reference.
Sybase Central You cannot connect to the utility database from Sybase Central. However, as
you can already create and delete a database from Sybase Central without
first connecting to a database, this is not a practical limitation.

Connecting to the utility database


You can start the utility database on a database server by specifying
utility_db as the database name when connecting to the server. The user ID
and password requirements are different for the personal server and the
network server.
For the personal database server, there are no security restrictions for
connecting to the utility database. It is assumed that anybody who can
connect to the personal database server can access the file system directly,
and so no attempt is made to screen users based on passwords.
$ For more information, see "Utility database passwords" on page 796.


v To connect to the utility database on the personal server (Interactive SQL):
1 Start a database server with the following command:
dbeng7.exe -n TestEng
2 Start Interactive SQL.
3 In the Connect dialog, enter DBA as the user ID, and enter any
non-blank password. The password itself is not checked, but it must be
non-empty.
4 On the Database tab, enter utility_db as the database name.
5 Click OK to connect.
Interactive SQL connects to the utility database on the personal server
named TestEng, without loading a real database.
For the network server, there are security restrictions on connections. A
password is held in the file util_db.ini in the same directory as the database
server executable.

v To connect to the utility database on the network server (Interactive SQL):
1 Add a password to the file util_db.ini in the same directory as the
database server executable. For example, the following util_db.ini file
has the password ASA.
[UTILITY_DB]
PWD=ASA
2 Start a database server with the following command:
dbsrv7.exe -n TestEng
3 Start Interactive SQL.
4 In the Connect dialog, enter DBA as the user ID, and enter the password
held in the file util_db.ini.
5 On the Database tab, enter utility_db as the database name.
6 Click OK to connect.
Interactive SQL connects to the utility database on the network server
named TestEng, without loading a real database.
$ See also
♦ "Connecting to a Database" on page 33
♦ "Connect dialog" on page 1045


Utility database server security


There are two aspects to utility database server security:
♦ Who can connect to the utility database? This is controlled by the use of
passwords.
♦ Who can execute file administration statements? This is controlled by
database server command-line options.

Utility database passwords


The personal server and the network server have different security models
for connections.
For the personal server, you must specify the user ID DBA. You must also
specify a password, but it can be any password. Since the personal server is
for single machine use, security restrictions (for example passwords) are
unnecessary.
For the network server, you must specify the user ID DBA, and the password
that is held in a file named util_db.ini, stored in the same directory as the
database server executable file. As this directory is on the server, you can
control access to the file, and thereby control who has access to the
password.
util_db.ini The util_db.ini file has the following contents:
[UTILITY_DB]
PWD=password
Use of the utility_db security level relies on the physical security of the
computer hosting the database server, since the util_db.ini file can be easily
read using a text editor.

Permission to execute file administration statements


A level of security is provided for the ability to execute certain
administration tasks. The -gu database server command-line option controls
who can execute the file administration statements.
There are four levels of permission for the use of file administration
statements. These levels include: all, none, dba, and utility_db. The
utility_db level permits only a person with authority to connect to the utility
database to use the file administration statements.


-gu switch option   Effect                             Applies to

all                 Anyone can execute file            Any database, including the
                    administration statements          utility database

none                No one can execute file            Any database, including the
                    administration statements          utility database

dba                 Only users with DBA authority      Any database, including the
                    can execute file                   utility database
                    administration statements

utility_db          Only the user who can connect      Only the utility database
                    to the utility database can
                    execute file administration
                    statements

$ For more information on the database server –gu command line switch,
see "–gu command-line option" on page 30 of the book ASA Reference.
Examples ♦ To prevent the use of the file administration statements, start the
database server using the none permission level of the –gu switch. The
following command starts a database server and names it TestSrv. It
loads the sample database, but prevents anyone from using that server to
create or delete a database, or execute any other file administration
statement regardless of their resource creation rights, or whether or not
they can load and connect to the utility database.
dbsrv7.exe -n TestSrv -gu none asademo.db
♦ To permit only the users knowing the utility database password to
execute file administration statements, start the server at the command
line with the following command.
dbsrv7 -n TestSrv -gu utility_db
Assuming the utility database password has been set during installation
to asa, the following command starts the Interactive SQL utility as a
client application, connects to the server named TestSrv, loads the
utility database and connects the user.
dbisql -c "uid=dba;pwd=asa;dbn=utility_db;eng=TestSrv"
Having executed the above statement successfully, the user connects to
the utility database, and can execute file administration statements.


C H A P T E R 2 6

Monitoring and Improving Performance

About this chapter This chapter describes how to monitor and improve the performance of your
database.
Contents
Topic Page
Top performance tips 800
Using the cache to improve performance 807
Using keys to improve query performance 811
Using indexes to improve query performance 816
Search strategies for queries from more than one table 820
Sorting query results 823
Temporary tables used in query processing 824
How the optimizer works 826
Monitoring database performance 829


Top performance tips


Adaptive Server Anywhere provides excellent performance automatically.
However, the following information provides some tips to help you achieve
the most from the product.

Always use a transaction log


You might think that Adaptive Server Anywhere would run faster without a
transaction log because it would have to maintain less information on disk.
Yet, the opposite is actually true. Not only does a transaction log provide a
large amount of protection, it can dramatically improve performance.
Operating without a transaction log, Adaptive Server Anywhere must
perform a checkpoint at the end of every transaction. Writing these changes
consumes considerable resources.
With a transaction log, however, Adaptive Server Anywhere need only write
notes detailing the changes as they occur. It can choose to write the new
database pages all at once, at the most efficient time. Checkpoints make sure
information enters the database file, and that it is consistent and up to date.

Tip
Always use a transaction log. It helps protect your data and it greatly
improves performance.

If you can store the transaction log on a different physical device than the
one containing the main database file, you can further improve performance.
The extra drive head does not generally have to seek to get to the end of the
transaction log.

Increase the cache size


Adaptive Server Anywhere stores recently used pages in a cache. Should a
request need to access the page more than once, or should another connection
require the same page, it may find it already in memory and hence avoid
having to read information from disk.
If your cache is too small, Adaptive Server Anywhere cannot keep pages in
memory long enough to reap these benefits.


On UNIX, Windows NT, Windows 95/98, and other 32-bit Windows
operating systems, the database server dynamically changes cache size as
needed. However, the cache is still limited by the amount of memory
physically available, and by the amount used by other applications.
On Windows CE and Novell NetWare, the size of the cache is set on the
command line when you launch the database server. Be sure to allocate as
much memory to the database cache as possible, given the requirements of
the other applications and processes that run concurrently. In particular,
databases using Java objects benefit greatly from larger cache sizes. If you
use Java in your database, consider a cache of at least 8 MB.

Tip
Increasing the cache size can often improve performance dramatically,
since retrieving information from memory is many times faster than
reading it from disk. You may even find it worthwhile to purchase more
RAM to allow a larger cache.
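Where the cache size is set on the command line, the -c server option controls the initial cache amount. A sketch; the size shown is purely illustrative:

```shell
rem Start the server with an 8 MB initial cache
dbsrv7 -c 8M asademo.db
```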

$ For more information, see "Using the cache to improve performance"
on page 807.

Normalize your table structure


In general, the information in each column of a table should depend solely on
the value of the primary key. If this is not the case, then one table may
contain multiple copies of the same information, and your table may need to
be normalized.
Normalization reduces duplication in a relational database. For example,
suppose the people in your company work at a number of offices. To
normalize the database, consider placing information about the offices (such
as its address and main telephone numbers) in a separate table, rather than
duplicating all this information for every employee.
You can, however, take the generally good notion of normalization too far. If
the amount of duplicate information is small, you may find it better to
duplicate the information and maintain its integrity using triggers, or other
constraints.
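As a sketch of the office example (the table and column names here are invented for illustration and are not part of the sample database):

```sql
-- Office details stored once, in their own table
CREATE TABLE office (
    office_id integer NOT NULL PRIMARY KEY,
    address   char(100),
    phone     char(20)
);

-- Each person refers to an office by key instead of
-- repeating the office address and phone in every row
CREATE TABLE staff (
    staff_id  integer NOT NULL PRIMARY KEY,
    name      char(50),
    office_id integer REFERENCES office
);
```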


Use indexes effectively


When executing a query, Adaptive Server Anywhere chooses how to access
each table. Indexes greatly speed up the access. When the database server
cannot find a suitable index, it instead resorts to scanning the table
sequentially—a process that can take a long time.
For example, suppose you need to search a large database for people, but you
only know either their first or their last name, but not both. If no index exists,
Adaptive Server Anywhere scans the entire table. If, however, you created two
indexes (one that contains the last names first, and a second that contains the
first names first), Adaptive Server Anywhere scans the indexes first, and can
generally return the information to you faster.
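The two indexes just described could be created as follows, assuming the sample database's employee table; the index names are arbitrary:

```sql
-- Index ordered by last name, then first name
CREATE INDEX emp_by_last_name
ON dba.employee ( emp_lname, emp_fname );

-- Index ordered by first name, then last name
CREATE INDEX emp_by_first_name
ON dba.employee ( emp_fname, emp_lname );
```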
Although indexes let Adaptive Server Anywhere locate information very
efficiently, exercise some caution when adding them. Each index creates
extra work every time you insert, delete, or update a row, because Adaptive
Server Anywhere must also update all affected indexes.
Consider adding an index when it will allow Adaptive Server Anywhere to
access data more efficiently. In particular, add an index when it eliminates
unnecessarily accessing a large table sequentially. If, however, you need
better performance when you add rows to a table, and finding information
quickly is not an issue, use as few indexes as possible.

Use an appropriate page size


Large page sizes help Adaptive Server Anywhere read databases more
efficiently. For example, if you use a large database, or if you access
information sequentially, try using a larger page size. Large page sizes also
bring other benefits, including improving the fan-out of your indexes,
reducing the number of index levels, and letting you create tables with more
columns.
You cannot change the page size of an existing database. Instead you must
create a new database and use the -p flag of dbinit to specify the page size.
For example, the following command creates a database with 4 KB pages.
dbinit -p 4096 new.db
$ For more information about larger page sizes, see "Setting a maximum
page size" on page 11.


In contrast, however, benefits associated with smaller page sizes often go
unrecognized. It is true that smaller pages hold less information and may
force less efficient use of space, particularly if you insert rows that are
slightly more than half a page in size. However, small page sizes also allow
Adaptive Server Anywhere to run with fewer resources because it can store
more pages in a cache of the same size. They are particularly useful if your
database must run on small machines with limited memory. They can also
help in situations when you use your database primarily to retrieve small
pieces of information from random locations.

Place different files on different devices


Disk drives operate much more slowly than modern processors or RAM.
Often, simply waiting for the disk to read or write pages is responsible for
slowing a database server.
You almost always improve database performance when you put different
physical database files on different physical devices. For example, while one
disk drive is busy swapping database pages to and from the cache, another
device can be writing to the log file.
Notice that to gain these benefits, the two or more devices involved must be
independent. A single disk, partitioned into smaller logical drives, is unlikely
to yield benefits.
Adaptive Server Anywhere uses four types of files:
1 database (.db)
2 transaction log (.log)
3 transaction log mirror (.mlg)
4 temporary (.tmp)
The database file holds the entire contents of your database. A single file
contains a single database. You choose a location for it, appropriate to your
needs.
The transaction log file is required for recovery of the information in your
database in the event of a failure. For extra protection, you can maintain a
duplicate in a third type of file called a transaction log mirror file.
Adaptive Server Anywhere writes the same information at the same time to
each of these files.

803
Top performance tips

Tip
Place the transaction log mirror file (if you use one) on a physically
separate drive. Not only do you gain better protection against disk failure,
but also Adaptive Server Anywhere runs faster because it can efficiently
write to the log and log mirror files. Use the dblog transaction log utility
to specify the location of the transaction log and transaction log mirror
files.
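As an illustrative sketch of this tip, the following command relocates the transaction log and its mirror to separate drives. The drive letters and file names are hypothetical, and the exact option spellings should be verified against the dblog documentation for your release:

```
dblog -t d:\logs\sales.log -m e:\mirror\sales.mlg sales.db
```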

Adaptive Server Anywhere may need more space than is available to it in the
cache for such operations as sorting and forming unions. When it needs this
space, it generally uses it intensively. The overall performance of your
database becomes heavily dependent on the speed of the device containing
the fourth type of file, the temporary file.

Tip
Make sure Adaptive Server Anywhere places its temporary file on a fast
device, physically separate from the one holding the database file.
Adaptive Server Anywhere will run faster because many of the operations
that necessitate using the temporary file also require retrieving a lot of
information from the database. Placing the information on two separate
disks allows the operations to take place simultaneously.

Adaptive Server Anywhere examines the following environment variables, in the order shown, to determine the directory in which to place the temporary file.
1 ASTMP
2 TMP
3 TMPDIR
4 TEMP
If none of these is defined, Adaptive Server Anywhere places its temporary
file in the current directory—not a good location for the best performance.
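For example, the temporary-file directory can be pointed at a fast device before the server starts. The paths below are illustrative only:

```
REM Windows
set ASTMP=d:\fasttmp

# UNIX (Bourne-style shells)
ASTMP=/fastdisk/tmp
export ASTMP
```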
If your machine has a sufficient number of fast devices, you can gain even
more performance by placing each of these files on a separate device. You
can even divide your database into multiple data spaces, located on separate
devices. In such a case, group tables in the separate data spaces so that
common join operations read information from different files.
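As a sketch of this approach, the following statements create an additional dbspace on a separate drive and place a table in it. The file name, table, and columns are illustrative; check the CREATE DBSPACE reference for the exact syntax in your version:

```sql
-- Create an additional dbspace on a separate physical drive
CREATE DBSPACE history AS 'e:\\dbspaces\\history.dbs';

-- Place a table in the new dbspace so joins against tables in the
-- main database file can read from two devices at once
CREATE TABLE archived_order (
    id          INTEGER NOT NULL PRIMARY KEY,
    order_date  DATE
) IN history;
```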


A similar strategy involves placing the temporary and database files on a RAID device or a Windows NT stripe set. Although such devices act as a
logical drive, they dramatically improve performance by distributing files
over many physical drives and accessing the information using multiple
heads.
$ For information about data recovery, see "Backup and Data Recovery"
on page 645.
$ For information about transaction logs and the dblog utility, see
"Administration utilities overview" on page 77 of the book ASA Reference.

Turn off autocommit mode


If your application runs in autocommit mode, then Adaptive Server
Anywhere treats each of your statements as a separate transaction. In effect,
it is equivalent to appending a COMMIT statement to the end of each of your
commands.
Instead of running in autocommit mode, consider grouping your commands
so each group performs one logical task. If you do disable autocommit, you
must execute an explicit commit after each logical group of commands. Also,
be aware that if logical transactions are large, blocking and deadlock can
happen.
The cost of using autocommit mode is particularly high if you are not using a
transaction log file. Every statement forces a checkpoint—an operation that
can involve writing numerous pages of information to disk.
Each application interface has its own way of setting autocommit behavior.
For the Open Client, ODBC, and JDBC interfaces, Autocommit is the default
behavior.
$ For more information about autocommit, see "Setting autocommit or
manual commit mode" on page 283.
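For example, with autocommit disabled, the statements that make up one logical task can be grouped under a single explicit commit. The tables and values here are illustrative:

```sql
-- With autocommit disabled, these statements form one transaction
INSERT INTO sales_order_items ( id, line_id, prod_id, quantity )
    VALUES ( 2700, 1, 300, 24 );
UPDATE product
    SET quantity = quantity - 24
    WHERE id = 300;
-- One explicit commit ends the logical unit of work
COMMIT;
```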

Defragment your drives


Performance can suffer if your hard disk is excessively fragmented. This
becomes more important as your database increases in size. You can put the
database on a disk partition by itself to eliminate fragmentation problems, or
periodically run one of the available utilities to defragment your hard disk.


Use bulk operation methods


If you find yourself loading huge amounts of information into your database,
you can benefit from the special tools provided for these tasks.
If you are loading large files, it is more efficient to create indexes on the
table after the data is loaded.
$ For information on improving bulk operation performance, see
"Performance tips" on page 713.
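A sketch of this approach using the LOAD TABLE statement follows. The file name is hypothetical, and the statement's options should be checked against the reference for your version:

```sql
-- Load the data first, while the table has no secondary indexes
LOAD TABLE employee FROM 'c:\\data\\employee.txt';

-- Build the index once, after the bulk load completes
CREATE INDEX lname ON employee ( emp_lname );
```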


Using the cache to improve performance


The database cache is an area of memory used by the database server to store
database pages for repeated fast access. The more pages that are accessible in
the cache, the fewer times the database server needs to read data from disk.
As reading data from disk is a slow operation, the amount of cache available
is often a key factor in determining performance.
You can control the size of the database cache on the database server
command line when the database is started.
Dynamic cache sizing
Adaptive Server Anywhere provides automatic resizing of the database cache. The capabilities are different on different operating systems. On
Windows NT, Windows 95/98, and UNIX operating systems, the cache
grows and shrinks. On other operating systems, the cache can increase in
size, but not decrease. Details are provided in the following sections.
Full dynamic cache sizing helps to ensure that the performance of your
database server is not impacted by allocating inadequate memory. The cache
grows when the database server can usefully use more as long as memory is
available, and shrinks when cache is not required, so that the database server
does not unduly impact other applications on the system. The effectiveness
of dynamic cache sizing is limited, of course, by the physical memory
available on your system.
Dynamic cache sizing removes the need for explicit configuration of
database cache in many situations, making Adaptive Server Anywhere even
easier to use.
There is no cache resizing on Windows CE or Novell NetWare.

Limiting the memory used by the cache


The initial, minimum, and maximum cache sizes are all controllable from the
database server command line.
♦ Initial cache size You can control the initial cache size by specifying the database server -c command-line option. The default value is as follows:
♦ Windows CE The formula is as follows:
max( 600K, min( dbsize , physical-memory ) )
where dbsize is the total size of the database file or files started, and
physical-memory is 25% of the physical memory on the machine.


♦ Windows NT, Windows 95/98, NetWare The formula is as follows:
max( 2M, min( dbsize , physical-memory ) )
where dbsize is the total size of the database file or files started, and
physical-memory is 25% of the physical memory on the machine.
♦ UNIX At least 8 MB.
$ For information about UNIX initial cache size, see "Dynamic
cache sizing (UNIX)" on page 809.
♦ Maximum cache size You can control the maximum cache size by specifying the database server -ch command-line option. The default is based on a heuristic that depends on the physical memory in your machine.
♦ Minimum cache size You can control the minimum cache size by
specifying the database server -cl command-line option. By default, the
minimum cache size is the same as the initial cache size.
You can also disable dynamic cache sizing by using the -ca command-line
option.
$ For more information on command-line options, see "The database
server" on page 14 of the book ASA Reference.
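The Windows NT/95/98/NetWare default described above can be restated as a small calculation. This sketch is merely an illustration of the documented formula, not code from the product:

```python
def default_initial_cache(db_size_bytes: int, physical_memory_bytes: int) -> int:
    """Default initial cache: max(2 MB, min(dbsize, 25% of physical memory))."""
    two_mb = 2 * 1024 * 1024
    quarter_of_ram = physical_memory_bytes // 4
    return max(two_mb, min(db_size_bytes, quarter_of_ram))

# A 1 MB database on a 256 MB machine still gets the 2 MB floor
print(default_initial_cache(1 * 1024 * 1024, 256 * 1024 * 1024))  # 2097152
```

On the command line, explicit sizes would be supplied with the options above, for example `-c 8M -ch 64M -cl 4M` (the server executable name varies by version and platform).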

Dynamic cache sizing (Windows NT, Windows 95/98)


On Windows NT and Windows 95/98, the database server evaluates cache
and operating statistics once per minute, to decide on an optimum cache size.
The server computes a target cache size that uses all physical memory currently not in use, except for approximately 5 MB, which is left free for use by the system. The target cache size is never smaller than the specified or
implicit minimum cache size and never larger than the specified or implicit
maximum cache size. The target cache size never exceeds the sum of the
sizes of all open database and temporary files.
To avoid cache size oscillations, the server does not adjust the cache size
immediately to the target cache size. Instead, it adjusts the cache size by 75%
of the difference between the current and target cache size.
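The damping described above can be sketched as follows. This is a simplified restatement of the documented 75% rule, not the server's actual code:

```python
def adjust_cache(current_kb: int, target_kb: int) -> int:
    """Move 75% of the way from the current cache size toward the target."""
    return current_kb + int(0.75 * (target_kb - current_kb))

# A cache at 10,000 KB with a 20,000 KB target grows to 17,500 KB this cycle
print(adjust_cache(10_000, 20_000))  # 17500
```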


Dynamic cache sizing (UNIX)


On UNIX, the database server uses swap space and memory to manage the
cache size. The swap space is a system-wide resource on most UNIX
operating systems, but not on all. In this section, the sum of memory and
swap space is called the system resources. See your operating system
documentation for details.
On startup, the database server allocates the specified maximum cache size from the
system resources. It loads some of this into memory (the initial cache size)
and keeps the remainder as swap space.
The total amount of system resources used by the database server is constant
until the database server shuts down, but the proportion loaded into memory
changes. Each minute, the database server evaluates cache and operating
statistics. If the database server is busy and demanding of resources, it may
move cache pages from swap space into memory. If the server is quiet it may
move them out from memory to swap space.
Initial cache size
By default, the initial cache size is assigned using a heuristic based on the available system resources. The initial cache size is always less than 1.1 times the total database size.
If the initial cache size is greater than 3/4 of the available system resources, the database server exits with a Not Enough Memory error.
Maximum cache size
The maximum cache size must be less than the available system resources on the machine. By default, the maximum cache size is assigned using a heuristic based on the available system resources and the total physical memory on the machine.
If you specify a value of -ch greater than the available system resources, the server exits with a Not Enough Memory error. If you specify a value of -ch greater than the available memory, the server warns of performance degradation, but does not exit.
The database server allocates all the maximum cache size from the system
resources, and does not relinquish it until the server exits. You should be sure
that you choose a value for -ch that gives good Adaptive Server Anywhere
performance while leaving space for other applications. The formula for the
default maximum cache size is a heuristic that attempts to achieve this
balance. You only need to tune the value if the default value is not
appropriate on your system.
If you specify a value of -ch less than 8 MB, you will not be able to run Java
applications. Low maximum cache sizes will impact performance.
Minimum cache size
By default, the minimum cache size is 8 MB.


Monitoring cache size


The following statistics have been added to the performance monitor and to
the property functions.
♦ CurrentCacheSize The current cache size in kilobytes
♦ MinCacheSize The minimum allowed cache size in kilobytes
♦ MaxCacheSize The maximum allowed cache size in kilobytes
♦ PeakCacheSize The peak cache size in kilobytes
$ For more information on these properties, see "Server-level properties"
on page 1095 of the book ASA Reference.
$ For information on monitoring performance, see "Monitoring database
performance" on page 829.
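These statistics can also be queried from SQL with the PROPERTY function, as in the following sketch (property names as listed above):

```sql
-- Report the current and peak cache sizes, in kilobytes
SELECT PROPERTY ( 'CurrentCacheSize' ),
       PROPERTY ( 'PeakCacheSize' );
```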


Using keys to improve query performance


The foreign key and the primary key, used primarily for validation
purposes, can also improve performance.
Example
The following example illustrates how keys can make commands execute faster.
SELECT *
FROM employee
WHERE emp_id = 390
The simplest way for the server to perform this query would be to look at all
75 rows in the employee table and check the employee ID number in each
row to see if it is 390. This does not take very long since there are only 75
employees, but for tables with many thousands of entries the search can take
a long time.
The emp_id column is the primary key for the employee table. A built-in
index mechanism finds primary and foreign key values quickly.
The same index mechanism automatically finds the employee number 390
quickly. This quick search takes almost the same amount of time whether
there are 100 rows or 1,000,000 rows in the table.

Using Interactive SQL to examine query performance


The Interactive SQL Messages pane tells you when keys are being used to
improve performance.


Information in the Messages pane
If you execute a query to look at every row in the employee table:
SELECT *
FROM employee
three lines appear in the Messages pane:
PLAN> employee (seq)
75 rows in query (I/O estimate 14)
Execution time: 0.359 seconds


The first line summarizes the execution plan for the query: the tables
searched and any indexes used to search through a table.
♦ The letters seq inside parentheses mean that the server looked at the
employee table sequentially (that is, one page at a time, in the order that
the rows appear on the pages).
♦ The second line indicates the number of rows in the query. Sometimes
the database knows exactly, as in this case where there are 75 rows.
Other times it estimates the number of rows. The line also indicates an
internal I/O estimate of how many times the server will have to look at
the database on your hard disk to examine the entire employee table.
♦ The third line shows how long it took for your query to be executed. In
this case, it took exactly 0.359 seconds.
Setting the level of plan detail
The amount and type of information in the Messages pane depends on Interactive SQL settings. Interactive SQL provides three levels of detail.
On the Messages tab of the Options dialog (accessed by choosing
Tools➤Options), you have the following plan options:
♦ None No information about an execution appears in the Messages
pane.
♦ Short plan Basic information about an execution appears in one line
in the Messages pane. This line can show the table(s) accessed and
whether the rows are read sequentially or accessed through an index.
This plan is the default.
♦ Long plan Detailed information about an execution appears in
multiple lines in the Messages pane.
On this tab of the dialog, you can also specify whether to include the
execution time in the Messages pane.
Resetting statistics
The Messages pane may contain estimates that differ from what appears here. The optimizer maintains statistics as it evaluates queries and uses these
statistics to optimize subsequent queries. These statistics can be reset by
executing the following statement:
DROP OPTIMIZER STATISTICS

Dropping optimizer statistics can slow execution


Dropping optimizer statistics causes queries to execute more slowly, since the
optimizer has less information about the actual distribution of data in the
database tables.


Using primary keys to improve query performance


A primary key improves performance on the following statement:
SELECT *
FROM employee
WHERE emp_id = 390

Statistic window information
The Messages pane contains the following two lines:
Estimated 1 rows in query (I/O estimate 2)
PLAN> employee (employee)
Whenever the name inside the parentheses in the Messages pane PLAN
description is the same as the name of the table, it means that the primary
key for the table is used to improve performance. Also, the Messages pane
shows that the database optimizer estimates there will be one row in the
query and it will have to go to the disk twice to get the data.

Using foreign keys to improve query performance


The following query lists the orders from the customer with customer ID 113:
SELECT *
FROM sales_order
WHERE cust_id = 113

Statistic window information
The Messages pane contains the following information:
Estimated 2 rows in query (I/O estimate 2)
PLAN> sales_order (ky_so_customer)
Here ky_so_customer refers to the foreign key that the sales_order table has
for the customer table.
Primary and foreign keys are just indexes used to maintain referential
integrity and the integrity of primary key values.

Separate primary and foreign key indexes


In versions of the software before Adaptive Server Anywhere 7.0, a single
physical index was created automatically for any primary key and all foreign
keys that referenced it. While this structure has some advantages for inserts
and updates, it can also create a performance bottleneck in cases where many
different foreign keys reference a single primary key, or where there are
many duplicate values among the foreign keys.


In version 7.0, while indexes are still created automatically for primary and foreign keys, they are created separately.


Using indexes to improve query performance


Sometimes you need to search for something which is neither a primary nor a
foreign key. In this case, although you cannot use a key to improve
performance, creating indexes speeds up searches on particular columns. For
example, suppose you wanted to look up all the employees with a last name
beginning with M.
A query for this is as follows:
SELECT *
FROM employee
WHERE emp_lname LIKE 'M%'
If you execute this command, the plan description in the Interactive SQL
Messages pane shows that the table is searched sequentially.
Creating an index
If searching by one particular column (employee last names, for example) is common, you may wish to create an index on that column (emp_lname) to speed up queries, using the CREATE INDEX statement. For example:
CREATE INDEX lname
ON employee ( emp_lname )
The column name emp_lname indicates which column is indexed. An index
can contain one, two, or more columns. However, if you create a multiple-
column index, and then do a search with a condition using only the second
column in the index, the index cannot speed up the search.
An index is similar to a telephone book, which first sorts people by their last
name, and then all the people with the same last name by their first name. A
telephone book is useful if you know the last name, even more useful if you
know both the first name and last name, but worthless if you only know the
first name and not the last name.
Once you have created the index, rerunning the query produces the following
plan description in the Messages pane:
PLAN> employee (lname)

How indexes are used
Once you create an index, it is automatically kept up to date and used to improve performance whenever possible.
You could create an index for every column of every table in the database,
but that would make data modifications slow, since all indexes affected by
the change have to be updated. Further, each index requires space in the
database. For these reasons, only create indexes you intend to use frequently.
If you will not be using this index again, delete it with the following
statement:


DROP INDEX lname


$ For more information, see "Working with indexes" on page 145.

How indexes work


This section provides a technical description of how the server uses indexes
when searching databases.
Index page structure
The query processor uses modified B+ trees. Each index page is a node in the tree and each node has many index entries. Leaf page index entries have a reference to a row of the indexed table. Indexes remain of uniform depth and pages remain close to full.
All leaf index entries are at a uniform depth, but the trees are not necessarily
balanced. In particular, deletions can unbalance an index.
An index lookup
An index lookup starts with the root page. The index entries on a nonleaf
page determine which child page has the correct range of values. The index
lookup moves down to the appropriate child page, continuing until it reaches
a leaf page. An index with N levels requires N reads for index pages and one
read for the data page containing the actual row. Index pages tend to be
cached due to the frequency of use.
The leaf nodes of the index are linked together. Once a row has been looked
up, the rows of the table can be scanned in index order. Scanning all rows
with a given value requires only one index lookup, followed by scanning the
leaf nodes of the index until the value changes. This occurs when you have a
WHERE clause that filters out rows with a certain value or a range of values.
It also occurs when joining rows in a one-to-many relationship.
Recommended page sizes
By default, index pages have a hash size of ten bytes: they store approximately the first 10 bytes of data for each index entry. This allows for a fan-out of roughly 200 using 4K pages, meaning that each index page holds approximately 200 entries, or references 40,000 rows with a two-level index. Each new level of an index
allows for a table 200 times larger. Page size can significantly affect fan-out,
in turn affecting the depth of index required for a table. Large databases
should have 4K pages.
You can find the number of levels in any index in the database using the
sa_index_levels system procedure.
$ For more information, see "sa_index_levels system procedure" on
page 969 of the book ASA Reference.
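The relationship between fan-out and index depth described above can be restated numerically. This sketch simply illustrates the arithmetic, using the approximate fan-out of 200 quoted for 4K pages:

```python
def max_rows(fan_out: int, levels: int) -> int:
    """Approximate number of rows addressable by an index of the given depth."""
    return fan_out ** levels

print(max_rows(200, 2))  # 40000 rows with a two-level index
print(max_rows(200, 3))  # each extra level multiplies capacity by the fan-out
```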


Configuring index hash sizes
If a query needs access to more than the hashed data, it needs to fetch it from the underlying row, and this may affect index performance. You can
explicitly configure the amount of data that is stored in the index by
specifying WITH HASH SIZE in the CREATE INDEX statement.
Increasing the hash size can reduce the number of times the underlying row needs to be accessed. However, this comes at the expense of decreased index fan-out. For optimal performance, do not increase the hash size for all indexes, only for indexes in which the first ten bytes of data do not provide high selectivity.
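A sketch of the syntax follows. The index name and the hash size of 20 are illustrative only; see the CREATE INDEX reference for the exact clause in your version:

```sql
-- Store up to 20 bytes of each entry in the index itself
CREATE INDEX ix_lname
ON employee ( emp_lname )
WITH HASH SIZE 20;
```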

How large a hash size do I need?
Your hash size should be large enough that most index entries can be uniquely identified based on the information stored in the index itself,
without needing to access the underlying row.
The hash size that you need depends on the following factors:
♦ The data types of the index columns Each data type has its own
storage requirements. Here is a summary of the index storage
requirements for commonly indexed data types. For the storage
requirements for each data type, see "SQL Data Types" on page 263 of
the book ASA Reference.

Data type          Storage requirements (bytes)
BIT                1
TINYINT            1
SMALLINT           2
INT                4
BIGINT             8
CHAR, VARCHAR      One byte per character (more for multi-byte character sets)

♦ The number and order of columns in your index In addition to the storage requirements for your data types, each index uses an additional
byte for each column in the index. Therefore, an index with a maximum
hash size of 15 would be needed to hold three integers (four bytes each,
plus one byte for each column).
If your index has multiple columns, and the hash size is not enough to
store the entire value of each column, the value for each column in turn
is stored until the hash space is exhausted.
The order of columns in primary key indexes is the order in which the
columns were listed when the table was created.


♦ The distribution of your data If you create an index on a column that has many repeated values, the index does not help discriminate among
those values. Similarly, if you create an index on a character column,
and the first several characters are similar in many entries, the index
does not discriminate until you have enough characters hashed to ensure
uniqueness.

Indexes on temporary tables


You can create indexes on global temporary tables (for which the schema is
permanent and shared across connections, but each connection has its own
data) but not on local temporary tables, which are entirely local to a single
connection.
If you create an index on a global temporary table, each connection gets its
own copy of the index. You may want to consider indexing on a global
temporary table if the table is expected to be large and to be accessed several
times in sorted order or in a join. Otherwise, any improvement in
performance for queries is likely to be outweighed by the cost of
dynamically creating and dropping the index.
$ For information on local temporary tables, see "DECLARE LOCAL
TEMPORARY TABLE statement" on page 495 of the book ASA Reference.
For information on global temporary tables, see "CREATE TABLE
statement" on page 466 of the book ASA Reference.
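A minimal sketch follows. The table and column names are illustrative, and the clause spellings should be checked against the CREATE TABLE reference for your version:

```sql
-- Schema is shared; each connection sees only its own rows and
-- maintains its own copy of the index
CREATE GLOBAL TEMPORARY TABLE session_totals (
    cust_id  INTEGER NOT NULL PRIMARY KEY,
    total    NUMERIC(12,2)
) ON COMMIT PRESERVE ROWS;

CREATE INDEX ix_total ON session_totals ( total );
```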


Search strategies for queries from more than one table
This section uses sample queries to illustrate how the server selects an
optimal processing route for each query. If you execute each of the
commands in this section in Interactive SQL, the Messages pane displays the
execution plan chosen to process each query.
Using a key join
The following simple query uses a key join to search more than one table:
SELECT customer.company_name, sales_order.id
FROM sales_order
KEY JOIN customer
The Messages pane displays the following:
Estimated 711 rows in query (I/O estimate 2)
PLAN> customer(seq), sales_order(ky_so_customer)
When this query is executed, the Interactive SQL Messages pane indicates
that Adaptive Server Anywhere first examines each row in the customer
table, then finds the corresponding sales order numbers in the sales_order
table using the ky_so_customer foreign key joining the sales_order and
customer tables.
The database accesses the tables in the same order listed in the Messages
pane.
Adding a WHERE clause
If you modify the query by adding a WHERE clause, as follows, the search occurs in a different order:
SELECT customer.company_name, sales_order.id
FROM sales_order
KEY JOIN customer
WHERE sales_order.id = 2583
The Messages pane displays the following plan:
PLAN> sales_order(sales_order), customer(customer)
Now, Adaptive Server Anywhere looks in the sales_order table first, using
the primary key index. Then, for each sales order numbered 2583 (there is
only one), it looks up the company_name in the customer table using the
customer table primary key to identify the row. The primary key can be used
here because the row in the sales_order table is linked to the rows of the
customer table by the customer id number, which is the primary key of the
customer table.


The query determines the order in which the tables are examined. The Adaptive Server Anywhere built-in query optimizer estimates the cost of different
possible execution plans, and chooses the plan with the least estimated cost.
For some more complicated examples, try the following commands that each
join four tables. The Interactive SQL Messages pane shows that each query
is processed in a different order.
Example 1
v To list the customers and the sales reps they have dealt with:
♦ Type the following:
SELECT customer.lname, employee.emp_lname
FROM customer
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN employee

lname emp_lname
Colburn Chin
Smith Chin
Sinnot Chin
Piper Chin
Phipps Chin

The plan for this query is as follows:


PLAN> employee (seq), sales_order (ky_so_employee_id),
customer (customer), sales_order_items (id_fk)

Example 2 The following command restricts the results to list all sales reps the customer
named Piper has dealt with:
SELECT customer.lname, employee.emp_lname
FROM customer
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN employee
WHERE customer.lname = 'Piper'
The plan for this query is as follows:
PLAN> customer (ix_cust_name), sales_order (ky_so_customer),
employee (employee), sales_order_items (id_fk)

Example 3 The third example shows all customers who have dealt with a sales
representative who has the same name they have:


SELECT customer.lname, employee.emp_lname
FROM customer
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN employee
WHERE customer.lname = employee.emp_lname
The plan for this query is as follows:
PLAN> employee (seq), customer (ix_cust_name),
sales_order (ky_so_employee_id), sales_order_items (id_fk)
$ For information on how the optimizer selects a strategy for each search,
see "How the optimizer works" on page 826.


Sorting query results


Many queries against a database have an ORDER BY clause so the rows
come out in a predictable order. Indexes order the information quickly. For
example,
SELECT *
FROM customer
ORDER BY customer.lname
can use the index on the lname column of the customer table to access the
rows of the customer table in alphabetical order by last name.
Queries with WHERE and ORDER BY clauses
A potential problem arises when a query has both a WHERE clause and an ORDER BY clause.
SELECT *
FROM customer
WHERE id > 300
ORDER BY company_name
The server must decide between two strategies:
1 Go through the entire customer table in order by company name,
checking each row to see if the customer id is greater than 300.
2 Use the key on the id column to read only the companies with id greater
than 300. The results would then need to be sorted by company name.
If there are very few id values greater than 300, the second strategy is better
because only a few rows need to be scanned and quickly sorted. If most of
the id values are greater than 300, the first strategy is much better because it
requires no sorting.
Solving the problem
The example above could be solved by creating a two-column index on id and company_name. (The order of the two columns is important.) The server
could then use this index to select rows from the table and have them in the
correct order. However, keep in mind that indexes take up space in the
database file and involve some overhead to keep up to date. Create indexes
only for columns that you intend to search frequently.
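The two-column index described above could be created as follows:

```sql
-- Column order matters: id first, so the WHERE clause can use the index;
-- company_name second, so matching rows emerge already sorted
CREATE INDEX ix_id_company
ON customer ( id, company_name );
```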
$ For more information about sorting, see "The ORDER BY clause:
sorting query results" on page 187, or "The GROUP BY clause: organizing
query results into groups" on page 178.


Temporary tables used in query processing


Sometimes Adaptive Server Anywhere needs to make a temporary table for
a query. This occurs in the following cases:
When temporary tables occur
♦ When a query has an ORDER BY or a GROUP BY clause and Adaptive Server Anywhere does not use an index for sorting the rows because no suitable index exists.
♦ When a multiple-row UPDATE is being performed and the column
being updated appears in the WHERE clause of the update or in an
index being used for the update.
♦ When a multiple-row UPDATE or DELETE has a subquery in the
WHERE clause that references the table being modified.
♦ When performing an INSERT from a SELECT statement and the
SELECT statement references the insert table.
♦ When performing a multiple-row INSERT, UPDATE, or DELETE, and
the operation causes triggers defined on the table to fire.
In these cases, Adaptive Server Anywhere makes a temporary table before
the operation begins. The records affected by the operation go into the
temporary table and a temporary index is built on the temporary table. The
operation of extracting the required records into a temporary table can take a
significant amount of time before the query results appear. Creating indexes
that can be used to do the sorting in the first case, above, improves the
performance of these queries since temporary tables are not necessary.
The query optimizer in the database server analyzes each query to determine
whether a temporary table is needed. Enhancements to the optimizer in new
releases of Adaptive Server Anywhere may improve the access plan for
queries. No user action is required to take advantage of these optimizations.
Notes
The INSERT, UPDATE, and DELETE cases above are usually not a
performance problem since they are usually one-time operations. However, if
problems occur, you can rephrase the command to avoid the conflict and
avoid building a temporary table. This is not always possible.
If Adaptive Server Anywhere creates a temporary table in carrying out the
search, the Interactive SQL Messages pane displays TEMPORARY TABLE
before listing the optimization strategy.
Avoiding use of a temporary table
There is overhead associated with the creation of temporary tables. An
optimization introduced in Adaptive Server Anywhere version 7.0 avoids the
use of internal temporary tables during the processing of the following
classes of query:

Chapter 26 Monitoring and Improving Performance

♦ Queries with multiple OR clauses on a column referenced in an ORDER
BY clause. For example:
select x
from t
where x = 1 or x = 2 or x = 3
order by x
in the case that there is an index on x.
♦ Queries with an IN clause on a column referenced in an ORDER
BY clause. For example:
select x
from t
where x in ( 2, 1, 3 )
order by x
in the case that there is an index on x.
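This second class of query can be sketched with sqlite3 standing in for Adaptive Server Anywhere (invented table and index; sqlite's plan format differs from ASA's): with an index on x, the IN-list lookups go through the index and the rows come back in the requested order.

```python
import sqlite3

# Illustration only: sqlite3 stands in for ASA, but the idea is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(v,) for v in range(10)] * 5)  # each value five times
conn.execute("CREATE INDEX ix_x ON t (x)")

rows = [r[0] for r in
        conn.execute("SELECT x FROM t WHERE x IN (2, 1, 3) ORDER BY x")]
steps = [r[3] for r in conn.execute(
    "EXPLAIN QUERY PLAN SELECT x FROM t WHERE x IN (2, 1, 3) ORDER BY x")]

print(rows)   # the matching rows, already in ascending order
print(steps)  # the lookups are satisfied through the index ix_x
```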

Internal temporary table cost estimates
Rows from internal temporary tables must be read during processing, and
during the search for efficient access plans, the query optimizer must
estimate the cost associated with accessing temporary tables along with
other costs.
Any indexes associated with internal temporary tables are assigned a hash
size based on distribution estimates, with an upper limit of 20 bytes. You can
extend this limit, or restrict it further, by setting the
MAX_WORK_TABLE_HASH_SIZE database option. Setting this option to
a value of 10 returns the behavior to that of the software before version 7.0.

How the optimizer works


Adaptive Server Anywhere’s optimizer must decide in which order to access
the tables in a query, and whether or not to use an index for each table. The
optimizer attempts to pick the best strategy.
The best strategy for executing each query is the one that gets the results in
the shortest period of time, with the least cost. For example, if a query joins
N tables, there are N factorial possible ways to access the tables. The
optimizer determines the cost of each strategy by estimating the number of
disk reads and writes required, and chooses the strategy with the lowest cost.
The query execution plan in the Interactive SQL Messages pane shows the
table ordering for the current query and indicates in parentheses the index
used for each table.
$ This section provides an introduction to the optimizer. For more
information, see "Query Optimization" on page 835.

Optimizer estimates
The optimizer uses heuristics (educated guesses) to help decide the best
strategy.
For each table in a potential execution plan, the optimizer estimates the
number of rows that will form part of the results. The number of rows
depends on the size of the table and the restrictions in the WHERE clause or
the ON clause of the query.
In many cases, the optimizer uses more sophisticated heuristics. For
example, the optimizer uses default estimates only in cases where better
statistics are unavailable. As well, the optimizer makes use of indexes and
keys to improve its guess of the number of rows. Here are a few single-
column examples:
Single-column examples
♦ Equating a column to a value: estimate one row when the column has a
unique index or is the primary key.
♦ A comparison of an indexed column to a constant: use the index to
estimate the percentage of rows that satisfy the comparison.
♦ Equating a foreign key to a primary key (key join): use relative table
sizes in determining an estimate. For example, if a 5000 row table has a
foreign key to a 1000 row table, the optimizer guesses that there are five
foreign rows for each primary row.
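The default guesses above amount to simple arithmetic. The following is our own simplified sketch of those rules, not ASA's actual estimation code:

```python
# Simplified stand-ins for the default single-column estimates described
# in the text; ASA's real estimator is more sophisticated.

def estimate_unique_equality():
    # Equating a column to a value when the column has a unique index
    # or is the primary key: estimate one row.
    return 1

def estimate_key_join(foreign_rows, primary_rows):
    # Key join: relative table sizes give the estimated number of
    # foreign rows per primary row.
    return foreign_rows / primary_rows

# The example from the text: a 5000-row table with a foreign key to a
# 1000-row table yields about five foreign rows for each primary row.
print(estimate_unique_equality())     # 1
print(estimate_key_join(5000, 1000))  # 5.0
```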


Self tuning of the query optimizer


One of the most common constraints in a query is equality with a column
value. The following example tests for equality of the sex column.
SELECT *
FROM employee
WHERE sex = ’f’
Queries often optimize differently at the second execution. For the above
type of constraint, Adaptive Server Anywhere’s optimizer learns from
experience. The estimate for an equality constraint will be modified for
columns that have an unusual distribution of values. The database stores this
information permanently, until you explicitly delete it using the DROP
OPTIMIZER STATISTICS command.

Providing estimates to improve query performance


Since the query optimizer guesses at the number of rows in a result based on
the size of tables and particular restrictions used in the WHERE clause, it
almost always makes inexact guesses. In many cases, the query optimizer’s
guess is close enough to the real number of rows that the optimizer chooses
the best search strategy. However, in some cases this does not occur.
The following query displays a list of order items that shipped later than the
end of June, 1994:
SELECT ship_date
FROM sales_order_items
WHERE ship_date > ’1994/06/30’
ORDER BY ship_date DESC
The estimated number of rows is 274. However, the actual number of rows
returned is only 12. This estimate is wrong because the query optimizer
guesses that a test for greater than will succeed 25 percent of the time. In this
example, the condition on the ship_date column:
ship_date > ’1994/06/30’
is assumed to choose 25 percent of rows in the sales_order_items table.
Supplying an estimate
If you know that a condition has a success rate that differs from the
optimizer’s default rule, you can give the database more accurate
information by supplying an estimate. You can form an estimate by
enclosing the expression in brackets, followed by a comma and a number.
The number represents the percentage of rows you estimate the expression
should select. In this case, you could estimate a success rate of one percent:
SELECT ship_date
FROM sales_order_items

WHERE ( ship_date > ’1994/06/30’, 1 )
ORDER BY ship_date DESC
With this estimate, the optimizer estimates ten rows in the query.
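The arithmetic behind both numbers can be reproduced directly. The table size used here (1097 rows) is an assumption about the sample database's sales_order_items table, and the truncation shown is our simplification of ASA's rounding:

```python
# Reproducing the estimates discussed above under an assumed table size.
table_rows = 1097  # assumed size of sales_order_items in the sample db

# Default rule: a 'greater than' test is assumed to match 25% of rows.
default_estimate = int(table_rows * 0.25)

# With the supplied estimate of one percent:
user_estimate = int(table_rows * 0.01)

print(default_estimate)  # 274
print(user_estimate)     # 10
```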
Note
Incorrect estimates are only a problem if they lead to poorly optimized
queries.
$ For further information about the optimizer and query optimization, see
"Query Optimization" on page 835.


Monitoring database performance


Adaptive Server Anywhere provides a set of statistics you can use to monitor
database performance. The statistics are accessible from Sybase Central, and
client applications can access them as functions. In addition, the server
makes these statistics available to the Windows NT Performance Monitor.
This section describes how to access performance and related statistics from
client applications, how to monitor database performance using Sybase
Central, and how to monitor database performance using the Windows NT
Performance Monitor.

Obtaining database statistics from a client application


Adaptive Server Anywhere provides a set of system functions that can access
information on a per-connection, per-database, or server-wide basis. The
kind of information available ranges from static information (such as the
server name) to detailed performance-related statistics (such as disk and
memory usage).
Functions that retrieve system information
The following functions retrieve system information:
♦ property Provides the value of a given property on an engine-wide
basis.
♦ connection_property Provides the value of a given property for a
given connection, or by default, for the current connection.
♦ db_property Provides the value of a given property for a given
database, or by default, for the current database.
Supply as an argument only the name of the property you wish to retrieve.
The functions return the value for the current server, connection, or database.
$ For more information, see "System functions" on page 310 of the book
ASA Reference.
$ For a complete list of the properties available from the system
functions, see "System functions" on page 310 of the book ASA Reference.
Examples
♦ The following statement sets a variable named server_name to the
name of the current server:
SET server_name = property( ’name’ )
♦ The following query returns the user ID for the current connection:
SELECT connection_property( ’userid’ )


♦ The following query returns the filename for the root file of the current
database:
SELECT db_property( ’file’ )

Improving query efficiency
For better performance, a client application monitoring database activity
should use the property_number function to identify a named property, and
then use the number to repeatedly retrieve the statistic. The following set of
statements illustrates the process from Interactive SQL:
CREATE VARIABLE propnum INT ;
CREATE VARIABLE propval INT ;
SET propnum = property_number( ’cacheread’ );
SET propval = property( propnum )
Property numbers obtained in this way are available for many different
database statistics, from the number of transaction log page write operations
and the number of checkpoints carried out, to the number of reads of index
leaf pages from the memory cache.
You can view many of these statistics in graph form from the Sybase Central
database management tool.

Monitoring database statistics from Sybase Central


With the Sybase Central Performance Monitor, you can graph the statistics
of any Adaptive Server Anywhere database server that you can connect to in
Sybase Central. All statistics in Sybase Central are shown in the Statistics
folder.
Features of the Performance Monitor include:
♦ Real-time updates (at adjustable intervals)
♦ A color-coded and resizable legend
♦ Configurable appearance properties
When you’re using the Performance Monitor, note that it uses actual queries
against the server to gather its statistics, so the monitor itself affects some
statistics (such as Cache Reads/sec). As a more precise alternative, you can
graph server statistics using the Windows NT Performance Monitor.
$ For information on setting properties, see "Setting properties for
database objects" on page 120.

Opening the Sybase Central Performance Monitor


You can display the Sybase Central Performance Monitor in the right pane of
the viewer where you have the Statistics folder open.

v To open the Performance Monitor:


1 Open the Statistics folder for the desired server.
2 In the right pane, click the Performance Monitor tab.

Note
The Performance Monitor only graphs statistics that you have added to it
ahead of time.

$ See also
♦ "Adding and removing statistics" on page 831
♦ "Configuring the Sybase Central Performance Monitor" on page 832
♦ "Resizing the Sybase Central Performance Monitor legend" on page 832
♦ "Monitoring database statistics from the Windows NT Performance
Monitor" on page 833

Adding and removing statistics

v To add statistics to the Sybase Central Performance Monitor:


1 Open the Statistics folder.
2 Make sure that the Details page in the right pane is showing.
3 Right-click a statistic that is not currently being graphed and choose Add
to Performance Monitor from the popup menu.

v To remove statistics from the Sybase Central Performance Monitor:


1 Do one of the following:
♦ If you are working in the Statistics folder (with the Details page
showing in the right pane), right-click a statistic that is currently
being graphed.
♦ If you are working in the Performance Monitor, right-click the
desired statistic in the legend.
2 From the popup menu, choose Remove from Performance Monitor.

Tip
You can also add a statistic to or remove one from the Performance
Monitor on the statistic’s property sheet.

$ See also

♦ "Opening the Sybase Central Performance Monitor" on page 830


♦ "Configuring the Sybase Central Performance Monitor" on page 832
♦ "Resizing the Sybase Central Performance Monitor legend" on page 832
♦ "Monitoring database statistics from the Windows NT Performance
Monitor" on page 833

Configuring the Sybase Central Performance Monitor


The Sybase Central Performance Monitor is configurable; you can choose
the type of graph it uses and the amount of time between updates to the
graph.

v To choose a graph type:


1 Choose Tools➤Options.
2 Click the Chart tab of the resulting dialog.
3 In the lower half of the page, choose a type of graph.

v To set the update interval:


1 Choose Tools➤Options.
2 Click the Chart tab of the resulting dialog.
3 At the top of the page, move the slider to reflect a new time value (or
type the value directly in the text box provided).
$ See also
♦ "Opening the Sybase Central Performance Monitor" on page 830
♦ "Adding and removing statistics" on page 831
♦ "Resizing the Sybase Central Performance Monitor legend" on page 832
♦ "Monitoring database statistics from the Windows NT Performance
Monitor" on page 833

Resizing the Sybase Central Performance Monitor legend


The legend of the Sybase Central Performance Monitor can be resized in the
following ways:
♦ You can resize the columns within the legend by dragging the divisions
between the column headings.


♦ You can change the relative size of the legend by positioning your
cursor along the top border of the column headings (so that a two-sided
arrow appears) and dragging up or down.
♦ You can minimize or maximize the legend by clicking one of the two
arrow icons located immediately above the legend.

$ See also
♦ "Opening the Sybase Central Performance Monitor" on page 830
♦ "Adding and removing statistics" on page 831
♦ "Configuring the Sybase Central Performance Monitor" on page 832
♦ "Monitoring database statistics from the Windows NT Performance
Monitor" on page 833

Monitoring database statistics from the Windows NT Performance Monitor
As an alternative to using the Sybase Central Performance Monitor, you can
use the Windows NT Performance Monitor (included with Windows NT).
The Windows NT monitor is only operational in an NT-to-NT setup, and has
two advantages:
♦ It offers more performance statistics (mainly those concerned with
network communications).
♦ Unlike the Sybase Central monitor, the Windows NT monitor is non-
intrusive. It uses a shared-memory scheme instead of performing queries
against the server, so it does not affect the statistics themselves.
$ For a complete list of performance statistics you can monitor, see
"Performance Monitor statistics" on page 1082 of the book ASA Reference.

v To start the Windows NT Performance Monitor:


1 Choose Start➤Programs➤Administrative Tools
(Common)➤Performance Monitor

$ For information about the Windows NT Performance Monitor, see the
online help for the program.

v To monitor performance statistics:


1 With an Adaptive Server Anywhere engine or database running, start the
Performance Monitor.
2 Choose Edit➤Add To Chart, or click the Plus sign button on the toolbar.
The Add To Chart dialog appears.
3 From the Object list, select one of the following:
♦ Adaptive Server Anywhere Connection Choose this to monitor
performance for a single connection. Choose a connection to
monitor from the displayed list.
♦ Adaptive Server Anywhere Database Choose this to monitor
performance for a single database. Choose a database to monitor
from the displayed list.
♦ Adaptive Server Anywhere Engine Choose this to monitor
performance on a server-wide basis.
The Counter box displays a list of the statistics you can view.
4 From the Counter list, click a statistic to view. Hold the CTRL or SHIFT
keys while clicking to select multiple statistics.
5 If you selected Adaptive Server Anywhere Connection or Adaptive
Server Anywhere Database, choose an instance from the Instance box.
6 For a description of the selected counter, click Explain.
7 To display the counter, click Add.
8 When you have selected all the counters you wish to display, click
Done.

CHAPTER 27
Query Optimization

About this chapter
Once each query is parsed, the optimizer analyzes it and decides on an access
plan that will compute the result using as few resources as possible. This
chapter describes the steps the optimizer goes through to optimize a query. It
begins with the assumptions that underlie the design of the optimizer, then
proceeds to discuss selectivity estimation, cost estimation, and the other steps
of optimization.
Although update, insert, and delete statements must also be optimized, the
focus of this chapter is on select queries. The optimization of these other
commands follows similar principles.
Contents
Topic Page
The role of the optimizer 836
Steps in optimization 838
Reading access plans 839
Underlying assumptions 841
Physical data organization and access 845
Indexes 848
Predicate analysis 850
Semantic query transformations 852
Selectivity estimation 856
Join enumeration and index selection 862
Cost estimation 864
Subquery caching 865

The role of the optimizer


The role of the optimizer is to devise an efficient way to execute the SQL
statement. The optimizer expresses its chosen method in the form of an
access plan. The access plan describes which tables to scan, which index, if
any, to use for each table, and what order to read the tables in. Often, a great
number of plans exist that all accomplish the same goal. Other variables may
further enlarge the number of possible access plans.
A single statement can contain multiple subqueries. A join strategy is a
portion of an access plan that describes how to satisfy a single subquery,
including table permutation and access methods. When a subquery refers to
many tables, the number of possible join strategies can become very large.
For example, if seven tables must be joined to execute a subquery, then the
optimizer must select one of the 7! = 5040 orders in which these tables could
be accessed. It must also decide which index, if any, to use when accessing
each table.
Cost based
The optimizer begins selecting from the choices available using efficient, and
in some cases proprietary, algorithms. It bases its decisions on predictions of
the resources each requires. The optimizer takes into account both the cost of
disk access operations and the estimated CPU cost of each operation.
Syntax independent
Most commands may be expressed in many different ways using the SQL
language. These expressions are semantically equivalent in that they
accomplish the same task, but may differ substantially in syntax. With few
exceptions, the Anywhere optimizer devises a suitable access plan based
only on the semantics of each statement.
Syntactic differences, although they may appear substantial, usually have no
effect. For example, differences in the order of predicates, tables, and
attributes in the query syntax have no effect on the choice of access plan.
Neither is the optimizer affected by whether or not a query contains a view.
A good plan, not necessarily the best plan
The goal of the optimizer is to find a good access plan. Ideally, the optimizer
would identify the most efficient access plan possible, but this goal is often
impractical. Given a complicated query, a great number of possibilities may
exist.
However efficient the optimizer, analyzing each option takes time and
resources. The optimizer is conscious of the resources it uses. It periodically
compares the cost of further optimization with the cost of executing the best
plan it has found so far. If a plan has been devised that has a relatively low
cost, the optimizer stops and allows execution of that plan to proceed.
Further optimization might consume more resources than would execution of
an access plan already found.


The governor limits the optimizer’s work
The governor is the part of the optimizer that performs this limiting function.
It lets the optimizer run until it has analyzed a minimum number of
strategies. After considering a reasonable number of strategies, the governor
cuts off further analysis.
In the case of expensive and complicated queries, the optimizer works
longer. In the case of very expensive queries, it may run long enough to
cause a discernible delay.
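The search the governor limits can be sketched as follows. The cost function and cutoff value here are invented for illustration; ASA's real algorithms and cost model are proprietary and far more sophisticated:

```python
import itertools
import math

# Toy cost function: a stand-in so each permutation gets some cost.
def plan_cost(order):
    return sum((i + 1) * len(name) for i, name in enumerate(order))

def choose_order(tables, governor_limit=100):
    """Enumerate join orders, keeping the cheapest seen so far, but let
    a governor cut off the search after a fixed number of strategies."""
    best, best_cost, examined = None, math.inf, 0
    for order in itertools.permutations(tables):
        examined += 1
        cost = plan_cost(order)
        if cost < best_cost:
            best, best_cost = order, cost
        if examined >= governor_limit:
            break  # further analysis might cost more than it saves
    return best, examined

tables = ["a", "bb", "ccc", "dddd", "eeeee", "ffffff", "ggggggg"]
order, examined = choose_order(tables)
print(math.factorial(7))  # 5040 possible orders for seven tables
print(examined)           # but only 100 strategies were examined
```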


Steps in optimization
The steps the Anywhere optimizer follows in generating a suitable access
plan include:
1 The parser converts the query, expressed in SQL, into an internal
representation. In doing so, it may rewrite the query, converting it to a
syntactically different, but semantically equivalent, form. These
conversions make the statement easier to analyze.
2 Optimization proper commences at OPEN CURSOR. Unlike many other
commercial database systems, Anywhere optimizes each statement just
before executing it.
3 Perform semantic optimization on the statement. The optimizer rewrites
each command whenever doing so leads to better, more efficient access
plans.
4 Perform join enumeration and group-by optimization for each subquery.
5 Optimize access order.
Because Anywhere performs just-in-time optimization of each statement, the
optimizer has access to the value of host variables and stored procedure
variables. Hence, it makes better choices because it performs better
selectivity analysis.
Adaptive Server Anywhere optimizes each query you execute, regardless of
how many times you executed it before. Because Anywhere saves statistics
each time it executes a query, the optimizer can learn from the experience of
executing previous plans and can adjust its choices when appropriate.


Reading access plans


The optimizer can tell you the plan it has chosen in response to any
statement. If you are using Interactive SQL, you can simply look to the
Messages pane. Otherwise, you can use the PLAN function to ask Anywhere
to return a plan.

The optimizer can rewrite your query


The optimizer’s job is to understand the semantics of your query and to
construct a plan that computes its result. This plan may not correspond
exactly to the syntax you used. The optimizer is free to rewrite your query
in any semantically equivalent form.

Commas separate tables within a join strategy
Join strategies in plans appear as a list of correlation names. Each correlation
name is followed immediately, in brackets, by the method to be used to
locate the required rows. This method can either be the word seq, which
indicates sequential scanning of the table, or the name of an index. The
name of a primary index is the name of the table.
The following self-join creates a list of employees and their managers.
SELECT e.emp_fname, m.emp_fname
FROM employee AS e JOIN employee AS m
ON e.manager_id = m.emp_id
PLAN> e (seq), m (employee)
To compute this result, Adaptive Server Anywhere first accesses the
employee table sequentially. For each row, it accesses the employee table
again, but this time using the primary index.
Temporary tables
Adaptive Server Anywhere must use a temporary table to compute some
query results, or it may choose to use a temporary table to lower the overall
cost of computing the result. When it uses a temporary table for a join
strategy, the words TEMPORARY TABLE precede the description of that
strategy.
SELECT DISTINCT quantity
FROM sales_order_items
PLAN> TEMPORARY TABLE sales_order_items (seq)
A temporary table is necessary in this case to compute the distinct quantities.
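The same effect can be observed with sqlite3 standing in for Adaptive Server Anywhere (the data is invented, and sqlite phrases the step differently): without a usable index, eliminating duplicates for DISTINCT forces a temporary structure.

```python
import sqlite3

# Illustration only: sqlite3 stands in for ASA here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_order_items (quantity INTEGER)")
conn.executemany("INSERT INTO sales_order_items VALUES (?)",
                 [(q,) for q in (12, 24, 12, 36, 24)])

# With no index on quantity, duplicate elimination needs a temporary
# structure, which sqlite reports as a temp B-tree.
steps = [r[3] for r in conn.execute(
    "EXPLAIN QUERY PLAN SELECT DISTINCT quantity FROM sales_order_items")]
print(steps)  # includes a 'USE TEMP B-TREE FOR DISTINCT' step
```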
Colons separate join strategies
The following command contains two query blocks: the outer select
statement from the sales_order and sales_order_items tables, and the
subquery that selects from the product table.
SELECT *
FROM sales_order AS o

KEY JOIN sales_order_items AS i
WHERE EXISTS
( SELECT *
FROM product p
WHERE p.id = 300 )
PLAN> o (seq), i (id_fk): p (product)
Colons separate join strategies. Plans always list the join strategy for the
main block first. Join strategies for other query blocks follow. The order of
join strategies for these other query blocks may not correspond to the order
in your statement nor to the order in which they execute.
In this case, the optimizer has decided to first access o sequentially (the
sales_order table) and join it to i (the sales_order_items table) using the
foreign key index id_fk. At some point, rows from p (the product table) will
be located using the primary index of the product table.
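The plan notation described above can be broken apart mechanically: colons separate query blocks, commas separate tables, and brackets name the access method. The following helper is our own sketch for reading such strings, not a utility shipped with Adaptive Server Anywhere:

```python
# Parse a plan string such as "o (seq), i (id_fk): p (product)" into
# query blocks of (correlation name, access method) pairs.

def parse_plan(plan):
    blocks = []
    for block in plan.split(":"):          # colons separate query blocks
        tables = []
        for item in block.split(","):      # commas separate tables
            name, _, method = item.strip().partition("(")
            tables.append((name.strip(), method.rstrip(")").strip()))
        blocks.append(tables)
    return blocks

parsed = parse_plan("o (seq), i (id_fk): p (product)")
print(parsed)
# [[('o', 'seq'), ('i', 'id_fk')], [('p', 'product')]]
```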
The optimizer can rewrite your query
When the optimizer discovers a more efficient means of computing your
result, the access plan may not appear to follow the structure of your query.
Adding a condition to the subquery in the previous command causes the
optimizer to choose a different strategy.
SELECT *
FROM sales_order AS o
KEY JOIN sales_order_items AS i
WHERE EXISTS
( SELECT *
FROM product p
WHERE p.id = 300
AND p.id = i.prod_id )
PLAN> i (ky_prod_id), o (sales_order)
The optimizer rewrites this command as a single query block consisting of a
single join between three tables. Next, the optimizer discovers that the
product table can be eliminated due to the referential integrity between this
table and the sales_order_items table. This query rewriting technique is
called join elimination. For more information on join elimination, see
"Semantic query transformations" on page 852.
$ For more information about the rules Adaptive Server Anywhere obeys
when rewriting your query, see "Rewriting subqueries as exists predicates"
on page 843 and "Semantic query transformations" on page 852.


Underlying assumptions
A number of assumptions underlie the design direction and philosophy of the
Adaptive Server Anywhere query optimizer. You can improve the quality or
performance of your own applications through an understanding of the
optimizer’s decisions. These assumptions provide a context in which you
may understand the information contained in the remaining sections.

Assumptions
The list below summarizes the assumptions upon which the Adaptive Server
Anywhere optimizer is based.

Assumption                               Implications
Minimal administration work              ♦ Self-tuning design that requires
                                           fewer performance controls
                                         ♦ No separate statistics-gathering
                                           utility
Applications tend to retrieve only       ♦ Indices are used whenever possible
the first few rows of a cursor           ♦ Use of temporary tables is
                                           discouraged
Selectivity statistics necessary for     ♦ Optimization decisions are based
optimization are available in the          on prior query execution
Column Statistics Registry               ♦ Dropping optimizer statistics
                                           makes the optimizer ineffective
An index can be found to satisfy a       ♦ Performance is poor if a suitable
join predicate in virtually all cases      index cannot be found
Virtual memory is a scarce resource      ♦ Intermediate results are not
                                           materialized unless absolutely
                                           necessary

Minimal administration work


Traditionally, high-performance database engines have relied heavily on the
presence of a knowledgeable, dedicated, database administrator. This person
spent a great deal of time adjusting data storage and performance controls of
all kinds to achieve good database performance. These controls often
required continuing adjustment as the data in the database changed.
Anywhere learns and adjusts as the database grows and changes. Each query
betters its knowledge of the data distribution in the database. Anywhere
automatically stores and uses this information to optimize future queries.


Every query both contributes to this internal knowledge and benefits from it.
Every user can benefit from knowledge that Anywhere has gained through
executing another user’s query.
Statistics gathering mechanisms are thus an integral part of the database
server, and require no external mechanism. Should you find an occasion
where it would help, you can provide the database server with estimates of
data distributions to use during optimization. If you encode these into a
trigger or procedure, for example, you then assume responsibility for
maintaining these estimates and updating them whenever appropriate.

Only first few rows of a cursor used frequently


Many application programs examine only the first few rows of a cursor,
particularly with ordered cursors. Select the ordering carefully for best
results. Materializing a cursor means computing the entire result set before
returning any rows to the application.
To accommodate this observation, the optimizer avoids materializing cursors
whenever possible. Since few rows of the cursor are likely to be fetched,
avoiding materialization allows Adaptive Server Anywhere to reduce the
time required to pass the first rows of the result to the application.

Statistics are present and correct


The optimizer is self-tuning, storing all the needed information internally.
The column statistics registry is a persistent repository of data distributions
and predicate selectivity estimates. At the completion of each query,
Adaptive Server Anywhere uses statistics gathered during query execution to
update this registry. In consequence, all subsequent queries gain access to
more accurate estimates.
The optimizer relies heavily on these statistics and, therefore, the quality of
the access plans it generates depends heavily on them. If you recently
reloaded your database or inserted a lot of new rows, these statistics may no
longer accurately describe the data. You may find that your first subsequent
queries execute unusually slowly.
You can assist Anywhere in its efforts to correct its statistical information by
executing sample queries. As Anywhere executes these statements, it learns
from its experience. Correct statistical information can dramatically improve
the efficiency of subsequent queries.


An index can usually be found to satisfy a predicate


Often, Anywhere can evaluate predicates with the aid of an index. Using an
index, the optimizer speeds access to data and reduces the amount of
information read. Whenever possible, Anywhere uses indices to satisfy
ORDER BY, GROUP BY, and DISTINCT clauses.
When the optimizer cannot find a suitable index, it resorts to a sequential
table scan, which can be expensive. An index can improve performance
dramatically when joining tables. Add indices to tables or rewrite queries
wherever doing so facilitates the efficient processing of common requests.

Virtual Memory is a scarce resource


The operating system and a number of applications frequently share the
memory of a typical computer. Adaptive Server Anywhere treats memory as
a scarce resource. Because it uses memory economically, Anywhere can run
on relatively small computers. This economy is important if you wish your
database to operate on portable computers or on older machines.
Reserving extra memory, for example to hold the contents of a cursor, may
be expensive. If the buffer cache is full, one or more pages may have to be
written to disk to make room for new pages. Some pages may need to be
re-read to complete a subsequent operation.
In recognition of this situation, Adaptive Server Anywhere associates a
higher cost with execution plans that require additional buffer cache
overhead. This cost discourages the optimizer from choosing plans that use
temporary tables.
On the other hand, Anywhere is careful to use memory where it improves
performance. For example, it caches the results of subqueries when they will
be needed repeatedly during the processing of the query.

Rewriting subqueries as exists predicates


The assumptions that underlie the design of Anywhere require that it
conserve memory and that it return the first few results of a cursor as
quickly as possible. In keeping with these objectives, Adaptive Server
Anywhere rewrites all set-operation subqueries, such as IN, ANY, or SOME
predicates, as EXISTS predicates. By doing so, Adaptive Server Anywhere
avoids creating unnecessary temporary tables and may more easily identify a
suitable index through which to access a table.
Non-correlated subqueries are subqueries that contain no explicit reference
to the table or tables in the higher-level portions of the query.

843
Underlying assumptions

The following is an ordinary query that contains a non-correlated subquery.


It selects information about all the customers who did not place an order on
January 1, 1998.
Non-correlated subquery
SELECT *
FROM customer c
WHERE c.id NOT IN
( SELECT o.cust_id
FROM sales_order o
WHERE o.order_date = '1998-01-01' )
PLAN> c (seq): o (ky_so_customer)
One possible access plan is to first read the sales_order table and create a
temporary table of all the customers who placed orders on January 1, 1998,
then, read the customer table and extract one row for each customer listed in
the temporary table.
However, Adaptive Server Anywhere avoids materializing results. It also
gives preference to plans that return the first few rows of a result most
quickly. Thus, the optimizer rewrites such queries using EXISTS predicates.
In this form, the subquery becomes correlated: the subquery now contains
an explicit reference to the id column of the customer table.
Correlated subquery
SELECT *
FROM customer c
WHERE NOT EXISTS
( SELECT *
FROM sales_order o
WHERE o.order_date = '1998-01-01'
AND ( o.cust_id = c.id
OR o.cust_id IS NULL
OR c.id IS NULL ) )
PLAN> c (seq): o (seq)
This query is semantically equivalent to the one above, but when expressed
in this new syntax, two advantages become apparent.
1 The optimizer can choose to use either the index on the cust_id attribute
or the order_date attribute of the sales_order table. (However, in the
sample database, only the id and cust_id columns are indexed.)
2 The optimizer has the option of choosing to evaluate the subquery
without materializing intermediate results.
Anywhere can cache the results of this subquery during processing. This
strategy lets Anywhere reuse previously computed results. In the case of the
query above, caching does not help because customer identification numbers
are unique in the customer table.
For further information on subquery caching, see "Subquery caching" on
page 865.

Physical data organization and access


Storage allocation for each table or entry has a large impact on the
efficiency of queries. The following points are of particular importance
because each influences how fast your queries execute.

Memory allocation for inserted rows


Anywhere inserts each new row into pages so that, if at all possible, the
entire row can be stored contiguously
Every new row that is smaller than the page size of the database file is
always stored on a single page. If no existing page has enough free space
for the new row, Anywhere writes the row to a new page. For example, if the
new row requires 600 bytes of space but only 500 bytes are available on a
partially filled page, then Anywhere places the row on a new page at the end
of the table.
Anywhere may store rows in any order
The engine locates space on pages and inserts rows in the order it receives
them. It assigns each to a page, but the locations it chooses in the table
may not correspond to the order in which the rows were inserted. For
example, the engine may have to start a new page to store a long row
contiguously. Should the next row be shorter, it may fit in an empty location
on a previous page.
The rows of all tables are unordered. If the order that you receive or process
the rows is important, use an ORDER BY clause in your SELECT statement
to apply an ordering to the result. Applications that rely on the order of rows
in a table can fail without warning.
If you frequently require the rows of a table in a particular order, consider
creating an index on those columns and using an ORDER BY clause in your
queries. Anywhere always tries to take advantage of indices when
processing queries.
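As a sketch against the sample employee table, an index on the ordering columns lets Anywhere return rows in the requested order without a separate sort step; the index name here is illustrative.

```sql
-- Hypothetical index supporting a frequently used ordering
CREATE INDEX emp_by_name
ON employee ( emp_lname, emp_fname );

-- Anywhere can use emp_by_name to return rows already in this order
SELECT emp_id, emp_lname, emp_fname
FROM employee
ORDER BY emp_lname, emp_fname;
```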
Space is not reserved for NULL columns
Whenever Anywhere inserts a row, it reserves only the space necessary to
store the row with the values it contains at the time of creation. It reserves no
space to store values that are NULL. It reserves no extra space to
accommodate fields, such as text strings, that may enlarge.
Once inserted, rows are immutable
Once assigned a home position on a page, a row never moves from that page.
If an update changes any of the values in the row so that it no longer fits in
its assigned page, then the row splits and the extra information is inserted on
another page.


This characteristic deserves special attention, especially since Anywhere
allows no extra space when you insert the row. For example, suppose you
insert a large number of empty rows into a table, then fill in the values, one
column at a time, using UPDATE statements. The result would be that almost
every value in a single row is stored on a separate page. To retrieve all
the values from one row, the engine may need to read several disk pages.
This simple operation would become extremely and unnecessarily slow.
Consider instead filling new rows with data at the time of insertion, so that,
once inserted, they have sufficient room for the data you expect them to hold.
A database never shrinks
As you insert and delete rows from the database, Anywhere automatically
reuses the space they occupy. Thus, Anywhere may insert a row into space
formerly occupied by another row.
Anywhere keeps a record of the amount of empty space on each page. When
you ask it to insert a new row, it first searches its record of space on existing
pages. If it finds enough space on an existing page, it places the new row on
that page, reorganizing the contents of the page if necessary. If not, it starts a
new page.
Over time, however, if you delete a number of rows and don’t insert new
rows small enough to use the empty space, the information in the database
may become sparse. No utility exists to defragment the database file, as
moving even one row might involve updating numerous index entries.
Since Anywhere automatically reuses empty space, the presence of these
empty slots rarely affects performance. If necessary, you can reduce disk
fragmentation by unloading, then reloading the database.
Reloading also accomplishes another task. Since you are likely to reload
each table in the order you frequently search them, the stored order of rows
in pages is likely to correspond closely to your preferred order. Hence, it is
possible that this operation will improve database performance, much as a
defragmentation utility improves disk performance by grouping all the pieces
of each file together on the surface of the disk.


Table and page sizes


The page size you choose for your database can affect the performance of
your database. In general, smaller page sizes are likely to benefit operations
that retrieve a relatively small number of rows from random locations. By
contrast, larger pages tend to benefit queries that perform sequential scans,
particularly when the rows are stored on pages in the order in which an
index retrieves them. In this situation, reading one page of memory to
obtain the values of one row may have the side effect of loading the contents
of the next few rows into memory. Often, the physical design of disks
permits them to retrieve a few large blocks more efficiently than many small
ones.
Should you choose a larger page size, such as 4 kb, you may wish to increase
the size of the cache. Fewer large pages can fit into the same space. For
example, 1 Mb of memory can hold 1000 pages that are each 1 kb in size,
but only 250 pages that are 4 kb in size. How many pages are enough
depends entirely on your database and the nature of the queries your
application performs. You can conduct performance tests with various cache
sizes. If your cache cannot hold enough pages, performance suffers as
Anywhere begins swapping frequently used pages to disk.
Anywhere attempts to fill pages as much as possible. Empty space
accumulates only when new objects are too large to fit empty space on
existing pages. Consequently, adjusting the page size may not significantly
affect the overall size of your database.


Indexes
There are many situations in which creating an index improves the
performance of a database. An index provides an ordering of the rows of a
table on the basis of the values in some or all of the columns. An index
allows Anywhere to find rows quickly. It permits greater concurrency by
limiting the number of database pages accessed. An index also affords
Anywhere a convenient means of enforcing a uniqueness constraint on the
rows in a table.

Hash values
Adaptive Server Anywhere must represent values in an index to decide how
to order them. For example, if you index a column of names, then it must
know that Amos comes before Smith.
For each value in your index, Anywhere creates a corresponding hash value.
It stores the hash value in the index, rather than the actual value. Anywhere
can perform operations with the hash value. For example, it can tell when
two values are equal or which of two values is greater.
When you index a small storage type, such as an integer, the hash value that
Anywhere creates takes the same amount of space as the original value. For
example, the hash value for an integer is 4 bytes in size, the same amount of
space required to store an integer. Because the hash value is the same size,
there is a one-to-one correspondence between hash values and actual
values. Anywhere can always tell whether two values are equal, or which is
greater, by comparing their hash values. However, it can retrieve the
actual value only by reading the entry from the corresponding table.
When you index a column containing larger data types, the hash value will
often be shorter than the size of the type. For example, if you index a column
of string values, the hash value used is at most 9 bytes in length.
Consequently, Adaptive Server Anywhere cannot always compare two
strings using only the hash values. If the hash values are equal, Anywhere
must retrieve and compare the two actual values from the table.
For example, suppose you index the titles of books, many of which are
similar. If you wish to search for a particular title, the index may identify
only a set of possible rows. In this case, Anywhere must retrieve each of the
candidate rows and examine the full title.


Composite indexes
An index on an ordered sequence of columns is also called a composite
index. However, each index key in these indexes is at most a 9-byte hash
value. Hence, the hash value cannot necessarily identify the correct row
uniquely. When two hash values are equal, Anywhere must retrieve and
compare the actual values.

The effect of column order in a composite index


When you create a composite index, the order of the columns affects the
suitability of the index to different tasks.
Example
Suppose you create a composite index on two columns. One column contains
employees' first names, the other their last names. You could create an index
that contains their first name, then their last name. Alternatively, you could
index the last name, then the first name. Although these two indices organize
the information in both columns, they have different functions.
CREATE INDEX fname_lname
ON employee ( emp_fname, emp_lname );
CREATE INDEX lname_fname
ON employee ( emp_lname, emp_fname );
Suppose you then want to search for the first name John. The only useful
index is the one containing the first name in the first column of the index.
The index organized by last name then first name is of no use because
someone with the first name John could appear anywhere in the index.
If you think it likely that you will need to look up people by first name only
or second name only, then you should consider creating both of these
indices.
Alternatively, you could make two indices that each index only one of the
columns. Remember, however, that Anywhere only uses one index to access
any one table while processing a single query. Even if you know both names,
it is likely Anywhere will need to read extra rows, looking for those with the
correct second name.
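The point can be sketched against the two indexes above: only the index whose first column matches the search predicate is useful.

```sql
-- Can use fname_lname, because the first name leads that index
SELECT *
FROM employee
WHERE emp_fname = 'John';
-- lname_fname cannot help here: rows with the first name John
-- appear throughout that index
```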
When you create an index using the CREATE INDEX command, as in the
example above, the columns appear in the order shown in your command.
Primary indexes and column order
Adaptive Server Anywhere uses a primary index to index primary keys.
The columns in the primary index always appear in the same order in which
the columns appear in the definition of the table. In situations where
more than one column appears in a primary key, you should consider the
types of searches needed. If appropriate, switch the order of the columns in
the table definition so the most frequently searched-for column appears first,
or create separate indexes, as required, for the other columns.
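A minimal sketch of the idea, using a hypothetical table: if searches most often supply the region alone, declare it first in the composite primary key.

```sql
-- Hypothetical table: region is the most frequently searched-for column,
-- so it leads the composite primary key
CREATE TABLE branch_office (
    region    CHAR(12) NOT NULL,
    office_id INTEGER  NOT NULL,
    manager   CHAR(40),
    PRIMARY KEY ( region, office_id )
);
```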

Predicate analysis
A predicate is a conditional expression that, combined with the logical
operators AND and OR, makes up the set of conditions in a WHERE or
HAVING clause. In SQL, a predicate that evaluates to UNKNOWN is
interpreted as FALSE.
A predicate that can exploit an index to retrieve rows from a table is called
sargable. This name comes from the phrase search argument-able. Both
predicates that involve comparisons with constants and those that compare
columns from two or more different tables may be sargable.
The predicate in the following statement is sargable. Adaptive Server
Anywhere can evaluate it efficiently using the primary index of the employee
table.
SELECT *
FROM employee
WHERE employee.emp_id = 123
PLAN> employee (employee)
In contrast, the following predicate is not sargable. Although the emp_id
column is indexed in the primary index, using the index does not expedite
the computation because the result contains all rows, or all but one.
SELECT *
FROM employee
WHERE employee.emp_id <> 123
PLAN> employee (seq)
Similarly, no index can assist in a search for all employees whose first name
ends in the letter "k". Again, the only means of computing this result is to
examine each of the rows individually.
Examples
In each of these examples, attributes x and y are each columns of a single
table. Attribute z is contained in a separate table. Assume that an index exists
for each of these attributes.

Sargable                 Non-sargable
x = 10                   x <> 10
x IS NULL                x IS NOT NULL
x > 25                   x = 4 OR y = 5
x = z                    x = y
x IN (4, 5, 6)           x NOT IN (4, 5, 6)
x LIKE 'pat%'            x LIKE '%tern'


Sometimes it may not be obvious whether a predicate is sargable. In these
cases, you may be able to rewrite the predicate so it is sargable. For
example, you could rewrite the predicate x LIKE 'pat%' using the fact that
"u" is the next letter in the alphabet after "t": x >= 'pat' AND x < 'pau'. In
this form, an index on attribute x is helpful in locating values in the
restricted range. Fortunately, Adaptive Server Anywhere makes this
particular transformation for you automatically.
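Written out by hand, the rewritten predicate looks like the following sketch; the column name is illustrative.

```sql
-- Sargable form of emp_lname LIKE 'pat%': a range an index can evaluate
SELECT *
FROM employee
WHERE emp_lname >= 'pat'
AND emp_lname < 'pau';
```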
A sargable predicate used for indexed retrieval on a table is a matching
predicate. A WHERE clause can have a number of matching predicates.
Which is most suitable can depend on the join strategy. The optimizer re-
evaluates its choice of matching predicates when considering alternate join
strategies.
In other cases, a predicate may not be sargable simply because no suitable
index exists. For example, consider the predicate x = z. This predicate is
sargable if these two attributes reside in different tables and at least one of
them is the first attribute in an index. Should one of these conditions not be
satisfied, the same predicate becomes non-sargable.


Semantic query transformations


To operate efficiently, Adaptive Server Anywhere usually rewrites your
query, possibly in several steps, into a new form. It ensures that the new
version computes the same result, even though it expresses the query in a
new way. In other words, Anywhere rewrites your queries into semantically
equivalent, but syntactically different, forms.
Anywhere can perform a number of different rewrite operations. If you read
the access plans, you will frequently find that they do not correspond to a
literal interpretation of your statement. For example, the optimizer tries as
much as possible to rewrite subqueries with joins. The fact that the optimizer
has the freedom to rewrite your commands and some of the ways in which it
does so, are of importance to you.
Example
Unlike the SQL language definition, some languages mandate strict behavior
for AND and OR operations. Some guarantee that the left-hand condition
will be evaluated first. If the truth of the entire condition can then be
determined, the compiler guarantees that the right-hand condition will not be
evaluated.
This arrangement lets you combine conditions that would otherwise require
two nested IF statements into one. For example, in C you can test whether a
pointer is NULL before you use it as follows. You can replace the nested
conditions
if ( X != NULL ) {
    if ( X->var != 0 ) {
        ... statements ...
    }
}
with the more compact expression
if ( X != NULL && X->var != 0 ) {
    ... statements ...
}
Unlike C, SQL has no such rules concerning execution order. Anywhere is
free to rearrange the order of such conditions as it sees fit. The reordered
form is semantically equivalent because the SQL language specification
makes no distinction. In particular, query optimizers are completely free to
reorder predicates in a WHERE or HAVING clause.
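For example, the following query is unsafe if it relies on the first condition guarding the second, because the optimizer may evaluate the division first. The table and columns are hypothetical.

```sql
-- Do not depend on y <> 0 being tested before x / y
SELECT *
FROM measurements
WHERE y <> 0
AND x / y > 2;
```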


Types of semantic transformations


The optimizer can perform a number of transformations in search of more
efficient and convenient representations of your query. Because the
optimizer performs these transformations, the plan may look quite different
than a literal interpretation of your original query. Common manipulations
include:
♦ unnecessary DISTINCT elimination
♦ subquery unnesting
♦ predicate pushdown in UNION or GROUPed views
♦ join elimination
♦ optimization for minimum or maximum functions
♦ OR, in-list optimization
♦ LIKE optimizations
The following subsections discuss each of these operations.

Unnecessary DISTINCT elimination


Sometimes a DISTINCT condition is unnecessary. For example, one or more
columns in your result may carry a uniqueness condition, either explicitly,
or implicitly because they form a primary key.
Examples
1 The DISTINCT keyword in the following command is unnecessary because
the product table contains the primary key p.id, which is part of the
result set.
SELECT DISTINCT p.id, p.quantity
FROM product p
PLAN> p (seq)
The database server actually executes the semantically equivalent query:
SELECT p.id, p.quantity
FROM product p
2 Similarly, the result of the following query contains the primary keys of
both tables so each row in the result must be distinct.
SELECT DISTINCT *
FROM sales_order o JOIN customer c
ON o.cust_id = c.id
WHERE c.state = ’NY’
PLAN> c (seq), o (ky_so_customer)


Subquery unnesting
You may express statements as nested queries, given the convenient syntax
provided in the SQL language. However, rewriting nested queries as joins
often leads to more efficient execution and more effective optimization, since
Anywhere can take better advantage of highly selective conditions in a
subquery’s WHERE clause.
Examples
1 The subquery in the following example can match at most one row for
each row in the outer block. Because it can match at most one row,
Anywhere recognizes that it can convert it to an inner join.
SELECT s.*
FROM sales_order_items s
WHERE EXISTS
( SELECT *
FROM product p
WHERE s.prod_id = p.id
AND p.id = 300 AND p.quantity > 20)
Following conversion, this same statement is expressed internally using
join syntax:
SELECT s.*
FROM product p JOIN sales_order_items s
ON p.id = s.prod_id
WHERE p.id = 300 AND p.quantity > 20
PLAN> p (product), s (ky_prod_id)
2 Similarly, the following query contains a conjunctive EXISTS predicate
in the subquery. This subquery can match more than one row.
SELECT p.*
FROM product p
WHERE EXISTS
( SELECT *
FROM sales_order_items s
WHERE s.prod_id = p.id
AND s.id = 2001)
Anywhere converts this query to an inner join, with a DISTINCT in the
SELECT list.
SELECT DISTINCT p.*
FROM product p JOIN sales_order_items s
ON p.id = s.prod_id
WHERE s.id = 2001
PLAN> TEMPORARY TABLE s (id_fk), p (product)
3 Anywhere can also eliminate subqueries in comparisons, when the
subquery can match at most one row for each row in the outer block.
Such is the case in the following query.


SELECT *
FROM product p
WHERE p.id =
( SELECT s.prod_id
FROM sales_order_items s
WHERE s.id = 2001
AND s.line_id = 1 )
Anywhere rewrites this query as follows.
SELECT p.*
FROM product p, sales_order_items s
WHERE p.id = s.prod_id
AND s.id = 2001
AND s.line_id = 1
PLAN> s (sales_order_items), p (product)


Selectivity estimation
Selectivity is a ratio that measures how frequently a predicate is true
The selectivity of a predicate measures how often the predicate evaluates to
TRUE. Selectivity is the ratio of the number of times the predicate evaluates
to true to the total number of possible instances that must be tested.
Selectivity is most commonly expressed as a percentage. For example, if 2%
of employees have the last name Smith, then the selectivity of the following
predicate is 2%.
emp_lname = 'Smith'
Selectivity is second only to join enumeration in importance to the process of
optimization. Hence, the performance of the optimizer relies heavily on the
presence of accurate selectivity information.
Adaptive Server Anywhere can obtain estimates of selectivity from four
possible sources. It assumes no correlation between columns of a table and
so calculates the selectivity of each column independently.
♦ Column-statistics registry Each time Anywhere performs a query, it
saves selectivity information about the data in a column for future
reference.
♦ Partial index scans The optimizer examines the upper levels of an
index to obtain a selectivity estimate for a condition on an indexed
column.
♦ User-supplied values You can supply selectivity estimates in your
SQL statement. If you do so, Anywhere uses them in preference to those
from other sources, but only for the execution of the current statement.
♦ Default values If no other source is available, Anywhere falls back on
the built-in default values.

The scan factor is the fraction of pages in a table that need to be read
The scan factor is the fraction of pages in a table that must be read to
compute the result, expressed as a percentage. For example, to find the first
name of the employee with employee number 123, Anywhere may have to
read two index pages and, finally, the name contained in the appropriate
row. If there are 1000 pages in the employee table, then the scan factor for
this query would be 0.1%, meaning 1 page out of 1000 pages in the table has
to be read to find the appropriate row.
Although the scan factor is frequently small when the selectivity is small,
this is not always the case. Consider a request to find all employees who live
on Phillip Street. Less than one percent of employees may live on this street,
yet, because street names are not indexed, Anywhere can only find the
records by examining every row in the employee table.


Column-statistics registry
Adaptive Server Anywhere caches skewed predicate selectivity values and
column distribution statistics. It stores this information in the database.
Anywhere stores, logs, and checkpoints this information like other data.
Adaptive Server Anywhere updates these statistics automatically during
query processing.
The optimizer automatically retrieves and uses cached statistics when
processing subsequent queries. Selectivity information is available to all
transactions, regardless of the user or connection.
Adaptive Server Anywhere manages the column-statistics registry on a
first-in, first-out basis. The registry is limited to 15,000 entries. Anywhere
saves the following types of information:
♦ column distribution statistics
♦ LIKE predicate selectivity statistics
♦ equality predicate statistics

Do not give Anywhere amnesia!
You can reset the optimizer statistics using the DROP OPTIMIZER
STATISTICS command. If you do so, you erase all the statistics Anywhere
has accumulated.

Caution
Use the DROP OPTIMIZER STATISTICS command only when you have
made recent wholesale changes that render previous statistical
information invalid. Otherwise, avoid this command because it can cause
the optimizer to choose very inefficient access plans.

If you erase the statistics, Anywhere resorts to initial guesses about the
distribution of your data as though accessing it for the first time, losing all
performance improvements the statistics could have provided.
Subsequent queries gradually restore the statistics. In the interim, the
performance of many commands can suffer seriously. Consequently, this
command rarely improves performance and certainly never provides a long-
term solution.
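For completeness, the command itself is a single statement:

```sql
-- Erases all accumulated optimizer statistics; use only after wholesale
-- changes that invalidate the old statistics
DROP OPTIMIZER STATISTICS;
```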

Partial index scans


When cached results are unavailable to the optimizer, it can decide to probe
the directory of an index to estimate the proportion of entries that may satisfy
a given predicate. Depending on the predicate and the index, this information
may be very accurate.


For example, the optimizer might examine an index of dates to estimate what
proportion refer to days before a given date, such as March 3, 1998. To
obtain such an estimate, Anywhere examines the upper pages of the index
you created on that column. It locates the approximate position of the given
date, then, from its relative position in the index, estimates the proportion of
values that occur before it.
Some cost may be involved in performing such scans because some index
pages, not already available in the buffer cache, may need to be retrieved
from disk. In addition, indices for very large tables, or primary indices for
tables pointed to by a large number of foreign keys may be extremely large.
Low fan-out may mean that the optimizer could only obtain specific
estimates by examining many pages. To limit this expense, the optimizer
examines at most two levels of the index.
Naturally, this method is effective only when the column about which
selectivity information is sought is the first column of the index. Should the
column be the second or a later column of the index, the index is of no
help because the values will be distributed throughout the index.
Similarly, estimates of LIKE selectivity values may be obtained by this
method only when the first few letters of the pattern are available. In cases
where only the middle or final sections of a word pattern appear, the
optimizer must rely on one of the other three sources of selectivity
information.

User estimates
Adaptive Server Anywhere allows you, as the user, to supply selectivity
estimates of any predicate. These estimates are expressed as a percentage and
must be supplied as a floating-point value. You may explicitly state such an
estimate for any predicate you choose.
The optimizer always uses user-supplied estimates in preference to an
estimate available from any other source. In this situation, the optimizer even
ignores cached selectivity values for that predicate. Because the optimizer
always uses any explicit estimates you provide, you can use these estimates
to guide the optimizer in its choice of access plan.
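A sketch of the syntax, assuming the sample database: the estimate is written as the second element of a parenthesized pair with the predicate, and the figure here is illustrative, not measured.

```sql
-- States that roughly 0.5% of sales_order rows satisfy the condition
SELECT *
FROM sales_order
WHERE ( order_date > '1998-01-01', 0.5 );
```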
You should use explicit estimates with care. Estimates in triggers or stored
procedures are easily forgotten. Anywhere has no means to update them. For
these reasons, all responsibility for their maintenance rests with the author of
the procedure or administrator of the database. Should the distribution of
data change over time, the values may prove inappropriate and lead the
optimizer to choose access plans that are no longer optimal.


Default selectivity estimates


When all else fails and it can obtain estimates from none of the other three
sources, the optimizer falls back on default selectivity estimates.
Anywhere assumes that statistics in the column statistics registry are both
present and accurate. For example, if the optimizer is considering a LIKE
predicate, it looks in the column-statistics registry. If the registry contains no
entry for that predicate, it assumes that none is stored because the selectivity
is less than a small threshold value. Since default selectivity estimates are not
specific to your data, they can mislead the optimizer into selecting a poor
access plan.
When Anywhere executes that plan, it uses the results to save better
selectivity estimates in the column statistics registry. If you execute the same
query later, it finds these more accurate estimates and adjusts the access plan
if appropriate. For this reason, performance may be poor the first time or two
you execute a particular query on a new database, or after dropping the
optimizer statistics.
Anywhere uses the following default selectivities.

Predicate                                          Default selectivity
Column comparisons: equality to a constant,        0.035%
  IS NULL, or LIKE (if not stored in registry)
Column comparisons: inequality to a constant       25%
Other LIKE or EXISTS                               50%
Other equalities                                   5%
Other inequalities                                 25%
Other IS NULL or BETWEEN                           6%

For a column a.x compared with a constant, the selectivity is computed to be
the maximum of 1/cardinality(a) and the default selectivity. In other words,
Anywhere assumes that at least one row will satisfy the comparison.

Equijoin selectivity estimation


What is an equijoin?
Frequently, you will need to join two or more tables to obtain the results you
need. Equijoins join two tables through equality conditions on one or more
columns, as in the case of the following query:


SELECT *
FROM tablea AS a JOIN tableb AS b
ON a.x = b.y

Join selectivity for equijoins
In the case of equijoins, Anywhere calculates the selectivity of the join based
on the cardinality of the individual tables according to the following formula:
selectivity = cardinality(a JOIN b) / (cardinality(a) × cardinality(b))
If the join condition involves two columns, then the optimizer uses data
distribution estimates from the column statistics registry to estimate the
cardinality of the result, and hence the selectivity of the join. Otherwise, if
the join condition involves a mathematical expression, the join predicate
selectivity estimate defaults to 5%.
Key joins: a rare case where syntax matters
The optimizer takes advantage of joins that are based on foreign key
relationships. You can identify these to Adaptive Server Anywhere using the
KEY JOIN syntax. When you use this syntax, the optimizer estimates
selectivity accurately using special information contained in the primary
index. Anywhere takes full advantage of these relationships only when you
explicitly use the KEY JOIN syntax. As such, it is a rare exception to the
general rule that Anywhere optimizes your commands based on their
semantics, not their syntax. When estimating the selectivity of key joins, the
Anywhere optimizer assumes a uniform distribution of the values in the table
containing the foreign key.

Diagnosing and solving selectivity problems


Selectivity estimation problems are the root of most optimization problems.
The following sources of information are available to help you.

Displaying estimates and their source


Anywhere can tell you the value of a selectivity estimate and the source of that estimate. You have access to this information through the built-in functions ESTIMATE and ESTIMATE_SOURCE.
The following command displays the selectivity estimate the optimizer uses for queries that select rows from the product table in which the quantity column contains a value greater than 20, and displays the result as a percentage.
SELECT ESTIMATE( quantity, 20, '>' )
FROM product
Similarly, the following command displays the source of that estimate. The optimizer can obtain estimates from a number of sources, including the column statistics registry and user-supplied values.

860
Chapter 27 Query Optimization

SELECT ESTIMATE_SOURCE( quantity, 20, '>' )
FROM product

Solving selectivity problems


If you find that Anywhere is obtaining an incorrect selectivity value from the
registry, you can easily reset the value by issuing any command that will
perform a complete scan of the table for that condition. For example, the
following SQL statement causes Anywhere to locate all the entries in the product table that have quantities equal to 28.
SELECT *
FROM product
WHERE quantity = 28
PLAN> product (seq)
Whenever Anywhere completes execution of a statement such as this one, it
automatically updates the column statistics registry based on the results.
Note, however, that in general, only selectivities that are larger than 1/cardinality(a) are saved in the registry.
You can use a similar tactic to load initial selectivity information into a new
database. Simply issue commands containing conditions that appear in
common statements. When the optimizer later prepares to execute a
statement, it generates a better plan because it can use the correct statistics.
Maintain or remove hard-coded selectivity estimates
Users can supply and hard-code selectivity estimates for statements used in triggers or stored procedures while developing the database. Once encoded, the database administrator assumes responsibility for their maintenance, as Anywhere has no means to update them automatically.
Unfortunately, these hard-coded values are hidden and may become
inaccurate as the information in the database grows and changes. For this
reason, avoid using them except in very unusual cases where you can
encourage the optimizer to choose a better access plan by no other means.
Often, you can avoid using them by priming the database using sample
queries as described above.


Join enumeration and index selection


Join enumeration, the process of costing each possible join strategy and
making a selection, is the heart of any optimizer. Adaptive Server Anywhere
uses a proprietary join enumeration algorithm to search for an optimal access
plan. This algorithm considers the cost of various strategies and works to
find an inexpensive access plan.
When processing any query, Anywhere always chooses one method to access
any one table: it either scans the table sequentially, or selects one—and only
one—index and accesses the rows through it.

Join enumeration
In selecting a join strategy, Anywhere considers the following information.
♦ selectivity estimates of the number of rows in each intermediate result
♦ estimates of scan factor for each indexed retrieval
♦ the size of the cache—different cache sizes can lead to different join
strategies.
Anywhere begins by using selectivity information, as determined in the
previous step, to select an access order.
Next, Anywhere derives the estimates of scan factors from estimates of index fan-out. The fan-out of an index can vary greatly depending on the type of index and the page size you selected when you launched the engine or created the database. Larger fan-out is better, because it allows Anywhere to locate specific rows using fewer pages and hence fewer resources.
Cache size affects the access plan
Finally, the amount of cache space available to Anywhere can affect the optimizer's choice of join strategy. The larger the fraction of cache space consumed by any one query, the more likely it is that pages will need to be swapped for those on disk. If Anywhere decides that a particular strategy will result in using excessive cache space, it assigns that strategy a higher cost.
The number of possible join strategies can be huge. A join of n tables allows
n! possible join orders. For example, a join of 10 tables may have
10! = 3,628,800 possible orders.
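The factorial growth quoted above can be checked in a couple of lines:

```python
# The join-order count quoted above: a join of n tables allows n! join orders.
import math

def join_orders(n):
    return math.factorial(n)

print(join_orders(10))                         # 3628800, i.e. 10! = 3,628,800
print([join_orders(n) for n in range(2, 6)])   # [2, 6, 24, 120]
```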
When faced with joins that involve a large number of tables, Anywhere
attempts to prune the set of possible strategies. It eliminates those that fall
into certain categories, so as to focus effort on investigating more efficient
possibilities.


Anywhere chooses plans with fewer Cartesian products
Anywhere always selects plans that minimize the number of Cartesian products required to compute the result, favoring indexed access instead.

Index selection
In addition to selecting an order, the optimizer must choose a method of
accessing each of these tables. It can choose to either scan a table
sequentially, or to access it through an index. Some tables may have a few
indexes, further increasing the number of possible strategies.
The optimizer analyzes each join strategy to determine which type of
access—indexed or sequential scan—would best suit each table in that
strategy. Although one index may be well suited to one join strategy, it can
be a poor choice for another strategy that joins the tables in a different order.
By making a custom index selection for each join order, the optimizer has
the opportunity to choose a better access plan.
Anywhere decides to use an index instead of embarking on a sequential scan
whenever an index is available and the selectivity is less than 20%.
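The access-method rule just stated can be sketched as a single decision function. The function name and return values here are illustrative, not part of any ASA interface.

```python
# Illustrative sketch of the rule above: Anywhere uses an index whenever one
# is available and the estimated selectivity is below 20%; otherwise it
# scans the table sequentially.

def access_method(index_available, selectivity):
    if index_available and selectivity < 0.20:
        return "indexed retrieval"
    return "sequential scan"

print(access_method(True, 0.05))   # indexed retrieval
print(access_method(True, 0.50))   # sequential scan
print(access_method(False, 0.05))  # sequential scan
```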


Cost estimation
The optimizer bases its selection of access plan on the expected cost of each
plan. It uses a mix of metrics to estimate the cost of an access plan:
♦ expected number of rows
♦ use of temporary tables
♦ anticipated amount of CPU and I/O for the access plan
♦ amount of cache utilized
Anywhere gives particular weight to the fact that disk access is substantially
more time-consuming than other operations.
Associate high cost with temporary tables
In keeping with the assumption that Anywhere is to use both disk and memory efficiently, it avoids using temporary tables. To achieve this goal, the optimizer assigns significant cost to plans that use them.
Anywhere bases its estimate of the cost of temporary tables on both the row
size and the expected number of rows the table will contain. The optimizer
often pessimistically overestimates the actual cost of using a temporary table.
When few queries are competing for cache space, the actual cost of a plan
with a temporary table can be significantly less than the estimate.

Costing index access


Anywhere calculates a scan factor for each table accessed. For this
calculation, it uses both selectivity estimates and the fan-out of the index.
If the index is a key index, then Anywhere assumes the entries are uniformly
distributed in the corresponding table. However, Anywhere assumes that
values in the primary-key index are clustered near similar values. This
assumption is usually valid. For example, suppose you use an auto-increment
column to generate primary-key values. The rows in the table lie in roughly
the same order in the pages of the table as they do in the primary index.
Arranging the rows in a table on the database pages in the order you wish to
read them requires less cache space because Anywhere can avoid rereading
the same pages from disk.


Subquery caching
New to Adaptive Server Anywhere 7.0 is the ability to cache the result of
evaluating a subquery. When Anywhere processes a subquery, it caches the
result. Should it need to re-evaluate the subquery for the same value, it can
simply retrieve the result from the cache. In this way, Anywhere avoids
many repetitious and redundant computations.
At the end of each subquery, Anywhere releases the stored values. Since
values may change between queries, these values may not be reused to
process subsequent queries. For example, another transaction might modify
values in a table involved in the subquery.
As the processing of a query progresses, Anywhere monitors the frequency
with which cached subquery values are reused. If the values of the correlated
variable rarely repeat, then Anywhere needs to compute most values only
once. In this situation, Anywhere recognizes that it is more efficient to
recompute occasional duplicate values, than to cache numerous entries that
occur only once.
Anywhere also does not cache if the size of the dependent column is more
than 255 bytes. In such cases, you may wish to rewrite your query or add
another column to your table to make such operations more efficient.
As soon as Adaptive Server Anywhere recognizes that few values are
repeated, it suspends subquery caching for the remainder of the statement
and proceeds to re-evaluate the subquery for each and every row in the outer
query block.
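The caching and suspension behavior described in this section can be sketched as a memoizing wrapper. The 255-byte limit comes from the text above; the warm-up count and hit-rate threshold below are invented for the example and are not ASA's actual values.

```python
# Illustrative sketch of subquery caching as described above. Hypothetical
# thresholds: after `warmup` lookups, caching is suspended for the rest of
# the statement if fewer than `min_hit_rate` of lookups were repeats.

class SubqueryCache:
    def __init__(self, warmup=10, min_hit_rate=0.5):
        self.values = {}
        self.lookups = 0
        self.hits = 0
        self.warmup = warmup
        self.min_hit_rate = min_hit_rate
        self.suspended = False

    def evaluate(self, key, compute):
        # Wide correlation values (over 255 bytes) are never cached.
        if self.suspended or len(repr(key).encode()) > 255:
            return compute(key)
        self.lookups += 1
        if key in self.values:
            self.hits += 1
            return self.values[key]
        result = compute(key)
        self.values[key] = result
        # If values rarely repeat, stop caching for the rest of the statement.
        if self.lookups >= self.warmup and self.hits / self.lookups < self.min_hit_rate:
            self.suspended = True
            self.values.clear()  # cached values are released in any case
        return result
```

A statement whose correlation values repeat often computes each distinct value once; a statement whose values are all distinct trips the suspension check and falls back to re-evaluating the subquery for every row.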


CHAPTER 28

Deploying Databases and Applications

About this chapter
This chapter describes how to deploy Adaptive Server Anywhere components. It identifies the files required for deploying client applications, and addresses related issues such as connection settings.

Check your license agreement


Redistribution of files is subject to your license agreement with Sybase.
No statements in this document override anything in your license
agreement. Please check your license agreement before considering
deployment.

Contents
Topic Page
Deployment overview 868
Understanding installation directories and file names 870
Using InstallShield templates for deployment 873
Using a silent installation for deployment 875
Deploying client applications 878
Deploying database servers 887
Deploying embedded database applications 889


Deployment overview
When you have completed a database application, you must deploy the
application to your end users. Depending on the way in which your
application uses Adaptive Server Anywhere (as an embedded database, in a
client/server fashion, and so on) you may have to deploy components of the
Adaptive Server Anywhere software along with your application. You may
also have to deploy configuration information, such as data source names,
that enable your application to communicate with Adaptive Server
Anywhere.

Check your license agreement


Redistribution of files is subject to your license agreement with Sybase.
No statements in this document override anything in your license
agreement. Please check your license agreement before considering
deployment.

The following deployment steps are examined in this chapter:


♦ Determining required files based on the choice of application platform
and architecture.
♦ Configuring client applications.
Much of the chapter deals with individual files and where they need to be
placed. However, the recommended way of deploying Adaptive Server
Anywhere components is to use a silent installation. For information, see
"Using a silent installation for deployment" on page 875.

Deployment models
The files you need to deploy depend on the deployment model you choose.
Here are some possible deployment models:
♦ Client deployment You may deploy only the client portions of
Adaptive Server Anywhere to your end-users, so that they can connect
to a centrally located network database server.
♦ Network server deployment You may deploy network servers to
offices, and then deploy clients to each of the users within those offices.
♦ Embedded database deployment You may deploy an application
that runs with the personal database server. In this case, both client and
personal server need to be installed on the end-user’s machine.


♦ SQL Remote deployment Deploying a SQL Remote application is an extension of the embedded database deployment model.

Ways to distribute files


There are two ways to deploy Adaptive Server Anywhere:
♦ Use the Adaptive Server Anywhere installation You can make the Setup program available to your end-users. By selecting the proper option, each end-user is guaranteed to get the files they need.
This is the simplest solution for many deployment cases. In this case,
you must still provide your end users with a method for connecting to
the database server (such as an ODBC data source).
$ For more information, see "Using a silent installation for
deployment" on page 875.
♦ Develop your own installation There may be reasons for you to
develop your own installation program that includes Adaptive Server
Anywhere files. This is a more complicated option, and most of this
chapter addresses the needs of those who are developing their own
installation.
If Adaptive Server Anywhere has already been installed for the server
type and operating system required by the client application architecture,
the required files can be found in the appropriately named subdirectory,
located in the Adaptive Server Anywhere installation directory.
For example, assuming the default installation directory was chosen, the
win32 subdirectory of your installation directory contains the files
required to run the server for the Windows 95/98 or Windows NT
platform.
As well, users of InstallShield Professional 5.5 and up can use the SQL
Anywhere Studio InstallShield Template Projects to deploy their own
application. This feature allows you to quickly build your application’s
installation using the entire template project, or just the parts that apply
to your install.
Whichever option you choose, you must not violate the terms of your license
agreement.


Understanding installation directories and file names
For a deployed application to work properly, the database server and client
libraries must each be able to locate the files they need. The deployed files
should be located relative to each other in the same fashion as your Adaptive
Server Anywhere installation.
In practice, this means that on PCs, most files belong in a single directory.
For example, on Windows 95/98 or Windows NT, both client and database
server required files are installed in a single directory, which is the win32
subdirectory of the Adaptive Server Anywhere installation directory.
$ For a full description of the places where the software looks for files,
see "How Adaptive Server Anywhere locates files" on page 4 of the book
ASA Reference.

UNIX deployment issues


UNIX deployments are different from PC deployments in some ways:
♦ Directory structure For UNIX installations, the directory structure is
as follows:

Directory Contents
/opt/sybase/SYBSsa7/bin Executable files
/opt/sybase/SYBSsa7/lib Shared objects and libraries
/opt/sybase/SYBSsa7/res String files

On AIX, the default root directory is /usr/lpp/sybase/SYBSsa7 instead of /opt/sybase/SYBSsa7.
♦ File extensions In the tables in this chapter, the shared objects are
listed with an extension .so. For HP-UX, the extension is .sl.
On the AIX operating system, shared objects that applications need to
link to are given the extension .a.
♦ Symbolic links Each shared object is installed as a symbolic link to a file of the same name with the additional extension .1 (one). For example, libdblib7.so is a symbolic link to the file libdblib7.so.1 in the same directory.


If patches are required to the Adaptive Server Anywhere installation, these will be supplied with extension .2, and the symbolic link must be redirected.
♦ Threaded and unthreaded applications Most shared objects are
provided in two forms, one of which has the additional characters _r
before the file extension. For example, in addition to libdblib7.so, there is
a file named libdblib7_r.so. In this case, threaded applications must be
linked to the _r shared object, while non-threaded applications must be
linked to the shared object without the _r characters.
$ For a description of the places where the software looks for files, see
"How Adaptive Server Anywhere locates files" on page 4 of the book ASA
Reference.

File naming conventions


Adaptive Server Anywhere uses consistent file naming conventions to help
identify and group system components.
These conventions include:
♦ Version number The Adaptive Server Anywhere version number is
indicated in the filename of the main server components (.exe and .dll
files).
For example, the file dbeng7.exe is a Version 7 executable.
♦ Language The language used in a language-resource library is
indicated by a two-letter code within its filename. These two-letter codes
are specified by ISO standard. For example, dblgen7.dll is the language
resource library for English. The two characters before the version
number indicate the language used in the library.
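The two conventions above can be applied mechanically. These helpers are hypothetical illustrations (nothing like them ships with ASA), and the regular expressions encode only the examples given: trailing digits before the extension as the version, and the two letters before the version in a language-resource library name as the ISO language code.

```python
# Illustrative helpers (not part of ASA) applying the file naming
# conventions described above.
import re

def version_of(filename):
    """Trailing digits before the extension give the major version."""
    m = re.search(r"(\d+)\.[A-Za-z]+$", filename)
    return int(m.group(1)) if m else None

def resource_library_language(filename):
    """For language-resource libraries such as dblgen7.dll, the two letters
    before the version number are an ISO language code. Assumes the dblg
    prefix seen in the example above."""
    m = re.match(r"^dblg([a-z]{2})(\d+)\.(?:dll|res)$", filename)
    return m.group(1) if m else None

print(version_of("dbeng7.exe"))                  # 7
print(resource_library_language("dblgen7.dll"))  # en
```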
Identifying other file types
The following table identifies the platform and function of Adaptive Server Anywhere files according to their file extension. Adaptive Server Anywhere follows standard file extension conventions where possible.


File extension Platform File type


.nlm Novell NetWare NetWare Loadable Module
.cnt, .ftg, .fts, Windows NT, Windows 95/98 Help system file
.gid, .hlp, .chm,
.chw
.lib Varies by development tool Static runtime libraries for
the creation of Embedded
SQL executables
.cfg, .cpr, .dat, Windows 95/98, Windows NT Sybase Adaptive Server
.loc, .spr, .srt, Enterprise components
.xlt
.cmd, .bat Windows 95/98, Windows NT Command files
.res NetWare, UNIX Language resource file for
non-Windows
environments
.dll Windows 95/98, Windows NT Dynamic Link Library
.so, .sl, .a UNIX Shared object (Sun Solaris
and IBM AIX) or shared
library (HP-UX) file. The
equivalent of a DLL on PC
platforms.

Database file names
Adaptive Server Anywhere databases are composed of the following elements:
♦ Database file This is used to store information in an organized
format. This file uses a .db file extension.
♦ Transaction log file This is used to record all changes made to data
stored in the database file. This file uses a .log file extension, and is
generated by Adaptive Server Anywhere if no such file exists and a log
file is specified to be used. A mirrored transaction log has the default
extension of .mlg.
♦ Write file If your application uses a write file, it typically has a .wrt file
extension.
♦ Compressed database file If you supply a read-only compressed
database file, it typically has extension .cdb.
These files are updated, maintained and managed by the Adaptive Server
Anywhere relational database management system.


Using InstallShield templates for deployment


Users of InstallShield Professional 5.5 and up can use the SQL Anywhere
Studio InstallShield Template Projects to ease the deployment workload.
Templates for deploying a network server, personal server, or client
interfaces can be found in the SQL Anywhere 7/deployment folder.
You can use the templates in whole or in part. For example,
♦ You can build your install directly from the template project, adding
your application’s files, registry entries and so on to the template
project. You can also remove those portions of the template that don’t
apply to your application. For example, extra server interfaces may not
be required by your application.
♦ You can start a fresh install project and cut and paste only those portions
of the template that apply to your project.
If you already have an install project for your application, you can include
portions of the template as required or launch the template install (possibly
silently) from your install.

v To add a template project to your InstallShield IDE:


1 Start InstallShield IDE.
2 Choose File➤Open.
3 Navigate to your SQL Anywhere 7 installation and to the deployment folder.
For example, navigate to
C:\Program Files\Sybase\SQL Anywhere 7\deployment.
4 Open the folder corresponding to the type of object you want to deploy.
You can choose NetworkServer, PersonalServer, or Client.
5 Select the file with the .ipr extension.
The project opens in the InstallShield IDE. The Projects pane displays
an icon for the template.
The templates will be modified at install time so that the paths to the
individual files listed in all of the .fgl files point to the actual install of
ASA. Simply load the template in the InstallShield IDE, build the media,
and the template will run immediately.


Notes:
When building the media, you will see warnings about empty file groups. These file groups have been added to the templates as placeholders for your application’s files. To remove the warnings, you can either add your application’s files to the file groups, or delete or rename the file groups.


Using a silent installation for deployment


Silent installations run without user input and with no indication to the user
that an installation is occurring. On Windows operating systems you can call
the Adaptive Server Anywhere InstallShield setup program from your own
setup program in such a way that the Adaptive Server Anywhere installation
is silent. Silent installs are also used with Microsoft’s Systems Management
Server (see "SMS Installation" on page 877).
You can use a silent installation for any of the deployment models described
in "Deployment models" on page 868.

Creating a silent install


The installation options used by a silent installation are obtained from a response file. The response file is created by running the Adaptive Server Anywhere setup program using the -r command-line option. A silent install is performed by running setup using the -s command-line option.

v To create a silent install:


1 (Optional) Remove any existing installations of Adaptive Server
Anywhere.
2 Open a system command prompt, and change to the directory containing
the install image (including setup.exe, setup.ins, and so on).
3 Install the software, using Record mode.
Type the following command:
setup -r
This command runs the Adaptive Server Anywhere setup program and
creates the response file from your selections. The response file is
named setup.iss, and is located in your Windows or Winnt directory.
This file contains the responses you made to the dialog boxes during
installation.
When run in record mode, the installation program does not offer to
reboot your operating system, even if a reboot is needed.
4 Install Adaptive Server Anywhere using the options and settings that you want to be used when you deploy Adaptive Server Anywhere on the end-user’s machine for use with your application. You can override the paths during the silent install.


Running a silent install


Your own installation program must call the Adaptive Server Anywhere silent install using the -s command-line option. This section describes how to use a silent install.

v To use a silent install:


1 Add the command to invoke the Adaptive Server Anywhere silent install
to your installation procedure.
If the response file is present in the install image directory, you can run
the silent install by entering the following command from the directory
containing the install image:
setup -s
If the response file is located elsewhere, you must specify the response file location using the -f1 option.
setup -s -f1"c:\winnt\setup.iss"
To invoke the install from another InstallShield script you could use the
following:
DoInstall( "ASA_install_image_path\SETUP.INS",
"-s", WAIT );
You can use command-line options to override the choices of paths for
both the Adaptive Server Anywhere directory and the shared directory:
setup TARGET_DIR=dirname SHARED_DIR=shared_dir -s
The TARGET_DIR and SHARED_DIR arguments must precede all
other command-line options.
2 Check whether the target computer needs to reboot.
Setup creates a file named silent.log in the target directory. This file
contains a single section called ResponseResult containing the
following line:
Reboot=X
This line indicates whether the target computer needs to be rebooted to
complete the installation, and has a value of 0 or 1, with the following
meanings.
♦ X=0 No reboot is needed.
♦ X=1 The BATCH_INSTALL flag was set during the installation,
and the target computer does need to be rebooted. The installation
procedure that called the silent install is responsible for checking
the Reboot entry and for rebooting the target computer, if necessary.


3 Check that the setup completed properly.


Setup creates a file named setup.log in the directory containing the
response file. The log file contains a report on the silent install. The last
section of this file is called ResponseResult, and contains the following
line:
ResultCode=X
This line indicates whether the installation was successful. A non-zero
value of X indicates an error occurred during installation. For a
description of the error codes, see your InstallShield documentation.
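The two checks in steps 2 and 3 can be sketched as follows, assuming the ResponseResult sections are plain INI text as the descriptions above indicate. The function names are illustrative.

```python
# Sketch of the post-install checks described above: read the Reboot flag
# from silent.log and the ResultCode from setup.log.
import configparser

def _response_result(log_text, key):
    cp = configparser.ConfigParser()
    cp.read_string(log_text)
    return cp.getint("ResponseResult", key)

def needs_reboot(silent_log_text):
    """silent.log: Reboot=1 means the target computer must be rebooted."""
    return _response_result(silent_log_text, "Reboot") == 1

def install_succeeded(setup_log_text):
    """setup.log: a non-zero ResultCode indicates an install error."""
    return _response_result(setup_log_text, "ResultCode") == 0

print(needs_reboot("[ResponseResult]\nReboot=1\n"))           # True
print(install_succeeded("[ResponseResult]\nResultCode=0\n"))  # True
```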

SMS Installation
Microsoft Systems Management Server (SMS) requires a silent install that does not reboot the target computer. The Adaptive Server Anywhere silent install does not reboot the computer.
Your SMS distribution package should contain the response file, the install
image and the asa7.pdf package definition file (provided on the Adaptive
Server Anywhere CD ROM in the \extras folder). The setup command line in
the PDF file contains the following options:
♦ The -s option for a silent install
♦ The -SMS option to indicate that it is being invoked by SMS.
♦ The -m option to generate a MIF file. The MIF file is used by SMS to
determine whether the installation was successful.


Deploying client applications


In order to deploy a client application that runs against a network database
server, you must provide each end user with the following items:
♦ Client application The application software itself is independent of
the database software, and so is not described here.
♦ Database interface files The client application requires the files for
the database interface it uses (ODBC, JDBC, Embedded SQL, or Open
Client).
♦ Connection information Each client application needs database
connection information.
The interface files and connection information required varies with the
interface your application is using. Each interface is described separately in
the following sections.

Deploying ODBC clients


Each ODBC client machine must have the following:
♦ A working ODBC installation ODBC files and instructions for their redistribution are available from Microsoft Corporation. They are not described in detail here.
Microsoft provides its ODBC Driver Manager for Windows 95/98 and for Windows NT. Third-party vendors such as Intersolv provide ODBC driver managers for UNIX. There is no ODBC Driver Manager for Windows CE.
ODBC applications can run without a driver manager, but this is generally not recommended except on platforms for which no ODBC driver manager is available.
♦ The Adaptive Server Anywhere ODBC driver This is the file
dbodbc7.dll together with some additional files.
$ For more information, see "ODBC driver required files" on
page 879.
♦ Connection information The client application must have access to
the information needed to connect to the server. This information is
typically included in an ODBC data source.


ODBC driver required files


The following table shows the files needed for a working Adaptive Server Anywhere ODBC driver. These files should be placed in a single directory. The Adaptive Server Anywhere installation places them all in the operating-system subdirectory of your SQL Anywhere installation directory (for example, win32).

Description                32-bit Windows   Windows CE    UNIX
ODBC driver                dbodbc7.dll      dbodbc7.dll   libdbodbc7.so, libdbtasks7.so
ODBC translator            dbodtr7.dll      N/A           N/A
Language-resource library  dblgen7.dll      N/A           dblgen7.res
IPX network ports          dbipx7.dll       N/A           N/A
Connection dialog          dbcon7.dll       N/A           N/A

Notes
♦ Your end user must have a working ODBC installation, including the driver manager. Instructions for deploying ODBC are included in the Microsoft ODBC SDK.
♦ The IPX port library handles network communications over IPX. It is
required only if the client is working with the network database server
over an IPX network.
♦ The Connection dialog is needed if your end users are to create their
own data sources, if they need to enter user IDs and passwords when
connecting to the database, or if they need to display the Connection
dialog for any other purpose.
♦ The ODBC translator is required only if your application relies on OEM
to ANSI character set conversion.
♦ On HP-UX, all files listed with extension .so instead have extension .sl.
On AIX, the files have extension .so or .a.
$ For more information, see "Creating databases for Windows CE"
on page 305, and "Using ODBC on Windows " on page 149 of the book
ASA Programming Interfaces Guide.

Configuring the ODBC driver


In addition to copying the ODBC driver files onto disk, your Setup program
must also make a set of registry entries to install the ODBC driver properly.


Windows NT and Windows 95/98
The Adaptive Server Anywhere Setup program makes changes to the Windows NT and Windows 95/98 system registry to identify and configure the ODBC driver. If you are building a setup program for your end users, you should make the same settings.
You can use the Windows regedit utility to inspect registry entries.
The Adaptive Server Anywhere ODBC driver is identified to the system by a
set of registry values in the following registry key:
HKEY_LOCAL_MACHINE\
SOFTWARE\
ODBC\
ODBCINST.INI\
Adaptive Server Anywhere 7.0
The values are as follows:

Value name Value type Value data


Driver String path\dbodbc7.dll
Setup String path\dbodbc7.dll

There is also a registry value in the following key:


HKEY_LOCAL_MACHINE\
SOFTWARE\
ODBC\
ODBCINST.INI\
ODBC Drivers
The value is as follows:

Value name Value type Value data


Adaptive Server Anywhere 7.0 String Installed

Third-party ODBC drivers
If you are using a third-party ODBC driver on an operating system other than Windows, consult the documentation for that driver on how to configure the ODBC driver.

Deploying connection information


ODBC client connection information is generally deployed as an ODBC data
source. You can deploy an ODBC data source in one of the following ways:
♦ Programmatically Add a data source description to your end-user’s registry or ODBC initialization files.


♦ Manually Provide your end-users with instructions, so that they can create an appropriate data source on their own machine.
You create a data source manually using the ODBC Administrator, from
the User DSN tab or the System DSN tab. The Adaptive Server
Anywhere ODBC driver displays the configuration dialog for entering
settings. Data source settings include the location of the database file, the name of the database server, and any startup parameters and other options.
This section provides you with the information you need to know for either
approach.
Types of data source    There are three kinds of data sources: User data sources, System data sources, and File data sources.
User data source definitions are stored in the part of the registry containing
settings for the specific user currently logged on to the system. System data
sources, however, are available to all users and to Windows NT or
Windows 95/98 services, which run regardless of whether a user is logged
onto the system or not. Given a correctly configured System data source
named MyApp, any user can use that ODBC connection by providing
DSN=MyApp in the ODBC connection string.
File data sources are not held in the registry, but are held in a special
directory. A connection string must provide a FileDSN connection parameter
to use a File data source.
Data source registry entries    Each user data source is identified to the system by registry entries.
You must enter a set of registry values in a particular registry key. For User
data sources the key is as follows:
HKEY_CURRENT_USER\
SOFTWARE\
ODBC\
ODBC.INI\
userdatasourcename
For System data sources the key is as follows:
HKEY_LOCAL_MACHINE\
SOFTWARE\
ODBC\
ODBC.INI\
systemdatasourcename
The key contains a set of registry values, each of which corresponds to a
connection parameter. For example, the ASA 7.0 Sample key corresponding
to the ASA 7.0 Sample data source contains the following settings:


Value name Value type Value data


Autostop String Yes
DatabaseFile String Path\asademo.db
Description String Adaptive Server Anywhere Sample Database
Driver String Path\win32\dbodbc7.dll
PWD String Sql
Start String Path\win32\dbeng7.exe -c 8m
UID String Dba

In these entries, path is the Adaptive Server Anywhere installation directory.


In addition, you must add the data source to the list of data sources in the
registry. For User data sources, you use the following key:
HKEY_CURRENT_USER\
SOFTWARE\
ODBC\
ODBC.INI\
ODBC Data Sources
For System data sources, use the following key:
HKEY_LOCAL_MACHINE\
SOFTWARE\
ODBC\
ODBC.INI\
ODBC Data Sources
The value associates each data source with an ODBC driver. The value name
is the data source name, and the value data is the ODBC driver name. For
example, the User data source installed by Adaptive Server Anywhere is
named ASA 7.0 Sample, and has the following value:

Value name Value type Value data


ASA 7.0 Sample String Adaptive Server Anywhere 7.0
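To make the two required pieces concrete, the following Python sketch assembles what a setup program must write for a User data source: the parameter values under the data source key, and the entry in the ODBC Data Sources list. The data source name and file path are hypothetical; an installer would write these values with the Win32 registry API.

```python
# Sketch: registry entries for a hypothetical User data source "MyApp".
def user_dsn_entries(dsn, database_file, uid=None, pwd=None):
    root = r"HKEY_CURRENT_USER\SOFTWARE\ODBC\ODBC.INI"
    values = {"DatabaseFile": database_file, "Autostop": "Yes"}
    if uid:
        values["UID"] = uid
    if pwd:
        # Caution: registry values are stored in plain text.
        values["PWD"] = pwd
    return {
        # The data source key holds one value per connection parameter.
        root + "\\" + dsn: values,
        # The list key maps the data source name to the ODBC driver name.
        root + r"\ODBC Data Sources": {dsn: "Adaptive Server Anywhere 7.0"},
    }

entries = user_dsn_entries("MyApp", r"C:\myapp\myapp.db", uid="Dba")
```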

Caution: ODBC settings are easily viewed


User data source configurations can contain sensitive database settings such as a user’s ID and password. These settings are stored in the registry in plain text, and can be viewed using the Windows registry editors regedit.exe or regedt32.exe, which Microsoft provides with the operating system. You can choose to encrypt passwords, or to require users to enter them when connecting.


Required and optional connection parameters    You can identify the data source name in an ODBC configuration string in this manner:
DSN=userdatasourcename
The DSN parameter identifies which user data source or system data source from the Registry is to be used for the ODBC connection.
When a DSN parameter is provided in the connection string, the Current
User data source definitions in the Registry are searched, followed by
System data sources. File data sources are searched only when FileDSN is
provided in the ODBC connection string.
The following cases illustrate the implications for the user and the developer when a data source exists and is included in the application’s connection string as a DSN or FileDSN parameter.

♦ If the data source contains the ODBC driver name and location, the name of the database file/server, startup parameters, and the user ID and password, then the connection string needs no additional information, and the user must supply no additional information.
♦ If the data source contains only the name and location of the ODBC driver, then the connection string must also identify the name of the database file/server and, optionally, the user ID and the password. The user must supply the user ID and password if they are not provided in the DSN or the ODBC connection string.
♦ If the data source does not exist, then the connection string must name the ODBC driver to be used, in the format Driver={ODBC driver name}, as well as the database, the database file, or the database server and, optionally, other connection parameters such as the user ID and password. The user must supply the user ID and password if they are not provided in the ODBC connection string.
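These cases can be sketched as connection strings. The following Python snippet builds one string per case; the data source name, database file, and credentials are placeholders, not values your deployment will necessarily use.

```python
def build_conn_string(**params):
    # Join ODBC connection parameters into "key=value;key=value" form.
    return ";".join("%s=%s" % (k, v) for k, v in params.items())

# Case 1: the data source holds everything; only the DSN is needed.
full_dsn = build_conn_string(DSN="MyApp")

# Case 2: the data source names only the driver; the string adds the
# database file and credentials.  DBF, UID, and PWD are illustrative keys.
partial_dsn = build_conn_string(DSN="MyApp", DBF=r"C:\myapp\myapp.db",
                                UID="Dba", PWD="sql")

# Case 3: no data source exists; the driver is named explicitly.
no_dsn = build_conn_string(Driver="{Adaptive Server Anywhere 7.0}",
                           DBF=r"C:\myapp\myapp.db", UID="Dba", PWD="sql")
```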

$ For more information on ODBC connections and configurations, see the following:
♦ "Connecting to a Database" on page 33.


♦ The Open Database Connectivity (ODBC) SDK, available from Microsoft.

Deploying Embedded SQL clients


Deploying Embedded SQL clients involves the following:
♦ Installed files Each client machine must have the files required for an
Adaptive Server Anywhere Embedded SQL client application.
♦ Connection information The client application must have access to
the information needed to connect to the server. This information may
be included in an ODBC data source.

Installing files for Embedded SQL clients


The following table shows which files are needed for Embedded SQL
clients.

Description 32-bit Windows UNIX


Interface library dblib7.dll libdblib7.so,
libdbtasks7.so
Language resource library dblgen7.dll dblgen7.res
IPX network dbipx7.dll N/A
communications
Connection Dialog dbcon7.dll N/A

Notes ♦ The network ports DLL is not required if the client is working only with
the personal database server.
♦ If the client application uses an ODBC data source to hold the
connection parameters, your end user must have a working ODBC
installation. Instructions for deploying ODBC are included in the
Microsoft ODBC SDK.
$ For more information on deploying ODBC information, see
"Deploying ODBC clients" on page 878.
♦ The Connection dialog is needed if your end users will be creating their
own data sources, if they will need to enter user IDs and passwords
when connecting to the database, or if they need to display the
Connection dialog for any other purpose.
♦ On HP-UX, all files listed with extension .so instead have extension .sl.
On AIX, the files have extension .so or .a.


Connection information
You can deploy Embedded SQL connection information in one of the
following ways:
♦ Manual Provide your end-users with instructions for creating an
appropriate data source on their machine.
♦ File Distribute a file that contains connection information in a format
that your application can read.
♦ ODBC data source You can use an ODBC data source to hold
connection information. In this case, you need a subset of the ODBC
redistributable files, available from Microsoft. For details see
"Deploying ODBC clients" on page 878.
♦ Hard coded You can hard code connection information into your
application. This is an inflexible method, which may be limiting, for
example when databases are upgraded.
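As a sketch of the File option above, an application might ship a small text file of connection parameters and read it at startup. The file format and parameter names here are illustrative conventions for this example, not an Adaptive Server Anywhere standard.

```python
def read_connection_file(path):
    """Read key=value lines and return an ODBC-style connection string.

    Blank lines and lines starting with "#" are ignored.  Sketch only;
    a real application would also validate the parameters it reads.
    """
    params = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                params.append("%s=%s" % (key.strip(), value.strip()))
    return ";".join(params)
```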

Deploying JDBC clients


In addition to a Java Runtime Environment, each JDBC client requires the
Sybase jConnect JDBC driver. Instructions on deploying jConnect can be
found on the Sybase Web site at the following location:
http://www.sybase.com/products/internet/jconnect
Your Java application needs a URL in order to connect to the database. This
URL specifies the driver, the machine to use, and the port on which the
database server is listening.
$ For more information on URLs, see "Supplying a URL for the server"
on page 614.
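As an illustration, jConnect URLs generally follow the jdbc:sybase:Tds:machine-name:port-number pattern. The Python sketch below composes such a URL; the host and port are placeholders, and the ServiceName suffix for naming a specific database is an assumption to confirm against the page referenced above.

```python
def jconnect_url(host, port, database=None):
    # The URL names the driver protocol, the machine, and the port on which
    # the database server is listening.
    url = "jdbc:sybase:Tds:%s:%d" % (host, port)
    if database:
        # Assumed form for selecting a database; verify against the
        # jConnect documentation for your version.
        url += "?ServiceName=%s" % database
    return url

print(jconnect_url("dbhost", 2638))   # 2638 is a commonly used default port
```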

Deploying Open Client applications


In order to deploy Open Client applications, each client machine needs the
Sybase Open Client product. You must purchase the Open Client software
separately from Sybase. It contains its own installation instructions.
$ Connection information for Open Client clients is held in the interfaces
file. For information on the interfaces file, see the Open Client
documentation and "Configuring Open Servers" on page 994.


Deploying Interactive SQL


Subject to your license agreement, and depending on the nature of your
customer base and application, you may want to deploy Interactive SQL as a
technical support aid.
If your customer application is running on machines with limited resources,
you may want to deploy the C version of Interactive SQL, (dbisqlc.exe)
instead of the development version (dbisql.exe and its associated Java
classes).
The dbisqlc executable requires libunicl.dll, in addition to the standard client-side libraries.


Deploying database servers


You can deploy a database server by making the SQL Anywhere Studio
Setup program available to your end-users. By selecting the proper option,
each end-user is guaranteed to get the files they need.
In order to run a database server, you need to install a set of files. The files
are listed in the following table. All redistribution of these files is governed
by the terms of your license agreement. You must confirm whether you have
the right to redistribute the database server files before doing so.

32-bit Windows UNIX NetWare


dbeng7.exe dbeng7 N/A
dbsrv7.exe dbsrv7 dbsrv7.nlm
dbserv7.dll dbserv7.so, N/A
libdbtasks7.so
dblgen7.dll dblgen7.res dblgen7.res
dbjava7.dll (1) libdbjava7.so (1) dbjava7.nlm (1)
dbctrs7.dll N/A N/A
dbextf.dll (2) libdbextf.so (2) dbextf.nlm (2)
asajdbc.zip (1,3) asajdbc.zip (1,3) asajdbc.zip (1,3)
classes.zip (1,3) classes.zip (1,3) classes.zip (1,3)
dbmem.vxd (5)
1. Required only if using Java in the database.
2. Required only if using system extended stored procedures and functions (xp_).
3. Install such that the CLASSPATH environment variable can locate classes in this file.
4. Required only if JAVA_INPUT_OUTPUT is enabled.
5. Required on Windows 95/98 if using dynamic cache sizing.

Notes ♦ Depending on your situation, you should choose whether to deploy the
personal database server (dbeng7) or the network database server
(dbsrv7).
♦ The Java DLL (dbjava7.dll) is required only if the database server is to
use the Java in the Database functionality.
♦ The table does not include files needed to run command-line
applications such as dbbackup.
$ For information about deploying database utilities, see "Deploying
database utilities and Interactive SQL" on page 889.


♦ The zip files are required only for applications that use Java in the database, and must be installed in a location listed in the user’s CLASSPATH environment variable.

Deploying databases
You deploy a database file by installing the database file onto your end user’s
disk.
As long as the database server shuts down cleanly, you do not need to deploy
a transaction log file with your database file. When your end-user starts
running the database, a new transaction log is created.
For SQL Remote applications, the database should be created in a properly
synchronized state, in which case no transaction log is needed. You can use
the Extraction utility for this purpose.

Deploying databases on read-only media


You can distribute databases on read-only media, such as a CD-ROM, as
long as you run them in read-only mode or use a write file.
$ For more information on running databases in read-only mode, see "–r
command-line option" on page 33 of the book ASA Reference.
To enable changes to be made to Adaptive Server Anywhere databases
distributed on read-only media such as a CD-ROM, you can use a write file.
The write file records changes made to a read-only database file, and is
located on a read/write storage media such as a hard disk.
In this case, the database file is placed on the CD-ROM, while the write file
is placed on disk. The connection is made to the write file, which maintains a
transaction log file on disk.
$ For more information on write files, see "Working with write files" on
page 792.


Deploying embedded database applications


This section provides information on deploying embedded database
applications, where the application and the database both reside on the same
machine.
An embedded database application includes the following:
♦ Client application This includes the Adaptive Server Anywhere client
requirements.
$ For information on deploying client applications, see "Deploying
client applications" on page 878.
♦ Database server The Adaptive Server Anywhere personal database
server.
$ For information on deploying database servers, see "Deploying
database servers" on page 887.
♦ SQL Remote If your application uses SQL Remote replication, you
must deploy the SQL Remote Message Agent.
♦ The database You must deploy a database file holding the data the
application uses.

Deploying personal servers


When you deploy an application that uses the personal server, you need to
deploy both the client application components and the database server
components.
The language resource library (dblgen7.dll) is shared between the client and
the server. You need only one copy of this file.
It is recommended that you follow the Adaptive Server Anywhere
installation behavior, and install the client and server files in the same
directory.
Remember to provide the Java zip files and the Java DLL if your application
takes advantage of Java in the Database.

Deploying database utilities and Interactive SQL


If you need to deploy database utilities (such as dbbackup.exe) along with
your application, then you need the utility executable together with the
following additional files:


Description 32-bit Windows UNIX


Database tools library dbtool7.dll libdbtools7.so,
libdbtasks7.so
Additional library dbwtsp7.dll libdbwtsp7.so
Language resource library dblgen7.dll dblgen7.res
Connection dialog dbcon7.dll
(Interactive SQL only)

Notes ♦ On HP-UX, all files listed with extension .so instead have extension .sl.
On AIX, the files have extension .so or .a.
♦ The database tools are Embedded SQL applications, and you must
supply the files required for such applications, as listed in "Deploying
Embedded SQL clients" on page 884.

Deploying SQL Remote


If you are deploying the SQL Remote Message Agent, you need to include
the following files:

Description 32-bit Windows UNIX


Message Agent dbremote.exe dbremote
Database tools library dbtool7.dll libdbtools7.so,
libdbtasks7.so
Additional library dbwtsp7.dll libdbwtsp7.so
Language resource library dblgen7.dll dblgen7.res
VIM message link library (1) dbvim7.dll
SMTP message link library (1) dbsmtp7.dll
FILE message link library (1) dbfile7.dll libdbfile7.so
MAPI message link library (1) dbmapi7.dll
Interface library dblib7.dll
FTP message link library dbftp7.dll
1 Only deploy the library for the message link you are using.

Notes ♦ It is recommended that you follow the Adaptive Server Anywhere installation behavior, and install the SQL Remote files in the same directory as the Adaptive Server Anywhere files.


♦ On HP-UX, all files listed with extension .so instead have extension .sl.
On AIX, the files have extension .so or .a.


CHAPTER 29

Accessing Remote Data

About this chapter Adaptive Server Anywhere can access data located on different servers, both
Sybase and non-Sybase, as if the data were stored on the local server.
This chapter describes how to configure Adaptive Server Anywhere to
access remote data.
Contents
Topic Page
Introduction 894
Basic concepts 896
Working with remote servers 898
Working with external logins 903
Working with proxy tables 905
Example: a join between two remote tables 910
Accessing multiple local databases 912
Sending native statements to remote servers 913
Using remote procedure calls (RPCs) 914
Transaction management and remote data 917
Internal operations 919
Troubleshooting remote data access 923


Introduction
Using Adaptive Server Anywhere you can:
♦ Access data in relational databases such as Sybase, Oracle, and DB2.
♦ Access desktop data such as Excel spreadsheets, MS-Access databases,
FoxPro, and text files.
♦ Access any other data source that supports an ODBC interface.
♦ Perform joins between local and remote data.
♦ Perform joins between tables in separate Adaptive Server Anywhere
databases.
♦ Use Adaptive Server Anywhere features on data sources that would
normally not have that ability. For instance, you could use a Java
function against data stored in Oracle, or perform a subquery on
spreadsheets. Adaptive Server Anywhere will compensate for features
not supported by a remote data source by operating on the data after it is
retrieved.
♦ Use Adaptive Server Anywhere to move data from one location to
another using insert-select.
♦ Access remote servers directly using passthrough mode.
♦ Execute remote procedure calls to other servers.
Adaptive Server Anywhere allows access to the following external data
sources:
♦ Adaptive Server Anywhere
♦ Adaptive Server Enterprise
♦ Oracle
♦ IBM DB2
♦ Microsoft SQL Server
♦ Other ODBC data sources

Platform availability
The remote data access features are supported on the Windows 95/98 and
Windows NT platforms only.


Accessing remote data from PowerBuilder DataWindows


You can access remote data from a PowerBuilder DataWindow by setting the
DBParm Block parameter to 1 on connect.
♦ In the design environment, you can set the Block parameter by accessing
the Transaction tab in the Database Profile Setup dialog and setting the
Retrieve Blocking Factor to 1.
♦ In a connection string, use the following phrase:
DBParm="Block=1"


Basic concepts
This section describes the basic concepts required to access remote data.

Remote table mappings


Adaptive Server Anywhere presents tables to a client application as if all the
data in the tables were stored in the database to which the application is
connected. Internally, when a query involving remote tables is executed, the
storage location is determined, and the remote location is accessed so that
data can be retrieved.
To have remote tables appear as local tables to the client, you create local
proxy tables that map to the remote data.

v To create a proxy table:


1 Define the server where the remote data is located. This specifies the
type of server and location of the remote server.
$ For more information, see "Working with remote servers" on
page 898.
2 Map the local user login information to the remote server user login
information if the logins on the two servers are different.
$ For more information, see "Working with external logins" on
page 903.
3 Create the proxy table definition. This specifies the mapping of a local
proxy table to the remote table. This includes the server where the
remote table is located, the database name, owner name, table name, and
column names of the remote table.
$ For more information, see "Working with proxy tables" on
page 905.
Administering remote table mappings    To manage remote table mappings and remote server definitions, you can use Sybase Central, or you can use a tool such as Interactive SQL and execute the SQL statements directly.


Server classes
A server class is assigned to each remote server. The server class specifies
the access method used to interact with the server. Different types of remote
servers require different access methods. The server classes provide
Adaptive Server Anywhere detailed server capability information. Adaptive
Server Anywhere adjusts its interaction with the remote server based on
those capabilities.
There are currently two groups of server classes. The first is JDBC-based;
the second is ODBC-based.
The JDBC-based server classes are:
♦ asajdbc for Adaptive Server Anywhere (version 6 and later)
♦ asejdbc for Adaptive Server Enterprise and SQL Server (version 10 and
later)
The ODBC-based server classes are:
♦ asaodbc for Adaptive Server Anywhere (version 5.5 and later)
♦ aseodbc for Adaptive Server Enterprise and SQL Server (version 10
and later)
♦ db2odbc for IBM DB2
♦ mssodbc for Microsoft SQL Server
♦ oraodbc for Oracle servers (version 8.0 and later)
♦ odbc for all other ODBC data sources

$ For a full description of remote server classes, see "Server Classes for
Remote Data Access" on page 925.


Working with remote servers


Before you can map remote objects to a local proxy table, you must define
the remote server where the remote object is located. When you define a
remote server, an entry is added to the sysservers table for the remote server.
This section describes how to create, alter, and delete a remote server
definition.

Creating remote servers


Use the CREATE SERVER statement to set up remote server definitions.
You can execute the statements directly, or use Sybase Central.
For ODBC connections, each remote server corresponds to an ODBC data
source. For some systems, including Adaptive Server Anywhere, each data
source describes a database, so a separate remote server definition is needed
for each database.
You must have RESOURCE authority to create a server.
Example 1 The following statement creates an entry in the sysservers table for the
Adaptive Server Enterprise named ASEserver:
CREATE SERVER ASEserver
CLASS 'asejdbc'
USING 'rimu:6666'
where:
♦ ASEserver is the name of the remote server
♦ asejdbc specifies the server is an Adaptive Server Enterprise and the
connection to it is JDBC-based
♦ rimu:6666 is the machine name and the TCP/IP port number where the
remote server is located
$ For more information about the optional database name, see "Supplying
a URL for the server" on page 614, or "USING parameter value in the
CREATE SERVER statement" on page 928.
Example 2 The following statement creates an entry in the sysservers table for the
ODBC-based Adaptive Server Anywhere named testasa:
CREATE SERVER testasa
CLASS 'asaodbc'
USING 'test4'
where:


♦ testasa is the name by which the remote server is known within this
database.
♦ asaodbc specifies that the server is an Adaptive Server Anywhere and
the connection to it uses ODBC.
♦ test4 is the ODBC data source name.
$ For a full description of the CREATE SERVER statement, see
"CREATE SERVER statement" on page 464 of the book ASA Reference.

Creating remote servers using Sybase Central


You can create a remote server using a wizard in Sybase Central. For more
information, see "Creating remote servers" on page 898.

v To create a remote server (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).
2 Open the Remote Servers folder for that database.
3 Double-click Add Remote Server.
4 On the first page of the wizard, enter a name to use for the remote
server. This name simply refers to the remote server from within the
local database; it does not need to correspond with the name the server
supplies. Click Next.
5 Select an appropriate class for the server and click Next.
6 Select a data access method (JDBC or ODBC), and supply connection
information.
♦ For JDBC, supply a URL in the form machine-name:port-number
♦ For ODBC, supply a data source name.
The data access method (JDBC or ODBC) is the method used by
Adaptive Server Anywhere to access the remote database. This is not
related to the method used by Sybase Central to connect to your
database.
7 Click Next. Specify whether you want the remote server to be read-only.
Click Next again.
8 Click Add to specify external logins and passwords for the remote
server. When finished, click OK to exit the dialog and return to the
wizard. Click Next.
9 Verify that the server name and connection parameters are correct.


10 Click Finish to create the remote server definition.

Deleting remote servers


You can use Sybase Central or a DROP SERVER statement to delete a
remote server from the Adaptive Server Anywhere system tables. All remote
tables defined on that server must already be dropped for this action to
succeed.
You must have DBA authority to delete a remote server.

v To delete a remote server (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).
2 Open the Remote Servers folder.
3 Right-click the remote server you want to delete and choose Delete from
the popup menu.

v To delete a remote server (SQL):


1 Connect to the host database from Interactive SQL using the jConnect
driver (for JDBC connectivity).
2 Execute a DROP SERVER statement.

Example The following statement drops the server named testasa:


DROP SERVER testasa

$ See also
♦ "DROP SERVER statement" on page 511 of the book ASA Reference

Altering remote servers


You can use Sybase Central or an ALTER SERVER statement to modify the
attributes of a server. These changes do not take effect until the next
connection to the remote server.
You must have RESOURCE authority to alter a server.

v To alter the properties of a remote server (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).


2 Open the Remote Servers folder for that database.


3 Right-click the remote server and choose Properties from the popup
menu.
4 Configure the various remote server properties.

v To alter the properties of a remote server (SQL):


1 Connect to the host database from Interactive SQL using the jConnect
driver (for JDBC connectivity).
2 Execute an ALTER SERVER statement.

Example The following statement changes the server class of the server named
ASEserver to aseodbc:
ALTER SERVER ASEserver
CLASS 'aseodbc'
The Data Source Name for the server is ASEserver.
The ALTER SERVER statement can also be used to enable or disable a
server’s known capabilities.
$ See also
♦ "Property Sheet Descriptions" on page 1061
♦ "ALTER SERVER statement" on page 390 of the book ASA Reference

Listing the remote tables on a server


It may be helpful when you are configuring your Adaptive Server Anywhere
to get a list of the remote tables available on a particular server. The
sp_remote_tables procedure returns a list of the tables on a server.
sp_remote_tables servername
[,tablename]
[, owner ]
[, database]
If tablename, owner, or database is given, the list of tables is limited to only
those that match.
For example, to get a list of all of the Microsoft Excel worksheets available
from an ODBC data source named excel:
sp_remote_tables excel
Or to get a list of all of the tables in the production database in an ASE
named asetest, owned by 'fred':


sp_remote_tables asetest, null, fred, production


$ For more information, see "sp_remote_tables system procedure" on
page 977 of the book ASA Reference.

Listing remote server capabilities


The sp_servercaps procedure displays information about a remote server’s
capabilities. Adaptive Server Anywhere uses this capability information to
determine how much of a SQL statement can be passed off to a remote
server.
The system tables which contain server capabilities are not populated until
after Adaptive Server Anywhere first connects to the remote server. This
information comes from the SYSCAPABILITY and
SYSCAPABILITYNAME system tables. The servername specified must be
the same servername used in the CREATE SERVER statement.
Issue the stored procedure sp_servercaps as follows:
sp_servercaps servername

$ For more information, see "sp_servercaps system procedure" on


page 978 of the book ASA Reference.


Working with external logins


By default, Adaptive Server Anywhere uses the names and passwords of its
clients whenever it connects to a remote server on behalf of those clients.
However, this default can be overridden by creating external logins. External
logins are alternate login names and passwords to be used when
communicating with a remote server.
If you are using an integrated login, the login name and password of the client are the database user ID and password that the integrated login maps to in syslogins.
$ For more information, see "Using integrated logins" on page 77.

Creating external logins


You can create an external login using either Sybase Central or the CREATE
EXTERNLOGIN statement.
Only the login-name and the DBA account can add or modify an external
login.

v To create an external login (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).
2 Open the Remote Servers folder for that database and select the remote
server.
3 Right-click the remote server and choose Properties from the popup
menu.
4 On the External Logins tab of the property sheet, click Add External
Login and configure the settings in the resulting dialog.
5 Click OK to save the changes.

v To create an external login (SQL):


1 Connect to the host database from Interactive SQL using the jConnect
driver (for JDBC connectivity).
2 Execute a CREATE EXTERNLOGIN statement.

Example The following statement allows the local user fred to gain access to the
server ASEserver, using the remote login frederick with password banana.


CREATE EXTERNLOGIN fred


TO ASEserver
REMOTE LOGIN frederick
IDENTIFIED BY banana
$ See also
♦ "Add External Login dialog" on page 1043
♦ "Property Sheet Descriptions" on page 1061
♦ "CREATE EXTERNLOGIN statement" on page 443 of the book ASA
Reference

Dropping external logins


You can use either Sybase Central or a DROP EXTERNLOGIN statement to
delete an external login from the Adaptive Server Anywhere system tables.
Only the login-name and the DBA account can delete an external login.

v To delete an external login (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).
2 Open the Remote Servers folder.
3 Right-click the remote server and choose Delete from the popup menu.

v To delete an external login (SQL):


1 Connect to the host database from Interactive SQL using the jConnect
driver (for JDBC connectivity).
2 Execute a DROP EXTERNLOGIN statement.

Example The following statement drops the external login for the local user fred
created in the example above:
DROP EXTERNLOGIN fred TO ASEserver

$ See also
♦ "DROP EXTERNLOGIN statement" on page 509 of the book ASA
Reference


Working with proxy tables


Location transparency of remote data is enabled by creating a local proxy
table that maps to the remote object. To create a proxy table you use one of
the following statements:
♦ If the table already exists at the remote storage location, use the
CREATE EXISTING TABLE statement. This statement defines the
proxy table for an existing table on the remote server.
♦ If the table does not exist at the remote storage location, use the
CREATE TABLE statement. This statement creates a new table on the
remote server, and also defines the proxy table for that table.

Specifying proxy table locations


The AT keyword is used with both CREATE TABLE and CREATE
EXISTING TABLE to define the location of an existing object. This location
string has 4 components that are separated by either a period or a semicolon.
Semicolons allow filenames and extensions to be used in the database and
owner fields.
... AT 'server.database.owner.tablename'

Server This is the name by which the server is known in the current database, as
specified in the CREATE SERVER statement. This field is mandatory for all
remote data sources.
Database The meaning of the database field depends on the data source. In some cases
this field does not apply and should be left empty. The periods are still
required, however.
♦ Adaptive Server Enterprise Specifies the database where the table
exists. For example master or pubs2.
♦ Adaptive Server Anywhere This field does not apply; leave it empty.
The database name for an Adaptive Server Anywhere ODBC data
source should be specified when the data source name is defined in the
ODBC Administrator.
For jConnect-based connections, the database should be specified in the
USING clause of the CREATE SERVER statement.
For both ODBC and JDBC based connections to Adaptive Server
Anywhere, you need a separate CREATE SERVER statement for each
Adaptive Server Anywhere database being accessed.


♦ Excel, Lotus Notes, Access For these file-based data sources, the
database name is the name of the file containing the table. Since file
names can contain a period, a semicolon should be used as the delimiter
between server, database, owner, and table.
Owner If the database supports the concept of ownership, this field represents the
owner name. This field is only required when several owners have tables
with the same name.
Tablename Tablename specifies the name of the table. In the case of an Excel
spreadsheet, this is the name of the "sheet" in the workbook. If the table
name is left empty, the remote table name is assumed to be the same as the
local proxy table name.
Examples: The following examples illustrate the use of location strings:
♦ Adaptive Server Anywhere:
’testasa..dba.employee’
♦ Adaptive Server Enterprise:
’ASEServer.pubs2.dbo.publishers’
♦ Excel:
’excel;d:\pcdb\quarter3.xls;;sheet1$’
♦ Access:
’access;\\server1\production\inventory.mdb;;parts’

Creating proxy tables (Sybase Central)


You can create a proxy table using either Sybase Central or a CREATE
EXISTING TABLE statement.
If you are using Sybase Central, you must use the Java version of the
software, unless you choose to construct the SQL statements yourself. You
cannot use previous Windows versions of Sybase Central.
If you are using Interactive SQL, the CREATE EXISTING TABLE
statement creates a proxy table that maps to an existing table on the remote
server. Adaptive Server Anywhere derives the column attributes and index
information from the object at the remote location.

v To create a proxy table (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).
2 Do one of the following:


♦ In the Tables folder, double-click Add Proxy Table.


♦ In the Remote Servers folder, right-click a remote server and choose
Add Proxy Table from the popup menu.
3 Follow the instructions in the wizard.

Tips
You can also create a proxy table by selecting the Tables folder and then
choosing File➤New➤Proxy Table.
Proxy tables are displayed under their remote server, inside the Remote
Servers folder. They also appear in the Tables folder. They are
distinguished from other tables by a letter P on their icon.

Creating proxy tables with the CREATE EXISTING TABLE statement

The CREATE EXISTING TABLE statement creates a proxy table that maps
to an existing table on the remote server. Adaptive Server Anywhere derives
the column attributes and index information from the object at the remote
location.

v To create a proxy table with the CREATE EXISTING TABLE statement (SQL):

1 Connect to the host database from Interactive SQL using the jConnect
driver (for JDBC connectivity).
2 Execute a CREATE EXISTING TABLE statement.

Example 1 To create a proxy table named p_employee on the current server that maps to a remote
table named employee on the server named asademo1, use the following
syntax:
CREATE EXISTING TABLE p_employee
AT ’asademo1..dba.employee’


(Figure: the local proxy table p_employee maps to the employee table on the asademo1 server.)

Example 2 The following statement maps the proxy table a1 to the Microsoft Access file
mydbfile.mdb. In this example, the AT keyword uses the semicolon (;) as a
delimiter. The server defined for Microsoft Access is named access.
CREATE EXISTING TABLE a1
AT ’access;d:\mydbfile.mdb;;a1’

$ See also
♦ "CREATE EXISTING TABLE statement" on page 441 of the book ASA
Reference

Creating a proxy table with the CREATE TABLE statement


The CREATE TABLE statement creates a new table on the remote server,
and defines the proxy table for that table when you use the AT option. You
enter the CREATE TABLE statement using Adaptive Server Anywhere data
types. Adaptive Server Anywhere automatically converts the data into the
remote server’s native types.
If you use the CREATE TABLE statement to create both a local and remote
table, and then subsequently use the DROP TABLE statement to drop the
proxy table, then the remote table also gets dropped. You can, however, use
the DROP TABLE statement to drop a proxy table created using the
CREATE EXISTING TABLE statement if you do not want to drop the
remote table.
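To illustrate the difference (the members table here is hypothetical; asademo1 is the server name used in the earlier examples):

CREATE TABLE members
( membership_id INTEGER NOT NULL )
AT ’asademo1..dba.members’
DROP TABLE members
-- dropping this proxy table also drops the remote members table

CREATE EXISTING TABLE p_employee
AT ’asademo1..dba.employee’
DROP TABLE p_employee
-- only the proxy table is dropped; the remote employee table is unaffected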

v To create a proxy table with the CREATE TABLE statement (SQL):

1 Connect to the host database from Interactive SQL using the jConnect
driver (for JDBC connectivity).
2 Execute a CREATE TABLE statement.


Example The following statement creates a table named employee on the
remote server asademo1, and creates a proxy table named members that
maps to the remote location:
CREATE TABLE members
( membership_id INTEGER NOT NULL,
member_name CHAR(30) NOT NULL,
office_held CHAR( 20 ) NULL)
AT ’asademo1..dba.employee’

$ See also
♦ "CREATE TABLE statement" on page 466 of the book ASA Reference

Listing the columns on a remote table


If you are entering a CREATE EXISTING TABLE statement and you are specifying
a column list, it may be helpful to get a list of the columns that are available
on a remote table. The sp_remote_columns system procedure produces a list
of the columns on a remote table and a description of those data types.
sp_remote_columns servername [,tablename] [, owner ] [,
database]
If a table name, owner, or database name is given, the list of columns is
limited to only those that match.
For example, to get a list of the columns in the sysobjects table in the
production database in an Adaptive Server Enterprise server named asetest:
sp_remote_columns asetest, sysobjects, null, production

$ For more information, see "sp_remote_columns system procedure" on
page 975 of the book ASA Reference.


Example: a join between two remote tables


The following figure illustrates the remote Adaptive Server Anywhere tables
employee and department in the sample database mapped to the local server
named testasa.

(Figure: the proxy tables p_employee and p_department on the server testasa map to the employee (emp_fname, emp_lname) and department (dept_id, dept_name) tables on the asademo1 server.)

This example shows how to:


♦ Define the remote testasa server
♦ Create the proxy tables employee and department
♦ Perform a join between the remote employee and department tables.
In real-world cases, you may use joins between tables on different Adaptive
Server Anywhere databases. Here we describe a simple case using just one
database to illustrate the principles.

v To perform a join between two remote tables (SQL):


1 Create a new database named empty.db.
This database holds no data. We will use it only to define the remote
objects, and access the sample database from it.
2 Start a database server running both empty.db and the sample database.
You can do this using the following command line, executed from the
installation directory:
dbeng7 asademo empty
3 Connect to empty.db from Interactive SQL using a user ID of dba and a
password of sql.


4 In the new database, create a remote server named testasa. Its server
class is asaodbc, and the connection information is ’ASA 7.0 Sample’:
CREATE SERVER testasa
CLASS ’asaodbc’
USING ’ASA 7.0 Sample’
5 In this example, we use the same user ID and password on the remote
database as on the local database, so no external logins are needed.
6 Define the employee proxy table:
CREATE EXISTING TABLE employee
AT ’testasa..dba.employee’
7 Define the department proxy table:
CREATE EXISTING TABLE department
AT ’testasa..dba.department’
8 Use the proxy tables in the SELECT statement to perform the join.
SELECT emp_fname, emp_lname, dept_name
FROM employee JOIN department
ON employee.dept_id = department.dept_id
ORDER BY emp_lname


Accessing multiple local databases


An Adaptive Server Anywhere server may have several local databases
running at one time. By defining tables in other local Adaptive Server
Anywhere databases as remote tables, you can perform cross database joins.
For example, if you are using database db1 and you want to access data in
tables in database db2, you need to set up proxy table definitions that point to
the tables in database db2. For instance, on an Adaptive Server Anywhere
named testasa, you might have three databases available, db1, db2, and db3.
♦ If using ODBC, create an ODBC data source name entry for each
database you will be accessing.
♦ Connect to one of the databases that you will be performing joins from.
For example, connect to db1.
♦ Perform a CREATE SERVER for each other local database you will be
accessing. This sets up a loopback connection to your Adaptive Server
Anywhere server:
CREATE SERVER local_db2
CLASS ’asaodbc’
USING ’testasa_db2’
CREATE SERVER local_db3
CLASS ’asaodbc’
USING ’testasa_db3’
or using JDBC:
CREATE SERVER local_db2
CLASS ’asajdbc’
USING ’mypc1:2638/db2’
CREATE SERVER local_db3
CLASS ’asajdbc’
USING ’mypc1:2638/db3’
♦ Create proxy table definitions using CREATE EXISTING to the tables
in the other databases you want to access.
CREATE EXISTING TABLE employee
AT ’local_db2...employee’
$ For more information about specifying multiple databases, see "USING
parameter value in the CREATE SERVER statement" on page 928.


Sending native statements to remote servers


Use the FORWARD TO statement to send one or more statements to the
remote server in its native syntax. This statement can be used in two ways:
♦ To send a statement to a remote server
♦ To place Adaptive Server Anywhere into passthrough mode for sending
a series of statements to a remote server
If a connection cannot be made to the specified server, the reason is
contained in a message returned to the user. If a connection is made, any
results are converted into a form that can be recognized by the client
program.
The FORWARD TO statement can be used to verify that a server is
configured correctly. If you send a statement to the remote server and
Adaptive Server Anywhere does not return an error message, the remote
server is configured correctly.
Example 1 The following statement verifies connectivity to the server named ASEserver
by selecting the version string:
FORWARD TO ASEserver {SELECT @@version}

Example 2 The following statements show a passthrough session with the server named
ASEserver:
FORWARD TO ASEserver
select * from titles
select * from authors
FORWARD TO

$ For a complete description of the FORWARD TO statement, see
"FORWARD TO statement" on page 530 of the book ASA Reference.


Using remote procedure calls (RPCs)


Adaptive Server Anywhere users can issue procedure calls to remote servers
that support the feature.
Sybase Adaptive Server Anywhere, Adaptive Server Enterprise, Oracle,
and DB2 all support this feature. Issuing a remote procedure call is similar
to issuing a local procedure call.

Creating remote procedures


You can issue a remote procedure call using either Sybase Central or the
CREATE PROCEDURE statement.
You must have DBA authority to create a remote procedure.

v To issue a remote procedure call (Sybase Central):


1 Connect to the host database from Sybase Central using the jConnect
driver (for JDBC connectivity).
2 Open the Remote Servers folder.
3 Right-click the remote server for which you want to create a remote
procedure and choose Properties from the File menu.
4 On the Remote Procedures tab, click Add and follow the instructions of
the wizard.

Tip
You can also add a remote procedure by right-clicking the remote
server and choosing Add Remote Procedure from the popup menu.

v To issue a remote procedure call (SQL):


1 First define the procedure to Adaptive Server Anywhere.
The syntax is the same as for a local procedure definition, except that
instead of SQL statements making up the body of the call, a location
string defines where the procedure resides.
CREATE PROCEDURE remotewho ()
AT ’bostonase.master.dbo.sp_who’
2 Execute the procedure as follows:
call remotewho()


Example Here is an example with a parameter:
CREATE PROCEDURE remoteuser (IN uname char(30))
AT ’bostonase.master.dbo.sp_helpuser’
call remoteuser(’joe’)
$ See also
♦ "Property Sheet Descriptions" on page 1061
♦ "CREATE PROCEDURE statement" on page 453 of the book ASA
Reference

Data types for remote procedures The following data types are allowed for
RPC parameters. Other data types are disallowed:
♦ [ UNSIGNED ] SMALLINT
♦ [ UNSIGNED ] INT
♦ [ UNSIGNED ] BIGINT
♦ TINYINT
♦ REAL
♦ DOUBLE
♦ CHAR
♦ BIT
NUMERIC and DECIMAL data types are allowed for IN parameters, but not
for OUT or INOUT parameters.
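For example, the following remote procedure definition (the procedure name and location string are hypothetical) uses INT for its output parameter; declaring that parameter as DECIMAL or NUMERIC would be disallowed:

CREATE PROCEDURE remotecount (IN tname CHAR(30), OUT cnt INT)
AT ’bostonase.master.dbo.sp_rowcount’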

Dropping remote procedures


You can delete a remote procedure using either Sybase Central or the DROP
PROCEDURE statement.
You must have DBA authority to delete a remote procedure.

v To delete a remote procedure (Sybase Central):


1 Open the Remote Servers folder.
2 Right-click the remote server and choose Properties from the File menu.
3 On the Remote Procedures tab, select the remote procedure and click
Remove.


v To delete a remote procedure (SQL):


1 Connect to a database.
2 Execute a DROP PROCEDURE statement.

Example Delete a remote procedure called remoteproc:
DROP PROCEDURE remoteproc

$ See also
♦ "DROP statement" on page 505 of the book ASA Reference


Transaction management and remote data


Transactions provide a way to group SQL statements so that they are treated
as a unit—either all work performed by the statements is committed to the
database, or none of it is.
For the most part, transaction management with remote tables is the same as
transaction management for local tables in Adaptive Server Anywhere, but
there are some differences. They are discussed in the following section.
$ For a general discussion of transactions, see "Using Transactions and
Isolation Levels" on page 381.

Remote transaction management overview


The method for managing transactions involving remote servers uses a
two-phase commit protocol. Adaptive Server Anywhere implements a strategy
that ensures transaction integrity for most scenarios. However, when more
than one remote server is invoked in a transaction, there is still a chance that
a distributed unit of work will be left in an undetermined state. Even though
a two-phase commit protocol is used, no recovery process is included.
The general logic for managing a user transaction is as follows:
1 Adaptive Server Anywhere prefaces work to a remote server with a
BEGIN TRANSACTION notification.
2 When the transaction is ready to be committed, Adaptive Server
Anywhere sends a PREPARE TRANSACTION notification to each
remote server that has been part of the transaction. This ensures that the
remote server is ready to commit the transaction.
3 If a PREPARE TRANSACTION request fails, all remote servers are
told to roll back the current transaction.
If all PREPARE TRANSACTION requests are successful, the server
sends a COMMIT TRANSACTION request to each remote server
involved with the transaction.
Any statement preceded by BEGIN TRANSACTION can begin a
transaction. Other statements are sent to a remote server to be executed as a
single, remote unit of work.
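For example, in the following transaction (the proxy tables p_employee and p_department are assumed to map to tables on remote servers), Adaptive Server Anywhere sends PREPARE TRANSACTION to each remote server involved before issuing COMMIT TRANSACTION:

BEGIN TRANSACTION
UPDATE p_employee
SET dept_id = 200
WHERE emp_id = 105
UPDATE p_department
SET dept_head_id = 105
WHERE dept_id = 200
COMMIT TRANSACTION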

Restrictions on transaction management


Restrictions on transaction management are as follows:


♦ Savepoints are not propagated to remote servers.


♦ If nested BEGIN TRANSACTION and COMMIT TRANSACTION
statements are included in a transaction that involves remote servers,
only the outermost set of statements is processed. The innermost set,
containing the BEGIN TRANSACTION and COMMIT
TRANSACTION statements, is not transmitted to remote servers.
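As a sketch (the proxy table names are hypothetical), only the outermost BEGIN TRANSACTION and COMMIT TRANSACTION below are transmitted to remote servers; the inner pair is not:

BEGIN TRANSACTION
UPDATE p_t1 SET c1 = 1
BEGIN TRANSACTION
UPDATE p_t2 SET c1 = 2
COMMIT TRANSACTION
COMMIT TRANSACTION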


Internal operations
This section describes the underlying operations on remote servers
performed by Adaptive Server Anywhere on behalf of client applications.

Query parsing
When a statement is received from a client, it is parsed. An error is raised if
the statement is not a valid Adaptive Server Anywhere SQL statement.

Query normalization
The next step is called query normalization. During this step, referenced
objects are verified and some data type compatibility is checked.
For example, consider the following query:
SELECT *
FROM t1
WHERE c1 = 10
The query normalization stage verifies that table t1 with a column c1 exists
in the system tables. It also verifies that the data type of column c1 is
compatible with the value 10. If the column’s data type is datetime, for
example, this statement is rejected.

Query preprocessing
Query preprocessing prepares the query for optimization. It may change the
representation of a statement so that the SQL statement Adaptive Server
Anywhere generates for passing to a remote server will be syntactically
different from the original statement.
Preprocessing performs view expansion so that a query can operate on tables
referenced by the view. Expressions may be reordered and subqueries may
be transformed to improve processing efficiency. For example, some
subqueries may be converted into joins.

Server capabilities
The previous steps are performed on all queries, both local and remote.


The following steps depend on the type of SQL statement and the
capabilities of the remote servers involved.
Each remote server defined to Adaptive Server Anywhere has a set of
capabilities associated with it. These capabilities are stored in the
syscapabilities system table. These capabilities are initialized during the first
connection to a remote server. The generic server class odbc relies strictly on
information returned from the ODBC driver to determine these capabilities.
Other server classes such as db2odbc have more detailed knowledge of the
capabilities of a remote server type and use that knowledge to supplement
what is returned from the driver.
Once syscapabilities is initialized for a server, the capability information is
retrieved only from the system table. This allows a user to alter the known
capabilities of a server.
Since a remote server may not support all of the features of a given SQL
statement, Adaptive Server Anywhere may need to break the statement into
simpler components until the query can be given to the remote server. SQL
features not passed off to a remote server must be evaluated by
Adaptive Server Anywhere itself.
For example, a query may contain an ORDER BY clause. If a remote
server cannot perform ORDER BY, the statement is sent to the remote
server without it and Adaptive Server Anywhere performs the ORDER BY
on the result returned, before returning the result to the user. The result is
that the user can employ the full range of Adaptive Server Anywhere
supported SQL without concern for the features of a particular back end.

Complete passthrough of the statement


The most efficient way to handle a statement is usually to hand as much of
the original statement as possible off to the remote server involved. Adaptive
Server Anywhere will attempt to pass off as much of the statement as is
possible. In many cases this will be the complete statement as originally
given to Adaptive Server Anywhere.
Adaptive Server Anywhere will hand off the complete statement when:
♦ Every table in the statement resides in the same remote server.
♦ The remote server is capable of processing all of the syntax in the
statement.
In rare conditions, it may actually be more efficient to let Adaptive Server
Anywhere do some of the work instead of passing it off. For example,
Adaptive Server Anywhere may have a better sorting algorithm. In this case
you may consider altering the capabilities of a remote server using the
ALTER SERVER statement.
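For example, to prevent ORDER BY from being passed to the server ASEserver so that Adaptive Server Anywhere performs the sort itself (the capability name shown is illustrative; consult the syscapabilities system table for the capability names your server reports):

ALTER SERVER ASEserver
CAPABILITY ’order by’ OFF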

$ For more information, see "ALTER SERVER statement" on page 390
of the book ASA Reference.

Partial passthrough of the statement


If a statement contains references to multiple servers, or uses SQL features
not supported by a remote server, the query is decomposed into simpler
parts.
Select SELECT statements are broken down by removing portions that cannot be
passed on and letting Adaptive Server Anywhere perform the feature. For
example, suppose a remote server cannot process the atan2() function in
the following statement:
select a, b, c from t1 where atan2( b, 10 ) > 3 and c = 10
The statement sent to the remote server would be converted to:
select a, b, c from t1 where c = 10
Locally, Adaptive Server Anywhere would apply "where atan2(b,10) > 3" to
the intermediate result set.
Joins Adaptive Server Anywhere processes joins using a nested loop algorithm.
When two tables are joined, one table is selected to be the outer table. The
outer table is scanned based on the WHERE conditions that apply to it. For
every qualifying row found, the other table, known as the inner table is
scanned to find a row that matches the join condition.
This same algorithm is used when remote tables are referenced. Since the
cost of searching a remote table is usually much higher than a local table
(due to network I/O), every effort is made to make the remote table the
outermost table in the join.
Update and delete If Adaptive Server Anywhere cannot pass off an UPDATE or DELETE
statement entirely to a remote server, it must change the statement into a
table scan containing as much of the original WHERE clause as possible,
followed by positioned UPDATE or DELETE "where current of cursor"
when a qualifying row is found.
For example, when the function atan2 is not supported by a remote server:
UPDATE t1
SET a = atan2(b, 10)
WHERE b > 5
Would be converted to the following:
SELECT a,b
FROM t1
WHERE b > 5


Each time a row is found, Adaptive Server Anywhere would calculate the
new value of a and issue:
UPDATE t1
SET a = ’new value’
WHERE CURRENT OF CURSOR
If a already has a value that equals the "new value", a positioned UPDATE
would not be necessary and would not be sent remotely.
In order to process an UPDATE or DELETE that requires a table scan, the
remote data source must support the ability to perform a positioned
UPDATE or DELETE ("where current of cursor"). Some data sources do not
support this capability.

Temporary tables cannot be updated


In this release of Adaptive Server Anywhere an UPDATE or DELETE
cannot be performed if an intermediate temporary table is required in
Adaptive Server Anywhere. This occurs in queries with ORDER BY and
some queries with subqueries.


Troubleshooting remote data access


This section provides some hints for troubleshooting remote servers.

Features not supported for remote data


The following Adaptive Server Anywhere features are not supported on
remote data. Attempts to use these features will therefore run into problems:
♦ ALTER TABLE statement against remote tables
♦ Triggers defined on proxy tables will not fire
♦ SQL Remote
♦ Java data types
♦ Foreign keys that refer to remote tables are ignored
♦ The READTEXT, WRITETEXT, and TEXTPTR functions.
♦ Positioned UPDATE and DELETE
♦ UPDATE and DELETE requiring an intermediate temporary table.
♦ Backwards scrolling on cursors opened against remote data. Fetch
statements must be NEXT or RELATIVE 1.
♦ If a column on a remote table has a name that is a keyword on the
remote server, you cannot access data in that column. Adaptive Server
Anywhere cannot know all of the remote server reserved words. You
can execute a CREATE EXISTING TABLE statement, and import the
definition but you cannot select that column.

Case sensitivity
The case sensitivity setting of your Adaptive Server Anywhere database
should match the settings used by any remote servers accessed.
Adaptive Server Anywhere databases are created case-insensitive by default.
With this configuration, unpredictable results may occur when selecting from
a case sensitive database. Different results will occur depending on whether
ORDER BY or string comparisons are pushed off to a remote server or
evaluated by the local Adaptive Server Anywhere.


Connectivity problems
Take the following steps to be sure you can connect to a remote server:
♦ Determine that you can connect to a remote server using a client tool
such as Interactive SQL before configuring Adaptive Server Anywhere.
♦ Perform a simple passthrough statement to a remote server to check your
connectivity and remote login configuration. For example:
FORWARD TO testasa {select @@version}
♦ Turn on remote tracing for a trace of the interactions with remote
servers.
SET OPTION cis_option = 2

General problems with queries


If you are faced with some type of problem with the way Adaptive Server
Anywhere is handling a query against a remote table, it is usually helpful to
understand how Adaptive Server Anywhere is executing that query. You can
display remote tracing as well as a description of the query execution plan:
SET OPTION cis_option = 6

Queries blocked on themselves


If you access multiple databases on a single Adaptive Server Anywhere
server, you may need to increase the number of threads used by the database
server on Windows NT using the -gx command-line switch.
You must have enough threads available to support the individual tasks that
are being run by a query. Failure to provide the number of required tasks can
lead to a query becoming blocked on itself.
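For example, the following command line starts the network server with an increased thread count (the executable name dbsrv7 and the value 20 are illustrative; choose a value appropriate to your workload):

dbsrv7 -gx 20 asademo empty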

CHAPTER 30

Server Classes for Remote Data Access

About this chapter This chapter describes how Adaptive Server Anywhere interfaces with
different classes of servers. You will find information such as:
♦ Types of servers that each server class supports
♦ The USING clause value for the CREATE SERVER statement for each
server class
♦ Special configuration requirements
Contents
Topic Page
Overview 926
JDBC-based server classes 927
ODBC-based server classes 930


Overview
The server class you specify in the CREATE SERVER statement determines
the behavior of a remote connection. The server classes give Adaptive Server
Anywhere detailed server capability information. Adaptive Server Anywhere
formats SQL statements specific to a server’s capabilities.
There are two categories of server classes:
♦ JDBC-based server classes
♦ ODBC-based server classes
Each server class has a set of unique characteristics that database
administrators and programmers need to know about to configure the server
for remote data access.
When using this chapter, refer both to the section generic to the server class
category (JDBC-based or ODBC-based), and to the section specific to the
individual server class.


JDBC-based server classes


JDBC-based server classes are used when Adaptive Server Anywhere
internally uses a Java virtual machine and jConnect 4.0 to connect to the
remote server. The JDBC-based server classes are:
♦ asajdbc Adaptive Server Anywhere (version 6 and later)
♦ asejdbc Adaptive Server Enterprise and SQL Server (version 10 and
later).

Configuration notes for JDBC classes


When you access remote servers defined with JDBC-based classes, consider
that:
♦ Your local database must be enabled for Java. Do not use the -j option
on dbinit if you plan to use a JDBC-based server class.
♦ The Java virtual machine needs more than the default amount of
memory to load and run jConnect. Set these memory options to at least
the following values:
SET OPTION PUBLIC.JAVA_NAMESPACE_SIZE = 3000000
SET OPTION PUBLIC.JAVA_HEAP_SIZE = 1000000
♦ Since jConnect 4.0 is automatically installed with Adaptive Server
Anywhere, no additional drivers need to be installed.
♦ For optimum performance, Sybase recommends an ODBC-based class
(asaodbc or aseodbc).
♦ Any remote server that you access using the asejdbc or asajdbc server
class must be set up to handle a jConnect 4.x based client. The jConnect
setup scripts are sql_anywhere.sql for Adaptive Server Anywhere or
sql_server.sql for Adaptive Server Enterprise. Run these against any
remote server you will be using.

Server class asajdbc


A server with server class asajdbc is Adaptive Server Anywhere (version 6
and later). No special requirements exist for the configuration of an Adaptive
Server Anywhere data source.


USING parameter value in the CREATE SERVER statement


You must perform a separate CREATE SERVER for each Adaptive Server
Anywhere database you intend to access. For example, if an Adaptive Server
Anywhere server named testasa is running on the machine ’banana’ and
owns 3 databases (db1, db2, db3), you would configure the local Adaptive
Server Anywhere similar to this:
CREATE SERVER testasadb1
CLASS ’asajdbc’
USING ’banana:2638/db1’
CREATE SERVER testasadb2
CLASS ’asajdbc’
USING ’banana:2638/db2’
CREATE SERVER testasadb3
CLASS ’asajdbc’
USING ’banana:2638/db3’
If you do not specify a /databasename value, the remote connection uses the
remote Adaptive Server Anywhere default database.
$ For more information about the CREATE SERVER statement, see
"CREATE SERVER statement" on page 464 of the book ASA Reference.

Server class asejdbc


A server with server class asejdbc is Adaptive Server Enterprise or
SQL Server (version 10 and later). No special requirements exist for the
configuration of an Adaptive Server Enterprise data source.

Data type conversions


When you issue a CREATE TABLE statement, Adaptive Server Anywhere
automatically converts the data types to the corresponding Adaptive Server
Enterprise data types. Table 2-1 describes the Adaptive Server Anywhere to
Adaptive Server Enterprise data type conversions.

Adaptive Server Anywhere data type          ASE default data type
bit bit
tinyint tinyint
smallint smallint
int int
integer integer

decimal [defaults p=30, s=6] numeric(30,6)
decimal(128,128) not supported
numeric [defaults p=30 s=6] numeric(30,6)
numeric(128,128) not supported
float real
real real
double float
smallmoney numeric(10,4)
money numeric(19,4)
date datetime
time datetime
timestamp datetime
smalldatetime datetime
datetime datetime
char(n) varchar(n)
character(n) varchar(n)
varchar(n) varchar(n)
character varying(n) varchar(n)
long varchar text
text text
binary(n) binary(n)
long binary image
image image
bigint numeric(20,0)


ODBC-based server classes


The ODBC-based server classes include:
♦ asaodbc
♦ aseodbc
♦ db2odbc
♦ mssodbc
♦ oraodbc
♦ odbc

Defining ODBC external servers


The most common way of defining an ODBC-based server is to base it on
an ODBC data source. To do this, you must create a data source in the
ODBC Administrator.
Once you have the data source defined, the USING clause in the CREATE
SERVER statement should match the ODBC data source name.
For example, to configure a DB2 server named mydb2 whose Data Source
Name is also mydb2, use:
CREATE SERVER mydb2
CLASS ’db2odbc’
USING ’mydb2’
$ For more information on creating data sources, see "Creating an ODBC
data source" on page 49.
Using connection strings instead of data sources An alternative, which
avoids using data sources, is to supply a connection string in the USING
clause of the CREATE SERVER statement. To do this, you must know the
connection parameters for the ODBC driver you are using. For example, a
connection to an Adaptive Server Anywhere server may be as follows:
CREATE SERVER testasa
CLASS ’asaodbc’
USING ’driver=adaptive server anywhere
7.0;eng=testasa;dbn=sample;links=tcpip{}’

This defines a connection to an Adaptive Server Anywhere database server
named testasa, using the sample database and the TCP/IP protocol.
See also For information specific to particular ODBC server classes, see:
♦ "Server class asaodbc" on page 931


♦ "Server class aseodbc" on page 931


♦ "Server class db2odbc" on page 933
♦ "Server class oraodbc" on page 935
♦ "Server class mssodbc" on page 936
♦ "Server class odbc" on page 938

Server class asaodbc


A server with server class asaodbc is Adaptive Server Anywhere version 5.5
or later. No special requirements exist for the configuration of an Adaptive
Server Anywhere data source.
The ODBC driver for version 6 databases installs when you install Adaptive
Server Anywhere. To access version 5 servers, install the version 5 ODBC
driver. You cannot use the version 6 ODBC driver to connect to a version 5
Adaptive Server Anywhere.
To access Adaptive Server Anywhere servers that support multiple
databases, create an ODBC data source name defining a connection to each
database. Issue a CREATE SERVER statement for each of these ODBC data
source names.
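For example, a minimal sketch of this mapping (the data source names sales_dsn and hr_dsn and the server names below are hypothetical):

```sql
-- One ODBC data source per database, one CREATE SERVER per data source
CREATE SERVER testasa_sales
CLASS 'asaodbc'
USING 'sales_dsn';

CREATE SERVER testasa_hr
CLASS 'asaodbc'
USING 'hr_dsn';
```

Each remote server then exposes exactly one of the databases managed by the multi-database server.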

Server class aseodbc


A server with server class aseodbc is Adaptive Server Enterprise or
SQL Server (version 10 and later). Adaptive Server Anywhere requires the
installation of the Adaptive Server Enterprise ODBC driver and Open Client
connectivity libraries to connect to a remote Adaptive Server with class
aseodbc. Its performance, however, is better than that of the asejdbc class.
Notes ♦ Open Client should be version 11.1.1, EBF 7886 or above. Install Open
Client and verify connectivity to the Adaptive Server before you install
ODBC and configure Adaptive Server Anywhere. The Sybase ODBC
driver should be version 11.1.1, EBF 7911 or above.
♦ Configure a User Data Source in the Configuration Manager with the
following attributes:
♦ Under the General tab:
Enter any value for Data Source Name. This value is used in the
USING clause of the CREATE SERVER statement.
The server name should match the name of the server in the Sybase
interfaces file.


♦ Under the Advanced tab, check the Application Using Threads box
and check the Enable Quoted Identifiers box.
♦ Under the Connection tab:
Set the charset field to match your Adaptive Server Anywhere
character set.
Set the language field to your preferred language for error
messages.
♦ Under the Performance tab:
Set Prepare Method to "2-Full."
Set Fetch Array Size as large as possible for best performance. This
increases memory requirements since this is the number of rows
that must be cached in memory. Sybase recommends using a value
of 100.
Set Select Method to "0-Cursor."
Set Packet Size to as large as possible. Sybase recommends using a
value of -1.
Set Connection Cache to 1.
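With the User Data Source configured, the remote server definition itself follows the usual pattern; a sketch, assuming a hypothetical Data Source Name ase_dsn:

```sql
-- The USING clause names the ODBC data source configured above
CREATE SERVER myase
CLASS 'aseodbc'
USING 'ase_dsn'
```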

Data type conversions


When you issue a CREATE TABLE statement, Adaptive Server Anywhere
automatically converts the data types to the corresponding Adaptive Server
Enterprise data types. The following table describes the Adaptive Server
Anywhere to Adaptive Server Enterprise data type conversions.

Adaptive Server Anywhere data type          Adaptive Server Enterprise default data type
bit bit
tinyint tinyint
smallint smallint
int int
integer integer
decimal [defaults p=30, s=6] numeric(30,6)
decimal(128,128) not supported
numeric [defaults p=30 s=6] numeric(30,6)
numeric(128,128) not supported
float real

real real
double float
smallmoney numeric(10,4)
money numeric(19,4)
date datetime
time datetime
timestamp datetime
smalldatetime datetime
datetime datetime
char(n) varchar(n)
character(n) varchar(n)
varchar(n) varchar(n)
character varying(n) varchar(n)
long varchar text
text text
binary(n) binary(n)
long binary image
image image
bigint numeric(20,0)

Server class db2odbc


A server with server class db2odbc is IBM DB2.
Notes ♦ Sybase certifies the use of IBM’s DB2 Connect version 5, with fix pack
WR09044. Configure and test your ODBC configuration using the
instructions for that product. Adaptive Server Anywhere has no specific
requirements on configuration of DB2 data sources.
♦ The following is an example of a CREATE EXISTING TABLE
statement for a DB2 server with an ODBC data source named mydb2:
CREATE EXISTING TABLE ibmcol
AT 'mydb2..sysibm.syscolumns'


Data type conversions


When you issue a CREATE TABLE statement, Adaptive Server Anywhere
automatically converts the data types to the corresponding DB2 data types.
The following table describes the Adaptive Server Anywhere to DB2 data
type conversions.

Adaptive Server Anywhere data type          DB2 default data type
bit smallint
tinyint smallint
smallint smallint
int int
integer int
bigint decimal(20,0)
char(1–254) varchar(n)
char(255–4000) varchar(n)
char(4001–32767) long varchar
character(1–254) varchar(n)
character(255–4000) varchar(n)
character(4001–32767) long varchar
varchar(1–4000) varchar(n)
varchar(4001–32767) long varchar
character varying(1–4000) varchar(n)
character varying(4001–32767) long varchar
long varchar long varchar
text long varchar
binary(1–4000) varchar for bit data
binary(4001–32767) long varchar for bit data
long binary long varchar for bit data
image long varchar for bit data
decimal [defaults p=30, s=6] decimal(30,6)
numeric [defaults p=30 s=6] decimal(30,6)
decimal(128, 128) not supported
numeric(128, 128) not supported

real real
float float
double float
smallmoney decimal(10,4)
money decimal(19,4)
date date
time time
smalldatetime timestamp
datetime timestamp
timestamp timestamp

Server class oraodbc


A server with server class oraodbc is Oracle version 8.0 or later.
Notes ♦ Sybase certifies the use of version 8.0.03 of Oracle’s ODBC driver.
Configure and test your ODBC configuration using the instructions for
that product.
♦ The following is an example of a CREATE EXISTING TABLE
statement for an Oracle server named myora:
CREATE EXISTING TABLE employees
AT 'myora.database.owner.employees'
♦ Due to Oracle ODBC driver restrictions, you cannot issue a CREATE
EXISTING TABLE for system tables. A message returns stating that the
table or columns cannot be found.
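The myora server used in the example above would be defined with a CREATE SERVER statement such as the following sketch (the data source name myora_dsn is hypothetical):

```sql
-- Map the remote server to the Oracle ODBC data source
CREATE SERVER myora
CLASS 'oraodbc'
USING 'myora_dsn'
```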

Data type conversions


When you issue a CREATE TABLE statement, Adaptive Server Anywhere
automatically converts the data types to the corresponding Oracle data types.
The following table describes the Adaptive Server Anywhere to Oracle data
type conversions.


Adaptive Server Anywhere data type          Oracle data type
bit number(1,0)
tinyint number(3,0)
smallint number(5,0)
int number(11,0)
bigint number(20,0)
decimal(prec, scale) number(prec, scale)
numeric(prec, scale) number(prec, scale)
float float
real real
smallmoney numeric(13,4)
money number(19,4)
date date
time date
timestamp date
smalldatetime date
datetime date
char(n) if (n > 255) long else varchar(n)
varchar(n) if (n > 2000) long else varchar(n)
longchar long
binary(n) if (n > 255) long raw else raw(n)
varbinary(n) if (n > 255) long raw else raw(n)
longbinary long raw

Server class mssodbc


A server with server class mssodbc is Microsoft SQL Server version 6.5,
Service Pack 4.
Notes ♦ Sybase certifies the use of version 3.60.0319 of Microsoft SQL Server’s
ODBC driver (included in MDAC 2.0 release). Configure and test your
ODBC configuration using the instructions for that product.


♦ The following is an example of a CREATE EXISTING TABLE
statement for a Microsoft SQL Server named mymssql:
CREATE EXISTING TABLE accounts
AT 'mymssql.database.owner.accounts'
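The mymssql server referenced above would itself be defined against an ODBC data source; a sketch, with a hypothetical Data Source Name mymssql_dsn:

```sql
-- Define the remote Microsoft SQL Server against its ODBC data source
CREATE SERVER mymssql
CLASS 'mssodbc'
USING 'mymssql_dsn'
```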

Data type conversions


When you issue a CREATE TABLE statement, Adaptive Server Anywhere
automatically converts the data types to the corresponding Microsoft
SQL Server data types. The following table describes the Adaptive Server
Anywhere to Microsoft SQL Server data type conversions.

Adaptive Server Anywhere data type          Microsoft SQL Server default data type
bit bit
tinyint tinyint
smallint smallint
int int
bigint numeric(20,0)
decimal [defaults p=30, s=6] decimal(prec, scale)
numeric [defaults p=30 s=6] numeric(prec, scale)
float if (prec) float(prec) else float
real real
smallmoney smallmoney
money money
date datetime
time datetime
timestamp datetime
smalldatetime datetime
datetime datetime
char(n) if (length > 255) text else varchar(length)
character(n) char(n)

varchar(n) if (length > 255) text else varchar(length)
longchar text
binary(n) if (length > 255) image else binary(length)
long binary image
double float

Server class odbc


ODBC data sources that do not have their own server class use server class
odbc. You can use any ODBC driver that supports ODBC version 2.0
compliance level 1 or higher. Sybase certifies the following ODBC data
sources:
♦ "Microsoft Excel" on page 938
♦ "Microsoft Access" on page 939
♦ "Microsoft Foxpro" on page 940
♦ "Lotus Notes" on page 940
The latest versions of Microsoft ODBC drivers can be obtained through the
Microsoft Data Access Components (MDAC) distribution found at
www.microsoft.com/data/download.htm. The Microsoft driver versions listed
below are part of MDAC 2.0.
The following sections provide notes on accessing these data sources.

Microsoft Excel (Microsoft 3.51.171300)


In Excel, each workbook is logically considered to be a database holding
several tables. Tables are mapped to sheets in a workbook. When you
configure an ODBC data source name in the ODBC driver manager, you
specify a default workbook name associated with that data source. However,
when you issue a CREATE TABLE statement, you can override the default
and specify a workbook name in the location string. This allows you to use a
single ODBC DSN to access all of your Excel workbooks.
In this example, an ODBC data source named excel was created. To create a
workbook named work1.xls with a sheet (table) called mywork:
CREATE TABLE mywork (a int, b char(20))
AT 'excel;d:\work1.xls;;mywork'
To create a second sheet (or table) execute a statement such as:
CREATE TABLE mywork2 (x float, y int)
AT 'excel;d:\work1.xls;;mywork2'
You can import existing worksheets into Adaptive Server Anywhere using
CREATE EXISTING, under the assumption that the first row of your
spreadsheet contains column names.
CREATE EXISTING TABLE mywork
AT 'excel;d:\work1;;mywork'
If Adaptive Server Anywhere reports that the table is not found, you may
need to explicitly state the column and row range you wish to map to. For
example:
CREATE EXISTING TABLE mywork
AT 'excel;d:\work1;;mywork$'
Adding the $ to the sheet name indicates that the entire worksheet should be
selected.
Note that in the location string specified by AT, a semicolon is used instead
of a period for field separators. This is because periods occur in the file
names. Excel does not support the owner name field, so leave it blank.
Deletes are not supported. Also, some updates may not be possible, since the
Excel driver does not support positioned updates.
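The examples in this section assume that a remote server named excel, based on the Excel data source, has already been created; a minimal sketch:

```sql
-- Generic server class odbc; USING names the Excel data source
CREATE SERVER excel
CLASS 'odbc'
USING 'excel'
```

The access and foxpro servers used in the following sections are defined the same way, with class odbc and the corresponding data source name.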

Microsoft Access (Microsoft 3.51.171300)


Access databases are stored in a .mdb file. Using the ODBC manager, create
an ODBC data source and map it to one of these files. A new .mdb file can
be created through the ODBC manager. This database file becomes the
default if you don't specify a different default when you create a table
through Adaptive Server Anywhere.
Assuming an ODBC data source named access, any of the following
statements can be used:
CREATE TABLE tab1 (a int, b char(10))
AT 'access...tab1'
or
CREATE TABLE tab1 (a int, b char(10))
AT 'access;d:\pcdb\data.mdb;;tab1'
or
CREATE EXISTING TABLE tab1
AT 'access;d:\pcdb\data.mdb;;tab1'
Access does not support owner name qualification; leave the owner name empty.

Microsoft Foxpro (Microsoft 3.51.171300)


You can store FoxPro tables together inside a single FoxPro database file
(.dbc), or you can store each table in its own separate .dbf file. When using
.dbf files, be sure the file name is filled into the location string; otherwise,
the directory in which Adaptive Server Anywhere was started is used.
CREATE TABLE fox1 (a int, b char(20))
AT 'foxpro;d:\pcdb;;fox1'
This statement creates a file named d:\pcdb\fox1.dbf when you choose the
"free table directory" option in the ODBC driver manager.

Lotus Notes SQL 2.0 (2.04.0203)


You can obtain this driver from the Lotus Web site. Read the documentation
that comes with it for an explanation of how Notes data maps to relational
tables. You can easily map Adaptive Server Anywhere tables to Notes forms.
Here is how to set up Adaptive Server Anywhere to access the Address
sample file.
♦ Create an ODBC data source using the NotesSQL driver. The database
will be the sample names file: c:\notes\data\names.nsf. The Map Special
Characters option should be turned on. For this example, the Data
Source Name is my_notes_dsn.
♦ Create a server in Adaptive Server Anywhere:
CREATE SERVER names
CLASS 'odbc'
USING 'my_notes_dsn'
♦ Map the Person form into an Adaptive Server Anywhere table:
CREATE EXISTING TABLE Person
AT 'names...Person'
♦ Query the table
SELECT * FROM Person

Avoiding password prompts
Lotus Notes does not support sending a user name and password through the
ODBC API. If you try to access Lotus Notes using a password-protected ID,
a window appears on the machine where Adaptive Server Anywhere is
running and prompts you for a password. Avoid this behavior in multi-user
server environments.


To access Lotus Notes unattended, without ever receiving a password
prompt, you must use a non-password-protected ID. You can remove
password protection from your ID by clearing it (File➤Tools➤User
ID➤Clear Password), unless your Domino administrator required a
password when your ID was created. In this case, you will not be able to
clear it.

Chapter 31

Three-tier Computing and Distributed Transactions

About this chapter
This chapter describes how to use Adaptive Server Anywhere in a three-tier
environment with an application server. It focuses on how to enlist Adaptive
Server Anywhere in distributed transactions.
Contents
Topic Page
Introduction 944
Three-tier computing architecture 945
Using distributed transactions 949
Using Enterprise Application Server with Adaptive Server Anywhere 951


Introduction
You can use Adaptive Server Anywhere as a database server or resource
manager, participating in distributed transactions coordinated by a
transaction server.
A three-tier environment, where an application server sits between client
applications and a set of resource managers, is a common distributed-
transaction environment. Sybase Enterprise Application Server and some
other application servers are also transaction servers.
Sybase Enterprise Application Server and Microsoft Transaction Server both
use the Microsoft Distributed Transaction Coordinator (DTC) to coordinate
transactions. Adaptive Server Anywhere provides support for distributed
transactions controlled by the DTC service, so you can use Adaptive Server
Anywhere with either of these application servers, or any other product
based on the DTC model.
When integrating Adaptive Server Anywhere into a three-tier environment,
most of the work needs to be done from the Application Server. This chapter
provides an introduction to the concepts and architecture of three-tier
computing, and an overview of relevant Adaptive Server Anywhere features.
It does not describe how to configure your Application Server to work with
Adaptive Server Anywhere. For more information, see your Application
Server documentation.


Three-tier computing architecture


In three-tier computing, application logic is held in an application server,
such as Sybase Enterprise Application Server, which sits between the
resource manager and the client applications. In many situations, a single
application server may access multiple resource managers. In the Internet
case, client applications are browser-based, and the application server is
generally a Web server extension.

(Diagram: client applications connect to an application server, which in turn accesses one or more resource managers.)

Sybase Enterprise Application Server stores application logic in the form of
components, and makes these components available to client applications.
The components may be PowerBuilder components, Java beans, or COM
components.
$ For more information, see the Sybase Enterprise Application Server
documentation.


Distributed transactions in three-tier computing


When client applications or application servers work with a single
transaction processing database, such as Adaptive Server Anywhere, there is
no need for transaction logic outside the database itself, but when working
with multiple resource managers, transaction control must span the resources
involved in the transaction. Application servers provide transaction logic to
their client applications—guaranteeing that sets of operations are executed
atomically.
Many transaction servers, including Sybase Enterprise Application Server,
use the Microsoft Distributed Transaction Coordinator (DTC) to provide
transaction services to their client applications. DTC uses OLE
transactions, which in turn use the two-phase commit protocol to
coordinate transactions involving multiple resource managers. DTC is
available on Windows NT as part of the Windows NT Option Pack.
Adaptive Server Anywhere in distributed transactions
Adaptive Server Anywhere can take part in transactions coordinated by
DTC, which means that you can use Adaptive Server Anywhere databases in
distributed transactions using a transaction server such as Sybase Enterprise
Application Server or Microsoft Transaction Server. You can also use DTC
directly in your applications to coordinate transactions across multiple
resource managers.

The vocabulary of distributed transactions


This chapter assumes some familiarity with distributed transactions. For
information, see your transaction server documentation. This section
describes some commonly used terms.
♦ Resource managers are those services that manage the data involved in
the transaction.
The Adaptive Server Anywhere database server can act as a resource
manager in a distributed transaction when accessed through OLE DB or
ODBC. The ODBC driver and OLE DB provider act as resource
manager proxies on the client machine.
♦ Instead of communicating directly with the resource manager,
application components may communicate with resource dispensers,
which in turn manage connections or pools of connections to the
resource managers.
Adaptive Server Anywhere supports two resource dispensers: the ODBC
driver manager and OLE DB.


♦ When a transactional component requests a database connection (using a
resource manager), the application server enlists each database
connection in the transaction. DTC and the resource dispenser carry out
the enlistment process.
Two-phase commit
Distributed transactions are managed using two-phase commit. When the
work of the transaction is complete, the transaction manager (DTC) asks all
the resource managers enlisted in the transaction whether they are ready to
commit the transaction. This phase is called preparing to commit.
If all the resource managers respond that they are prepared to commit, DTC
sends a commit request to each resource manager, and responds to its client
that the transaction is completed. If any resource manager fails to respond,
or responds that it cannot commit the transaction, all the work of the
transaction is rolled back across all resource managers.

How application servers use DTC


Sybase Enterprise Application Server and Microsoft Transaction Server are
both component servers. The application logic is held in the form of
components, and made available to client applications.
Each component has a transaction attribute that indicates how the component
participates in transactions. The application developer building the
component must program the work of the transaction into the component—
the resource manager connections, the operations on the data for which each
resource manager is responsible. However, the application developer does
not need to add transaction management logic to the component. Once the
transaction attribute is set, to indicate that the component needs transaction
management, Enterprise Application Server uses DTC to enlist the
transaction and manage the two-phase commit process.

Distributed transaction architecture


The following diagram illustrates the architecture of distributed transactions.
In this case, the resource manager proxy is either ODBC or OLE DB.


(Diagram: the client system runs the application server, DTC, and a resource manager proxy for each remote database; server system 1 and server system 2 each run their own DTC and resource manager.)

In this case, a single resource dispenser is used. The Application Server asks
DTC to prepare a transaction. DTC and the resource dispenser enlist each
connection in the transaction. Each resource manager must be in contact with
both DTC and the database, so as to carry out the work and to notify DTC of
its transaction status when required.
A DTC service must be running on each machine in order to operate
distributed transactions. You can control DTC services from the Services
icon in the Windows NT control panel; the DTC service is named MSDTC.
$ For more information, see your DTC or Enterprise Application Server
documentation.


Using distributed transactions


While Adaptive Server Anywhere is enlisted in a distributed transaction, it
hands transaction control over to the transaction server, and Adaptive Server
Anywhere ensures that it does not carry out any implicit transaction
management. The following conditions are imposed automatically by
Adaptive Server Anywhere when it participates in distributed transactions:
♦ Autocommit is automatically turned off, if it is in use.
♦ Data definition statements (which commit as a side effect) are
disallowed during distributed transactions.
♦ An explicit COMMIT or ROLLBACK issued by the application directly
to Adaptive Server Anywhere, instead of through the transaction
coordinator, generates an error. The transaction is not aborted, however.
♦ A connection can participate in only a single distributed transaction at a
time.
♦ There must be no uncommitted operations at the time the connection is
enlisted in a distributed transaction.

DTC isolation levels


DTC has a set of isolation levels, which the application server specifies.
These isolation levels map to Adaptive Server Anywhere isolation levels as
follows:

DTC isolation level          Adaptive Server Anywhere isolation level
ISOLATIONLEVEL_UNSPECIFIED 0
ISOLATIONLEVEL_CHAOS 0
ISOLATIONLEVEL_READUNCOMMITTED 0
ISOLATIONLEVEL_BROWSE 0
ISOLATIONLEVEL_CURSORSTABILITY 1
ISOLATIONLEVEL_READCOMMITTED 1
ISOLATIONLEVEL_REPEATABLEREAD 2
ISOLATIONLEVEL_SERIALIZABLE 3
ISOLATIONLEVEL_ISOLATED 3
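On the Adaptive Server Anywhere side, the equivalent level is controlled by the isolation_level database option. A sketch of setting the level that, per the table above, corresponds to ISOLATIONLEVEL_READCOMMITTED:

```sql
-- Isolation level 1 maps to DTC's READCOMMITTED / CURSORSTABILITY
SET TEMPORARY OPTION isolation_level = 1
```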


Recovery from distributed transactions


If the database server faults while uncommitted operations are pending, it
must either roll back or commit those operations on startup to preserve the
atomic nature of the transaction.
If uncommitted operations from a distributed transaction are found during
recovery, the database server attempts to connect to DTC and requests that it
be re-enlisted in the pending or in-doubt transactions. Once the re-enlistment
is complete, DTC instructs the database server to roll back or commit the
outstanding operations.
If the re-enlistment process fails, Adaptive Server Anywhere has no way of
knowing whether the in-doubt operations should be committed or rolled
back, and recovery fails. If you want the database in such a state to recover,
regardless of the uncertain state of the data, you can force recovery using the
following database server command-line options:
♦ -tmf If DTC cannot be located, the outstanding operations are rolled
back and recovery continues.
$ For more information, see "–tmf command-line option" on page 36
of the book ASA Reference.
♦ -tmt If re-enlistment is not achieved before the specified time, the
outstanding operations are rolled back and recovery continues.
$ For more information, see "–tmt command-line option" on page 36
of the book ASA Reference.


Using Enterprise Application Server with Adaptive Server Anywhere
This section provides an overview of the actions you need to take in
Enterprise Application Server 3.0 to work with Adaptive Server Anywhere.
For more detailed information, see the Enterprise Application Server
documentation.

Configuring Enterprise Application Server


All components installed in a Sybase Enterprise Application Server share the
same transaction coordinator.
Enterprise Application Server 3.0 offers a choice of transaction coordinators.
You must use DTC as the transaction coordinator if you are including
Adaptive Server Anywhere in the transactions. This section describes how to
configure Enterprise Application Server to use DTC as its transaction
coordinator.
The component server in Enterprise Application Server is named Jaguar.

v To configure an Enterprise Application Server to use the Microsoft
DTC transaction model:
1 Ensure that your Jaguar server is running.
On Windows NT, the Jaguar server commonly runs as a service. To
manually start the installed Jaguar server that comes with Enterprise
Application Server 3.0, select Start➤Programs➤Sybase➤Jaguar
CTS➤Jaguar Server.
2 Start Jaguar Manager.
From the Windows NT desktop, select
Start➤Programs➤Sybase➤Jaguar CTS➤Jaguar Manager. The Jaguar
Manager runs inside the Sybase Central administration tool.
3 Connect to the Jaguar server from Jaguar Manager.
From the Sybase Central menu, choose Tools➤Connect➤Jaguar
Manager. In the connection dialog, enter jagadmin as the User Name,
leave the Password field blank, and enter a Host Name of localhost.
Click OK to connect.
4 Set the transaction model for the Jaguar server.


In the left pane, open the Servers folder. In the right pane, right-click
the server you wish to configure, and select Server Properties from the
drop-down menu. Click the Transactions tab, and choose Microsoft DTC
as the transaction model. Click OK to complete the operation.

Setting the component transaction attribute


In Enterprise Application Server you may implement a component that
carries out operations on more than one database. You assign a transaction
attribute to this component that defines how it participates in transactions.
The transaction attribute can have the following values:
♦ Not Supported The component’s methods never execute as part of a
transaction. If the component is activated by another component that is
executing within a transaction, the new instance’s work is performed
outside the existing transaction. This is the default.
♦ Supports Transaction The component can execute in the context of a
transaction, but a transaction is not required in order to execute the
component’s methods. If the component is instantiated directly by a base
client, Enterprise Application Server does not begin a transaction. If
component A is instantiated by component B, and component B is
executing within a transaction, component A executes in the same
transaction.
♦ Requires Transaction The component always executes in a
transaction. When the component is instantiated directly by a base client,
a new transaction begins. If component A is activated by component B,
and B is executing within a transaction, then A executes within the same
transaction; if B is not executing in a transaction, then A executes in a
new transaction.
♦ Requires New Transaction Whenever the component is instantiated,
a new transaction begins. If component A is activated by component B,
and B is executing within a transaction, then A begins a new transaction
that is unaffected by the outcome of B’s transaction; if B is not executing
in a transaction, then A executes in a new transaction.
For example, in the Sybase Virtual University sample application, included
with Enterprise Application Server as the SVU package, the
SVUEnrollment component enroll() method carries out two separate
operations (reserves a seat in a course, bills the student for the course). These
two operations need to be treated as a single transaction.
Microsoft Transaction Server provides the same set of attribute values.


v To set the transaction attribute of a component:


1 In Jaguar Manager, locate the component.
To find the SVUEnrollment component in the Jaguar sample
application, connect to the Jaguar server, open the Packages folder, and
open the SVU package. The components in the package are listed in the
right pane.
2 Set the transaction attribute for the desired component.
Right click the component, and select Component Properties from the
popup menu. Click the Transaction tab, and choose the transaction
attribute value from the list. Click OK to complete the operation.
The SVUEnrollment component is already marked as Requires
Transaction.
Once the component transaction attribute is set, you can carry out Adaptive
Server Anywhere operations from that component, and be assured of
transaction processing at the level you have specified.


Part Six

The Adaptive Server Family

Adaptive Server Anywhere is one member of the Adaptive Server family of


database products from Sybase. This part describes how to use Adaptive
Server Anywhere together with other members of the Adaptive Server family,
and particularly Adaptive Server Enterprise.

Chapter 32

Transact-SQL Compatibility

About this chapter
Transact-SQL is the dialect of SQL supported by Sybase Adaptive Server
Enterprise.
This chapter is a guide for creating applications that are compatible with both
Adaptive Server Anywhere and Adaptive Server Enterprise. It describes
Adaptive Server Anywhere support for Transact-SQL language elements and
statements, and for Adaptive Server Enterprise system tables, views, and
procedures.
Contents
Topic Page
An overview of Transact-SQL support 958
Adaptive Server architectures 961
Configuring databases for Transact-SQL compatibility 967
Writing compatible SQL statements 975
Transact-SQL procedure language overview 980
Automatic translation of stored procedures 983
Returning result sets from Transact-SQL procedures 984
Variables in Transact-SQL procedures 985
Error handling in Transact-SQL procedures 986


An overview of Transact-SQL support


Adaptive Server Anywhere supports a large subset of Transact-SQL, which
is the dialect of SQL supported by Sybase Adaptive Server Enterprise. This
chapter describes compatibility of SQL between Adaptive Server Anywhere
and Adaptive Server Enterprise.
Goals
The goals of Transact-SQL support in Adaptive Server Anywhere are as
follows:
♦ Application portability Many applications, stored procedures, and
batch files can be written for use with both Adaptive Server Enterprise
and Adaptive Server Anywhere databases.
♦ Data portability Adaptive Server Anywhere and Adaptive Server
Enterprise databases can exchange and replicate data between each other
with minimum effort.
The aim is to write applications to work with both Adaptive Server
Enterprise and Adaptive Server Anywhere. Existing Adaptive Server
Enterprise applications generally require some changes to run on an
Adaptive Server Anywhere database.
How Transact-SQL is supported Transact-SQL support in Adaptive Server Anywhere takes the following form:
♦ Many SQL statements are compatible between Adaptive Server
Anywhere and Adaptive Server Enterprise.
♦ For some statements, particularly in the procedure language used in
procedures, triggers, and batches, a separate Transact-SQL statement is
supported together with the syntax supported in previous versions of
Adaptive Server Anywhere. For these statements, Adaptive Server
Anywhere supports two dialects of SQL. In this chapter, we name those
dialects Transact-SQL and Watcom-SQL.
♦ A procedure, trigger, or batch is executed in either the Transact-SQL or
Watcom-SQL dialect. You must use control statements from one dialect
only throughout the batch or procedure. For example, each dialect has
different flow control statements.
The following diagram illustrates how the two dialects overlap.


[Diagram: the overlap between the two dialects]
♦ ASA-only statements ASA control statements, CREATE PROCEDURE statement, CREATE TRIGGER statement, ...
♦ Statements allowed in both servers SELECT, INSERT, UPDATE, DELETE, ...
♦ Transact-SQL-only statements Transact-SQL control statements, CREATE PROCEDURE statement, CREATE TRIGGER statement, ...

Similarities and differences Adaptive Server Anywhere supports a very high percentage of Transact-SQL
For example, Adaptive Server Anywhere supports all of the numeric
functions, all but one of the string functions, all aggregate functions, and all
date and time functions. As another example, Adaptive Server Anywhere
supports Transact-SQL outer joins (using =* and *= operators) and extended
DELETE and UPDATE statements using joins.
Further, Adaptive Server Anywhere supports a very high percentage of the
Transact-SQL stored procedure language (CREATE PROCEDURE and
CREATE TRIGGER syntax, control statements, and so on) and many, but
not all, aspects of Transact-SQL data definition language statements.
There are design differences in the architectural and configuration facilities
supported by each product. Device management, user management, and
maintenance tasks such as backups tend to be system-specific. Even here,
Adaptive Server Anywhere provides Transact-SQL system tables as views,
where the tables that are not meaningful in Adaptive Server Anywhere have
no rows. Also, Adaptive Server Anywhere provides a set of system
procedures for some of the more common administrative tasks.
This chapter looks first at some system-level issues where differences are
most noticeable, before discussing data manipulation and data definition
language aspects of the dialects where compatibility is high.
Transact-SQL only Some SQL statements supported by Adaptive Server Anywhere are part of
one dialect, but not the other. You cannot mix the two dialects within a
procedure, trigger, or batch. For example, Adaptive Server Anywhere
supports the following statements, but as part of the Transact-SQL dialect
only:
♦ Transact-SQL control statements IF and WHILE


♦ Transact-SQL EXECUTE statement


♦ Transact-SQL CREATE PROCEDURE and CREATE TRIGGER
statements
♦ Transact-SQL BEGIN TRANSACTION statement.
♦ SQL statements not separated by semicolons are part of a Transact-SQL
procedure or batch.

Adaptive Server Anywhere only Adaptive Server Enterprise does not support the following statements:
♦ control statements CASE, LOOP, and FOR
♦ Adaptive Server Anywhere versions of IF and WHILE
♦ CALL statement
♦ Adaptive Server Anywhere versions of the CREATE PROCEDURE,
CREATE FUNCTION, and CREATE TRIGGER statements.
♦ SQL statements separated by semicolons
Notes The two dialects cannot be mixed within a procedure, trigger, or batch. This
means that:
♦ You can include Transact-SQL-only statements together with statements
that are part of both dialects in a batch, procedure, or trigger.
♦ You can include statements not supported by Adaptive Server Enterprise
together with statements that are supported by both servers in a batch,
procedure, or trigger.
♦ You cannot include Transact-SQL-only statements together with
Adaptive Server Anywhere-only statements in a batch, procedure, or
trigger.
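To make the distinction concrete, the following sketch shows the same logic written once in each dialect. The procedure and table names (check_stock, product) are invented for illustration, and only one form of the procedure would exist in a given database.

```sql
-- Transact-SQL dialect: no semicolon separators, IF ... BEGIN ... END
CREATE PROCEDURE check_stock @min_qty INT
AS
IF @min_qty < 0
BEGIN
    PRINT 'Quantity must be non-negative'
END
ELSE
BEGIN
    SELECT id, quantity FROM product WHERE quantity < @min_qty
END

-- Watcom-SQL dialect: semicolon-separated statements, IF ... THEN ... END IF
CREATE PROCEDURE check_stock( IN min_qty INT )
BEGIN
    IF min_qty < 0 THEN
        MESSAGE 'Quantity must be non-negative';
    ELSE
        SELECT id, quantity FROM product WHERE quantity < min_qty;
    END IF;
END
```

Note that the control statements, the parameter declarations, and the use of statement separators all differ; mixing the two forms in one procedure is not allowed.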


Adaptive Server architectures


Adaptive Server Enterprise and Adaptive Server Anywhere are
complementary products, with architectures designed to suit their distinct
purposes. Adaptive Server Anywhere works well as a workgroup or
departmental server requiring little administration, and as a personal
database. Adaptive Server Enterprise works well as an enterprise-level server
for the largest databases.
This section describes architectural differences between Adaptive Server
Enterprise and Adaptive Server Anywhere. It also describes the Adaptive
Server Enterprise-like tools that Adaptive Server Anywhere includes for
compatible database management.

Servers and databases


The relationship between servers and databases is different in Adaptive
Server Enterprise than in Adaptive Server Anywhere.
In Adaptive Server Enterprise, each database exists inside a server, and each
server can contain several databases. Users can have login rights to the
server, and can connect to the server. They can then use each database on
that server for which they have permissions. System-wide system tables, held
in a master database, contain information common to all databases on the
server.
No master database in Adaptive Server Anywhere In Adaptive Server Anywhere, there is no level corresponding to the Adaptive Server Enterprise master database. Instead, each database is an independent entity, containing all of its system tables. Users can have connection rights to a database, not to the server. When a user connects, they connect to an individual database. There is no system-wide set of system tables maintained at a master database level. Each Adaptive Server Anywhere database server can dynamically load and unload multiple databases, and users can maintain independent connections on each.
Adaptive Server Anywhere provides tools in its Transact-SQL support and in
its Open Server support to allow some tasks to be carried out in a manner
similar to Adaptive Server Enterprise. For example, Adaptive Server
Anywhere provides an implementation of the Adaptive Server Enterprise
sp_addlogin system procedure that carries out the nearest equivalent action:
adding a user to a database.
$ For information about Open Server support, see "Adaptive Server
Anywhere as an Open Server" on page 989.


File manipulation statements Adaptive Server Anywhere does not support the Transact-SQL statements DUMP DATABASE and LOAD DATABASE. Adaptive Server Anywhere has its own CREATE DATABASE and DROP DATABASE statements with different syntax.

Device management
Adaptive Server Anywhere and Adaptive Server Enterprise use different
models for managing devices and disk space, reflecting the different uses for
the two products. While Adaptive Server Enterprise sets out a comprehensive
resource management scheme using a variety of Transact-SQL statements,
Adaptive Server Anywhere manages its own resources automatically, and its
databases are regular operating system files.
Adaptive Server Anywhere does not support Transact-SQL DISK statements,
such as DISK INIT, DISK MIRROR, DISK REFIT, DISK REINIT, DISK
REMIRROR, and DISK UNMIRROR.
$ For information on disk management, see "Working with Database Files" on page 785.

Defaults and rules


Adaptive Server Anywhere does not support the Transact-SQL CREATE
DEFAULT statement or CREATE RULE statement. The CREATE
DOMAIN statement allows you to incorporate a default and a rule (called a
CHECK condition) into the definition of a domain, and so provides similar
functionality to the Transact-SQL CREATE DEFAULT and CREATE
RULE statements.
In Adaptive Server Enterprise, the CREATE DEFAULT statement creates a
named default. This default can be used as a default value for columns by
binding the default to a particular column or as a default value for all
columns of a domain by binding the default to the data type using the
sp_bindefault system procedure.
The CREATE RULE statement creates a named rule which can be used to
define the domain for columns by binding the rule to a particular column or
as a rule for all columns of a domain by binding the rule to the data type. A
rule is bound to a data type or column using the sp_bindrule system
procedure.


In Adaptive Server Anywhere, a domain can have a default value and a CHECK condition associated with it, which are applied to all columns
defined on that data type. You create the domain using the CREATE
DOMAIN statement.
You can define default values and rules, or CHECK conditions, for
individual columns using the CREATE TABLE statement or the ALTER
TABLE statement.
$ For a description of the Adaptive Server Anywhere syntax for these
statements, see "SQL Statements" on page 377 of the book ASA Reference.
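As a sketch of how a single CREATE DOMAIN statement can stand in for a Transact-SQL CREATE DEFAULT and CREATE RULE pair, consider the following example. The domain and table names (order_qty, order_item) are invented for illustration.

```sql
-- The domain carries both a default value and a CHECK condition;
-- @value is a placeholder for whatever column uses the domain
CREATE DOMAIN order_qty INT
    DEFAULT 1
    CHECK ( @value > 0 );

-- Columns declared on the domain inherit the default and the CHECK condition
CREATE TABLE order_item (
    item_id  INTEGER NOT NULL,
    quantity order_qty
);
```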

System tables
In addition to its own system tables, Adaptive Server Anywhere provides a
set of system views that mimic relevant parts of the Adaptive Server
Enterprise system tables. You’ll find a list and individual descriptions in
"Views for Transact-SQL Compatibility" on page 1077 of the book ASA
Reference, which describes the system catalogs of the two products. This
section provides a brief overview of the differences.
The Adaptive Server Anywhere system tables rest entirely within each
database, while the Adaptive Server Enterprise system tables rest partly
inside each database and partly in the master database. The Adaptive Server
Anywhere architecture does not include a master database.
In Adaptive Server Enterprise, the database owner (user ID dbo) owns the
system tables. In Adaptive Server Anywhere, the system owner (user ID
SYS) owns the system tables. A dbo user ID owns the Adaptive Server
Enterprise-compatible system views provided by Adaptive Server Anywhere.

Administrative roles
Adaptive Server Enterprise has a more elaborate set of administrative roles
than Adaptive Server Anywhere. In Adaptive Server Enterprise there is a set
of distinct roles, although more than one login account on an Adaptive
Server Enterprise can be granted any role, and one account can possess more
than one role.
Adaptive Server Enterprise roles In Adaptive Server Enterprise, distinct roles include:
♦ System Administrator Responsible for general administrative tasks
unrelated to specific applications; can access any database object.
♦ System Security Officer Responsible for security-sensitive tasks in
Adaptive Server Enterprise, but has no special permissions on database
objects.


♦ Database Owner Has full permissions on objects inside the database he or she owns, can add users to a database and grant other users the permission to create objects and execute commands within the database.
♦ Data definition statements Permissions can be granted to users for
specific data definition statements, such as CREATE TABLE or
CREATE VIEW, enabling the user to create database objects.
♦ Object owner Each database object has an owner who may grant
permissions to other users to access the object. The owner of an object
automatically has all permissions on the object.
In Adaptive Server Anywhere, the following database-wide permissions have
administrative roles:
♦ The Database Administrator (DBA authority) has, like the Adaptive
Server Enterprise database owner, full permissions on all objects inside
the database (other than objects owned by SYS) and can grant other
users the permission to create objects and execute commands within the
database. The default database administrator is user ID DBA.
♦ The RESOURCE permission allows a user to create any kind of object
within a database. This is instead of the Adaptive Server Enterprise
scheme of granting permissions on individual CREATE statements.
♦ Adaptive Server Anywhere has object owners in the same way that
Adaptive Server Enterprise does. The owner of an object automatically
has all permissions on the object, including the right to grant
permissions.
For seamless access to data held in both Adaptive Server Enterprise and
Adaptive Server Anywhere, you should create user IDs with appropriate
permissions in the database (RESOURCE in Adaptive Server Anywhere, or
permission on individual CREATE statements in Adaptive Server
Enterprise) and create objects from that user ID. If you use the same user ID
in each environment, object names and qualifiers can be identical in the two
databases, ensuring compatible access.

Users and groups


There are some differences between the Adaptive Server Enterprise and
Adaptive Server Anywhere models of users and groups.
In Adaptive Server Enterprise, users connect to a server, and each user
requires a login ID and password to the server as well as a user ID for each
database they want to access on that server. Each user of a database can only
be a member of one group.


In Adaptive Server Anywhere, users connect directly to a database and do not require a separate login ID to the database server. Instead, each user
receives a user ID and password on a database so they can use that database.
Users can be members of many groups, and group hierarchies are allowed.
Both servers support user groups, so you can grant permissions to many
users at one time. However, there are differences in the specifics of groups in
the two servers. For example, Adaptive Server Enterprise allows each user to
be a member of only one group, while Adaptive Server Anywhere has no
such restriction. You should compare the documentation on users and groups
in the two products for specific information.
Both Adaptive Server Enterprise and Adaptive Server Anywhere have a
public group, for defining default permissions. Every user automatically
becomes a member of the public group.
Adaptive Server Anywhere supports the following Adaptive Server
Enterprise system procedures for managing users and groups.
$ For the arguments to each procedure, see "Adaptive Server Enterprise
system and catalog procedures" on page 988 of the book ASA Reference.

System procedure Description


sp_addlogin In Adaptive Server Enterprise, this adds a user to the
server. In Adaptive Server Anywhere, this adds a user to
a database.
sp_adduser In Adaptive Server Enterprise and Adaptive Server
Anywhere, this adds a user to a database. While this is a
distinct task from sp_addlogin in Adaptive Server
Enterprise, in Adaptive Server Anywhere, they are the
same.
sp_addgroup Adds a group to a database.
sp_changegroup Adds a user to a group, or moves a user from one group
to another.
sp_droplogin In Adaptive Server Enterprise, removes a user from the
server. In Adaptive Server Anywhere, removes a user
from the database.
sp_dropuser Removes a user from the database.
sp_dropgroup Removes a group from the database.

In Adaptive Server Enterprise, login IDs are server-wide. In Adaptive Server Anywhere, users belong to individual databases.
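For example, the procedures above can be combined as follows in Adaptive Server Anywhere. The user and group names are invented for illustration.

```sql
-- Add a user to the database; in Adaptive Server Anywhere,
-- sp_addlogin and sp_adduser perform the same task
CALL sp_addlogin( 'mary', 'mary_pwd' );

-- Create a group, then move the new user into it
CALL sp_addgroup( 'sales' );
CALL sp_changegroup( 'sales', 'mary' );
```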


Database object permissions The Adaptive Server Enterprise and Adaptive Server Anywhere GRANT and REVOKE statements for granting permissions on individual database objects are very similar. Both allow SELECT, INSERT, DELETE, UPDATE, and
REFERENCES permissions on database tables and views, and UPDATE
permissions on selected columns of database tables. Both allow EXECUTE
permissions to be granted on stored procedures.
For example, the following statement is valid in both Adaptive Server
Enterprise and Adaptive Server Anywhere:
GRANT INSERT, DELETE
ON TITLES
TO MARY, SALES
This statement grants permission to use the INSERT and DELETE
statements on the TITLES table to user MARY and to the SALES group.
Both Adaptive Server Anywhere and Adaptive Server Enterprise support the
WITH GRANT OPTION clause, allowing the recipient of permissions to
grant them in turn, although Adaptive Server Anywhere does not permit
WITH GRANT OPTION to be used on a GRANT EXECUTE statement.
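For example, the following statement, valid in both servers, lets user MARY query the TITLES table and pass that permission on to other users:

```sql
GRANT SELECT
ON TITLES
TO MARY
WITH GRANT OPTION
```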
Database-wide permissions Adaptive Server Enterprise and Adaptive Server Anywhere use different models for database-wide user permissions. These are discussed in "Users and groups" on page 964. Adaptive Server Anywhere employs DBA
permissions to allow a user full authority within a database. The System
Administrator in Adaptive Server Enterprise enjoys this permission for all
databases on a server. However, DBA authority on an Adaptive Server
Anywhere database is different from the permissions of an Adaptive Server
Enterprise Database Owner, who must use the Adaptive Server Enterprise
SETUSER statement to gain permissions on objects owned by other users.
Adaptive Server Anywhere employs RESOURCE permissions to allow a
user the right to create objects in a database. A closely corresponding
Adaptive Server Enterprise permission is GRANT ALL, used by a Database
Owner.


Configuring databases for Transact-SQL compatibility
You can eliminate some differences in behavior between Adaptive Server
Anywhere and Adaptive Server Enterprise by selecting appropriate options
when creating a database or, if you are working on an existing database,
when rebuilding the database. You can control other differences by
connection level options using the SET TEMPORARY OPTION statement
in Adaptive Server Anywhere or the SET statement in Adaptive Server
Enterprise.

Creating a Transact-SQL-compatible database


This section describes choices you must make when creating or rebuilding a
database.
Quick start Here are the steps you need to take to create a Transact-SQL-compatible
database. The remainder of the section describes which options you need to
set.

v To create a Transact-SQL compatible database (Sybase Central):


1 Start Sybase Central.
2 Choose Tools➤Adaptive Server Anywhere➤Create Database.
3 On the first five pages of the wizard, configure the available settings.
4 On the sixth page (called Choose the Database Attributes), click
Emulate Adaptive Server Enterprise.
5 Click Next to move to the next page. Continue to follow the instructions
in the wizard.

v To create a Transact-SQL compatible database (Command line):


♦ Enter the following command at a system prompt:
dbinit -b -c -k db-name.db

v To create a Transact-SQL compatible database (SQL):


1 Connect to any Adaptive Server Anywhere database.
2 Enter the following statement, for example, in Interactive SQL:


CREATE DATABASE 'db-name.db'
ASE COMPATIBLE
In this statement, ASE COMPATIBLE means compatible with Adaptive
Server Enterprise.

Make the database case-sensitive By default, string comparisons in Adaptive Server Enterprise databases are case-sensitive, while those in Adaptive Server Anywhere are case-insensitive.
When building an Adaptive Server Enterprise-compatible database using
Adaptive Server Anywhere, choose the case-sensitive option.
♦ If you are using Sybase Central, this option is in the Create Database
wizard on the Choose the Database Attributes page.
♦ If you are using the dbinit command-line utility, specify the -c
command-line switch.

Ignore trailing blanks in comparisons When building an Adaptive Server Enterprise-compatible database using Adaptive Server Anywhere, choose the option to ignore trailing blanks in comparisons.
♦ If you are using Sybase Central, this option is in the Create Database
wizard on the Choose the Database Attributes page.
♦ If you are using the dbinit command line utility, specify the -b
command-line switch.
When you choose this option, Adaptive Server Enterprise and Adaptive Server Anywhere consider the following two strings equal:
'ignore the trailing blanks '
'ignore the trailing blanks'
If you do not choose this option, Adaptive Server Anywhere considers the
two strings above different.
A side effect of choosing this option is that strings are padded with blanks when fetched by a client application.
Remove historical system views Older versions of Adaptive Server Anywhere employed two system views whose names conflict with the Adaptive Server Enterprise system views provided for compatibility. These views include SYSCOLUMNS and
SYSINDEXES. If you are using Open Client or JDBC interfaces, create your
database excluding these views. You can do this with the dbinit -k
command-line switch.
If you do not use this option when creating your database, the following two
statements return different results:
SELECT * FROM SYSCOLUMNS ;


SELECT * FROM dbo.syscolumns ;

v To drop the Adaptive Server Anywhere system views from an existing database:
1 Connect to the database as a user with DBA authority.
2 Execute the following statements:
DROP VIEW SYS.SYSCOLUMNS ;
DROP VIEW SYS.SYSINDEXES ;

Caution
Ensure that you do not drop the dbo.syscolumns or dbo.sysindexes
system view.

Setting options for Transact-SQL compatibility


You set Adaptive Server Anywhere database options using the SET OPTION
statement. Several database option settings are relevant to Transact-SQL
behavior.
Set the allow_nulls_by_default option By default, Adaptive Server Enterprise disallows NULLs on new columns unless you explicitly tell the column to allow NULLs. Adaptive Server Anywhere permits NULL in new columns by default, which is compatible with the SQL/92 ISO standard.
To make Adaptive Server Enterprise behave in a SQL/92-compatible
manner, use the sp_dboption system procedure to set the
allow_nulls_by_default option to true.
To make Adaptive Server Anywhere behave in a Transact-SQL-compatible
manner, set the allow_nulls_by_default option to OFF. You can do this
using the SET OPTION statement as follows:
SET OPTION PUBLIC.allow_nulls_by_default = 'OFF'

Set the quoted_identifier option By default, Adaptive Server Enterprise treats identifiers and strings differently than Adaptive Server Anywhere, which matches the SQL/92 ISO standard.
The quoted_identifier option is available in both Adaptive Server Enterprise
and Adaptive Server Anywhere. Ensure the option is set to the same value in
both databases, for identifiers and strings to be treated in a compatible
manner.


For SQL/92 behavior, set the quoted_identifier option to ON in both Adaptive Server Enterprise and Adaptive Server Anywhere.
For Transact-SQL behavior, set the quoted_identifier option to OFF in both
Adaptive Server Enterprise and Adaptive Server Anywhere. If you choose
this, you can no longer use identifiers that are the same as keywords,
enclosed in double quotes.
$ For more information on the quoted_identifier option, see
"QUOTED_IDENTIFIER option" on page 208 of the book ASA Reference.
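A brief sketch of the difference, assuming a hypothetical contact table that has a column named last name:

```sql
SET OPTION quoted_identifier = 'ON';
-- SQL/92 behavior: double quotes delimit identifiers,
-- so this selects the column called last name
SELECT "last name" FROM contact;

SET OPTION quoted_identifier = 'OFF';
-- Transact-SQL behavior: double quotes delimit strings,
-- so this selects the constant string 'last name' for every row
SELECT "last name" FROM contact;
```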
Set the automatic_timestamp option to ON Transact-SQL defines a timestamp column with special properties. With the automatic_timestamp option set to ON, the Adaptive Server Anywhere treatment of timestamp columns is similar to Adaptive Server Enterprise behavior.
With the automatic_timestamp option set to ON in Adaptive Server
Anywhere (the default setting is OFF), any new columns with the
TIMESTAMP data type that do not have an explicit default value defined
receive a default value of timestamp.
$ For information on timestamp columns, see "The special Transact-SQL timestamp column and data type" on page 972.
Set the string_rtruncation option Both Adaptive Server Enterprise and Adaptive Server Anywhere support the string_rtruncation option, which affects error message reporting when an INSERT or UPDATE string is truncated. Ensure that each database has the option set to the same value.
$ For more information on the STRING_RTRUNCATION option, see "STRING_RTRUNCATION option" on page 212 of the book ASA Reference.
$ For more information on database options for Transact-SQL
compatibility, see "Transact-SQL and SQL/92 compatibility options" on
page 163 of the book ASA Reference.

Case-sensitivity
Case sensitivity in databases refers to:
♦ Data The case sensitivity of the data is reflected in indexes, in the
results of queries, and so on.
♦ Identifiers Identifiers include table names, column names, and so on.
♦ User IDs and passwords Case sensitivity of user IDs and passwords is treated differently from other identifiers.


Case sensitivity of data You decide the case sensitivity of Adaptive Server Anywhere data in comparisons when you create the database. By default, Adaptive Server Anywhere databases are case-insensitive in comparisons, although data is always held in the case in which you enter it.
Adaptive Server Enterprise’s sensitivity to case depends on the sort order
installed on the Adaptive Server Enterprise system. Case sensitivity can be
changed for single-byte character sets by reconfiguring the Adaptive Server
Enterprise sort order.
Case sensitivity of identifiers Adaptive Server Anywhere does not support case-sensitive identifiers. In Adaptive Server Enterprise, the case sensitivity of identifiers follows the case sensitivity of the data.
In Adaptive Server Enterprise, domain names are case sensitive. In Adaptive
Server Anywhere, they are case insensitive, with the exception of Java data
types.
User IDs and passwords In Adaptive Server Anywhere, user IDs and passwords follow the case sensitivity of the data. The default user ID and password for case-sensitive databases are upper case DBA and SQL, respectively.
In Adaptive Server Enterprise, the case sensitivity of user IDs and passwords
follows the case sensitivity of the server.

Ensuring compatible object names


Each database object must have a unique name within a certain name space.
Outside this name space, duplicate names are allowed. Some database
objects occupy different name spaces in Adaptive Server Enterprise and
Adaptive Server Anywhere.
In Adaptive Server Anywhere, indexes and triggers are owned by the owner
of the table on which they are created. Index and trigger names must be
unique for a given owner. For example, while the tables t1 owned by user
user1 and t2 owned by user user2 may have indexes of the same name, no
two tables owned by a single user may have an index of the same name.
Adaptive Server Enterprise has a less restrictive name space for index names
than Adaptive Server Anywhere. Index names must be unique on a given
table, but any two tables may have an index of the same name. For
compatible SQL, stay within the Adaptive Server Anywhere restriction of
unique index names for a given table owner.
Adaptive Server Enterprise has a more restrictive name space on trigger
names than Adaptive Server Anywhere. Trigger names must be unique in the
database. For compatible SQL, you should stay within the Adaptive Server
Enterprise restriction and make your trigger names unique in the database.


The special Transact-SQL timestamp column and data type


Adaptive Server Anywhere supports the Transact-SQL special timestamp
column. The timestamp column, together with the tsequal system function,
checks whether a row has been updated.

Two meanings of timestamp


Adaptive Server Anywhere has a TIMESTAMP data type, which holds
accurate date and time information. It is distinct from the special
Transact-SQL TIMESTAMP column and data type.

Creating a Transact-SQL timestamp column in Adaptive Server Anywhere To create a Transact-SQL timestamp column, create a column that has the (Adaptive Server Anywhere) data type TIMESTAMP and a default setting of timestamp. The column can have any name, although the name timestamp is common.
For example, the following CREATE TABLE statement includes a Transact-
SQL timestamp column:
CREATE TABLE table_name (
column_1 INTEGER ,
column_2 TIMESTAMP DEFAULT TIMESTAMP
)
The following ALTER TABLE statement adds a Transact-SQL timestamp
column to the sales_order table:
ALTER TABLE sales_order
ADD timestamp TIMESTAMP DEFAULT TIMESTAMP
In Adaptive Server Enterprise a column with the name timestamp and no
data type specified automatically receives a TIMESTAMP data type. In
Adaptive Server Anywhere you must explicitly assign the data type yourself.
If you have the AUTOMATIC_TIMESTAMP database option set to ON,
you do not need to set the default value: any new column created with
TIMESTAMP data type and with no explicit default receives a default value
of timestamp. The following statement sets AUTOMATIC_TIMESTAMP to
ON:
SET OPTION PUBLIC.AUTOMATIC_TIMESTAMP='ON'

The data type of a timestamp column Adaptive Server Enterprise treats a timestamp column as a domain that is VARBINARY(8), allowing NULL, while Adaptive Server Anywhere treats a timestamp column as the TIMESTAMP data type, which consists of the date and time, with fractions of a second held to six decimal places.
When fetching from the table for later updates, the variable into which the
timestamp value is fetched should correspond to the column description.


Timestamping an existing table If you add a special timestamp column to an existing table, all existing rows have a NULL value in the timestamp column. To enter a timestamp value
(the current timestamp) for existing rows, update all rows in the table such
that the data does not change. For example, the following statement updates
all rows in the sales_order table, without changing the values in any of the
rows:
UPDATE sales_order
SET region = region
In Interactive SQL, you may need to set the TIMESTAMP_FORMAT option
to see the differences in values for the rows. The following statement sets the
TIMESTAMP_FORMAT option to display all six digits in the fractions of a
second:
SET OPTION TIMESTAMP_FORMAT='YYYY-MM-DD HH:NN:ss.SSSSSS'
If all six digits are not shown, some timestamp column values may appear to
be equal: they are not.
Using tsequal for updates With the tsequal system function you can tell whether a timestamp column has been updated or not.
For example, an application may SELECT a timestamp column into a
variable. When an UPDATE of one of the selected rows is submitted, it can
use the tsequal function to check whether the row has been modified. The
tsequal function compares the timestamp value in the table with the
timestamp value obtained in the SELECT. Identical timestamps mean there
are no changes. If the timestamps differ, the row has been changed since the
SELECT was carried out.
A typical UPDATE statement using the tsequal function looks like this:
UPDATE publishers
SET city = 'Springfield'
WHERE pub_id = '0736'
AND TSEQUAL(timestamp, '1995/10/25 11:08:34.173226')
The first argument to the tsequal function is the name of the special
timestamp column; the second argument is the timestamp retrieved in the
SELECT statement. In Embedded SQL, the second argument is likely to be a
host variable containing a TIMESTAMP value from a recent FETCH on the
column.

The special IDENTITY column


To create an IDENTITY column, use the following CREATE TABLE
syntax:


CREATE TABLE table-name (
   ...
   column-name numeric(n,0) IDENTITY NOT NULL,
   ...
)
where n is large enough to hold the value of the maximum number of rows
that may be inserted into the table.
The IDENTITY column stores sequential numbers, such as invoice numbers
or employee numbers, which are automatically generated. The value of the
IDENTITY column uniquely identifies each row in a table.
In Adaptive Server Enterprise, each table in a database can have one
IDENTITY column. The data type must be numeric with scale zero, and the
IDENTITY column should not allow nulls.
In Adaptive Server Anywhere, the IDENTITY column is a column default
setting. You can explicitly insert values that are not part of the sequence into
the column with an INSERT statement. Adaptive Server Enterprise does not
allow INSERTs into identity columns unless the identity_insert option is on.
In Adaptive Server Anywhere, you need to set the NOT NULL property
yourself and ensure that only one column is an IDENTITY column. Adaptive
Server Anywhere allows any numeric data type to be an IDENTITY column.
In Adaptive Server Anywhere, the IDENTITY column and the
AUTOINCREMENT default setting for a column are identical.
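For illustration, the following sketch creates a table with an IDENTITY column and inserts rows into it. The invoice table and its columns are hypothetical, not part of the sample database:

   CREATE TABLE invoice (
      invoice_id numeric(10,0) IDENTITY NOT NULL,
      amount     numeric(10,2) NULL,
      PRIMARY KEY ( invoice_id )
   )
   -- The value for invoice_id is generated automatically:
   INSERT INTO invoice ( amount ) VALUES ( 150.00 )
   -- In Adaptive Server Anywhere, a value outside the sequence
   -- can also be inserted explicitly:
   INSERT INTO invoice ( invoice_id, amount ) VALUES ( 500, 99.95 )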

Retrieving IDENTITY column values with @@identity


The first time you insert a row into the table, an IDENTITY column has a
value of 1 assigned to it. On each subsequent insert, the value of the column
increases by one. The value most recently inserted into an identity column is
available in the @@identity global variable.
The value of @@identity changes each time a statement attempts to insert a
row into a table.
♦ If the statement affects a table without an IDENTITY column,
@@identity is set to 0.
♦ If the statement inserts multiple rows, @@identity reflects the last value
inserted into the IDENTITY column.
This change is permanent. @@identity does not revert to its previous value if
the statement fails or if the transaction that contains it is rolled back.
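For example, the generated value can be read back immediately after an insert. The invoice table below is hypothetical:

   INSERT INTO invoice ( amount ) VALUES ( 24.00 )
   SELECT @@identity

The SELECT returns the value just assigned to the IDENTITY column by the preceding INSERT.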
$ For more information on the behavior of @@identity, see "@@identity
global variable" on page 257 of the book ASA Reference.

Chapter 32 Transact-SQL Compatibility

Writing compatible SQL statements


This section describes general guidelines for writing SQL for use on more
than one database management system, and discusses compatibility issues
between Adaptive Server Enterprise and Adaptive Server Anywhere at the
SQL statement level.

General guidelines for writing portable SQL


When writing SQL for use on more than one database management system,
make your SQL statements as explicit as possible. Even if more than one
server supports a given SQL statement, it may be a mistake to assume that
default behavior is the same on each system. General guidelines applicable to
writing compatible SQL include:
♦ Spell out all of the available options, rather than using default behavior.
♦ Use parentheses to make the order of execution within statements
explicit, rather than assuming identical default order of precedence for
operators.
♦ Use the Transact-SQL convention of an @ sign preceding variable
names for Adaptive Server Enterprise portability.
♦ Declare variables and cursors in procedures, triggers, and batches
immediately following a BEGIN statement. Adaptive Server Anywhere
requires this, although Adaptive Server Enterprise allows declarations to
be made anywhere in a procedure, trigger, or batch.
♦ Avoid using reserved words from either Adaptive Server Enterprise or
Adaptive Server Anywhere as identifiers in your databases.
♦ Assume large namespaces. For example, ensure that each index has a
unique name.
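As a sketch of the guideline on parentheses, the two queries below use hypothetical table and column names. The first relies on AND binding more tightly than OR; the second spells the grouping out:

   SELECT * FROM t WHERE a = 1 OR b = 2 AND c = 3
   SELECT * FROM t WHERE a = 1 OR ( b = 2 AND c = 3 )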

Creating compatible tables


Adaptive Server Anywhere supports domains which allow constraint and
default definitions to be encapsulated in the data type definition. It also
supports explicit defaults and CHECK conditions in the CREATE TABLE
statement. It does not, however, support named constraints or named
defaults.


NULL Adaptive Server Anywhere and Adaptive Server Enterprise differ in some
respects in their treatment of NULL. In Adaptive Server Enterprise, NULL is
sometimes treated as if it were a value.
For example, a unique index in Adaptive Server Enterprise cannot contain
rows that hold null and are otherwise identical. In Adaptive Server
Anywhere, a unique index can contain such rows.
By default, columns in Adaptive Server Enterprise default to NOT NULL,
whereas in Adaptive Server Anywhere the default setting is NULL. You can
control this setting using the allow_nulls_by_default option. Specify
explicitly NULL or NOT NULL to make your data definition statements
transferable.
$ For information on this option, see "Setting options for Transact-SQL
compatibility" on page 969.
Temporary tables You can create a temporary table by placing a pound sign (#) in front of a
CREATE TABLE statement. These temporary tables are Adaptive Server
Anywhere declared temporary tables, and are available only in the current
connection. For information about declared temporary tables in Adaptive
Server Anywhere, see "DECLARE LOCAL TEMPORARY TABLE
statement" on page 495 of the book ASA Reference.
Physical placement of a table is carried out differently in Adaptive Server
Enterprise and in Adaptive Server Anywhere. Adaptive Server Anywhere
supports the ON segment-name clause, but segment-name refers to an
Adaptive Server Anywhere dbspace.
$ For information about the CREATE TABLE statement, see "CREATE
TABLE statement" on page 466 of the book ASA Reference.

Writing compatible queries


There are two criteria for writing a query that runs on both Adaptive Server
Anywhere and Adaptive Server Enterprise databases:
♦ The data types, expressions, and search conditions in the query must be
compatible.
♦ The syntax of the SELECT statement itself must be compatible.
This section explains compatible SELECT statement syntax, and assumes
compatible data types, expressions, and search conditions. The examples
assume the QUOTED_IDENTIFIER setting is OFF: the default Adaptive
Server Enterprise setting, but not the default Adaptive Server Anywhere
setting.


Adaptive Server Anywhere supports the following subset of the Transact-SQL
SELECT statement.
Syntax SELECT [ ALL | DISTINCT ] select-list
...[ INTO #temporary-table-name ]
...[ FROM table-spec [ HOLDLOCK | NOHOLDLOCK ],
... table-spec [ HOLDLOCK | NOHOLDLOCK ], ... ]
...[ WHERE search-condition ]
...[ GROUP BY column-name, ... ]
...[ HAVING search-condition ]
...[ ORDER BY { expression | integer } [ ASC | DESC ], ... ]
Parameters select-list:
table-name.*
| *
| expression
| alias-name = expression
| expression as identifier
| expression as T_string
table-spec:
[ owner . ]table-name
...[ [ AS ] correlation-name ]
...[ ( INDEX index_name [ PREFETCH size ] [ LRU | MRU ] ) ]
alias-name:
identifier | 'string' | "string"
$ For a full description of the SELECT statement, see "SELECT
statement" on page 601 of the book ASA Reference.
Adaptive Server Anywhere does not support the following keywords and
clauses of the Transact-SQL SELECT statement syntax:
♦ SHARED keyword
♦ COMPUTE clause
♦ FOR BROWSE clause
♦ GROUP BY ALL clause
Notes ♦ The INTO table_name clause, which creates a new table based on the
SELECT statement result set, is supported only for declared temporary
tables where the table name starts with a #. Declared temporary tables
exist for a single connection only.
♦ Adaptive Server Anywhere does not support the Transact-SQL
extension to the GROUP BY clause allowing references to columns and
expressions that are not used for creating groups. In Adaptive Server
Enterprise, this extension produces summary reports.


♦ The FOR READ ONLY clause and the FOR UPDATE clause are
parsed, but have no effect.
♦ The performance parameters part of the table specification is parsed, but
has no effect.
♦ The HOLDLOCK keyword is supported by Adaptive Server Anywhere.
It makes a shared lock on a specified table or view more restrictive by
holding it until the completion of a transaction (instead of releasing the
shared lock as soon as the required data page is no longer needed,
whether or not the transaction has been completed). For the purposes of
the table for which the HOLDLOCK is specified, the query is carried
out at isolation level 3.
♦ The HOLDLOCK option applies only to the table or view for which it is
specified, and only for the duration of the transaction defined by the
statement in which it is used. Setting the isolation level to 3 applies a
holdlock for each select within a transaction. You cannot specify both a
HOLDLOCK and NOHOLDLOCK option in a query.
♦ The NOHOLDLOCK keyword is recognized by Adaptive Server
Anywhere, but has no effect.
♦ Transact-SQL uses the SELECT statement to assign values to local
variables:
SELECT @localvar = 42
The corresponding statement in Adaptive Server Anywhere is the SET
statement:
SET localvar = 42
However, using the Transact-SQL SELECT to assign values to variables
is supported inside batches.
♦ Adaptive Server Enterprise does not support the following clauses of the
SELECT statement syntax:
♦ INTO host-variable-list
♦ INTO variable-list.
♦ Parenthesized queries.
♦ Adaptive Server Enterprise uses join operators in the WHERE clause,
rather than the FROM clause and the ON condition for joins.
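For example, the INTO #temporary-table-name clause described in the notes above can be sketched as follows, using hypothetical table and column names:

   SELECT id, company_name
   INTO #active_customers
   FROM customer
   WHERE state = 'WA'

The #active_customers table exists only for the current connection.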


Compatibility of joins
In Transact-SQL, joins appear in the WHERE clause, using the following
syntax:
start of select, update, insert, delete, or subquery
FROM {table-list | view-list} WHERE [ NOT ]
...[ table-name. | view-name. ]column-name
join-operator
...[ table-name. | view-name. ]column-name
...[ { AND | OR } [ NOT ]
... [ table-name. | view-name. ]column-name
join-operator
[ table-name. | view-name. ]column-name
]...
end of select, update, insert, delete, or subquery
The join-operator in the WHERE clause may be any of the comparison
operators, or may be either of the following outer-join operators:
♦ *= Left outer join operator
♦ =* Right outer join operator.
Adaptive Server Anywhere supports the Transact-SQL outer-join operators
as an alternative to the native SQL/92 syntax. You cannot mix dialects within
a query. This rule also applies to views used by a query: an outer-join query
on a view must follow the dialect used by the view-defining query.
Adaptive Server Anywhere also provides a SQL/92 syntax for joins other
than outer joins, in which the joins are placed in the FROM clause rather
than the WHERE clause.
$ For information about joins in Adaptive Server Anywhere and in
SQL/92, see "Joins: Retrieving Data from Several Tables" on page 195, and
"FROM clause" on page 532 of the book ASA Reference.
$ For more information on Transact-SQL compatibility of joins, see
"Transact-SQL outer joins" on page 220.


Transact-SQL procedure language overview


The stored procedure language is the part of SQL used in stored
procedures, triggers, and batches.
Adaptive Server Anywhere supports a large part of the Transact-SQL stored
procedure language in addition to the Watcom-SQL dialect based on
SQL/92.

Transact-SQL stored procedure overview


The Adaptive Server Anywhere stored procedure language is based on the
ISO/ANSI draft standard, and differs from the Transact-SQL dialect in many
ways. Many of the concepts and features are similar, but the syntax is
different. Adaptive Server Anywhere support for Transact-SQL takes
advantage of the similar concepts by providing automatic translation between
dialects. However, a procedure must be written exclusively in one of the two
dialects, not in a mixture of the two.
Adaptive Server Anywhere support for Transact-SQL stored procedures
There are a variety of aspects to Adaptive Server Anywhere support for
Transact-SQL stored procedures, including:
♦ Passing parameters
♦ Returning result sets
♦ Returning status information
♦ Providing default values for parameters
♦ Control statements
♦ Error handling

Transact-SQL trigger overview


Trigger compatibility requires compatibility of trigger features and syntax.
This section provides an overview of the feature compatibility of Transact-
SQL and Adaptive Server Anywhere triggers.
Adaptive Server Enterprise executes triggers after the triggering statement
has completed: they are statement level, after triggers. Adaptive Server
Anywhere supports both row level triggers (which execute before or after
each row has been modified) and statement level triggers (which execute
after the entire statement).


Row-level triggers are not part of the Transact-SQL compatibility features,
and are discussed in "Using Procedures, Triggers, and Batches" on page 435.
Description of unsupported or different Transact-SQL triggers
Features of Transact-SQL triggers that are either unsupported or different in
Adaptive Server Anywhere include:
♦ Triggers firing other triggers   Suppose a trigger carries out an action
that would, if carried out directly by a user, fire another trigger.
Adaptive Server Anywhere and Adaptive Server Enterprise respond
slightly differently to this situation. By default in Adaptive Server
Enterprise, triggers fire other triggers up to a configurable nesting level,
which has the default value of 16. You can control the nesting level with
the Adaptive Server Enterprise nested triggers option. In Adaptive
Server Anywhere, triggers fire other triggers without limit unless there is
insufficient memory.
♦ Triggers firing themselves Suppose a trigger carries out an action
that would, if carried out directly by a user, fire the same trigger.
Adaptive Server Anywhere and Adaptive Server Enterprise respond
slightly differently to this situation. In Adaptive Server Anywhere, non-
Transact-SQL triggers fire themselves recursively, while Transact-SQL
dialect triggers do not fire themselves recursively.
By default in Adaptive Server Enterprise, a trigger does not call itself
recursively, but you can turn on the self_recursion option to allow
triggers to call themselves recursively.
♦ ROLLBACK statement in triggers Adaptive Server Enterprise
permits the ROLLBACK TRANSACTION statement within triggers, to
roll back the entire transaction of which the trigger is a part. Adaptive
Server Anywhere does not permit ROLLBACK (or ROLLBACK
TRANSACTION) statements in triggers because a triggering action and
its trigger together form an atomic statement.
Adaptive Server Anywhere does provide the Adaptive Server
Enterprise-compatible ROLLBACK TRIGGER statement to undo
actions within triggers. See "ROLLBACK TRIGGER statement" on
page 599 of the book ASA Reference.

Transact-SQL batch overview


In Transact-SQL, a batch is a set of SQL statements submitted together and
executed as a group, one after the other. Batches can be stored in command
files. The Interactive SQL utility in Adaptive Server Anywhere and the isql
utility in Adaptive Server Enterprise provide similar capabilities for
executing batches interactively.


The control statements used in procedures can also be used in batches.
Adaptive Server Anywhere supports the use of control statements in batches
and the Transact-SQL-like use of non-delimited groups of statements
terminated with a GO statement to signify the end of a batch.
For batches stored in command files, Adaptive Server Anywhere supports
the use of parameters in command files. Adaptive Server Enterprise does not
support parameters in command files.
$ For information on parameters, see "PARAMETERS statement" on
page 577 of the book ASA Reference.


Automatic translation of stored procedures


In addition to supporting Transact-SQL alternative syntax, Adaptive Server
Anywhere provides aids for translating statements between the Watcom-SQL
and Transact-SQL dialects. Functions returning information about SQL
statements and enabling automatic translation of SQL statements include:
♦ SQLDialect(statement) Returns Watcom-SQL or Transact-SQL.
♦ WatcomSQL(statement) Returns the Watcom-SQL syntax for the
statement.
♦ TransactSQL(statement) Returns the Transact-SQL syntax for the
statement.
These are functions, and so can be accessed using a select statement from
Interactive SQL. For example:
select SqlDialect('select * from employee')
returns the value Watcom-SQL.

Using Sybase Central to translate stored procedures


Sybase Central has facilities for creating, viewing, and altering procedures
and triggers.

v To translate a stored procedure using Sybase Central:


1 Connect to a database using Sybase Central, either as owner of the
procedure you wish to change, or as a DBA user.
2 Open the Procedures & Functions folder.
3 Right-click the procedure you want to translate and from the popup
menu choose one of the Open As commands, depending on the dialect
you want to use.
The procedure appears in the Code Editor in the selected dialect. If the
selected dialect is not the one in which the procedure is stored, the server
translates it to that dialect. Any untranslated lines appear as comments.
4 Rewrite any untranslated lines as needed.
5 When finished, choose File➤Save/Execute in Database to save the
translated version to the database. You can also export the text to a file
for editing outside Sybase Central.


Returning result sets from Transact-SQL procedures
Adaptive Server Anywhere uses a RESULT clause to specify returned result
sets. In Transact-SQL procedures, the column names or alias names of the
first query are returned to the calling environment.
Example of Transact-SQL procedure
The following Transact-SQL procedure illustrates how Transact-SQL stored
procedures return result sets:
CREATE PROCEDURE showdept (@deptname varchar(30))
AS
SELECT employee.emp_lname, employee.emp_fname
FROM department, employee
WHERE department.dept_name = @deptname
AND department.dept_id = employee.dept_id

Example of Watcom-SQL procedure
The following is the corresponding Adaptive Server Anywhere procedure:
CREATE PROCEDURE showdept( in deptname varchar(30) )
RESULT ( lastname char(20), firstname char(20) )
BEGIN
SELECT employee.emp_lname, employee.emp_fname
FROM department, employee
WHERE department.dept_name = deptname
AND department.dept_id = employee.dept_id
END

$ For more information about procedures and results, see "Returning
results from procedures" on page 466.
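Either version of the procedure can then be invoked from the corresponding dialect; the department name below is hypothetical:

   -- Watcom-SQL style call:
   CALL showdept( 'Sales' )
   -- Transact-SQL style call:
   EXECUTE showdept 'Sales'

In both cases the result set is returned to the calling environment.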


Variables in Transact-SQL procedures


Adaptive Server Anywhere uses the SET statement to assign values to
variables in a procedure. In Transact-SQL, values are assigned using the
SELECT statement with an empty table-list. The following simple procedure
illustrates how the Transact-SQL syntax works:
CREATE PROCEDURE multiply
@mult1 int,
@mult2 int,
@result int output
AS
SELECT @result = @mult1 * @mult2
This procedure can be called as follows:
CREATE VARIABLE @product int
go
EXECUTE multiply 5, 6, @product OUTPUT
go
The variable @product has a value of 30 after the procedure executes.
$ For more information on using the SELECT statement to assign
variables, see "Writing compatible queries" on page 976. For more
information on using the SET statement to assign variables, see "SET
statement" on page 605 of the book ASA Reference.


Error handling in Transact-SQL procedures


Default procedure error handling is different in the Watcom-SQL and
Transact-SQL dialects. By default, Watcom-SQL dialect procedures exit
when they encounter an error, returning SQLSTATE and SQLCODE values
to the calling environment.
Explicit error handling can be built into Watcom-SQL stored procedures
using the EXCEPTION statement, or you can instruct the procedure to
continue execution at the next statement when it encounters an error, using
the ON EXCEPTION RESUME statement.
When a Transact-SQL dialect procedure encounters an error, execution
continues at the following statement. The global variable @@error holds
the error status of the most recently executed statement. You can check this
variable following a statement to force return from a procedure. For example,
the following statement causes an exit if an error occurs.
IF @@error != 0 RETURN
When the procedure completes execution, a return value indicates the
success or failure of the procedure. This return status is an integer, and can
be accessed as follows:
DECLARE @status INT
EXECUTE @status = proc_sample
IF @status = 0
PRINT 'procedure succeeded'
ELSE
PRINT 'procedure failed'
The following table describes the built-in procedure return values and their
meanings:

Value Meaning
0 Procedure executed without error
–1 Missing object
–2 Data type error
–3 Process was chosen as deadlock victim
–4 Permission error
–5 Syntax error
–6 Miscellaneous user error
–7 Resource error, such as out of space
–8 Non-fatal internal problem

–9 System limit was reached
–10 Fatal internal inconsistency
–11 Fatal internal inconsistency
–12 Table or index is corrupt
–13 Database is corrupt
–14 Hardware error

The RETURN statement can be used to return other integers, with their own
user-defined meanings.
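For example, a sketch of a procedure that returns a user-defined code; the procedure name, parameter, and the code -100 are hypothetical:

   CREATE PROCEDURE check_qty @qty int
   AS
   IF @qty < 0
      -- hypothetical user-defined failure code
      RETURN -100
   RETURN 0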

Using the RAISERROR statement in procedures


The RAISERROR statement is a Transact-SQL statement for generating
user-defined errors. It has a similar function to the SIGNAL statement.
$ For a description of the RAISERROR statement, see "RAISERROR
statement" on page 584 of the book ASA Reference.
By itself, the RAISERROR statement does not cause an exit from the
procedure, but it can be combined with a RETURN statement or a test of the
@@error global variable to control execution following a user-defined
error.
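For example, the following hedged sketch combines RAISERROR with RETURN inside a Transact-SQL dialect procedure; the error number 99999 and the message text are hypothetical (user-defined error numbers must not conflict with system-defined ones):

   IF @qty < 0
   BEGIN
      RAISERROR 99999 'Quantity must not be negative'
      RETURN
   END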

If you set the ON_TSQL_ERROR database option to CONTINUE, the
RAISERROR statement no longer signals an execution-ending error. Instead,
the procedure completes and stores the RAISERROR status code and
message, and returns the most recent RAISERROR. If the procedure causing
the RAISERROR was called from another procedure, the RAISERROR
returns after the outermost calling procedure terminates.

You lose intermediate RAISERROR statuses and codes after the procedure
terminates. If, at return time, an error occurs along with the RAISERROR,
then the error information is returned and you lose the RAISERROR
information. The application can query intermediate RAISERROR statuses
by examining the @@error global variable at different execution points.

Transact-SQL-like error handling in the Watcom-SQL dialect


You can make a Watcom-SQL dialect procedure handle errors in a Transact-
SQL-like manner by supplying the ON EXCEPTION RESUME clause to the
CREATE PROCEDURE statement:


CREATE PROCEDURE sample_proc()
ON EXCEPTION RESUME
BEGIN
   ...
END
The presence of an ON EXCEPTION RESUME clause prevents explicit
exception handling code from being executed, so do not use an EXCEPTION
handler together with the ON EXCEPTION RESUME clause.

CHAPTER 33

Adaptive Server Anywhere as an Open Server

About this chapter Adaptive Server Anywhere can appear to client applications as an Open
Server. This feature enables Sybase Open Client applications to connect
natively to Adaptive Server Anywhere databases.
This chapter describes how to use Adaptive Server Anywhere as an Open
Server, and how to configure Open Client and Adaptive Server Anywhere to
work together.
$ For information on developing Open Client applications for use with
Adaptive Server Anywhere, see "The Open Client Interface" on page 167 of
the book ASA Programming Interfaces Guide.
Contents
Topic Page
Open Clients, Open Servers, and TDS 990
Setting up Adaptive Server Anywhere as an Open Server 992
Configuring Open Servers 994
Characteristics of Open Client and jConnect connections 1000


Open Clients, Open Servers, and TDS


This chapter describes how Adaptive Server Anywhere fits into the Sybase
Open Client/Open Server client/server architecture. This section describes
the key concepts of this architecture, and provides the conceptual
background for the rest of the chapter.
If you simply wish to use a Sybase application with Adaptive Server
Anywhere, you do not need to know any details of Open Client, Open
Server, or TDS. However, an understanding of how these pieces fit together
may be helpful for configuring your database and setting up applications.
This section explains how the pieces fit together, and avoids any discussion
of the internal features of the pieces.
Open Clients and Open Servers
Adaptive Server Anywhere and other members of the Adaptive Server
family act as Open Servers. This means that you can develop client
applications using the Open Client libraries available from Sybase. Open
Client includes both the Client Library (CT-Library) and the older DB-
Library interfaces.
Tabular Data Stream
Open Clients and Open Servers exchange information using an application
protocol called the tabular data stream (TDS). All applications built using
the Sybase Open Client libraries are also TDS applications, because the
Open Client libraries handle the TDS interface. However, some applications
(such as Sybase jConnect) are TDS applications even though they do not use
the Sybase Open Client libraries; they communicate directly using the TDS
protocol.
While many Open Servers use the Sybase Open Server libraries to handle the
interface to TDS, some applications have a direct interface to TDS of their
own. Sybase Adaptive Server Enterprise and Adaptive Server Anywhere
both have internal TDS interfaces. They appear to client applications as an
Open Server, but do not use the Sybase Open Server libraries.
Programming Interfaces and application protocols
Adaptive Server Anywhere supports two application protocols. Open Client
applications and other Sybase applications such as Replication Server and
OmniConnect use TDS. ODBC and Embedded SQL applications use a
separate application protocol specific to Adaptive Server Anywhere.
TDS uses TCP/IP Application protocols such as TDS sit on top of lower level communications
protocols that handle network traffic. Adaptive Server Anywhere supports
TDS only over the TCP/IP network protocol. In contrast, the Adaptive Server
Anywhere-specific application protocol supports several network protocols
as well as a shared memory protocol designed for same-machine
communication.


Sybase applications and Adaptive Server Anywhere


The ability of Adaptive Server Anywhere to act as an Open Server enables
Sybase applications such as Replication Server and OmniConnect to work
with Adaptive Server Anywhere.
Replication Server support
The Open Server interface enables support for Sybase Replication Server:
Replication Server connects through the Open Server interface, enabling
databases to act as replicate sites in Replication Server installations.
For your database to act as a primary site in a Replication Server installation,
you must also use the Replication Agent for Sybase Adaptive Server
Anywhere, also called a Log Transfer Manager.
$ For information on the Replication Agent, see "Replicating Data with
Replication Server" on page 1003.
OmniConnect support
Sybase OmniConnect provides a unified view of disparate data within an
organization, allowing users to access multiple data sources without having
to know what the data looks like or where to find it. In addition,
OmniConnect performs heterogeneous joins of data across the enterprise,
enabling cross-platform table joins of targets such as DB2, Sybase Adaptive
Server Enterprise, Oracle, and VSAM.
Using the Open Server interface, Adaptive Server Anywhere can act as a
data source for OmniConnect.


Setting up Adaptive Server Anywhere as an Open Server
This section describes how to set up an Adaptive Server Anywhere server to
receive connections from Open Client applications.

System requirements
There are separate requirements at the client and server for using Adaptive
Server Anywhere as an Open Server.
Server-side requirements
You must have the following elements at the server side to use Adaptive
Server Anywhere as an Open Server:
♦ Adaptive Server Anywhere server components You must use the
network server (dbsrv7.exe) if you want to access an Open Server over a
network. You can use the personal server (dbeng7.exe) as an Open
Server only for connections from the same machine.
♦ TCP/IP You must have a TCP/IP protocol stack to use Adaptive Server
Anywhere as an Open Server, even if you are not connecting over a
network.

Client-side requirements
You need the following elements to use Sybase client applications to connect
to an Open Server (including Adaptive Server Anywhere):
♦ Open Client components The Open Client libraries provide the
network libraries your application needs to communicate via TDS, if
your application uses Open Client.
♦ jConnect If your application uses JDBC, you need jConnect and a
Java runtime environment.
♦ DSEdit You need dsedit, the directory services editor, to make server
names available to your Open Client application. On UNIX platforms,
this utility is called sybinit.

Starting the database server as an Open Server


If you wish to use Adaptive Server Anywhere as an Open Server, you must
ensure you start it using the TCP/IP protocol. By default, the server starts all
available communications protocols, but you can limit the protocols started
by listing them explicitly on the command line. For example, the following
command lines are both valid:


dbsrv7 -x tcpip,ipx asademo.db
dbsrv7 -x tcpip -n myserver asademo.db
The first command line uses both TCP/IP and IPX protocols, of which
TCP/IP is available for use by Open Client applications. The second line uses
only TCP/IP.
You can use the personal database server as an Open Server for
communications on the same machine because it supports the TCP/IP
protocol.
The server can serve other applications through the TCP/IP protocol or other
protocols using the Adaptive Server Anywhere-specific application protocol
at the same time as serving Open Client applications over TDS.
Port numbers Every application using TCP/IP on a machine uses a distinct TCP/IP port, so
network packets end up at the right application. The default port for Adaptive
Server Anywhere is port 2638. It is recommended that you use the default
port number, as Adaptive Server Anywhere has been granted that port
number by the Internet Assigned Numbers Authority (IANA). If you wish to
use a different port number, you can specify which one using the ServerPort
network option:
dbsrv7 -x tcpip(ServerPort=2629) -n myserver asademo.db

Open Client settings
To connect to this server, the interfaces file at the client machine must
contain an entry specifying the machine name on which the database server
is running, and the TCP/IP port it uses.
$ For details on setting up the client machine, see "Configuring Open
Servers" on page 994.


Configuring Open Servers


Adaptive Server Anywhere can communicate with other Adaptive Servers,
Open Server applications, and client software on the network. Clients can
talk to one or more servers, and servers can communicate with other servers
via remote procedure calls. For products to interact with one another, each
needs to know where the others reside on the network. This network service
information is stored in the interfaces file.

The interfaces file


The interfaces file is usually named sql.ini on PC operating systems and
interfaces, or interfac on UNIX operating systems.
Like an address book, the interfaces file lists the name and address of every
database server known to Open Client applications on your machine. When
you use an Open Client program to connect to a database server, the program
looks up the server name in the interfaces file and then connects to the server
using the address.
The name, location, and contents of the interfaces file differ between
operating systems. Also, the format of the addresses in the interfaces file
differs between network protocols.
When you install Adaptive Server Anywhere, the setup program creates a
simple interfaces file that you can use for local connections to Adaptive
Server Anywhere over TCP/IP. It is the System Administrator’s
responsibility to modify the interfaces file and distribute it to users so that
they can connect to Adaptive Server Anywhere over the network.
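As an illustration (the server name and address shown here are examples, not values the setup program necessarily writes), a minimal sql.ini entry for a local Adaptive Server Anywhere server listening on the default TCP/IP port might look like this:

```ini
; Hypothetical sql.ini entry: one section per server name,
; with the address given as protocol,machine,port
[myserver]
query=NLWNSCK,localhost,2638
master=NLWNSCK,localhost,2638
```

The sections that follow show how to create and edit such entries with the dsedit utility rather than by hand.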

Using the DSEDIT utility


The dsedit utility is a Windows 95/98 and Windows NT utility that allows
you to configure the interfaces file (sql.ini). The following sections explain
how to use the dsedit utility to configure the interfaces file.
$ These sections describe how to use dsedit for those tasks required for
Adaptive Server Anywhere. It is not complete documentation for the dsedit
utility. For more information on dsedit, see the Utility Programs book for
your platform, included with other Sybase products.

Starting dsedit
The dsedit executable is in the SYBASE\bin directory, which is added to your
path on installation. You can start dsedit either from the command line or
from the Windows Explorer in the standard fashion.
When you start dsedit, the Select Directory Service window appears.

Opening a Directory Services session


The Select Directory Service window allows you to open a session with a
directory service. You can open a session to edit the interfaces file (sql.ini),
or any directory service that has a driver listed in the libtcl.cfg file.

v To open a session:
♦ Click the local name of the directory service you want to connect to, as
listed in the DS Name box, and click OK.
For Adaptive Server Anywhere, select the Interfaces Driver.

SYBASE environment variable must be set


The dsedit utility uses the SYBASE environment variable to locate the
libtcl.cfg file. If the SYBASE environment variable is incorrect, dsedit
cannot locate the libtcl.cfg file.

You can add, modify, or delete entries for servers, including Adaptive Server Anywhere servers, in the dsedit session window.

Adding a server entry

v To add a server entry:


1 Choose Add from the Server Object menu. The Input Server Name
window appears.
2 Type a server name in the Server Name box, and click OK to enter the
server name.
The server entry appears in the Server box. To specify the attributes of the
server, you must modify the entry.

Server entry name need not match server command-line name
The server name entered here does not need to match the name provided on the Adaptive Server Anywhere command line. The server address, not the server name, identifies and locates the server. The server name field is purely an identifier for Open Client. For Adaptive Server Anywhere, if the server has more than one database loaded, the dsedit server name entry identifies which database to use.

Adding or changing the server address


Once you have entered a Server Name, you need to modify the Server
Address to complete the interfaces file entry.

v To enter a Server Address:


1 Select a server entry in the Server box.
2 Right-click the Server Address in the Attributes box.
3 Choose Modify Attribute from the popup menu. A window appears, showing the current value of the address. If no address has been entered, the box is empty.

4 Click Add. The Network Address for Protocol window appears.


5 Select NLWNSCK from the Protocol list box (this is the TCP/IP
protocol) and enter a value in the Network Address text box.

For TCP/IP, addresses take one of the following two forms:


♦ computer name,port number
♦ IP-address,port number

The address or computer name is separated from the port number by a comma.

Machine name
A name (or an IP address) identifies the machine on which the server is running. On Windows and Windows NT, you can find the machine name in Network Settings, in the Control Panel.
If your client and server are on the same machine, you must still enter the machine name. In this case, you can use localhost to identify the current machine.
Port number
The port number you enter must match the port specified on the Adaptive Server Anywhere database server command line, as described in "Starting the database server as an Open Server" on page 992. The default port number for Adaptive Server Anywhere servers is 2638. This number has been assigned to Adaptive Server Anywhere by the Internet Assigned Numbers Authority (IANA), and use of this port is recommended unless you have good reason to use another port.
The following are valid server address entries:
elora,2638
123.85.234.029,2638

Verifying the server address


You can verify your network connection using the Ping command from the
Server Object menu.

Database connections not verified


Verifying a network connection confirms that a server is receiving
requests on the machine name and port number specified. It does not
verify anything about database connections.

v To ping a server:
1 Ensure that the database server is running.
2 Click the server entry in the Server box of the dsedit session window.
3 Choose Ping from the Server Object menu. The Ping window appears.
4 Click the address you want to ping. Click Ping.
A message box appears, notifying you whether or not the connection is
successful. A message box for a successful connection states that both
Open Connection and Close Connection succeeded.

Renaming a server entry


You can rename server entries from the dsedit session window.

v To rename a server entry:


1 Select a server entry in the Server box.
2 Choose Rename from the Server Object menu. The Input Server Name
window appears.
3 Type a new name for the server entry in the Server Name box.
4 Click OK to make the change.

Deleting server entries


You can delete server entries from the dsedit session window.

v To delete a server entry:


1 Click a server entry in the Server box.
2 Choose Delete from the Server Object menu.

Configuring servers for JDBC


The JDBC connection address (URL) contains all the information required to
locate the server.
$ For information on the JDBC URL, see "Supplying a URL for the
server" on page 614.

Characteristics of Open Client and jConnect connections
When Adaptive Server Anywhere is serving applications over TDS, it
automatically sets relevant database options to values compatible with
Adaptive Server Enterprise default behavior. These options are set
temporarily, for the duration of the connection only. The client application
can override them at any time.
Default settings
The database options set on connection using TDS include:

Option Set to
ALLOW_NULLS_BY_DEFAULT OFF
ANSINULL OFF
ANSI_BLANKS ON
ANSI_INTEGER_OVERFLOW ON
AUTOMATIC_TIMESTAMP ON
CHAINED OFF
CONTINUE_AFTER_RAISERROR ON
DATE_FORMAT YYYY-MM-DD
DATE_ORDER MDY
ESCAPE_CHARACTER OFF
ISOLATION_LEVEL 1
FLOAT_AS_DOUBLE ON
QUOTED_IDENTIFIER OFF
TIME_FORMAT HH:NN:SS.SSS
TIMESTAMP_FORMAT YYYY-MM-DD HH:NN:SS.SSS
TSQL_HEX_CONSTANT ON
TSQL_VARIABLES ON

How the startup options are set
The default database options are set for TDS connections using a system procedure named sp_tsql_environment. This procedure sets the following options:
SET TEMPORARY OPTION TSQL_VARIABLES='ON';
SET TEMPORARY OPTION ANSI_BLANKS='ON';
SET TEMPORARY OPTION TSQL_HEX_CONSTANT='ON';
SET TEMPORARY OPTION CHAINED='OFF';
SET TEMPORARY OPTION QUOTED_IDENTIFIER='OFF';
SET TEMPORARY OPTION ALLOW_NULLS_BY_DEFAULT='OFF';
SET TEMPORARY OPTION AUTOMATIC_TIMESTAMP='ON';
SET TEMPORARY OPTION ANSINULL='OFF';
SET TEMPORARY OPTION CONTINUE_AFTER_RAISERROR='ON';
SET TEMPORARY OPTION FLOAT_AS_DOUBLE='ON';
SET TEMPORARY OPTION ISOLATION_LEVEL='1';
SET TEMPORARY OPTION DATE_FORMAT='YYYY-MM-DD';
SET TEMPORARY OPTION TIMESTAMP_FORMAT='YYYY-MM-DD HH:NN:SS.SSS';
SET TEMPORARY OPTION TIME_FORMAT='HH:NN:SS.SSS';
SET TEMPORARY OPTION DATE_ORDER='MDY';
SET TEMPORARY OPTION ESCAPE_CHARACTER='OFF'

Do not edit the sp_tsql_environment procedure


Do not alter the sp_tsql_environment procedure yourself. It is for system
use only.

The procedure sets options only for connections that use the TDS
communications protocol. This includes Open Client and JDBC connections
using jConnect. Other connections (ODBC and Embedded SQL) have the
default settings for the database.
You can change the options for TDS connections.

v To change the option settings for TDS connections:


1 Create a procedure that sets the database options you want. For example,
you could use a procedure such as the following:
CREATE PROCEDURE my_startup_procedure()
BEGIN
  IF connection_property('CommProtocol')='TDS' THEN
    SET TEMPORARY OPTION QUOTED_IDENTIFIER='OFF';
  END IF
END
This particular procedure example changes only the
QUOTED_IDENTIFIER option from the default settings.
2 Set the LOGIN_PROCEDURE option to the name of the new procedure:
SET OPTION LOGIN_PROCEDURE='dba.my_startup_procedure'
Future connections will use the procedure. You can configure the procedure differently for different user IDs.
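For example, a per-user option setting can give one user its own startup procedure while other connections keep the database-wide setting. This is a sketch only; the user name rep_user is hypothetical, and the procedure is the one created in step 1:

```sql
-- Hypothetical example: assign a specific login procedure to user rep_user.
-- Other users continue to use the LOGIN_PROCEDURE set for the database.
SET OPTION "rep_user".LOGIN_PROCEDURE = 'dba.my_startup_procedure';
```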
$ For more information about database options, see "Database Options"
on page 155 of the book ASA Reference.

Chapter 34
Replicating Data with Replication Server

About this chapter
This chapter describes how you can use Replication Server to replicate data between an Adaptive Server Anywhere database and other databases. The other databases in the replication system may be Adaptive Server Anywhere databases or other kinds of databases.
Contents
Topic Page
Introduction to replication 1004
A replication tutorial 1007
Configuring databases for Replication Server 1017
Using the LTM 1020

Before you begin
Replication Server administrators who are setting up Adaptive Server Anywhere to take part in their Replication Server installation will find this chapter especially useful. You should be familiar with the Replication Server product and its documentation. This chapter does not describe Replication Server itself.
$ For information about Replication Server, including design, commands,
and administration, see your Replication Server documentation.

Introduction to replication
Data replication is the sharing of data among physically distinct databases.
Changes made to shared data at any one database are copied precisely to the
other databases in the replication system.
Data replication brings some key benefits to database users.
Data availability
Replication makes data available locally, rather than through potentially expensive, less reliable, and slower connections to a single central database. Since data is accessible locally, you always have access to it, even in the event of a long-distance network connection failure.
Response time
Replication improves response times for data requests for two reasons. First, retrieval rates are faster, since requests are processed on a local server without accessing a wide area network. Second, competition for processor time decreases, since local processing offloads work from a central database server.

Sybase replication technologies


Sybase provides two replication technologies for Adaptive Server Anywhere:
SQL Remote and Replication Server.
♦ SQL Remote SQL Remote is designed for two-way replication involving a consolidated database and large numbers of remote databases, typically including many mobile databases. Administration and resource requirements at the remote sites are minimal, and the typical time lag between the consolidated and remote databases is several hours.
♦ Replication Server Replication Server is designed for replication
among relatively small numbers of data servers, with a typical time lag
between primary data and replicate data of a few seconds, and generally
with an administrator at each site.
Each replication technology has its own documentation. This chapter
describes how to use Adaptive Server Anywhere with Replication Server.
$ For information about SQL Remote, see the book Data Replication
with SQL Remote.

Replicate sites and primary sites


In a Replication Server installation, the data to be shared among databases is arranged in replication definitions.
For each replication definition, there is a primary site, where changes to the
data in the replication occur. The sites that receive the data in the replication
are called replicate sites.

Replicate site components


You can use Adaptive Server Anywhere as a replicate site with no additional
components.
The following diagram illustrates the components required for Adaptive
Server Anywhere to participate in a Replication Server installation as a
replicate site.

(Diagram: changes flow from other servers, through Replication Server, into the Adaptive Server Anywhere database.)

♦ Replication Server receives data changes from primary site servers.


♦ Replication Server connects to Adaptive Server Anywhere to apply the
changes.
♦ Adaptive Server Anywhere makes the changes to the database.

Asynchronous procedure calls
Replication Server can use asynchronous procedure calls (APCs) at replicate sites to alter data at a primary site database. If you are using APCs, the above diagram does not apply. Instead, the requirements are the same as for a primary site.

Primary site components


To use an Adaptive Server Anywhere database as a primary site, you need the Log Transfer Manager (LTM), also known as the Replication Agent. The LTM supports Replication Server version 10.0 and later.
The following diagram illustrates the components required for Adaptive
Server Anywhere to participate in a Replication Server installation as a
primary site. The arrows in the diagram represent data flow.

(Diagram: changes flow from the Adaptive Server Anywhere database, through the LTM and Replication Server, to other servers.)

♦ The Adaptive Server Anywhere database server manages the database.


♦ The Adaptive Server Anywhere Log Transfer Manager connects to the
database. It scans the transaction log to pick up changes to the data, and
sends them to Replication Server.
♦ Replication Server sends the changes to replicate site databases.

A replication tutorial
This section provides a step-by-step tutorial describing how to replicate data
from a primary database to a replicate database. Both databases in the tutorial
are Adaptive Server Anywhere databases.
Replication Server assumed
This section assumes you have a running Replication Server. For more information about how to install or configure Replication Server, see the Replication Server documentation.
What is in the tutorial
This tutorial describes how to replicate only tables. For information about replicating procedures, see "Preparing procedures and functions for replication" on page 1022.
The tutorial uses a simple example of a (very) primitive office news system:
a single table with an ID column holding an integer, a column holding the
user ID of the author of the news item, and a column holding the text of the
news item. The id column and the author column make up the primary key.
Before you work through the tutorial, create a directory (for example,
c:\tutorial) to hold the files you create in the tutorial.

Set up the Adaptive Server Anywhere databases


This section describes how to create and set up the Adaptive Server
Anywhere databases for replication.
You can create a database using Sybase Central or the dbinit command-line
utility. For this tutorial, we use the dbinit command-line utility.

v To create the primary site database:


♦ Enter the following command from the tutorial directory you created
(for example c:\tutorial).
dbinit primedb
This creates a database file primedb.db in the current directory.

v To create the replicate site database:


♦ Enter the following command from the tutorial directory you created
(for example c:\tutorial).
dbinit repdb
This creates a database file repdb.db in the current directory.

What's next?
Next, start database servers running on these databases.

Start the database servers


You need to run the primary site database server, with the primary database
loaded.

v To start the primary site database server:


1 Change to the tutorial directory.
2 Enter the following command line to start a network database server running the primedb database, using the TCP/IP network communication protocol on the default port (2638):
dbsrv7 -x tcpip primedb.db

v To start the replicate site database server:


1 Change to the tutorial directory.
2 Enter the following command line to start a network database server
running the repdb database, but on a different port:
dbsrv7 -x tcpip(port=2639) -n REPSV repdb.db

What's next?
Next, make entries for each of the Adaptive Server Anywhere servers in an interfaces file, so that Replication Server can communicate with these database servers.

Set up the Open Servers in your system


You need to add a set of Open Servers to the list of Open Servers in your
system.
Adding Open Servers
Open Servers are defined in your interfaces file (sql.ini) using the dsedit utility. For NetWare and UNIX users, the interfaces file is named interfaces, and the utility is named sybinit.
$ For full instructions on how to add definitions to your interfaces file,
see "Configuring Open Servers" on page 994.
Required Open Servers
For each Open Server definition, you must provide a name and an address. Do not alter the other attributes of the definition. You need to add an Open Server entry for each of the following:

♦ The primary database Create an entry named PRIMEDB with address as follows:
♦ Protocol NLWNSCK
♦ Network address localhost,2638
♦ The replicate database Create an entry named REPDB with address
as follows:
♦ Protocol NLWNSCK
♦ Network address localhost,2639
♦ The LTM at the primary database This is necessary so you can shut
down the LTM properly. Create an entry named PRIMELTM with
address as follows:
♦ Protocol NLWNSCK
♦ Network address localhost,2640
♦ Your Replication Server This tutorial assumes you already have the
Replication Server Open Server defined.
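Taken together, the three Adaptive Server Anywhere entries above might appear in sql.ini roughly as follows. This is a sketch of the file contents for a Windows machine, not output of any tool; on UNIX the interfaces file uses a different layout:

```ini
; Hypothetical sql.ini entries for the tutorial servers
[PRIMEDB]
query=NLWNSCK,localhost,2638

[REPDB]
query=NLWNSCK,localhost,2639

[PRIMELTM]
query=NLWNSCK,localhost,2640
```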

What's next?
Next, confirm that the Open Servers are configured properly.

Confirm the Open Servers are configured properly


You can confirm that each Open Server is available by selecting Server Object➤Ping Server from the dsedit utility.
Alternatively, you can confirm that each Open Server is configured properly
by connecting to the database using an Open Client application such as the
isql utility.
To start isql running on the primary site database, type
isql -U dba -P sql -S PRIMEDB
The Open Client isql utility is not the same as the Adaptive Server Anywhere
Interactive SQL utility.

Add Replication Server information to the primary database


You need to add Replication Server tables and procedures to the primary site
database for the database to participate in a Replication Server installation.
You also need to create two user IDs for use by Replication Server. The SQL
command file rssetup.sql comes with Adaptive Server Anywhere, and carries
out these tasks.

The rssetup.sql command file must be run on the Adaptive Server Anywhere
server from the Interactive SQL utility.

v To run the rssetup script:


1 From Interactive SQL, connect to the Adaptive Server Anywhere
database as user ID DBA using password SQL.
2 Run the rssetup script using the following command:
read "path\rssetup.sql"
where path is the scripts subdirectory of your Adaptive Server Anywhere installation directory.
You can alternatively use File➤Run Script, and browse to the file.

Actions carried out by rssetup.sql
The rssetup.sql command file carries out the following functions:
♦ Creates a user named dbmaint, with password dbmaint and with DBA
permissions. This is the maintenance user name and password required
by Replication Server to connect to the primary site database.
♦ Creates a user named sa, with password sysadmin and with DBA
permissions. This is the user ID used by Replication Server when
materializing data.
♦ Adds sa and dbmaint to a group named rs_systabgroup.

Passwords and user IDs
While the hard-wired user IDs (dbmaint and sa) and passwords are useful for test and tutorial purposes, you should change the passwords, and perhaps also the user IDs, when running databases that require security. Users with DBA permissions have full authority in an Adaptive Server Anywhere database.
The user ID sa and its password must match those of the system administrator account on the Replication Server. Adaptive Server Anywhere does not currently accept a NULL password.
Permissions
The rssetup.sql script carries out a number of operations, including some permissions management. The permissions changes made by rssetup.sql are outlined here. You do not have to make these changes yourself.
For replication, ensure that the dbmaint and sa users can access replicated tables without explicitly specifying the owner. To do this, the table owner user ID must have group membership permissions, and the dbmaint and sa users must be members of the table owner's group. To grant group permissions, you must have DBA authority.
For example, if the table is owned by user DBA, you should grant group
permissions to the DBA user ID:
GRANT GROUP
TO DBA

You should then grant the dbmaint and sa users membership in the DBA
group. To grant group membership, you must either have DBA authority or
be the group ID.
GRANT MEMBERSHIP
IN GROUP "DBA"
TO dbmaint ;
GRANT MEMBERSHIP
IN GROUP "DBA"
TO sa ;

Create the table for the primary database


In this section, you create a single table in the primary site database, using
isql. First, make sure you are connected to the primary site database:
isql -U dba -P sql -S PRIMEDB
Next, create a table in the database:
CREATE TABLE news (
ID int,
AUTHOR char( 40 ) DEFAULT CURRENT USER,
TEXT char( 255 ),
PRIMARY KEY ( ID, AUTHOR )
)
go

Identifier case sensitivity


In Adaptive Server Anywhere, all identifiers are case insensitive. In
Adaptive Server Enterprise, identifiers are case sensitive by default. Even
in Adaptive Server Anywhere, ensure the case of your identifiers matches
in all parts of the SQL statement to ensure compatibility with Adaptive
Server Enterprise.
In Adaptive Server Anywhere, the database determines case sensitivity.
For example, passwords are case insensitive in case insensitive databases,
and case sensitive in case sensitive databases, while user IDs, being
identifiers, are case insensitive in all Adaptive Server Anywhere
databases.

For news to act as part of a replication primary site, you must set the
REPLICATE flag to ON for the table using an ALTER TABLE statement:
ALTER TABLE news
REPLICATE ON
go

This is equivalent to running the sp_setreplicate or sp_setreptable procedure on the table in Adaptive Server Enterprise. You cannot set REPLICATE ON in a CREATE TABLE statement.

Add Replication Server information to the replicate database


You should run the rssetup.sql command file on the replicate database in exactly the same manner as on the primary database. Also ensure that the dbmaint and sa users can access the replicated tables without explicitly specifying the table owner.
$ These tasks are the same as those carried out on the primary database.
For a complete explanation, see "Add Replication Server information to the
primary database" on page 1009.

Create the tables for the replicate database


The replicate site database needs to have tables to hold the data it receives.
Now is a good time to create these tables. As long as the database elements
are in place, no extra statements are necessary for them to act as a replicate
site in a Replication Server installation. In particular, you do not need to set
the REPLICATE flag to ON, which is necessary only at the primary site.
Replication Server allows replication between tables and columns with
different names. As a simple example, however, create a table in the replicate
database identical in definition to that in the primary database (except for the
REPLICATE flag, which is not set to ON in the replicate database). The
table creation statement for this is:
CREATE TABLE news (
ID int,
AUTHOR char( 40 ) DEFAULT CURRENT USER,
TEXT char( 255 ),
PRIMARY KEY ( ID, AUTHOR )
)
go
For the tutorial, the CREATE TABLE statement must be exactly the same as
that at the primary site.
You must ensure that the users dbmaint and sa can access this table without
specifying the owner name. Also, these user IDs must have SELECT and
UPDATE permissions on the table.
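Assuming the news table is owned by the DBA user ID, the required permissions could be granted with statements such as the following. This is a sketch: adjust the user IDs if you changed them in rssetup.sql, and note that dbmaint already holds DBA permissions in the tutorial setup.

```sql
-- Grant the permissions the text above requires on the replicate table.
GRANT SELECT, UPDATE ON news TO dbmaint;
GRANT SELECT, UPDATE ON news TO sa;
```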

Set up Replication Server


You need to carry out the following tasks on the Replication Server:
♦ Create a connection for the primary site data server.
♦ Create a connection for the replicate site data server.
♦ Create a replication definition.
♦ Create a subscription to the replication.
This section describes each of these tasks. It also describes starting the
Adaptive Server Anywhere LTM.

Create a connection for the primary site


Using isql, connect to Replication Server and create a connection to the
primary site Adaptive Server Anywhere database.
The following command creates a connection to the primedb database on
the PRIMEDB Open Server.
create connection to PRIMEDB.primedb
set error class rs_sqlserver_error_class
set function string class rs_sqlserver_function_class
set username dbmaint
set password dbmaint
with log transfer on
go
If you have changed the dbmaint user ID and password in the rssetup.sql
command file, make sure you replace the dbmaint username and password
in this command.
Replication Server does not actually use the primedb database name;
instead, the database name is read from the command line of the PRIMEDB
Open Server. You must, however, include a database name in the CREATE
CONNECTION statement to conform to the syntax.
$ For a full description of the create connection statement, see your
Replication Server Commands Reference.

Create a connection for the replicate site


Using isql, connect to Replication Server and create a connection to the
replicate site Adaptive Server Anywhere database.
The following command creates a connection to the repdb database on the
REPDB Open Server.

create connection to REPDB.repdb
set error class rs_sqlserver_error_class
set function string class rs_sqlserver_function_class
set username dbmaint
set password dbmaint
go
This statement differs from the primary site statement in that it contains no with log transfer on clause.
If you have changed the dbmaint user ID and password in the rssetup.sql
command file, make sure you replace the dbmaint username and password
in this command.

Create a replication definition


Using isql, connect to Replication Server and create a replication definition.
The following statement creates a replication definition for the news table on
the primedb database:
create replication definition news
with primary at PRIMEDB.primedb
( id int, author char(40), text char(255) )
primary key ( id, author )
go
For a full description of the CREATE REPLICATION DEFINITION
statement, see your Replication Server Commands Reference.

Configure and start the Adaptive Server Anywhere LTM


For replication to take place, the Adaptive Server Anywhere LTM must be
running against the primary site server. Before you start the Adaptive Server
Anywhere LTM, make sure it is properly configured by editing an LTM
configuration file.
Below is a sample configuration file for the primedb database. If you are
following the examples, you should make a copy of this file as primeltm.cfg:

#
# Configuration file for 'PRIMELTM'
#
SQL_server=PRIMEDB
SQL_database=primedb
SQL_user=sa
SQL_pw=sysadmin
RS_source_ds=PRIMEDB
RS_source_db=primedb
RS=your_rep_server_name_here
RS_user=sa
RS_pw=sysadmin
LTM_admin_user=dba
LTM_admin_pw=sql
LTM_charset=cp850
scan_retry=2
APC_user=sa
APC_pw=sysadmin
SQL_log_files=C:\TUTORIAL
If you have changed the user ID and password in the rssetup.sql command
file from sa and sysadmin, you should use the new user ID and password in
this configuration.
To start the Adaptive Server Anywhere LTM running on the primary site
server, enter the following command:
dbltm -S PRIMELTM -C primeltm.cfg
The connection information is in primeltm.cfg. In this command line,
PRIMELTM is the server name of the LTM.
You can find usage information about the Adaptive Server Anywhere LTM
by typing the following statement:
dbltm -?
You can run the Adaptive Server Anywhere LTM for Windows NT as an NT
service. For information on running programs as services, see "Running the
server outside the current session" on page 18.

Create a subscription for your replication


Using isql, connect to Replication Server and create a subscription for the
replication.
The following statement creates a subscription for the news replication defined in "Create a replication definition" on page 1014, with the repdb database as the replicate site.
create subscription NEWS_SUBSCRIPTION
for news
with replicate at REPDB.repdb
go
You have now completed your installation. Try replicating data to confirm
that the setup is working properly.

Enter data at the primary site for replication


You can now replicate data from the primary database to the replicate
database. As an example, connect to the primary database using the isql
utility, and enter a row in the news table.
insert news (id, text)
values (1, 'Test news item.')
commit
go
The Adaptive Server Anywhere LTM sends only committed changes to the
Replication Server. The data change is replicated next time the LTM polls
the transaction log.
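To confirm that the change arrived, you can connect to the replicate database and query the table. This is a sketch of an isql session, assuming the subscription has materialized and the LTM has scanned the transaction log; the inserted row should appear in the result:

```
isql -U dba -P sql -S REPDB
select id, author, text from news
go
```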
Tutorial complete
You have now completed the tutorial. The following section describes in more detail the steps you have carried out.

Configuring databases for Replication Server


Each Adaptive Server Anywhere database that participates in a Replication
Server installation needs to be configured before it can do so. Configuring
the database involves the following tasks:
♦ Selecting a secure user ID for the maintenance user and the name used
by Replication Server when materializing data.
♦ Setting up the database for Replication Server.
♦ Configuring the language and character set, where necessary.

Configuring the LTM
Each primary site Adaptive Server Anywhere database requires an LTM to send data to Replication Server. Each primary or replicate site Adaptive Server Anywhere database requires an Open Server definition so that Replication Server can connect to the database.
$ For information on configuring the LTM, see "Configuring the LTM" on page 1023.

Setting up the database for Replication Server


Once you have created your Adaptive Server Anywhere database, and created the necessary tables and so on within the database, you can make the database ready for use with Replication Server. You do this using a setup script supplied with the Adaptive Server Anywhere Replication Agent product. The script is named rssetup.sql.
When you need to run the setup script
You need to run the setup script at any Adaptive Server Anywhere database that is taking part in a Replication Server installation, whether as a primary or a replicate site.
What the setup script does
The setup script creates the user IDs required by Replication Server when connecting to the database. It also creates a set of stored procedures and tables used by Replication Server. The tables begin with the characters rs_, and the procedures begin with the characters sp_. The procedures include some that are important for character set and language configuration.

Prepare to run the setup script


Replication Server uses a special data server maintenance user login name
for each local database containing replicated tables. This allows Replication
Server to maintain and update the replicated tables in the database.


The maintenance user The setup script creates a maintenance user with name
dbmaint and password dbmaint. The maintenance user has DBA permissions in the
Adaptive Server Anywhere database, which allows it full control over the
database. For security reasons, you should change the maintenance user ID
and password.

v To change the maintenance user ID and password:


1 Open the rssetup.sql setup script in a text editor. The script is held in the
scripts subdirectory of your Adaptive Server Anywhere installation
directory.
2 Change all occurrences of the dbmaint user ID to the new maintenance
user ID of your choice.
3 Change the dbmaint password to the new maintenance user password of
your choice. The password occurs in the following place at the top of the
setup script file:
GRANT CONNECT TO dbmaint
IDENTIFIED BY dbmaint
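For example, after editing, the modified lines might read as follows (the user ID rsmaint and the password shown are hypothetical values of your choosing):

```sql
GRANT CONNECT TO rsmaint
IDENTIFIED BY Str0ngMaintPwd
```

Remember to make the same substitution at every other occurrence of dbmaint in the script.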

The materialization user ID When Replication Server connects to a database to
materialize the initial copy of the data in the replication, it does so using
the Replication Server system administrator account.
The Adaptive Server Anywhere database must have a user ID and password
that match the Replication Server system administrator user ID and
password. Adaptive Server Anywhere does not accept a NULL password.
The setup script assumes a user ID of sa and a password of sysadmin for the
Replication Server administrator. You should change this to match the actual
name and password.

v To change the system administrator user ID and password:


1 Open the rssetup.sql setup script in a text editor.
2 Change all occurrences of the sa user ID to match the Replication Server
system administrator user ID.
3 Change the sa password to match the Replication Server system
administrator password. The password has the initial setting of
sysadmin.


Run the setup script


Once you have modified the setup script to match the user IDs and
passwords appropriately, you can run the setup script to create the
maintenance and system administrator users in the Adaptive Server
Anywhere database.

v To run the setup script:


1 Start the Adaptive Server Anywhere database on an Adaptive Server
Anywhere database engine or server.
2 Start the Interactive SQL utility, and connect to the database as a user
with DBA authority. When you create an Adaptive Server Anywhere
database, it has a user with user ID DBA and password SQL, which has
DBA authority.
3 Run the script by entering the following command in the Interactive
SQL command window:
read path\rssetup.sql
where path is the path to the setup script.
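For example, assuming a hypothetical installation directory of c:\asa, the command would be:

```sql
read c:\asa\scripts\rssetup.sql
```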

Character set and language issues


Upon creation, each Adaptive Server Anywhere database is assigned a
specific collation (character set and sort order). Replication Server uses a
different set of identifiers for character sets and sort orders.
Set the character set and language parameters in the LTM configuration file.
If you are unsure of the character set label to specify, you can do the
following to determine the character set of the server:

v To determine the character set:


♦ Execute the following command:
exec sp_serverinfo csname

The language is one of the language labels listed in "Language label values"
on page 298.


Using the LTM


Since the Adaptive Server Anywhere LTM relies on information in the
Adaptive Server Anywhere transaction log, take care not to delete or damage
the log without storing backups (for example, using a transaction log mirror).
$ For more information about transaction log management, see the
section "Transaction log and backup management" on page 1027.
You cannot substitute an Adaptive Server Anywhere LTM for an Adaptive
Server Enterprise LTM since the transaction logs have different formats.
The Adaptive Server Anywhere LTM supports replication of inserts, updates,
and deletes, as well as replication of Transact-SQL-dialect stored procedure
calls.
The Adaptive Server Enterprise LTM sends data changes to the Replication
Server before they are committed. The Replication Server holds the changes
until a COMMIT statement arrives. By contrast, the Adaptive Server
Anywhere LTM sends only committed changes to Replication Server. For
long transactions, this may lead to some added delay in replication, since all
changes have to go through the Replication Server before distribution.

Configuring tables for replication


Adaptive Server Anywhere does not support the sp_setreplicate system
procedure. Instead, a table is identified as a primary data source using the
ALTER TABLE statement with a single clause:
ALTER TABLE table-name
SET REPLICATE ON

The effects of setting REPLICATE ON for a table Setting REPLICATE ON places
extra information into the transaction log whenever an UPDATE, INSERT, or
DELETE action occurs on the table. The Adaptive Server Anywhere Replication
Agent uses this extra information to submit the full pre-image of the row,
where required, to Replication Server for replication.
Even if only some of the data in the table needs to be replicated, all changes
to the table are submitted to Replication Server. It is Replication Server’s
responsibility to distinguish the data to be replicated from that which is not.


When you update, insert, or delete a row, the pre-image of the row is the
contents of the row before the action, and the post-image is the contents of
the row after the action. For INSERTS, only the post-image is submitted (the
pre-image is empty). For DELETES, the post-image is empty and only the
pre-image is submitted. For UPDATES, both the pre-image and the updated
values are submitted.
The following data types are supported for replication:

Data type                        Description (Open Client/Open Server type)
Exact integer data types         int, smallint, tinyint
Exact decimal data types         decimal, numeric
Approximate numeric data types   float (8-byte), real
Money data types                 money, smallmoney
Character data types             char(n), varchar(n), text
Date and time data types         datetime, smalldatetime
Binary data types                binary(n), varbinary(n), image
Bit data types                   bit

Notes Adaptive Server Anywhere supports data of zero length that is not NULL.
However, non-null long varchar and long binary data of zero length is
replicated to a replicate site as NULL.
If a primary table has columns with unsupported data types, you can replicate
the data if you create a replication definition using a compatible supported
data type. For example, to replicate a DOUBLE column, you could define
the column as FLOAT in the replication definition.
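As an illustrative sketch, a replication definition for the news table used in the tutorial might look like the following (the definition name, the varchar size, and the PRIMESV.primedb connection name are assumptions based on the sample configuration file later in this chapter; an ASA DOUBLE column would be declared here using the compatible float type — see your Replication Server documentation for the full syntax):

```sql
create replication definition news_repdef
with primary at PRIMESV.primedb
(id int, text varchar(255))
primary key (id)
```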
Side effects of setting REPLICATE ON for a table There can be a replication
performance hit for heavily updated tables. You could consider using
replicated procedures if you experience performance problems that may be
related to replication traffic, since replicated procedures send only the
call to the procedure instead of each individual action.
Since setting REPLICATE ON sends extra information to the transaction
log, this log grows faster than for a non-replicating database.
Minimal column replication definitions The Adaptive Server Anywhere LTM
supports the Replication Server replicate minimal columns feature. This
feature is enabled at Replication Server.
$ For more information on replicate minimal columns, see your
Replication Server documentation.


Preparing procedures and functions for replication


You can use stored procedures to modify the data in tables. Updates, inserts,
and deletes execute from within the procedure.
Replication Server can replicate procedures as long as they satisfy certain
conditions. The first statement in a procedure must carry out an update for
the procedure to be replicated. See your Replication Server documentation
for a full description of how Replication Server replicates procedures.
Adaptive Server Anywhere supports two dialects for stored procedures: the
Watcom-SQL dialect, based on the draft ISO/ANSI standard, and the
Transact-SQL dialect. You can use either dialect in writing stored procedures
for replication.
Function APC format The Adaptive Server Anywhere LTM supports the Replication
Server function APC format. To make use of these functions, set the
configuration parameter rep_func to on (the default is off).
The LTM interprets all replicated APCs as either table APCs or function
APCs. A single Adaptive Server Anywhere database cannot combine
function APCs with other table APCs.
$ For more information about replicate functions, see your Replication
Server documentation.
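The following LTM configuration file line enables the function APC format described above:

```
rep_func=on
```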

SQL statements for controlling procedure replication


A procedure can be configured to act as a replication source using the
ALTER PROCEDURE statement.
The following statement makes the procedure MyProc act as a replication
source.
ALTER PROCEDURE MyProc
REPLICATE ON
The following statement prevents the procedure MyProc from acting as a
replication source.
ALTER PROCEDURE MyProc
REPLICATE OFF
These statements have the same effect as running sp_setreplicate or
sp_setrepproc ’table’ on the procedure in Adaptive Server Enterprise. The
sp_setrepproc ’function’ syntax is not supported.


The effects of setting REPLICATE ON for a procedure When a procedure is used
as a replication data source, calling the procedure sends extra information
to the transaction log.
Asynchronous procedures
Procedures called at a replicate site database to update data at a primary site
database are asynchronous procedures. The procedure carries out no action
at the replicate site, but rather, the call to the procedure is replicated to the
primary site, where a procedure of the same name executes. This is called an
asynchronous procedure call (APC). The changes made by the APC are
then replicated from the primary to the replicate database in the usual
manner.
$ For information about APCs, see your Replication Server
documentation.
The APC_user and Support for APCs in Adaptive Server Anywhere is different from that in
APC support Adaptive Server Enterprise. In Adaptive Server Enterprise, each APC
executes using the user ID and password of the user who called the
procedure at the replicate site. In Adaptive Server Anywhere, however, the
transaction log does not store the password, and so it is not available at the
primary site. To work around this difference, the LTM configuration file
holds a single user ID with associated password, and this user ID (the
APC_user) executes the procedure at the primary site. The APC_user must,
therefore, have appropriate permissions at the primary site for each APC that
may be called.

Configuring the LTM


You control LTM behavior by modifying the LTM configuration file, which
is a plain text file created and edited using a text editor. The LTM
configuration file contains information the LTM needs, such as the Adaptive
Server Anywhere server it transfers a log from and the Replication Server it
transfers the log to. You need a valid configuration file to run the LTM.
Creating a configuration file You must create a configuration file, using a
text editor, before you can run the LTM. The -C LTM command-line switch
specifies the name of the configuration file to use, and has a default of
dbltm.cfg.
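For example, assuming the LTM executable is named dbltm and a configuration file named primeltm.cfg in the current directory, a hypothetical startup command would be:

```
dbltm -C primeltm.cfg
```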
Configuration file format The LTM configuration file shares the same format
as the Replication Server configuration file described in your Replication
Server Administration Guide. In summary:
♦ The configuration file contains one entry per line.


♦ An entry consists of a configuration parameter, followed by the =
character, followed by the value:
Entry=value
♦ Lines beginning with a # character are comments ignored by the LTM.
♦ The configuration file cannot contain leading blanks.
♦ Entries are case sensitive.
$ For the full list of available configuration file parameters, see "The
LTM configuration file" on page 113 of the book ASA Reference.
Example configuration file ♦ The following is a sample Adaptive Server
Anywhere LTM configuration file.
# This is a comment line
# Names are case sensitive.
SQL_user=sa
SQL_pw=sysadmin
SQL_server=PRIMESV
SQL_database=primedb
RS_source_ds=PRIMESV
RS_source_db=primedb
RS=MY_REPSERVER
RS_user=sa
RS_pw=sysadmin
LTM_admin_user=DBA
LTM_admin_pw=SQL
LTM_charset=cp850
scan_retry=2
SQL_log_files=e:\logs\backup
APC_user=sa
APC_pw=sysadmin

Replicating transactions in batches


Effects of buffering transactions The LTM allows buffering of replication
commands to Replication Server. Buffering the replication commands and
sending them in batches results in
fewer messages being sent, and can significantly increase overall throughput,
especially on high volume installations.
How batch mode works By default, the LTM buffers transactions. The buffer
flushes (the transactions are sent to Replication Server) when the buffer:
♦ Reaches maximum number of commands The batch_ltl_sz
parameter sets the maximum number of LTL (log transfer language)
commands stored in the buffer before it flushes. The default setting is
200.


♦ Reaches maximum memory used The batch_ltl_mem parameter sets the
maximum memory that the buffer can occupy before it flushes. The default
setting is 256K.
♦ Completes transaction log processing If there are no more entries
in the transaction log to process (that is, the LTM is up to date with all
committed transactions), then the buffer flushes.
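As an illustrative sketch, the following configuration file entries raise both thresholds (the values shown are hypothetical choices for a high-volume installation, not recommendations):

```
batch_ltl_sz=500
batch_ltl_mem=512K
```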

Turning off buffering You can turn off buffering of transactions by setting
the batch_ltl_cmds configuration parameter to off:
batch_ltl_cmds=off

Language and character set issues


Language and character set issues are an important consideration in many
replication sites. Each database and server in the system uses a specific
collation (character set and sorting order) for storing and ordering strings.
Adaptive Server Anywhere character set support is carried out in a different
manner to character set support in Adaptive Server Enterprise and other
Open Client/Open Server based applications.
This section describes how to configure the Adaptive Server Anywhere LTM
such that data in an Adaptive Server Anywhere database can be shared with
Replication Server and hence with other databases.
The LTM automatically uses the default Open Client/Open Server language,
sort order, and character set. You can override these defaults by adding
entries to the LTM configuration file.

Open Client/Open Server collations


Adaptive Server Enterprise, Replication Server, and other Open Client/Open
Server applications share a common means of managing character sets.
$ For information on Open Client/Open Server character set support, see
the chapter "Configuring Character Sets, Sort Orders, and Message
Language" in the Adaptive Server Enterprise Administration Guide. For
more information about character set issues in Replication Server, see the
chapter "International Replication Design Considerations" in the Replication
Server Design Guide.
This section provides a brief overview of Open Client/Open Server character
set support.


Internationalization files Files that support data processing in a particular
language are called internationalization files. Several types of
internationalization files come with Adaptive Server Enterprise and other
Open Client/Open Server applications.
There is a directory named charsets under your Sybase directory. The charsets
directory has a set of subdirectories, including one for each character set
available to you. Each character set subdirectory contains a set of files, as
described in the following table.

File          Description
charset.loc   Character set definition files that define the lexical
              properties of each character, such as alphanumeric,
              punctuation, operand, upper or lower case.
*.srt         Defines the sort order for alphanumeric and special characters.
*.xlt         Terminal-specific character translation files for use with
              utilities.

Character set settings in the LTM configuration file


The following settings in the LTM configuration file refer to character set issues:
♦ LTM_charset The character set for the LTM to use. You can specify
any Sybase-supported character set.
♦ LTM_language The language used by the LTM to print its messages
to the error log and to its clients. You can choose any language to which
the LTM has been localized, as long as it is compatible with the LTM
character set.
The Adaptive Server Anywhere LTM has been localized to several
languages.
Notes Character set In an Open Client/Open Server environment, an LTM
should use the same character set as the data server and Replication Server
attached to it.

Adaptive Server Anywhere character sets are specified differently than Open
Client/Open Server character sets. Consequently, the requirement is that the
Adaptive Server Anywhere character set be compatible with the LTM
character set.

Language The locales.dat file in the locales subdirectory of the Sybase
release directory contains valid map settings. However, the LTM output
messages in the user interface are currently available in those languages to
which the LTM has been localized.


Sort order All sort orders in your replication system should be the same.
You can find the default entry for your platform in the locales.dat file in the
locales subdirectory of the Sybase release directory.
Example ♦ The following settings are valid for a Japanese installation:
LTM_charset=SJIS
LTM_language=Japanese

Transaction log and backup management


One of the differences between the Adaptive Server Enterprise LTM and the
Adaptive Server Anywhere LTM is that while the Adaptive Server
Enterprise LTM depends on a temporary recovery database for access to old
transactions, the Adaptive Server Anywhere LTM depends on access to old
transaction logs. No temporary recovery database exists for the Adaptive
Server Anywhere LTM.
Replication depends on access to operations in the transaction log, and for
Adaptive Server Anywhere primary site databases, sometimes access to old
transaction logs. This section describes how to set up backup procedures at
an Adaptive Server Anywhere primary site to ensure proper access to old
transaction logs.
Consequences of lost transaction logs Good backup practices at Adaptive
Server Anywhere primary database sites are crucial. A lost transaction log
could mean rematerializing replicate site databases. At primary database
sites, a transaction log mirror is recommended. For information on
transaction log mirrors and other backup procedures, see the Adaptive Server
Anywhere User's Guide.
The LTM configuration file contains a directory entry, which points to the
directory where backed up transaction logs are kept. This section describes
how to set up a backup procedure to ensure that such a directory stays in
proper shape.
Backup utility options With the Backup utility, you have the option of
renaming the transaction log on backup and restart. For the DBBACKUP
command-line utility, this is the -r command-line switch. It is recommended
that you use this option when backing up the consolidated database and
remote database transaction logs.
For example, consider a database named primedb.db, in directory c:\prime,
with a transaction log in directory d:\primelog\primedb.log. Backing up this
transaction log to a directory e:\primebak using the rename and restart option
carries out the following tasks:
1 Backs up the transaction log, creating a backup file
e:\primebak\primedb.log.


2 Renames the existing transaction log to d:\primelog\YYMMDD.lnn, where
nn is the lowest available integer, starting at 00.
3 Starts a new transaction log, as d:\primelog\primedb.log.
After several backups, the directory d:\primelog will contain a set of
sequential transaction logs. The log directory should not contain any
transaction logs other than the sequence of logs generated by this backup
procedure.
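As a sketch of this procedure, the following dbbackup command applies the rename-and-restart option to the example database (the connection parameters shown are hypothetical; check the dbbackup documentation for the exact switches in your version):

```
dbbackup -r -c "eng=PRIMESV;dbn=primedb;uid=DBA;pwd=SQL" e:\primebak
```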

Using the DELETE_OLD_LOGS option


The default for the DELETE_OLD_LOGS Adaptive Server Anywhere database option
is OFF. If you set the option to ON, then the LTM automatically
deletes the old transaction logs when Replication Server no longer needs
access to the transactions. This option can help to manage disk space in
replication setups.
You set the DELETE_OLD_LOGS option for the PUBLIC group:
SET OPTION PUBLIC.DELETE_OLD_LOGS = ’ON’
$ For more information, see "DELETE_OLD_LOGS option" on
page 185 of the book ASA Reference.

The Unload utility and replication


If a database participates in replication, care must be taken when unloading
and reloading in order to avoid needing to re-materialize the database.
Replication is based on the transaction log, and unloading and reloading a
database deletes the old transaction log. For this reason, good backup
practices are especially important when participating in replication.

Replicating an entire database


Adaptive Server Anywhere provides a shortcut for replicating an entire
database, so you don’t have to set each table in the database as a replicated
table.
You can set a PUBLIC database option called REPLICATE_ALL using the
SET OPTION statement. You can designate a whole database for replication
using the following command:
SET OPTION PUBLIC.Replicate_all=’ON’
You require DBA authority to change this and other PUBLIC option settings.
You must restart the database for the new setting to take effect. The
REPLICATE_ALL option has no effect on procedures.


$ For more information, see "REPLICATE_ALL option" on page 209 of


the book ASA Reference.

Stopping the LTM


You can shut down the LTM from the user interface in Windows NT, or in
other circumstances by issuing a command.

v To stop the LTM in Windows NT, when the LTM is not running as a
service:
♦ Click SHUTDOWN on the user interface.

v To stop the LTM by issuing a command:


1 Connect to the LTM from isql using the LTM_admin_user login name
and password in the LTM configuration file. The user ID and password
are case sensitive.
2 Stop the LTM using the SHUTDOWN statement.
Example ♦ The following statements connect isql to the LTM PRIMELTM, and
shut it down:
isql -SPRIMELTM -UDBA -PSQL
1> shutdown
2> go

P A R T S E V E N

Appendixes

These appendixes describe the dialog boxes and property sheets you may
encounter while using Sybase Central.

A P P E N D I X A

Dialog Box Descriptions

This chapter provides descriptions of all the dialog boxes you can access in
Sybase Central.
$ For descriptions of the Adaptive Server Anywhere property sheets, see
"Property Sheet Descriptions" on page 1061.
Contents
Topic Page
Introduction to dialog boxes 1034
Dialogs accessed through the File menu 1035
Dialogs accessed through the Tools menu 1045
Debugger windows 1053


Introduction to dialog boxes


You can find most of the configurable settings in Sybase Central in dialog
boxes, which can be accessed either through the File menu or the Tools
menu. If you have different plug-ins installed in Sybase Central, each plug-in
may provide different menu items. For more information about the Sybase
Central viewer, see the Sybase Central online Help.
The File menu contains commands related to the objects displayed in the
main Sybase Central window. These menu items change depending on which
object you select. For example, if you select a table, the File menu shows
commands and options related to tables. Likewise, if you select a column, the
File menu changes to show column options. All of these menu items can also
be accessed in popup menus when you right-click an object.
The Tools menu contains commands related to connecting, disconnecting,
plug-ins, and viewer options. These menu items always stay visible,
regardless of the objects you click in the main window.
Sybase Central also provides many Property dialogs (also called property
sheets). These dialogs appear in the File menu (or the popup menu) when
you select an object that has configurable properties; for descriptions of these
property sheets, see "Property Sheet Descriptions" on page 1061.
$ See also
♦ "Dialogs accessed through the File menu" on page 1035
♦ "Dialogs accessed through the Tools menu" on page 1045


Dialogs accessed through the File menu


The File menu contains commands related to the objects you select in the
main Sybase Central window. These menu items change depending on which
object you select.
In this chapter, the dialog box descriptions for the File menu are divided into
groups of similar objects. Because the Properties dialog is used for multiple
objects, it is described in its own section.

Servers and databases


When you select a database server or a database in the main window of
Sybase Central, different menu items appear in the File menu (or in a popup
menu if you right-click the object).
$ For more information about working servers and databases, see
"Running the Database Server" on page 3 and "Working with databases" on
page 115.
$ For descriptions of dialogs related to remote servers, see "Remote
Servers" on page 1043.

Start a Database dialog


The Start a Database dialog (accessed by File➤Start Database when you
have a server selected) lets you specify the database you want to start.
Dialog components
♦ Server name Shows the name of the server on which to start the
database.
♦ Database file Provides a place for you to type the full path and name
of the Adaptive Server Anywhere database file or write file on the server
machine. You can also click Browse to locate the file. Note: To specify a
database file on a machine other than the server, you must use a UNC
filename.
♦ Database name Provides a place for you to type the database name. If
you are starting a database, you can specify a new name for the database
to run as. If you don’t enter a name, it defaults to the database file name
without the extension.


♦ Stop the database when all connections are closed Shuts down the
database when the last connection to it is closed. Note: This option is
different from the –ga server switch, which auto-stops the database
server itself.
$ See also
♦ "Running the Database Server" on page 3
♦ "Working with databases" on page 115

Filter Objects dialog


The Filter Objects dialog (accessed by clicking File➤Filter Objects by
Owner when you have a database selected) lets you select the owners of the
objects you would like to manage.
Dialog components
♦ User and group list Lists all of the users and groups connected to the
database. You can specify owners by enabling the check box beside each
user or group.
$ See also
♦ "Working with databases" on page 115

Set Options dialog


The Set Options dialog (accessed by clicking File➤Set Options when you have a
database, user, group, or remote user selected) lets you configure different
option settings for the user or database.
Dialog components
♦ Name Displays the name of the current user or group.
♦ Show Provides a list of option types. For example, if you only want to
set the options of the database, you can choose Database Options to
show only database-related options in the dialog.
♦ Options list Shows option settings and defaults for the current user,
group or for the database as a whole. Once you have selected a setting,
you can use the controls at the side of the dialog.
♦ Value Shows the value of the currently selected setting.
♦ New Displays the New Permanent Public Option dialog when the
Public group is selected. In this new dialog, you can define new options
and values.


♦ Remove Now Removes the selected option from the list.


♦ Set Temporary Now Immediately sets the selected option to the new
value. Temporary values only last for the current Sybase Central session.
You can only set temporary values for the current user (the user who
you connected as in Sybase Central).
♦ Set Permanent Now Immediately sets the selected option to the new
value. This change lasts between sessions until explicitly changed again.
$ See also
♦ "Working with databases" on page 115
♦ "Managing User IDs and Permissions" on page 735

Log SQL Statements dialog


The Log SQL Statements dialog (accessed by clicking File➤Log SQL
Statements when you have a database selected) lets you log SQL statements
generated by Sybase Central to a window or a file.
Dialog components
♦ Log SQL statements to a window Logs SQL statements in a new
window that appears when you click OK in this dialog. You can keep
this window open for the duration of your current Sybase Central
session. When you close the window, you are prompted to save the log
to a file.
♦ Log SQL statements to a file Logs SQL statements to the file that
you specify in the text box below. You can search for an existing file by
clicking the Browse button.
♦ Wordwrap output Lets you specify the length of each line in the log
file if you enable the Log SQL Statements To A File option above. In
this file, each line wraps automatically to the next line after the number
of characters that you specify.
♦ Add date/time to output Includes the date and time of each SQL
statement occurrence in the log window or file.
♦ Log both SELECT statements and SQL commands All statements
and commands are included in the log window or file.
♦ Log only SELECT statements Only SELECT statements are included
in the log window or file.
♦ Log only SQL commands Only SQL commands are included in the
log window or file.
$ See also
♦ "SQL Statements" on page 377 of the book ASA Reference

Users and groups


When you select a user (including a remote user) or a group in Sybase
Central, different menu items appear in the File menu (or in a popup menu if
you right-click the object).
$ For more information about working with users and groups, see
"Managing User IDs and Permissions" on page 735.

Subscribe To Publication dialog


The Subscribe to Publication dialog (accessed by clicking File➤Subscribe
To when you have a remote user or group selected) lets you subscribe a user
to a SQL Remote publication.
Dialog components
♦ Name Shows the name of the user or group who is subscribing.
♦ Publication list Shows all publications in the database.
♦ Subscribe by Shows the SUBSCRIBE BY column or clause specified
for the publication. You can specify this column or clause when you
create the publication with the Add Publication wizard. You can also
specify this column or clause for existing publications on the Subscribe
Restriction tab of the property sheets for an article.
♦ Value Provides a place for you to type the value of the SUBSCRIBE
BY column or clause. This value is matched against the SUBSCRIBE
BY column or expression for rows in the table. The subscriber receives
all rows for which the value of the column or expression is equal to the
SUBSCRIBE BY value.
♦ Subscribe Subscribes the selected remote user or group to the
publication.
♦ Properties Opens the property sheet for the selected publication. On this
property sheet, you can view and edit articles and subscriptions.
$ See also
♦ "Publication design for Adaptive Server Anywhere" on page 397 of the
book Replication and Synchronization Guide


Set Options dialog


The Set Options dialog for users and groups (accessed by clicking File➤Set
Options when a user or group is selected) is the same as for servers and
databases; see "Set Options dialog" on page 1036 for a description.

Change Group to a Remote Group dialog


The Change Group to a Remote Group dialog (accessed by clicking
File➤Set Remote when you have a group selected) lets you change a group
to a remote group.
Dialog components
♦ Group name Shows the name of the currently selected group.
♦ Message Type Lets you select a message type for communicating
with the publisher.
♦ Address Provides a place for you to type the destination for
replication messages. Publishers and remote users each have their own
address.
♦ Send then close Sets the replication frequency so that the publisher’s
agent runs once, sends all pending messages to this remote group, and
then shuts down. The agent must be restarted each time the publisher
wants to send messages. This option is only useful when you are running
the message agent at a remote site.
♦ Send every Sets the replication frequency so that the publisher’s agent
runs continuously, sending messages to this remote group at the given
periodic interval. This option is useful at both consolidated and remote
sites.
♦ Send daily at Sets the replication frequency so that the publisher’s
agent runs continuously, sending messages to this remote group each
day at the given time. This option is particularly useful at remote sites.
$ See also
♦ "Working with message types" on page 512 of the book Replication and
Synchronization Guide
♦ "Granting and revoking remote permissions" on page 749
♦ "Managing SQL Remote permissions" on page 501 of the book
Replication and Synchronization Guide

1039

Change User to a Remote User dialog


The Change User to a Remote User dialog (accessed by clicking File➤Set
Remote when you have a user selected) lets you change a user to a remote
user.
Dialog components
♦ User name Shows the name of the currently selected user.
♦ Message Type Lets you select a message type for communicating
with the publisher.
♦ Address Provides a place for you to type the destination for
replication messages. Publishers and remote users each have their own
address.
♦ Send then close Sets the replication frequency so that the publisher’s
agent runs once, sends all pending messages to this remote user, and
then shuts down. The agent must be restarted each time the publisher
wants to send messages. This option is only useful when you are running
the message agent at a remote site.
♦ Send every Sets the replication frequency so that the publisher’s agent
runs continuously, sending messages to this remote user at the given
periodic interval. This option is useful at both consolidated and remote
sites.
♦ Send daily at Sets the replication frequency so that the publisher’s
agent runs continuously, sending messages to this remote user each day
at the given time. This option is particularly useful at remote sites.
$ See also
♦ "Working with message types" on page 512 of the book Replication and
Synchronization Guide
♦ "Granting and revoking remote permissions" on page 749
♦ "Managing SQL Remote permissions" on page 501 of the book
Replication and Synchronization Guide

Java Objects
When you select a Java object in the Java Objects folder in Sybase Central,
different menu items appear in the File menu (or in a popup menu if you
right-click the object).
$ For more information about Java objects, see "Introduction to Java in
the database" on page 514 and "Using Java in the Database" on page 549.

1040

Update Java Class dialog


The Update Java Class dialog (accessed by clicking File➤Update when you
have a Java class selected) lets you specify a Java class to be updated.
Dialog components
♦ Class name Shows the name of the Java class that is being updated.
♦ Location Shows the location of the Java class that you wish to update.
If you don’t know the location, you can click the Browse button to
search for it.
$ See also
♦ "Using Java in the Database" on page 549

Update JAR dialog


The Update JAR dialog (accessed by clicking File➤Update when you have a
JAR file selected) lets you specify a JAR file to be updated.
Dialog components
♦ JAR name Shows the name of the JAR file that is being updated.
♦ Location Shows the location of the JAR file that you wish to update.
If you don’t know the location, you can click the Browse button to
search for it.
♦ Install all classes in the JAR Indicates that you would like all classes
in this JAR installed.
♦ Install the following classes Lets you identify specific classes to
install. Separate each with a comma. You can click the Select button to
view a list of classes.
$ See also
♦ "Using Java in the Database" on page 549

Publications
When you select a publication in the SQL Remote folder in Sybase Central,
different menu items appear in the File menu (or in a popup menu if you
right-click the object).

1041

Sybase Central contains one dialog related to publications that is not accessed
through the File menu. You can open it by double-clicking Add Article in
the right pane of the viewer when a publication is open in the left pane;
it lets you define new articles for a publication.
$ For descriptions of dialogs related to remote users, which are also
located in the SQL Remote folder, see "Users and groups" on page 1038.
$ For more information about publications, see "Publication design for
Adaptive Server Anywhere" on page 397 of the book Replication and
Synchronization Guide.

Subscribe For dialog


The Subscribe For dialog (accessed by clicking File➤Subscribe For when
you have a publication selected) lets you subscribe a user or group to the
publication.
Dialog components
♦ Name Shows the name of the currently selected publication.
♦ All remote users or groups Shows all remote users and groups in the
database.
♦ Subscribe Subscribes the selected remote user or group to the
publication.
♦ Properties Opens the property sheet of the selected remote user or
group. On this property sheet, you can manage permissions, authorities,
membership, statistics, and other settings.
$ See also
♦ "Creating subscriptions" on page 436 of the book Replication and
Synchronization Guide
♦ "Property Sheet Descriptions" on page 1061

Table Properties
The Table Properties dialog (accessed by clicking File➤Properties when you
have a table selected in the Publications folder) lets you view and edit table
properties. This dialog is the same as for any table located in the Tables
folder; see "Table properties" on page 1072 for a description.

1042

Remote Servers
When you select a remote database server in the Remote Servers folder in
Sybase Central, different menu items appear in the File menu (or in a popup
menu if you right-click the object).
$ For descriptions of dialogs related to local servers or databases, see
"Servers and databases" on page 1035.

Create a New Remote Server wizard


To access this wizard, open the Remote Servers folder and double-click Add
Remote Server.
♦ On the first page, enter a name by which the remote server is identified
in the current database. This name is used when you create proxy tables
and external logins on the remote server.
♦ On the second page, select a server class from the dropdown list.
$ For information on server classes, see "Server Classes for Remote
Data Access" on page 925.
♦ On the third page, select ODBC or JDBC as your method for connecting
to the remote database server.
Add connection parameters. For ODBC data sources, type the name of
the data source. For JDBC access, enter a machine name or IP address
and a port number, separated by a colon.
♦ On the fourth page, select whether you wish the remote server to be a
read-only data source, or whether changes to remote data are permitted.
By default, changes are permitted.
♦ On the fifth page, add any external logins you wish to provide for the
remote server. You can also create external logins after the remote
server is created.

Add External Login dialog


The Add External Login dialog (accessed by clicking File➤Add External
Login when you have a remote server selected) lets you define external login
settings. External logins are alternate login names and passwords to be used
when communicating with a remote server.
Dialog components
♦ Local user Lists all of the local users; select the one for whom you
want to create an external login.

1043

♦ External login Lets you enter the login id to be used by the local user.
♦ External password Lets you enter the external login password to be
used by the local user.
♦ Confirm password Lets you confirm that you have entered the
external login password correctly.
$ See also
♦ "Working with external logins" on page 903

Create a New Proxy Table wizard


You can access the Create a New Proxy Table wizard by clicking File➤Add
Proxy Table when you have a remote server selected, or by double-clicking
Add Proxy Table in the Tables folder. It lets you define a local proxy name
for a table on a remote server.
♦ On the first page, select the remote server on which the remote table is
defined. If the server supports multiple databases, specify the database
as well.
♦ On the second page, select a remote table from the list, and provide a
name by which its proxy on the local database is to be known.
♦ On the third page, select the columns in the remote table that you wish
to include in the proxy table.
♦ On the fourth page, enter a descriptive comment if you wish.
The proxy table appears in the Tables folder.

1044

Dialogs accessed through the Tools menu


The Tools menu commands are related to the Sybase Central viewer; they
relate to connection, disconnection, plug-ins, and viewer options. These
menu items always stay visible, regardless of the objects you click in the
main window.

Connect dialog
The Connect dialog lets you define parameters for connecting to a server or
database. The same dialog is used in both Sybase Central and Interactive
SQL. In Sybase Central, you can access it by clicking Tools➤Connect. In
Interactive SQL, the dialog appears when you start a new session or open a
new window.
The Connect dialog has the following pages (or tabs).
♦ The Identification tab lets you identify yourself to the database and
specify a data source.
♦ The Database tab lets you identify a server and database to connect to.
♦ The Advanced tab lets you add additional connection parameters and
specify a driver for the connection.
In Sybase Central, after you connect successfully, the database name appears
in the left panel of the main window, under the server that it is running on.
The user that you connect as is shown in brackets after the database name.
You can then administer the database by navigating and selecting objects that
belong to the database.
In Interactive SQL, the connection information (including the database name,
your user id, and the database server) appears on a title bar above the SQL
Statements pane.

Tip
If you connect to a database using an account that does not have DBA
authority, you can only alter objects on which you have the required
permissions.

$ For more information about connecting, see "Connecting to a
Database" on page 33.

1045

Identification dialog tab


The Identification tab of the Connect dialog in Sybase Central and
Interactive SQL has the following components.
♦ User Lets you type a user ID for the connection. For the sample
database (asademo), the user ID is dba.
♦ Password Lets you type a password for the connection. For the
sample database (asademo), the password is sql.
♦ None Disables the data source options below.
♦ ODBC Data Source Name Lets you choose a data source (a stored set
of connection parameters usually used for ODBC connections). This
field is equivalent to the DSN connection parameter, which references a
data source in the registry. You can view a list of data sources by
clicking the Browse button.
♦ ODBC Data Source File Lets you choose a data source file for the
connection. This field is equivalent to the FileDSN connection
parameter, which references a data source held in a file. You can search
for the file by clicking the Browse button.
$ See also
♦ "Connecting to a Database" on page 33
♦ "Database dialog tab" on page 1046
♦ "Advanced dialog tab" on page 1047

Database dialog tab


The Database tab of the Connect dialog in Sybase Central and Interactive
SQL has the following components.
♦ Server name Lets you enter the name of the Adaptive Server
Anywhere personal database server or network server. You can include
a host address and port number in the format hostName:port. You can
search for a server by clicking the Find button.
♦ Start line Lets you enter the full path of the server. For example,
f:\Sybase\ASA70\win32\dbeng7.exe. You can also include command line
switches in this field.
♦ Database name Lets you enter the name of the Adaptive Server
Anywhere database that you want to connect to.

1046

♦ Database file Lets you enter the full path, name, and extension of the
Adaptive Server Anywhere database file if the database is on the same
machine as Sybase Central or Interactive SQL. You can search for the
file by clicking the Browse button.
♦ Start database automatically Causes the database to start
automatically (if it is not already running) when you start a new Sybase
Central or Interactive SQL session.
♦ Stop database after last disconnect Causes the database to shut
down automatically after the last user has disconnected.
$ See also
♦ "Connecting to a Database" on page 33
♦ "Connection parameters" on page 46 of the book ASA Reference
♦ "Identification dialog tab" on page 1046
♦ "Advanced dialog tab" on page 1047

Advanced dialog tab


The Advanced tab of the Connect dialog in Sybase Central and Interactive
SQL has the following components.
♦ Connection parameters field Provides a place for you to add, change,
or remove additional connection parameters.
♦ Driver options Let you choose the type of driver you want to use for
the connection. All requests and commands that you make when
working with the database go through this driver. You have the
following choices.
♦ jConnect 5 Specifies the use of a JDBC driver called jConnect
(a Sybase product). This driver is platform-independent and
offers the best performance. It supports all basic connection features
including the use of ODBC data sources (in Sybase Central and
Interactive SQL only). It is the recommended driver and is enabled
by default.
♦ JDBC-ODBC Bridge Specifies the use of the Sun JDBC-ODBC
Bridge to access ODBC data sources from the dialog.

Tip
You can specify a network protocol as a CommLinks connection
parameter on this tab, but the protocols available depend on the driver you
are using. For jConnect, the TCP/IP protocol is used automatically.

1047

$ See also
♦ "Connecting to a Database" on page 33
♦ "Connection parameters" on page 46 of the book ASA Reference
♦ "Identification dialog tab" on page 1046
♦ "Database dialog tab" on page 1046

Disconnect dialog
The Disconnect dialog (accessed by clicking Tools➤Disconnect) lets you
view all connections and disconnect from the ones you choose.
Dialog components
♦ Connections list Shows all current connections for your Sybase
Central session, along with a description and corresponding plug-in for
each.
♦ Disconnect Disconnects the selected connection(s).
$ See also
♦ "Connecting to a Database" on page 33

Connection Profiles dialog


The Connection Profiles dialog (accessed by clicking Tools➤Connection
Profiles) lets you view and define named sets of connection parameters (user
name, password, server name, and so on).
Dialog components
♦ Connection profile list Lists and describes all currently defined
connection profiles. You can select a profile and then click the Connect
button, or you can simply double-click the profile.
♦ Connect Connects using the selected profile.
♦ New Displays the "New Profile dialog" on page 1049, which lets you
create a new connection profile.
♦ Edit Displays the "Connect dialog" on page 1045, which lets you edit
the selected connection profile.
♦ Remove Deletes the selected profile; Sybase Central prompts you to
continue if you click this button.

1048

♦ Set Startup Toggles the automatic-startup option for the selected
profile. When this option is turned on, the profile is automatically used
each time Sybase Central is started.

$ See also
♦ "Connecting to a Database" on page 33

New Profile dialog


The New Profile dialog (accessed by clicking the New button in the
"Connection Profiles dialog" on page 1048) lets you define a new connection
profile.
Dialog components
♦ Name Lets you type the name of the new connection profile.
♦ Type Lets you select the type of connection. The types available in the
list depend on your Sybase Central configuration.
$ See also
♦ "Connecting to a Database" on page 33

Plug-ins dialog
The Plug-ins dialog (accessed by clicking Tools➤Plug-ins) lets you
configure the settings of existing plug-in modules and register new ones. For more
information about using and configuring plug-ins, see the online Help for the
Sybase Central viewer.
Dialog components
♦ Plug-in list Lists and describes each plug-in currently registered with
Sybase Central. Note that Sybase Central has its own version number.
♦ Register Displays the "Register Plug-In dialog" on page 1051, which
lets you register a new plug-in by specifying a Java class or JAR file. In
this dialog, you can also type additional directory paths to add to the
plug-in’s classpath (the set of locations of the required classes for the
plug-in).
♦ Load Loads the selected plug-in for use.

1049

♦ Unload Unloads the selected plug-in. Unloaded plug-ins are dormant


and unusable until you load them again; they remain listed in the dialog,
but are not visible in the main Sybase Central viewer. If there is only
one plug-in that you commonly use, it can be helpful to unload all
others.
♦ Properties Displays the "Plug-in Properties dialog" on page 1050,
which lets you set startup options and add additional paths to the current
classpath.
$ See also
♦ "Introduction to Java in the database" on page 514

Plug-in Properties dialog


The Plug-in Properties dialog (accessed by clicking the Properties button in
the "Plug-ins dialog" on page 1049) lets you configure the properties of the
currently selected plug-in module. This dialog consists of two pages (or
tabs): General and Advanced.
General tab components
♦ Object information The top part of the dialog shows the object name and type.
♦ Class name Shows the class name of the current plug-in.
♦ Plug-in classpath Shows the classpath of the current plug-in. A
classpath is the set of locations of the required classes for the plug-in.
You can click the Browse button to search for a new classpath.
♦ Load on startup Toggles the Startup setting between Manual and
Automatic. If you choose Manual, you need to manually connect to the
plug-in each time you start Sybase Central. If you choose Automatic, the
plug-in automatically starts when you start a new session.

Advanced tab components
♦ Load plug-in with a separate class loader Lets you specify
additional paths to add to the current classpath. When this option is
enabled, the field below becomes editable.
♦ Additional paths field Lets you type additional paths to add to the
current classpath. You can search for a path by clicking the Browse
button. If you are entering more than one path, make sure each one is on
a separate line.

1050

Register Plug-In dialog


The Register Plug-In dialog (accessed by clicking the Register button in the
"Plug-ins dialog" on page 1049) lets you register a new plug-in module by
specifying a Java class or JAR file.
Dialog components
♦ Plug-in text box Lets you type the name of the plug-in; you can also
click the Browse button to search for it.
♦ Additional paths field Lets you type additional paths to add to the
plug-in’s classpath (the set of locations of the required classes for the
plug-in). You can search for a path by clicking the Browse button. If you
are entering more than one path, make sure each one is on a separate
line. You can also add additional paths after the plug-in is registered by
typing them on the plug-in’s property sheet.
♦ Automatically load on startup When enabled, this option causes the
plug-in to automatically load when you launch future Sybase Central
sessions.
♦ Use class loader Uses a custom class loader to load the plug-in. If
you enable this setting, you don’t have to set the system class path before
launching Sybase Central.

Options dialog
The Options dialog for the main Sybase Central viewer (accessed by clicking
Tools➤Options) lets you configure basic viewer appearance options. This
dialog consists of two pages (or tabs): General and Chart.
General tab components
♦ Viewer look and feel Lets you configure the look and feel of the main
viewer window. Metal displays the viewer in the standard Java look;
CDE/Motif displays it in the standard Motif look; and Windows displays
it in the standard Windows look.
♦ Tab Placement Lets you configure the placement of the tabs for the
viewer’s right pane. For some plug-ins, there is only one tab. For other
plug-ins, there are multiple tabs.
♦ Viewer font options Let you configure the appearance of the viewer
display text. Options include:
♦ Font name Lets you choose the type of font for the viewer
display text.
♦ Font style Lets you choose the style of font (bold, italic, or plain)
for the viewer display text.

1051

♦ Size Lets you choose the size of the viewer display text.
♦ Sample Shows a text sample that reflects the current font settings.
♦ Reset to Defaults Resets all settings on this tab to their defaults.

Chart tab components
♦ Update interval Lets you specify how frequently the Performance
Monitor is updated. You can move the slider to increase or decrease the
time (in seconds), or you can type an exact value in the text box.
♦ Type options Let you choose how statistics are displayed in the
Performance Monitor. Use the sample window to preview the options.
♦ Reset to Defaults Resets all settings on this tab to their defaults.

Help tab components
This tab only appears if you are running on Windows operating systems.
♦ Help options Let you choose the type of online help you want the
Sybase Central viewer to use. All of these help systems contain the same
content, so you can always access the same information even if you
choose a different type of help.

1052

Debugger windows
This section describes the windows in the Adaptive Server Anywhere
debugger.

Breakpoints window
The Breakpoints window lets you display and manipulate breakpoints. The
window displays breakpoints for the active connection only.

Enabling breakpoints
You can enable and disable breakpoints by clicking the breakpoint icon.
One icon represents an active breakpoint, and another represents a disabled
breakpoint. A third icon indicates that the breakpoint is cleared: the line is
no longer a breakpoint.
Conditional breakpoints
You can set a condition on a breakpoint, so that the breakpoint is then
triggered only when the condition is true.
You can modify the condition by double clicking the Condition column of
the Breakpoints window and typing a condition. The condition should be a
Java boolean expression for Java breakpoints, or a SQL search condition for
stored procedure breakpoints. The expression is evaluated in the context of
the connection when the breakpoint is hit.
You can also set a count for the breakpoint, so that the breakpoint does not
trigger until the breakpoint has been hit this number of times.
In the case where both a count and a condition are specified, the count is
only decremented when the condition is true. For example, if you set a count
of 5 and a condition of ( x == 10 ), the breakpoint is triggered only on
the fifth time that x is equal to 10.
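The interaction between a breakpoint count and condition described above can be sketched as a small simulation. This models the documented semantics only, not the debugger's actual code; the loop and variable values are hypothetical.

```java
public class BreakpointDemo {
    // Simulates a breakpoint with a condition and a count: the count is
    // decremented only on hits where the condition is true, and the
    // breakpoint triggers on the hit that brings the count to zero.
    static class Breakpoint {
        int remaining;
        Breakpoint(int count) { this.remaining = count; }
        boolean hit(boolean conditionTrue) {
            if (!conditionTrue) return false; // condition false: count unchanged
            remaining--;
            return remaining == 0;            // trigger on the Nth true hit
        }
    }

    public static void main(String[] args) {
        // Count of 5, condition ( x == 10 ): trigger the fifth time x is 10.
        Breakpoint bp = new Breakpoint(5);
        int triggeredAt = 0;
        for (int i = 1; i <= 20; i++) {
            int x = (i % 2 == 0) ? 10 : 7;    // x is 10 on even iterations
            if (bp.hit(x == 10)) { triggeredAt = i; break; }
        }
        System.out.println(triggeredAt);      // prints 10 (the fifth even pass)
    }
}
```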

Calls window
The Calls window shows the chain of procedures that have been called to get
to the currently executing procedure. You can change the Source code
window display to a calling procedure by double clicking on a row, or
selecting it and pressing ENTER. You can also move up and down the call
stack using the Stack menu.

1053

Java and non-Java groups
The Calls window does not show complete information for stored procedure
execution when at a Java breakpoint. Instead, it groups all stored procedure
calls into one row and labels it non-Java.
Similarly, when at a stored procedure breakpoint, the Calls window groups
Java calls into a single row and labels it as Java.

Catch window
The debugger always traps uncaught exceptions from Java code. You can use
the Catch Window to instruct the debugger to trap exceptions caught by the
Java code. The exception specified in this window must exactly match the
thrown exception. The debugger will not trap a thrown subclass.
When an exception is thrown, execution stops at the point the exception is
thrown. You can then choose Run➤Step Over to proceed to the point where
the exception is caught.
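The exact-match rule can be illustrated by contrasting exact class comparison with instanceof-style (subclass-aware) matching. This is an illustrative sketch, not the debugger's code; the exception classes are just examples.

```java
public class CatchMatchDemo {
    // The Catch window matches the thrown exception's class exactly;
    // a subclass of the listed class is NOT trapped.
    static boolean exactMatch(Throwable thrown, Class<?> listed) {
        return thrown.getClass() == listed;   // exact class only, no subclasses
    }

    public static void main(String[] args) {
        // FileNotFoundException is a subclass of IOException.
        Throwable t = new java.io.FileNotFoundException("missing");
        System.out.println(exactMatch(t, java.io.IOException.class));           // false
        System.out.println(exactMatch(t, java.io.FileNotFoundException.class)); // true
        // An instanceof-style check would have trapped the subclass:
        System.out.println(t instanceof java.io.IOException);                   // true
    }
}
```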

v To add an exception:
♦ In the Catch window, press INSERT, or select a class in the "Classes
window" on page 1054 and choose Break➤When exception thrown.

v To clear an exception:
♦ Select a line in the Catch window and press DELETE.

Classes window
The Classes window displays all Java classes currently installed in the
database.
You can see the source for a class by double clicking on it, or by selecting it
and pressing ENTER. If source code does not appear in the Source code
window, you may have to tell the debugger where to find the source code
using the "Source Path window" on page 1057, or by choosing File➤Add
Source Path.
Note: Source code is not available for Sybase and Java API classes.

1054

If the source for a class is displayed before the active connection has started
running Java in the database, the "Source code window" on page 1058
indicates that all lines are potential candidates for breakpoints by displaying
the breakpoint-candidate indicator on each line. This is because the Java class
has not yet been loaded and
the debugger cannot determine which lines contain code. A breakpoint set on
an invalid line (for example a comment) will be moved to the nearest valid
line when Java is started on that connection.

Connection window
The Connections window shows a list of all connections and their status.
Possible statuses include Running, Waiting, and Execution Interrupted.
You can use this window to select the connection you wish to debug (the
active connection).

v To switch the active connection:


♦ Double-click a connection, or select a connection and press ENTER.
Debugger commands apply only to the active connection. For example,
choosing Run➤Go lets only the active connection run. Breakpoints apply
only to the active connection.
When you choose the special entry All connections, breakpoints that you set
apply to all present and future connections to the database.

Evaluate window
The Evaluate window allows you to debug individual expressions. At a
breakpoint, you may enter an expression in the evaluate window, and click
Evaluate. The value of the expression appears in the Expression Value box.
You can watch an expression by clicking on the Inspect button; this will
transfer the expression to the "Inspect window" on page 1056.
If you enter an expression that does not make sense within the context of the
procedure, then the Expression Value box will display the string ???.

Globals window
This window displays the names and values of all SQL global variables.

1055

Inspect window
This window lets you evaluate expressions in the context of the connection
being debugged. If you are at a Java breakpoint, the expression is evaluated
as a Java expression; if you are at a stored procedure breakpoint, the
expression is evaluated as a SQL expression. If the expression is invalid, an
error message is displayed in the Value column.
Add new rows by pressing the INSERT key.
Remove rows by selecting them and pressing the DELETE key.
Change rows by selecting the Name column and typing a new expression.

You can expand Java objects by clicking the expand indicator to the left of the name, by
double clicking the name, or by selecting the name and pressing ENTER.

Local Variables window


This window displays the values of all local variables currently in scope.
You can change the value of a local variable when the debugger is at a
breakpoint by clicking on the Value column and typing a new value.

You can expand Java objects by clicking the expand indicator to the left of the name, by
double clicking the name, or by selecting the name and pressing ENTER.

Values can only be modified at a breakpoint


When at a non-Java breakpoint, you will still see Java objects which are
declared in a stored procedure, however you cannot modify the value of
these objects. You can only modify the value of a Java object when at a
Java breakpoint.

Methods window
This window displays the names of all the Java methods in the class
currently displayed in the "Source code window" on page 1058. You can
move the source code window to a method by double clicking on a method,
or selecting it and pressing ENTER. If the source code window is not
displaying a Java source file, no methods will be displayed.

1056

Tip
When the methods window has focus, you can select methods by typing
the first part of the name. The list will change selections as you type.

Procedures window
This window displays all non-Java stored procedures in the database. You
can see the source code for a procedure by double clicking on it, or by
selecting it and pressing the ENTER key.

Tip
When the procedures window has focus, you can select procedures by
typing the first part of the name. The list will change selections as you
type.

Query window
This window allows you to run a SQL query in the context of the stored
procedure being debugged. For instance, you can use the query window to
inspect the contents of a temporary table used by a stored procedure.

Row variables window


This window displays the old and new values of a row-level trigger. You can
change the value of a column in the old or new row by double-clicking on it
and typing a new value.

Source Path window


The Source Path window holds a list of directories in which the debugger
looks for Java source code. Java rules for finding packages apply. The
debugger also searches the current classpath for source code.
For example, if you add the paths c:\db\procs and c:\Java\src to the source
path, and the debugger is trying to find a class called my.pakkage.MyClass, it
looks for the source code in c:\db\procs\my\pakkage\MyClass.Java and
c:\Java\src\my\pakkage\MyClass.Java.
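The lookup rule above can be sketched as a small helper that applies Java's package-to-directory mapping under each source path entry. This is an illustration only, not the debugger's code; the directories are the hypothetical ones from the example.

```java
import java.util.ArrayList;
import java.util.List;

public class SourcePathDemo {
    // Builds the candidate source file paths for a fully qualified class
    // name: each package component becomes a directory under every entry
    // in the source path.
    static List<String> candidates(List<String> sourcePath, String className, char sep) {
        String relative = className.replace('.', sep) + ".java";
        List<String> result = new ArrayList<>();
        for (String dir : sourcePath) {
            result.add(dir + sep + relative);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> path = new ArrayList<>();
        path.add("c:\\db\\procs");
        path.add("c:\\Java\\src");
        // my.pakkage.MyClass maps to my\pakkage\MyClass.java under each entry.
        for (String p : candidates(path, "my.pakkage.MyClass", '\\')) {
            System.out.println(p);
        }
    }
}
```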

1057

To browse the disk for a directory and add it to the source path, choose Add
Source Path from the File menu.

Source code window


The Source code window displays the source code of the active connection.
As execution passes from one function to another, each class file or
procedure is displayed as it is executed.
Breakpoints
There is a breakpoint indicator to the left of the source code:
♦ One indicator marks a line that is a candidate for having a breakpoint.
♦ One indicator marks a line that has a breakpoint.
♦ One indicator marks a line that has a disabled breakpoint.
♦ One indicator marks a line that has a breakpoint for all database
connections. Breakpoints of this type cannot be removed from the source
code window. You must first make All connections the active connection,
and then remove the breakpoint using the "Breakpoints window" on
page 1053.
When you click on the breakpoint indicator, the breakpoint alternates
between states.
Current line
The currently executing line is marked with an arrow. The current line
indicator may be displayed overtop of the breakpoint indicator.

Tip
You can search the source code for a procedure name by choosing Find
from the Search menu.

Statics window
This window displays the names of all the Java static fields in the class
which is currently displayed in the "Source code window" on page 1058. You
can add a field to the Inspect window by double clicking on it, or selecting it
and pressing ENTER. If the source code window is displaying a Java source
file, no fields are displayed.

1058
Appendix A Dialog Box Descriptions

Tip
When the Statics window has focus, you can select fields by typing the
first part of the name. The list will change selections as you type.

Threads window
This window displays all active threads in the Java program being debugged.
You can double-click on a thread to cause the debugger to show its context.


Appendix B

Property Sheet Descriptions

This chapter provides descriptions of all the property sheets you can access
in Sybase Central. The sections are organized according to the object
hierarchy in the Sybase Central viewer object tree.
Contents
Topic Page
Introduction to property sheets 1063
Service properties 1064
Server properties 1067
Statistics properties 1069
Database properties 1070
Table properties 1072
Column properties 1075
Foreign Key properties 1078
Index properties 1081
Trigger properties 1082
View properties 1083
Procedures and Functions properties 1084
Users and Groups properties 1085
Integrated Logins properties 1088
Java Objects properties 1089
Domains properties 1090
Events properties 1091
Publications properties 1092
Articles properties 1093
Remote Users properties 1095
Message Types properties 1099
Connected Users properties 1100

Database Space properties 1101
Remote Servers properties 1102
MobiLink Synchronization Templates properties 1103


Introduction to property sheets


Sybase Central provides a number of Properties dialog boxes (also called
property sheets) to let you configure the properties of different objects.
This chapter contains detailed descriptions of each Adaptive Server
Anywhere property sheet. Each one becomes available in the File menu
when you select an object (or in a popup menu when you right-click an
object).


Service properties
The Properties dialog for a service consists of five pages (or tabs): General,
Configuration, Account, Dependencies, and Polling. Each tab is described in
its own section.
$ For more information, see "Managing services" on page 19.

Service Properties: General tab components


The General tab of the Service Properties dialog has the following
components.
♦ Object name Shows the name of the selected object.
♦ Item Shows the type of the selected object.
♦ Type Shows the type of service.
♦ Status Shows the status of the selected service (for example, started or
stopped).
♦ Startup Lets you specify the following startup options:
♦ Automatic Specifies that the service is to be started automatically
whenever the operating system is started.
♦ Manual Specifies that the service is to be started manually. Only a
user with Administrator permission can start the service if it
requires a manual startup.
♦ Disabled Specifies that the service is to be disabled and not able
to start.
$ See also "Managing services" on page 19.

Service Properties: Configuration tab components


The Configuration tab of the Service Properties dialog has the following
components.
♦ Path of executable Provides a place for you to enter the path of the
executable file. You can click the Browse button to search for a file.
♦ Parameters for executable Provides a place for you to enter
additional command line parameters (file names and switches) for the
executable file.


$ See also "Managing services" on page 19.

Service Properties: Account tab components


The Account tab of the Service Properties dialog has the following
components.
♦ Local system account Selects your system’s local account as the one
that the service will run under.
♦ Other Indicates that an account other than the local one is to be
selected. Choose the appropriate user ID from the list provided.
♦ Password Lets you enter the appropriate password for the selected
user ID.
♦ Confirm Lets you re-enter the user ID’s password to confirm that it
was entered correctly.
♦ Allow service to interact with desktop When you choose Local
System Account above, this option lets you make the service visible on
the desktop.
$ See also "Managing services" on page 19.

Service Properties: Dependencies tab components


The Dependencies tab of the Service Properties dialog has the following
components.
♦ Load ordering group Lets you enter the load ordering group that the
current service should belong to. You can view all service groups that
exist on your system by clicking the Look Up button. The Look Up
Group dialog appears, which lets you specify a group.
♦ Services to be started before the current service Lists the services
or load order groups that are to be started before the current service.
♦ Add Service Displays the Add Services to Dependencies dialog,
which lets you view all services and select the ones you wish to add to
the list of dependencies.
♦ Add Group Displays the Add Groups to Dependencies dialog, which
lets you select the load order groups that you wish to add to the list of
dependencies.


♦ Remove Removes the selected group or service from the list on this
tab. The group or service is then no longer started before the current
service.

$ See also "Managing services" on page 19.

Service Properties: Polling tab components


The Polling tab of the Service Properties dialog has the following
components.
♦ Do not poll Indicates that you do not wish the system to poll the
services for changes to their states (started, stopped, paused, or
removed).
♦ Polling time in seconds Indicates that you wish the system to poll the
services, at specified intervals, to check for changes in the current states
(started, stopped, paused, or removed). You can enter the desired time
interval, in seconds, in the text box.
$ See also "Managing services" on page 19.


Server properties
The Properties dialog for a server consists of one page (or tab): the General
tab.

General tab components


♦ Object information The top half of the dialog shows the name and
type of the selected object, as well as the product (or plug-in) with which
the server is associated and the version number of the server.
♦ Platform Shows the platform, or operating system, that the server is
currently running on.
♦ Connection type Shows the network protocol used to connect to the
database.
♦ Connection URL Shows the database location URL (Uniform
Resource Locator).

Extended information tab components


♦ More database server properties An extended list of server
properties and their values. You can click Update to get new values.
$ For a list of properties and their meanings, see "Server-level
properties" on page 1095 of the book ASA Reference.

Options tab components


Some database server command-line options can be reset while the server is
running.
$ For more information, see "sa_server_option system procedure" on
page 971 of the book ASA Reference.
♦ Quitting time Set a time when the database server is to shut down. Use
the format of the displayed Current time, which is as follows:
YYYY-MM-DD HH:NN:SS.SS
♦ Disable new connections Prevent any other users from connecting to
the database. This may be useful for some maintenance operations.


♦ Request-level logging Set the level of detail at which to log requests that the server processes. This option is primarily for troubleshooting purposes.
$ For more information, see "–zr command-line option" on page 40
of the book ASA Reference.
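The same settings can also be changed from SQL with the sa_server_option system procedure. The sketch below is an assumption about the option names; verify them against the sa_server_option description in the book ASA Reference before use.

```sql
-- Shut the server down at a set time and refuse new connections
-- (option names are assumptions; verify against the
-- sa_server_option documentation in ASA Reference)
CALL sa_server_option( 'Quitting_time', '2001-12-31 23:00:00' );
CALL sa_server_option( 'Disable_connections', 'ON' );
```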


Statistics properties
The Properties dialog for statistics consists of one page (or tab): the General
tab.
General tab components
♦ Object information The top part of the dialog shows the name and type of the selected object.
♦ Description Gives a brief explanation of the statistic.
♦ Graph statistic in the Performance Monitor Adds this statistic to (or
removes it from) the Performance Monitor.
$ See also "Monitoring and Improving Performance" on page 799.


Database properties
The Properties dialog for a database consists of three pages (or tabs): General, Extended Information, and SQL Remote.
$ For more information about using databases, see "Working with
databases" on page 115.

General tab components


♦ Object information The top half of the dialog shows the name and
type of the object, as well as the version number of the ASA database.
Versions before 5.0 are Watcom SQL databases.
♦ Database ID Shows a unique number assigned by the server to each
database that is started on it. This number makes it possible to
distinguish between two instances of the same database running on the
same server.
♦ File Shows the root database file for the database (also represented by
the SYSTEM database space).
♦ Log file Shows the transaction log file for the database.
♦ Mirror file Shows the transaction log mirror file for the database.
♦ Internal Java Version Shows the version of the JDK supported by the
database at the time of installation.

Extended Information tab components


♦ Page size Shows the page size of the database (in bytes)
♦ Database encrypted Shows whether the database is encrypted.
♦ Ignore trailing blanks Shows whether the database ignores trailing
blanks in comparisons.
♦ Database case sensitive Shows whether the database is case
sensitive. This property extends to the case sensitivity of connection
parameters, including user IDs and passwords.
♦ Default collation Shows the default collation of the database.
♦ Connection ID Shows the ID of Sybase Central’s connection to the
database.


♦ Connection count The total number of current connections to this database from all users (including Sybase Central’s connection).
♦ Checkpoint Urgency Shows the degree of checkpoint urgency.
♦ Recovery Urgency Shows the degree of recovery urgency.

SQL Remote tab components


♦ Database publisher Shows the publisher of the database. You can
click the Change button to select a user or group to be the publisher.
♦ Consolidated database Shows the consolidated database of this
database (if this database is acting as a remote database). You can click
the Change button to set the consolidated database.
♦ Subscribers Shows the number of remote users subscribing to
publications in this database.
♦ Subscriptions Shows the number of subscriptions by remote users to
publications in this database.
♦ Started subscriptions Shows the number of subscriptions in this
database that have been started.


Table properties
The Properties dialog for tables consists of five pages (or tabs): General,
Columns, Constraints, Permissions and Misc. Each tab is described in its
own section.
$ For more information about using tables, see "Working with tables" on
page 124.

Table properties: General tab components


The General tab of the Table Properties dialog has the following
components.
♦ Object information The top half of the tab shows the name and type
of the object, the database user who created (and owns) this object and
the server type.
♦ DB Space Shows the database space (or DB space) used by the table.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
♦ On commit Shows whether the rows of a global temporary table are
deleted or preserved when a COMMIT is executed. This control only
appears when the selected table was created as a global temporary table.
$ See also "Working with tables" on page 124.

Table properties: Columns tab components


The Columns tab of the Table Properties dialog has the following
components.
♦ Column list Lists all of the columns in the selected table, along with their types.
♦ Add To Key Adds selected columns to the primary key of the table.
♦ Remove From Key Removes selected columns from the primary key
of the table.
♦ Remove All Removes all columns from the primary key of the table.
♦ Details Displays the Column Details dialog, which shows a summary
of the properties of the selected object.


$ See also "Working with tables" on page 124.

Table properties: Constraints tab components


The Constraints tab of the Table Properties dialog has the following
components.
♦ Uniqueness Constraints list Shows the uniqueness constraints defined for the table.
♦ New Displays the Add Uniqueness Constraint dialog, which lets you
create new uniqueness constraints.
♦ Remove Removes selected constraints from the list.
♦ Check Constraint text box Lets you define specified conditions on a
column or set of columns to make up the check constraint of the table.
$ See also "Working with tables" on page 124.
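The text you type in the Check Constraint text box becomes a table-level CHECK constraint in SQL. A minimal sketch, using illustrative table and column names:

```sql
-- Table-level check constraint spanning two columns
-- (table and column names are illustrative)
ALTER TABLE employee
    ADD CHECK ( termination_date IS NULL
                OR termination_date >= start_date );
```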

Table properties: Permissions tab components


The Permissions tab of the Table Properties dialog has the following
components. Legend: A=Alter, D=Delete, I=Insert, R=Reference, S=Select,
U=Update
♦ Permissions list Lists the users who have permission on the selected
table; if no users appear, you can add them by clicking the Grant button.
Click in the fields beside each user to grant or revoke permission;
double-clicking (so that a check mark and two ’+’ signs appear) gives the
user grant options.
♦ References Shows whether the Reference permission applies to all columns or a subset of columns. You can define the subset by clicking the button beside this field and clicking beside the relevant columns.
♦ Select Shows whether the Select permission applies to all columns or a subset of columns. You can define the subset by clicking the button beside this field and clicking beside the relevant columns.
♦ Update Shows whether the Update permission applies to all columns
or a subset of columns. You can define the subset by clicking the button
beside this field and using the controls in the resulting Column
Permissions dialog.
♦ Grant Displays the Grant Permission dialog, which lets you grant
permissions to other users.


♦ Revoke Revokes permissions from the selected users and removes them from the list.
$ See also "Working with tables" on page 124.

Table properties: Misc. tab components


The Misc. tab of the Table Properties dialog has the following components.
♦ Maximum table width The number of bytes required for each row in the table. This number is calculated from the length of the string columns, the precision of numeric columns, and the number of bytes of storage for all other data types. If the table includes long binary or long varchar columns, their arbitrary widths are not included, so the row width can only be approximated.
♦ Approximate number of rows Shows the approximate number of
rows in the selected table.
♦ Calculate Calculates the number of rows in the selected table.
♦ Table is replicating data Sets the table to act as part of a replication
primary site.
$ See also "Working with tables" on page 124.


Column properties
The Properties dialog for columns consists of four pages (or tabs): General,
Data Type, Default, and Constraints. Each tab is described in its own section.
$ For more information about using tables, see "Working with tables" on
page 124.

Column properties: General tab components


The General tab of the Column Properties dialog has the following
components.
♦ Object information The top half of the tab shows the name and type
of the object, and the table it belongs to.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
$ See also "Working with tables" on page 124.

Column properties: Data Type tab components


The Data Type tab of the Column Properties dialog has the following
components.
♦ Built-in type Lets you select a predefined data type of the column.
Integers, character strings, and dates are examples of predefined data
types. For some of these types, you can specify a size and/or scale.
♦ Domain Lets you select a named combination of built-in data type,
default value, check condition, and nullability.
♦ Java Class Lets you select a Java class for the column.
$ See also
♦ "Using domains" on page 371
♦ "Ensuring Data Integrity" on page 357
♦ "Using Java in the Database" on page 549
♦ "Working with tables" on page 124


Column properties: Default tab components


The Default tab of the Column Properties dialog has the following
components.
♦ Default Shows the default value of the selected column. If the column
is based on a domain, it inherits the domain’s default value (if any), but
this can be overridden for the column.
♦ Select Displays the Column Default dialog, which lets you define a
default for the column or select a system-defined default.
♦ Computed value Lets you define the column as a computed column.
A computed column derives its values from calculations of values in
other columns. In the text box, you can type the relationship between the
other columns and the computed column.
$ See also
♦ "Defining computed columns" on page 586
♦ "Working with tables" on page 124
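The relationship you type in the text box corresponds to a COMPUTE clause in the column definition. A minimal sketch, using illustrative names:

```sql
-- line_total is computed from the other two columns
-- (table and column names are illustrative)
CREATE TABLE order_line (
    quantity   INTEGER,
    unit_price NUMERIC( 10, 2 ),
    line_total NUMERIC( 12, 2 ) COMPUTE ( quantity * unit_price )
);
```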

Column properties: Constraints tab components


The Constraints tab of the Column Properties dialog has the following
components.
♦ No uniqueness or NULL constraints Disallows uniqueness and
NULL constraints in the selected column.
♦ Values are unique Shows whether the values in the selected column
must be unique.
♦ Column allows NULL Determines the nullability of this column. If
this column is based on a domain, you can retain the domain's nullability
or override it for this column.
♦ Check Constraint text box Lets you further restrict the values that are
allowed in the column, in addition to the restrictions already imposed by
the data type. If the column is based on a domain, it inherits the
domain’s check constraint (if any), but this can be overridden for the
column.
$ See also
♦ "Column Default dialog" on page 1077
♦ "Ensuring Data Integrity" on page 357
♦ "Working with tables" on page 124
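These settings correspond to constraints in the column definition. A minimal sketch, using illustrative names:

```sql
-- A column whose values must be unique, that disallows NULL,
-- and that carries a check constraint (names are illustrative)
CREATE TABLE department (
    dept_code CHAR( 4 ) NOT NULL UNIQUE
        CHECK ( dept_code = UPPER( dept_code ) )
);
```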


Column Default dialog


The Column Default dialog (accessed by clicking the Select button in the
Properties dialog for table columns) lets you define a default for the currently
selected column.
Dialog components
♦ User-defined Lets you type a custom value (string or number) for the
default value. If you’ve based this column on a domain, you can retain
the domain’s default value (if any) or override it for this column.
♦ System-defined Lets you select a pre-defined value (for example,
current date) for the default value. If you have based this column on a
domain, you can retain the domain’s default value (if any) or override it
for this column.
$ See also "Working with tables" on page 124.


Foreign Key properties


The Properties dialog for the foreign keys of tables consists of three pages (or tabs): General, Columns, and Integrity. Each tab is described in its own section.
$ For more information about foreign keys, see "Managing foreign keys"
on page 133.

Foreign Keys: General tab components


The General tab of the Properties dialog for foreign keys has the following
components.
♦ Foreign key information The top half of the tab shows the name and
type of the object, as well as the table to which the foreign key belongs.
♦ Comment text box Provides a place for you to type a text description
of this object. For example, you could use this area to describe the
object’s purpose in the system.
$ See also "Managing foreign keys" on page 133.

Foreign Keys: Columns tab components


The Columns tab of the Properties dialog for foreign keys has the following
components.
♦ Foreign Column Shows the order of columns in the selected foreign
key. The window on the left of the dialog refers to the currently selected
table (the table containing the foreign key).
♦ Primary Column Shows the column in the primary table's primary
key. The window on the right of the dialog refers to the table containing
the primary key that the foreign key is referencing.
♦ Details Displays the Column Details dialog, which shows a summary
of the properties of the selected object.
$ See also "Managing foreign keys" on page 133.


Foreign Keys: Integrity tab components


The Integrity tab of the Properties dialog for foreign keys has the following
components.
♦ Update action Lets you define the behavior of the selected table when
the user tries to update data. You have the following options:
♦ Restrict Update prevents updates of the associated primary table’s
primary key value if there are corresponding foreign keys in this
table
♦ Cascade updates the foreign key to match a new value for the
associated primary key
♦ Set NULL sets to NULL all the foreign-key values in this table that
correspond to the updated primary key of the associated primary
table. Note: To use this option, the foreign-key columns must all
have Allow Nulls set.
♦ Set Default sets to the column’s default value all the foreign-key
values in this table that correspond to the updated primary key of
the associated primary table. Note: To use this option, the foreign-
key columns must all have default values.
♦ Delete action Lets you define the behavior of the selected table when
the user tries to delete data. You have the following options:
♦ Restrict Delete prevents deletion of the associated primary table’s
primary key value if there are corresponding foreign keys in this
table.
♦ Cascade deletes the rows from this table that match the deleted
primary key of the associated primary table.
♦ Set NULL sets to NULL all the foreign-key values in this table that
correspond to the deleted primary key of the associated primary
table. Note: To use this option, the foreign-key columns must all
have Allow Nulls set.
♦ Set Default sets to the column’s default value all the foreign-key
values in this table that correspond to the deleted primary key of the
associated primary table. Note: To use this option, the foreign-key
columns must all have default values.
♦ Advanced foreign key properties Lets you set advanced properties of
the selected foreign key. You have the following options:
♦ Allows NULL determines whether the foreign-key columns in this table are allowed to contain NULL values.


♦ Check on commit forces the database to wait for a COMMIT before checking the integrity of the foreign key, overriding the setting of the WAIT_FOR_COMMIT database option. Note: This option can only be used with the Restrict actions.
$ See also
♦ "Ensuring Data Integrity" on page 357
♦ "Managing foreign keys" on page 133
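The update and delete actions above correspond to ON UPDATE and ON DELETE clauses in the foreign-key definition. A minimal sketch, using illustrative names:

```sql
-- Cascade updates of the primary key; set the foreign key to NULL
-- when the referenced row is deleted (names are illustrative;
-- cust_id in sales_order must allow NULLs for SET NULL to work)
ALTER TABLE sales_order
    ADD FOREIGN KEY ( cust_id ) REFERENCES customer ( cust_id )
        ON UPDATE CASCADE
        ON DELETE SET NULL;
```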


Index properties
The Properties dialog for indexes consists of two pages (or tabs): General and Columns.
General tab components
♦ Object information The top half of the dialog shows the name and type of the object, as well as the table with which it is associated.
♦ DB Space Shows the dbspace used by the index.
♦ Is unique Shows whether values in the index must be unique. You can
set the uniqueness value when you create a new index.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.

Columns tab components
♦ Column list Shows all of the columns in the index, along with their order (ascending or descending). You can set the order when you create a new index.
♦ Details Displays the Column Details dialog, which shows a summary
of the properties of the selected object.
$ See also "Working with indexes" on page 145.


Trigger properties
The Properties dialog for triggers consists of two pages (or tabs): General and Type Information.
General tab components
♦ Object information The top half of the tab shows the name and type of the object, the table with which it is associated, and the SQL dialect in which the code was last saved (Watcom-SQL or Transact-SQL).
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
Type Information tab components
♦ Trigger Timing Determines whether the trigger executes Before or After the event. Row-level triggers can also have SQL Remote conflict timing, which executes before UPDATE or UPDATE OF column-list events.
♦ Trigger Type Determines which events cause the trigger to execute.
Events: Insert, Delete, Update, Update Columns.
♦ Trigger Level Determines whether the trigger is a row-level trigger or
a statement-level trigger.
♦ List For triggers in this table that execute for the same kind of event
with the same timing, this number determines the order in which these
triggers are fired.
$ See also "Using Procedures, Triggers, and Batches" on page 435.
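The settings on this tab correspond to clauses of the CREATE TRIGGER statement. A minimal sketch of a row-level BEFORE UPDATE trigger, using illustrative names:

```sql
-- Row-level trigger fired before updates of the salary column
-- (table, column, and trigger names are illustrative)
CREATE TRIGGER check_salary BEFORE UPDATE OF salary
ON employee
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
    IF new_row.salary < old_row.salary THEN
        MESSAGE 'Salary reduced for an employee' TO CLIENT;
    END IF;
END;
```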


View properties
The Properties dialog for views consists of three pages (or tabs): General,
Permissions and Columns.
General tab components
♦ View information The top half of the tab shows the name and type of the object, as well as the database user who created (and owns) this object.
♦ Comment text box Provides a place for you to type a text description
of this object. For example, you could use this area to describe the
object’s purpose in the system.

Permissions tab components
♦ Permission list Lists the users who have permission on the selected view; you can add users to the list by clicking the Grant button. Click in the fields beside each user to grant or revoke permission; double-clicking (so that a check mark and two '+' signs appear) gives the user grant options.
♦ Grant Displays the Grant Permission dialog, which lets you grant
permissions to other users.
♦ Revoke Revokes permissions from the selected users and removes
them from the list.

Columns tab components
♦ Columns list Lists the columns in the selected view.
$ See also "Working with views" on page 138.


Procedures and Functions properties


The Properties dialog for procedures and functions consists of three pages (or
tabs): General, Permissions, and Parameters. In most cases, we use the term
procedure to refer to both procedures and functions.
General tab components
♦ Object information The top half of the tab shows the name and type of the object, the database user who created (and owns) this object, and the SQL dialect in which the code was last saved (Watcom-SQL or Transact-SQL).
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.

Permissions tab components
♦ User list Shows all users who have permissions to execute the procedure. Users with permission have a check mark beside them in the Execute column; you can click in this column to toggle between granting or not granting permission.
♦ Grant Displays the Grant Permission dialog, which lets you choose the
users or groups for which you want to grant permission.
♦ Revoke Revokes permission from the selected user or group and
removes them from the list.

Parameters tab components
♦ Parameter list Displays the parameters for the selected procedure.
$ See also
♦ "Using Procedures, Triggers, and Batches" on page 435
♦ "Granting permissions on procedures" on page 747


Users and Groups properties


The Properties dialog for users and groups consists of five pages (or tabs):
General, Authorities, Membership, Permissions and External Logins. Each
tab is described in its own section.
This Properties dialog is similar to the one for remote users, although the
remote users property sheet has two extra tabs.
$ For a description of the remote user property sheet, see "Remote Users
properties" on page 1095.
$ For a description of working with users and groups, see "Managing
User IDs and Permissions" on page 735.

Users and Groups: General properties tab


The General tab of the Properties dialog for users and groups has the
following components.
♦ Object information The top part of the tab shows the name and type of the object.
♦ Password Provides a place for you to type the password of the user.
For added security, the typed characters are shown as asterisks.
♦ Confirm Provides a place for you to confirm the new password that
you entered in the Password field by re-typing it. The contents of the
two fields must match exactly.
♦ Allowed to connect Determines whether the user or group is allowed
to connect to the database. If the user or group is not allowed to connect,
the password (if any) is removed from the account. If you later change
the user or group to allow them to connect, you must supply a new
password. Users are almost always allowed to connect. For a group,
however, turning this option off prevents anyone from connecting to the
database using the group account itself.
♦ Comment text box Provides a place for you to type a text description
of this object. For example, you could use this area to describe the
object’s purpose in the system.

$ See also "Managing User IDs and Permissions" on page 735.


Users and Groups: Authorities properties tab


The Authorities tab of the Properties dialog for users and groups has the
following components.
♦ DBA Grants DBA authority to the user or group; a user with DBA
authority can fully administer the database.
♦ Resource Grants resource authority to the user or group; a user with
resource authority can create database objects.
♦ Remote DBA Grants Remote DBA authority to the user or group. The
Message Agent should be run using a user ID with this type of authority
to ensure that actions can be carried out, without creating security
loopholes.
$ See also "Managing User IDs and Permissions" on page 735.

Users and Groups: Membership properties tab


The Membership tab of the Properties dialog for users and groups has the
following components.
♦ Membership list Shows the groups to which the selected user or group
belongs.
♦ Join Group Displays the Join Group dialog, which lets you add the
selected user or group to other groups.
♦ Leave Group Removes the user or group from the selected group.
$ See also "Managing User IDs and Permissions" on page 735.

Users and Groups: Permissions properties tab


The Permissions tab of the Properties dialog for users and groups has the
following components.
♦ View permissions on Lets you choose the type of object you want to
view permissions on.
♦ User list Shows all users and groups. You can click in the fields
beside each user to grant or revoke permission; double-clicking (so that
a check mark and two ’+’ signs appear) gives the user grant options.

$ See also "Managing User IDs and Permissions" on page 735.


Users and Groups: External Logins properties tab


The External Logins tab of the Properties dialog for users and groups has the
following components.
♦ External Login list Shows all of the external logins for the selected
user, along with the remote server on which the login exists.
♦ Add External Login Displays the Add External Login dialog, which
lets you specify a remote server, an external login name and a password.
♦ Remove External Login Removes selected external logins from the
list.
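The Add and Remove buttons correspond to SQL statements. A sketch, with hypothetical server, login, and password names:

```sql
-- Map a local user to a login on the remote server remsrv
CREATE EXTERNLOGIN sample_user TO remsrv
REMOTE LOGIN remote_user IDENTIFIED BY secret;

-- Remove the mapping again
DROP EXTERNLOGIN sample_user TO remsrv;
```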
$ See also "Managing User IDs and Permissions" on page 735.


Integrated Logins properties


The Properties dialog for integrated logins consists of one page (or tab):
General.
General tab components
♦ Object information The top half of the tab shows the name and type
of the object, as well as the database user who created (and owns) this
object.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
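Integrated logins can also be created in SQL. A sketch, assuming a hypothetical Windows user profile name and database user ID:

```sql
-- Map the Windows user profile fran_whitney to the database user DBA
GRANT INTEGRATED LOGIN TO fran_whitney AS USER DBA;
```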
$ See also "Using integrated logins" on page 77.


Java Objects properties


The Properties dialog for most Java objects (such as Java classes) consists of
two pages (or tabs): General and Description. The Properties dialog for JAR
files has only the General tab.
General tab components
♦ Object information The top part of the tab shows the name of the
object, its type and the database user who created (and owns) this object.
The General tab for JAR file properties also shows the date that the file
was created and the date it was last updated.
♦ Package information Shows the name of the Java archive that this
object originated from.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
♦ Update Now Displays the Update Java Class dialog, which lets you
update the selected Java class.

Description tab components
♦ Class description Shows a sample of the Java class’s code.
$ See also
♦ "Using Java in the Database" on page 549


Domains properties
The Properties dialog for domains consists of two pages (or tabs): General
and Check Constraint.
General tab components
♦ Object information The top half of the tab shows the name of the
object, its type and the database user who created (and owns) this object.
♦ Built-in type Shows the pre-defined data type of the selected domain.
The format of the data type (where applicable) is listed after the type’s
name.
♦ Default value Shows the default value of the selected domain.
Columns based on this domain inherit this default value (if any), but you
can subsequently override it.
♦ Allows null Shows the nullability of columns based on this domain.

Check Constraint tab components
♦ Check constraint list Shows the check constraint for the selected
domain. A check constraint allows you to specify conditions for a
column or set of columns.
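A domain with a default value and a check constraint can be created in SQL. A sketch, using hypothetical names; in the check condition, a variable prefixed with @ stands for the value being checked:

```sql
-- A domain for percentage values between 0 and 100
CREATE DOMAIN percentage INT
    DEFAULT 0
    CHECK ( @value BETWEEN 0 AND 100 );
```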
$ See also "Using domains" on page 371.


Events properties
The Properties dialog for events consists of four pages (or tabs): General,
Misc., Conditions, and Handler.
General tab components
♦ Object information The top half of the tab shows the name of the
object, its type and the database user who created (and owns) this object.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.

Misc. tab components
♦ Event is enabled Lets you enable or disable the event.
♦ Restrict options Lets you specify how the event executes. You have
the following options:
  ♦ Execute at all locations Executes the event at all remote
  locations.
  ♦ Execute at the consolidated database only Executes the event
  at the consolidated database only, and not at any of the remote
  locations.
  ♦ Execute at the remote database only Executes the event at a
  remote database only, and not at the consolidated database.

Conditions tab components
♦ Manually Executes the event only when you manually trigger it.
♦ By the following schedules Executes the event according to the
schedule that you define. You can create new schedules by clicking Add.
You can change existing schedules or remove them entirely by clicking
Edit and Remove, respectively.
♦ When the following occurs Executes the event when a circumstance
or condition is met. You can specify a circumstance or condition by
clicking Edit.

Handler tab components
♦ Event handler code Lets you edit the code of the event handler.
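An event combining a schedule and a handler can be created in SQL. A sketch, with hypothetical names and handler code:

```sql
-- Run the handler every day at 23:00
CREATE EVENT nightly_report
SCHEDULE nightly_sched
    START TIME '23:00' EVERY 24 HOURS
HANDLER
BEGIN
    MESSAGE 'nightly_report event fired';
END;
```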


Publications properties
The Properties dialog for publications consists of three pages (or tabs):
General, Articles and Subscriptions.
General tab components
♦ Object information The top half of the tab shows the name of the
object, its type and the database user who created (and owns) this object.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.

Articles tab components
♦ Article list Shows all articles in the publication (listed by article name,
article type, and conflict trigger, if any).

Subscriptions tab components
♦ Subscription list Shows all subscriptions to the selected publication
(listed by remote user name and status).
♦ Subscribe For Displays the Subscribe for User dialog, which lets you
subscribe an existing remote user to the publication.
♦ Unsubscribe Removes the subscriptions of selected remote users
from the publication.
♦ Advanced Displays the Advanced Remote Actions dialog, which lets
you start, stop, or synchronize subscriptions.
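Publications and subscriptions can also be created in SQL. A sketch, using hypothetical table, column, and user names:

```sql
-- A publication over selected columns, and a subscription for
-- an existing remote user
CREATE PUBLICATION pub_contact (
    TABLE contact ( id, name, phone )
);
CREATE SUBSCRIPTION TO pub_contact FOR field_user;
```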
$ See also "Publication design for Adaptive Server Anywhere" on
page 397 of the book Replication and Synchronization Guide.


Articles properties
The Properties dialog for publication articles consists of four pages (or tabs):
General, Table, Where Restriction and Subscribe Restriction. Each tab is
described in its own section.
$ For more information about articles, see "Publication design for
Adaptive Server Anywhere" on page 397 of the book Replication and
Synchronization Guide.

Articles: General properties tab


The General tab of the Properties dialog for publication articles has the
following components.
♦ Object information The top half of the tab shows the name of the
object, its type and the publication it belongs to.
♦ Article Type Shows the type of article.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
♦ Table Properties Displays the Properties dialog of the table that the
article is based on.
$ See also
♦ "Table properties" on page 1072
♦ "Publication design for Adaptive Server Anywhere" on page 397 of the
book Replication and Synchronization Guide

Articles: Table properties tab


The Table tab of the Properties dialog for publication articles has the
following components.
♦ The article refers to the following table Shows the table that the
article is based on. For new articles, you can choose from a list of tables
in the database.
♦ All columns Sets the article to use all columns in the table.


♦ Only these columns Sets the article to use only the table columns that
you select in the list below. When a column is selected, it has a check
mark beside it.

$ See also "Publication design for Adaptive Server Anywhere" on
page 397 of the book Replication and Synchronization Guide.

Articles: Where Restriction properties tab


The Where Restriction tab of the Properties dialog for publication articles
has the following components.
♦ Enter a clause to restrict the rows in the table Provides a place for
you to type the WHERE clause to restrict the table rows that are
included in the article.
$ See also "Publication design for Adaptive Server Anywhere" on
page 397 of the book Replication and Synchronization Guide.

Articles: Subscribe Restriction properties tab


The Subscribe Restriction tab of the Properties dialog for publication articles
has the following components.
♦ No subscribe restriction Sets the article to avoid using SUBSCRIBE
BY columns or clauses to partition rows.
♦ Subscribe by column Sets the article to partition rows from the table
based on a column (SUBSCRIBE BY columns).
♦ Subscribe by clause Sets the article to partition rows from the table
based on an expression (SUBSCRIBE BY clause).
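A SUBSCRIBE BY restriction partitions the table so that each remote user receives only matching rows. A sketch, with hypothetical names:

```sql
-- Partition the customer table by sales representative; each
-- subscription supplies the value it wants to receive
CREATE PUBLICATION pub_sales (
    TABLE customer SUBSCRIBE BY rep_name
);
CREATE SUBSCRIPTION TO pub_sales ( 'Smith' ) FOR rep_smith;
```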
$ See also "Publication design for Adaptive Server Anywhere" on
page 397 of the book Replication and Synchronization Guide.


Remote Users properties


The Properties dialog for remote users consists of seven pages (or tabs):
General, Authorities, Membership, Permissions, SQL Remote, Statistics and
External Logins. Each tab is described in its own section.
$ For a description of the property sheet for local users, see "Users and
Groups properties" on page 1085.
$ For information about working with remote users, see "Granting and
revoking remote permissions" on page 749.

Remote Users: General properties tab


The General tab of the Properties dialog for remote users has the following
components.
♦ Object information The top part of the tab shows the name and type
of the object.
♦ Password Provides a place for you to type the password of the remote
user. For added security, the typed characters are shown as asterisks.
♦ Confirm Provides a place for you to confirm the new password that
you entered in the Password field by re-typing it. The contents of the
two fields must match exactly.
♦ Allowed to connect Determines whether the remote user is allowed to
connect to the database. If the remote user is not allowed to connect, the
password (if any) is removed from the account. If you later change the
remote user to allow them to connect, you must supply a new password.
Users are almost always allowed to connect. For a group, however,
turning this option off prevents anyone from connecting to the database
using the group account itself.
♦ Comment text box Provides a place for you to type a text description
of this object. For example, you could use this area to describe the
object’s purpose in the system.
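Setting a password corresponds to a GRANT CONNECT statement. A sketch, with hypothetical names; granting CONNECT without an IDENTIFIED BY clause leaves the user ID without a password, so it cannot be used to connect:

```sql
-- Create or update a user ID and assign it a password
GRANT CONNECT TO field_user IDENTIFIED BY secret;
```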
$ See also "Granting and revoking remote permissions" on page 749.

Remote Users: Authorities properties tab


The Authorities tab of the Properties dialog for remote users has the
following components.


♦ DBA Grants DBA authority to the remote user; a remote user with
DBA authority can fully administer the database.
♦ Resource Grants resource authority to the remote user; a remote user
with resource authority can create database objects.
♦ Remote DBA Grants Remote DBA authority to the remote user. The
Message Agent should be run using a user ID with this type of authority
to ensure that actions can be carried out without creating security
loopholes.
$ See also "Granting and revoking remote permissions" on page 749.

Remote Users: Membership properties tab


The Membership tab of the Properties dialog for remote users has the
following components.
♦ Membership list Shows the groups to which the selected remote user
belongs.
♦ Join Group Displays the Join Group dialog, which lets you add the
selected remote user to other groups.
♦ Leave Group Removes the remote user from the selected group.
$ See also "Granting and revoking remote permissions" on page 749.

Remote Users: Permissions properties tab


The Permissions tab of the Properties dialog for remote users has the
following components.
♦ View permissions on Lets you choose which type of object you want
to view permissions on.
♦ User list Shows all remote users. You can click in the fields beside
each user to grant or revoke permission; double-clicking (so that a check
mark and two '+' signs appear) gives the user grant options.
$ See also "Granting and revoking remote permissions" on page 749.

Remote Users: SQL Remote properties tab


The SQL Remote tab of the Properties dialog for remote users has the
following components.


♦ Publication list Shows all publications that the selected remote user is
subscribed to (listed by publication name and status).
♦ Subscribe To Displays the Subscribe to Publication dialog, which lets
you subscribe the selected remote user to any of the listed publications.
♦ Unsubscribe Removes the selected publication from the list, causing
the remote user to no longer be subscribed to that publication.
♦ Advanced Displays the Advanced Remote Actions dialog, which lets
you start, stop, or synchronize subscriptions.
♦ Message type Lets you select a message type for communicating with
the publisher.
♦ Address Provides a place for you to type the remote address of the
selected remote user.
♦ Send then close Sets the replication frequency so that the publisher’s
agent runs once, sends all pending messages to this remote group, then
shuts down. This means that the agent must be restarted each time the
publisher wants to send messages. Note: In most replication setups, this
option is not used for sending from the consolidated publisher to the
remote group.
♦ Send every Sets the replication frequency so that the publisher’s agent
runs continuously, sending messages to this remote group at the given
periodic interval.
♦ Send daily at Sets the replication frequency so that the publisher’s
agent runs continuously, sending messages to this remote group each
day at the given time.
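The message type, address, and send frequency correspond to clauses of the GRANT REMOTE statement. A sketch, with hypothetical names:

```sql
-- Register a remote user with the file message system and a
-- daily send time (the Send daily at option)
GRANT REMOTE TO field_user
TYPE file ADDRESS 'field_user'
SEND AT '23:00';
```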
$ See also "Granting and revoking remote permissions" on page 749.

Remote Users: Statistics properties tab


The Statistics tab of the Properties dialog for remote users has the following
components.
♦ Remote User is a consolidated database Shows whether the remote
user is also a consolidated database for other remote users (a multi-tier
replication setup).
♦ Next send Shows the date and time when the consolidated database is
scheduled to send the next batch of replication messages to the selected
remote user.


♦ Statistics list Shows SQL Remote usage statistics for the selected
remote user (from the perspective of the consolidated database). For
example, Last send shows when the most recent replication messages
were sent from the consolidated database to this remote user.
$ See also "Granting and revoking remote permissions" on page 749.

Remote Users: External Logins properties tab


The External Logins tab of the Properties dialog for remote users has the
following components.
♦ External Login list Shows all of the external logins for the selected
remote user, along with the remote server on which the login exists.
♦ Add External Login Displays the Add External Login dialog, which
lets you specify a remote server, an external login name and a password.
♦ Remove External Login Removes selected external logins from the
list.
$ See also "Granting and revoking remote permissions" on page 749.


Message Types properties


The Properties dialog for message types consists of two pages (or tabs):
General and Remote Users.
General tab components
♦ Object information The top part of the dialog shows the name and
type of the object.
♦ Publisher address Provides a place for you to type the address of the
publisher. Each remote database sends replication messages back to the
consolidated database at this address.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
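Message types can also be created in SQL. A sketch, using a hypothetical publisher address:

```sql
-- Remote databases send replication messages back to this address
CREATE REMOTE MESSAGE TYPE file ADDRESS 'central';
```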

Remote Users tab components
♦ Remote users list Lists all of the remote users that are currently using
the selected message type.
♦ Properties Displays the property sheet for the selected remote user.
$ See also
♦ "Remote Users properties" on page 1095
♦ "Working with message types" on page 512 of the book Replication and
Synchronization Guide


Connected Users properties


The Properties dialog for connected users consists of one page (or tab):
General. You can also view and configure properties for users and groups,
located in the Users & Groups folder.
General tab components
♦ Object information The top part of the dialog shows the name of the
object, as well as the optional name of the user’s connection. Naming
your connections lets you maintain multiple simultaneous connections to
the same database, or to the same or different database servers.
♦ Node address Shows the communications port ID used by the user’s
connection.
♦ Database number Shows the unique number assigned by the server to
each database that is started on it. This number makes it possible to
distinguish between two instances of the same database running on the
same server.
♦ File name Shows the full path and filename (that is, the SYSTEM
database space) of the database to which the user is connected.
♦ Log name Shows the full path and filename of the transaction log file
of the database to which the user is connected.
♦ Communication link Shows the type of communications link used by
the user’s connection. If the connection is between an Adaptive Server
Anywhere client and network server, the link type represents the
network protocol being used.
♦ Total connections Shows the number of users currently connected to
this Adaptive Server Anywhere server (including your current
connection).
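Comparable information is available through property functions. A sketch; the property names used here are assumptions based on the server's documented property sets:

```sql
SELECT CONNECTION_PROPERTY( 'Number' ) AS connection_id,
       DB_PROPERTY( 'File' )           AS database_file,
       PROPERTY( 'Name' )              AS server_name;
```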
$ See also "Managing connected users" on page 752.


Database Space properties


The Properties dialog for a database space (or dbspace) consists of two
pages (or tabs): General and Contains.
General tab components
♦ Object information The top part of the tab shows the name and type
of the object.
♦ Filename Provides a place for you to type the name of the database
file that the database space points to. For new spaces, you can enter a
new filename. For existing spaces, you can enter the filename of a
moved/renamed file, or click Browse to locate it. If you don’t supply a
path, the directory of the SYSTEM database space is assumed.
♦ Add pages Displays the Add Pages To DB Space dialog, which lets
you pre-allocate storage in the database space by adding pages to it.
Adding pages may improve performance for bulk-loading operations.
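Database spaces can also be managed in SQL. A sketch, with a hypothetical name and file:

```sql
-- Create a new database space and pre-allocate 200 pages in it
CREATE DBSPACE library AS 'e:\library.db';
ALTER DBSPACE library ADD 200;
```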

Contains tab components
♦ Object list Depending upon the Show options enabled, this window
displays the objects (tables and indexes) currently saved in the current
database space.
♦ Show options Lets you determine which objects appear in the list on
this tab. You can show tables or indexes by enabling their check box, or
you can remove them from the list by disabling their check box.
♦ Properties Displays the property sheet of the object selected in the list
on this tab.
$ See also "Working with Database Files" on page 785.


Remote Servers properties


The Properties dialog for remote servers consists of four pages (or tabs):
General, External Logins, Proxy Tables and Remote Procedures.
$ For more information about remote servers, see "Working with remote
servers" on page 898.
$ For a description of the property sheet for local servers, see "Server
properties" on page 1067.
General tab components
♦ Object information The top part of the dialog shows the name and
type of the object.
♦ Server class Shows the class, or software platform, of the current
database server. You can select a different software platform from the
list to change the class of the server.
♦ Connection Lets you choose between the JDBC and ODBC
connection protocols for the connection type of the current database
server.
♦ Parameters Lets you specify startup connection parameters, such as
the name and address of the server.
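A remote server definition can also be created in SQL. A sketch; the server class and connection information shown are placeholders for your own configuration:

```sql
-- Define a remote server reached through an ODBC data source
CREATE SERVER remsrv CLASS 'asaodbc'
USING 'remsrv_dsn';
```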

External Logins tab components
♦ External login list Lists all of the external logins and local users
currently associated with the remote server.
♦ Add External Login Displays the Add External Login dialog, which
lets you define external login settings.

Proxy Tables tab components
♦ Proxy table list Displays the proxy tables of the server and their
respective creators. Proxy tables are tables in the consolidated database
which map directly to tables found in the remote database.
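A proxy table can be created in SQL with CREATE EXISTING TABLE. A sketch, with hypothetical names; the location string names the server, database, owner, and table:

```sql
CREATE EXISTING TABLE emp_proxy
AT 'remsrv..DBA.employee';
```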

Remote Procedures tab components
♦ Remote procedure list Displays the remote procedures defined for the
server and their respective local names. Remote procedures are procedures
residing on the remote server that you can call from the local database.


MobiLink Synchronization Templates properties


The Properties dialog for MobiLink Synchronization templates consists of
the following pages (or tabs):
♦ General
♦ Connection Type and Address
♦ Options
♦ Articles

Synchronization Templates: General properties tab


The General tab of the Properties dialog for MobiLink synchronization
templates has the following components.
♦ Name Shows the name of the synchronization template.
♦ Type (read-only) Shows the type of template.

Synchronization Templates: Connection Type and Address properties tab
The Connection Type and Address tab of the Properties dialog for MobiLink
synchronization templates has the following components.
♦ Type Lets you choose between TCP/IP and HTTP synchronization,
depending on your setup. You must ensure that your MobiLink
synchronization server is configured to accept communications over the
protocol you choose.
♦ MobiLink server address: host The IP number or name of the
machine on which the MobiLink synchronization server is running. On a
local area network, this may be a machine name. You can use localhost
if the synchronization server is running on the same machine as the
client.
♦ MobiLink server address: port The MobiLink synchronization server
communicates over a specific port. By default, the port number is 2439
for TCP/IP and 80 for HTTP. If you choose a value other than these, you
must configure your MobiLink synchronization server to listen on the
port you specify.


Synchronization Templates: Options properties tab


The Options tab of the Properties dialog for MobiLink synchronization
templates has the following components.
♦ Compression level Lets you choose a compression level for the
upload data stream. If you choose a high compression level, more work
needs to be done to prepare the upload stream, but the stream itself is
smaller. You may wish to use high for low-bandwidth connections, and
one of the other settings if you have a slow CPU and a large number of
updates.
♦ Fire triggers on download Lets you choose whether triggers in your
client database are fired by the download operations. You must write
your synchronization scripts to accommodate the setting you choose
here. If you choose to fire triggers, you must ensure that the results of
any similar triggers on the consolidated database are not sent down
explicitly as well.
♦ Memory used for synchronization process By default, the Adaptive
Server Anywhere client utility dbmlsync uses 1 MB of memory for
synchronization processing. If you have more memory available and are
carrying out many operations in a synchronization, you may wish to
increase this value.
♦ Path containing offline transaction logs Lets you supply the name
of a directory, on the deployed machine, where the offline transaction
logs are stored. The dbmlsync utility requires access to these logs until
all operations in them are synchronized.
♦ Send trigger actions on upload Lets you choose whether or not to
send actions carried out by triggers when uploading data. You must
write your synchronization scripts to accommodate the setting you
choose here.
♦ Verbose operation Lets you choose to write out informational
messages to a log file during synchronization. This option is useful for
debugging and troubleshooting purposes.

Synchronization Templates: Articles properties tab


The Articles tab of the Properties dialog for MobiLink synchronization
templates lets you define the articles to include in your client database.
The tab includes a set of sub-tabs to choose the following:
♦ Tables Select tables to include in your client database.


♦ Columns Select columns from the tables to include in your client
database.
♦ Where Select rows to include in your client database.
Tables tab
This tab lets you select tables and add them to your list of articles for
inclusion in the client database.
The Matching Tables list shows all the tables. You can use the Table Pattern,
Owner Pattern, and Table Type fields to limit the tables shown in the list, and
so locate the tables you want to include.

v To limit the tables shown in the list:


1 If you want to limit by table name, enter a string in the Table pattern
field. You can use SQL wildcards in your string, such as % for "zero or
more characters".
$ For information on wildcards, see "PATINDEX function" on
page 355 of the book ASA Reference.
2 If you want to limit by owner, enter a string in the Owner pattern field.
You can use SQL wildcards in your string.
3 If you want to limit by table type, and include only views or temporary
tables in the list, select a table type other than TABLE in the Table type
list box.
4 Click Refresh to generate a list that matches all your requirements.
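The pattern fields use standard SQL LIKE wildcards. For illustration, an equivalent catalog query (a sketch against the system tables):

```sql
-- Tables whose names start with 'emp'; % matches zero or more characters
SELECT table_name
FROM SYS.SYSTABLE
WHERE table_name LIKE 'emp%';
```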

Columns tab
Click a table in the left column to display its columns. You can select
columns and click Add to include them in the articles for your client
database.

Where tab
For each table you have selected, you can enter a WHERE clause to restrict
the rows that are included. The Include button provides a dialog that helps
you to construct WHERE clauses.


Glossary

article
In SQL Remote replication, an article is a database object that represents a
whole table, or a subset of the columns and rows in a table. Articles are
grouped together in publications.

authority
Determines what structural actions a user can perform in a database. While
most users will have no special authorities, a user with DBA authority can
grant other users resource authority, DBA authority, or remote DBA
authority.

backup
It is important to make regular backups of your database files in case of
media failure. A backup is a copy of the database file. You can make
backups using the backup utility or using other archiving software of your
choice.

base table
The tables that permanently hold the data in the database are sometimes
called base tables to distinguish them from temporary tables and from views.

cache
To avoid having to access a hard disk every time it needs to retrieve or write
information to the database, Adaptive Server Anywhere keeps data it may
need to access again in the computer’s memory, where access is much
quicker. The area of memory set aside for this information is called a cache.

check constraint
A check constraint allows specified conditions on a column or set of columns
in a table to be verified.
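For example, a check constraint can be added to an existing table (a sketch, with hypothetical names):

```sql
ALTER TABLE employee
ADD CHECK ( salary >= 0 );
```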

client
Client is a widely-used term with several meanings. It refers to the user’s side
of a client/server arrangement: for example, an application that addresses a
database, typically held elsewhere on a network, is called a client
application.

client/server
A software architecture where one application (the client) obtains
information from and sends information to another application (the server).
In a database context, the server is a database server, and the client is a
database client application. The two applications often reside on different
computers on a local area network.

column
All data in relational databases is held in tables, composed of rows and
columns. Each column holds a particular type of information.

command file
A text file containing SQL statements. You can build command files
yourself (manually), or database utilities can build them for you
(automatically). The
DBUNLOAD utility, for example, creates a command file consisting of the
SQL statements necessary to recreate a given database.

compress
The Compression utility reads the given database file and creates a
compressed database file. Compressed databases are usually 40 to 60 per
cent of their original size.

compressed database file
A database file that has been compressed to a smaller physical size using the
database compression utility (dbshrink). To make changes to a compressed
database file, you must use an associated write file. Compressed database
files can be re-expanded into normal database files using the Uncompression
utility (dbexpand).

conflict trigger
In SQL Remote replication, a trigger that is fired when an update conflict is
detected, before the update is applied. Specifically, conflict triggers are fired
by the failure of values in the VERIFY clause of an UPDATE statement to
match the values in the database before the update. They are fired before
each row is updated.

connected database
A connected database shows its contents as an object tree under the database.
connection
When a client application connects to a database, it specifies several
parameters that govern all aspects of the connection once it is established. A
user ID, a password, the name of the database to attach to, are all parameters
that specify the connection. All exchange of information between the client
application and the database to which it is connected is governed by the
connection.

connection ID
A unique number that identifies a given connection between the user and the
database.

You can determine your own connection ID using the following SQL
statement inside that connection:
select connection_property( 'Number' )

connection profile
A named set of connection parameters (user name, password, server name,
and so on).

consolidated database
In SQL Remote replication, a database that serves as the "master" database in
the replication setup. The consolidated database contains all of the data to be
replicated, while its remote databases may only contain their own subsets of
the data. In case of conflict or discrepancy, the consolidated database is
considered to have the primary copy of all data.

constraint
When tables and columns are created they may have constraints assigned to
them. A constraint ensures that all entries in the database object to which it
applies satisfy a particular condition. For example, a column may have a
UNIQUE constraint, which requires that all values in the column be
different. A table may have a foreign key constraint, which specifies how the
information in the table relates to that in some other table.

container
In a graphical user interface, a container is an object that contains other
objects. Containers can be expanded by double-clicking them.

data type
Each column in a table is associated with a particular data type. Integers,
character strings, and dates are examples of data types.

database
A relational database is a collection of tables, related by primary and foreign
keys. The tables hold the information in the database, and the tables and keys
together define the structure of the database. A database may be stored in one
or more database files, on one or more devices.

database administrator
The database administrator (DBA) is a person responsible for maintaining
the database. The DBA is generally responsible for all changes to a database
schema, and for managing users and user groups.

The role of database administrator is built in to databases as a user ID. When
a database is initialized, a DBA user ID is created. The DBA user ID has
authority to carry out any activity within the database.

database connection
All exchange of information between client applications and the database
takes place in a particular connection. A valid user ID and password are
required to establish a connection, and the actions that can be carried out
during the connection are defined by the privileges granted to the user ID.

database server
All access to information in a database goes through a database server. The
specific server you are using will depend on your operating system. Requests
for information from a database are sent to the database server, which carries
out the instructions.

database file
A database is held in one or more distinct database files. The user does not
have to be concerned with the organization of a database into files: requests
are issued to the database server about a database, and the server knows in
which file to look for each piece of required information.
Database administrators can create new database files for a database using
the CREATE DBSPACE command.
Each table must be contained in a single database file.

database name
When a database is loaded by a server, it is assigned a database name. Client
applications can connect to a database by specifying its database name.
The default database name is the root of the database file name.

database object
A database is made up of tables, indexes, views, procedures, and triggers.
Each of these is a database object.

database owner
The user ID that creates a database is the owner of that database, and has the
authority to carry out any changes to that database. The database owner is
also referred to as the database administrator, or DBA. A database owner can
grant permission to other users to have access to the database and to carry
out different operations on the database, such as creating tables or stored
procedures.

DBA
An abbreviation for database administrator, also called the database owner.
When a database is first created it has the single user ID DBA, with password
SQL.

DBA authority
DBA (DataBase Administrator) authority enables a user to carry out any
activity in the database (create tables, change table structures, assign
ownership of new objects, create new users, revoke permissions from users,
and so on).
The DBA user has DBA authority by default.

dbspace
A database can be held in multiple files, called dbspaces. The SQL command
CREATE DBSPACE adds a new file to the database.
Each table, together with its associated indexes, must be contained in a single
database file.
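For example, a sketch of adding a dbspace and placing a table in it (the file and table names are illustrative):

```sql
-- Add a second database file, then create a table stored in it.
CREATE DBSPACE library
AS 'library.db';

CREATE TABLE library_books (
   book_id  INTEGER NOT NULL PRIMARY KEY,
   title    CHAR(100)
) IN library;
```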

display properties
Display properties include the color and line style of a statistic in the
performance monitor.

disconnected database
A disconnected database is a running database that is visible in Sybase
Central (that is, running on a visible server), but that you are not connected
to in Sybase Central.
Most database operations in Sybase Central require a connection to the
database. You cannot see the object tree under a database until you connect
to it using Sybase Central.

Embedded SQL
The native programming interface for C programs. Adaptive Server
Anywhere embedded SQL is an implementation of the ANSI and IBM
standard.

erase
Erasing a database deletes all tables and data from disk, including the
transaction log that records alterations to the database.

external login
By default, Adaptive Server Anywhere uses the names and passwords of its
clients whenever it connects to a remote server on behalf of those clients.
However, this default can be overridden by creating external logins. External
logins are alternate login names and passwords to be used when
communicating with a remote server.

extraction
In SQL Remote replication, the act of synchronizing a remote database with
its consolidated database by unloading the appropriate structure and data
from the consolidated database, then reloading it into the remote database.
Extraction uses direct manipulation of ordinary files—it does not use the
SQL Remote message system.

FILE
In SQL Remote replication, a message system that uses shared files for
exchanging replication messages. This is useful for testing and for
installations without an explicit message-transport system (such as MAPI).

foreign key
Tables are related to each other by using foreign keys. A foreign key in one
table (the foreign table) contains a value corresponding to the primary key of
another table (the primary table). This relates the information in the foreign
table to that in the primary table.

foreign key constraint
A foreign key constraint restricts the values for a set of columns to match the
values in a primary key or uniqueness constraint of another table. For
example, a foreign key constraint could be used to ensure that a customer
number in an invoice table corresponds to a customer number in the
customer table. Imposing a foreign key constraint on a set of columns makes
that set the foreign key in a foreign key relationship.
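The customer-number example above might be declared as follows (table and column names are illustrative):

```sql
CREATE TABLE customers (
   cust_id  INTEGER NOT NULL PRIMARY KEY,
   name     CHAR(40)
);

CREATE TABLE invoices (
   invoice_id  INTEGER NOT NULL PRIMARY KEY,
   cust_id     INTEGER NOT NULL,
   -- Each invoice must refer to an existing customer.
   FOREIGN KEY (cust_id) REFERENCES customers (cust_id)
);
```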

foreign table
A foreign table is the table containing the foreign key in a foreign key
relationship.

full backup
In a full backup, a copy is made of the entire database file itself, and
optionally of the transaction log. A full backup contains all the information
in the database.

function
Also called a "user-defined function", this is a type of procedure that returns
a single value to the calling environment. A function can be used, subject to
permissions, in any place that a built-in non-aggregate function is used.
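A minimal sketch of a user-defined function and a query that uses it (the function and column names are illustrative):

```sql
CREATE FUNCTION full_name (first_name CHAR(30), last_name CHAR(30))
RETURNS CHAR(61)
BEGIN
   -- Concatenate the two parts with a separating space.
   RETURN first_name || ' ' || last_name;
END;

-- The function can be used anywhere a built-in
-- non-aggregate function is allowed:
SELECT full_name (emp_fname, emp_lname) FROM employee;
```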
global temporary table
Data in a temporary table is held for a single connection only. Global
temporary table definitions (but not data) are kept in the database until
dropped. Local temporary table definitions and data exist for the duration of
a single connection only.

grant option
When a user is granted the WITH GRANT OPTION permission, they are
given the authority to pass on permissions to other users.
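For example (the user ID and table name are illustrative):

```sql
-- Laurel may query the employee table, and may pass
-- that permission on to other users.
GRANT SELECT ON employee TO laurel WITH GRANT OPTION;
```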

group
A user group is a database user ID that has been given the permission to have
members. User groups are used to make the assignment of database
permissions simpler. Rather than assign permissions to each user ID, a user
ID is assigned to a particular group, and takes on the permissions assigned to
that group.

index
An index on one or more columns of a database table allows fast lookup of
the information in these columns, and so can greatly speed up database
queries. Specifically, indexes assist WHERE clauses in SELECT statements.
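A sketch of creating an index and a query it can assist (the index and column names are illustrative):

```sql
-- Speed up lookups by last name.
CREATE INDEX emp_lname_index ON employee (emp_lname);

-- Queries like this can now use the index:
SELECT * FROM employee WHERE emp_lname = 'Chin';
```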

InfoMaker
InfoMaker is a powerful yet easy-to-use reporting and data maintenance tool
that lets you work with data in the Windows environment. With InfoMaker
you can create sophisticated forms, reports, graphs, crosstabs, and tables, as
well as applications that use these as building blocks.

integrated login
The integrated login feature allows you to maintain a single user ID and
password for both database connections and operating system and/or network
logins.

Interactive SQL
Interactive SQL is an Adaptive Server Anywhere database administration
and browsing utility.

IPX
IPX is a network-level protocol by Novell.

JAR file
A JAR file is a collection of one or more packages.

Java class
A Java class is the main structural unit of code in Java. It is a collection of
procedures and variables that have been grouped together because they all
relate to a specific, identifiable category.

jConnect
jConnect is a 100% pure Java implementation of the JavaSoft JDBC
standard. It provides Java developers native database access in multi-tier and
heterogeneous environments.

JDBC
Java Database Connectivity (JDBC) provides a SQL interface for Java
applications: if you want to access relational data from Java, you do so using
JDBC calls.

LAN
See “local area network”.

local area network
A local area network (LAN) is a collection of networked computers
generally owned by a single organization.

log files
Adaptive Server Anywhere maintains a set of three log files to ensure that
the data in the database is recoverable in the event of a system or media
failure, and to assist database performance.

MAPI
Microsoft's Messaging Application Programming Interface, a message system
used in several popular e-mail systems such as Microsoft Mail.

message system
In SQL Remote replication, a protocol for exchanging messages between the
consolidated database and a remote database. Adaptive Server Anywhere
includes support for a FILE message system (using shared files) and the
MAPI message system. In most cases, a consolidated database and a remote
database(s) will send and receive messages using the same message system.

message type
In SQL Remote replication, a database object that specifies how remote users
communicate with the publisher of a consolidated database. A consolidated
database may have several message types defined for it; this allows different
remote users to communicate with it using different message systems. A
message type is named after a message system (e.g. MAPI), and includes the
publisher address for that message system (e.g. a valid MAPI address).

messages
Message-based communication between applications or computers does not
require a direct connection. Instead, a message sent by one application can
be received by another application at a later time.

NetBIOS
NetBIOS is a transport-level interface defined by IBM.

NetBEUI
NetBEUI is a transport-level protocol.

NetWare
A widely used network operating system by Novell. NetWare generally
employs the IPX protocol, although the TCP/IP protocol may also be used.

network server
A database server that runs on a different PC from the client application. The
server communicates with the client using a particular network protocol.
A network server can support many users connecting from many PCs on the
network.

object tree
The object tree is the hierarchy of objects that Sybase Central can manage.
The top level of the object tree shows all products that your version of
Sybase Central supports. Each product expands to reveal its own sub tree of
objects.

ODBC
The Open Database Connectivity (ODBC) interface, defined by Microsoft
Corporation, is a standard interface to database management systems in the
Windows and Windows NT environments. ODBC is one of several
interfaces supported by Adaptive Server Anywhere.

ODBC Administrator
The ODBC Administrator is a Microsoft program included with Adaptive
Server Anywhere for setting up ODBC data sources.

owner
Each object of a database is owned by the user ID that created it. The owner
of a database object has rights to do anything with that object.

packages
A collection of related classes. Packages are grouped together into a
JAR file.

passthrough
In SQL Remote replication, a mode by which the publisher of the
consolidated database can directly change remote databases with SQL
statements. Passthrough is set up for specific remote users (you can specify
all remote users, individual users, or those users who subscribe to given
publications). In normal passthrough mode, all database changes made at the
consolidated database are passed through to the selected remote databases. In
"passthrough only" mode, the changes are made at the remote databases, but
not at the consolidated database.

password
Whenever a user connects to a database, a password must be specified. The
passwords are stored in the SYS.SYSUSERPERM system table, to which
only the DBA has access.

performance statistics
Values that reflect the performance of the database system with respect to
disk and memory usage. The CURRREAD statistic, for example, represents
the number of file reads issued by the engine which have not yet completed.

permissions
Each user has a set of permissions that govern the actions they may take
while connected to a database. Permissions are assigned by the DBA or by
the owner of a particular database object.

personal database server
A database server that runs on the same PC as the client application. A local
server is typically for a single user on a single PC, but can support several
concurrent connections from that user.

plug-in modules
A module, stored as a file, that adds support for a specific product to Sybase
Central.
Plug-ins are usually installed and registered automatically with Sybase
Central when you install the respective product.
Typically, a plug-in appears as a top-level container in the Sybase Central
main window, using the name of the product itself (for example, Sybase
Adaptive Server Anywhere).

PowerDesigner
PowerDesigner is a comprehensive modeling solution that business and
systems analysts, designers, DBAs, and developers can tailor to meet their
specific needs. The flexible analysis and design features of PowerDesigner
allow a structured approach to efficiently create a database or data warehouse
without demanding strict adherence to a specific methodology.

PowerJ
PowerJ is a Java system that allows you to use JavaBeans and ActiveX
components to build, test and deploy business applications with database
connectivity.

primary table
A primary table is the table containing the primary key in a foreign key
relationship.

primary key
Each table in a relational database must be assigned a primary key. The
primary key is a column, or set of columns, whose values uniquely identify
every row in the table.

primary key constraint
A primary key constraint identifies one or more columns that uniquely
identify each row in a table. Imposing a primary key constraint on a set of
columns makes that set the primary key for the table. The primary key is
usually the best identifier for a row.
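For example (the table and column names are illustrative):

```sql
CREATE TABLE departments (
   dept_id    INTEGER NOT NULL,
   dept_name  CHAR(40),
   -- dept_id uniquely identifies each row.
   PRIMARY KEY (dept_id)
);
```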

publication
In SQL Remote replication, a publication is a database object that describes
data to be replicated. A publication consists of articles (tables or subsets of
tables). Periodically, the changes made to each publication in a database are
replicated to all subscribers to that publication as publication updates.

publication update
In SQL Remote replication, a periodic batch of changes made to one or more
publications in one database. A publication update is sent as part of a
replication message to the remote database(s).

publisher
In SQL Remote replication, the single user in a database that can exchange
replication messages with other replicating databases.

remote database
In SQL Remote replication, a database that exchanges replication messages
with a consolidated database. Remote databases may contain all or some of
the data in the consolidated database.

remote permission
In SQL Remote replication, the permission to exchange replication messages
with the publishing database. Granting remote permissions to a user makes
them a remote user. This requires you to specify a message type, an
appropriate remote address, and a replication frequency. In general terms,
remote user can also refer to any user involved in SQL Remote replication
(for example, the consolidated publisher and remote publisher).

referential integrity
The tables of a relational database are related to each other by foreign keys.
Adaptive Server Anywhere provides tools that maintain the referential
integrity of the database: that is, ensure that the relations between the rows in
different tables remain valid.

remote DBA authority
The Message Agent should be run using a user ID with REMOTE DBA
authority, to ensure that actions can be carried out without creating security
loopholes.

remote user
In SQL Remote replication, a user who has been granted remote permissions
in a replication setup. When the remote database is extracted from the
consolidated database, the remote user becomes the publisher of the remote
database, able to exchange publication updates with the consolidated
database. While groups can also be granted remote permissions, note that
users in these "remote groups" do not inherit remote permissions from their
group.

replication
For databases, a process by which the changes to data in one database
(including creation, updating, and deletion of records) are also applied to the
corresponding records in other databases. Adaptive Server Anywhere
supports replication using SQL Remote or Sybase Replication Server.

replication frequency
In SQL Remote replication, a setting for each remote user that determines
how often the publisher’s message agent should send replication messages to
that remote user. The frequency can be specified as on-demand, every given
interval, or at a certain time of day.

replication message
In SQL Remote replication, a discrete communication that is sent from a
publishing database to a subscribing database. Messages can contain a
mixture of publication updates and passthrough statements (manual SQL
statements such as DDL).

row
All data in relational databases is held in tables, composed of rows and
columns. Each row holds a separate occurrence of each column. In a table of
employee information, for example, each row contains information about a
particular employee.

row-level trigger
A trigger that executes before or after each row is modified by the
triggering insert, update, or delete operation.

server
In Adaptive Server Anywhere, servers are database servers—the programs
that manage the physical structure of the database and process queries on its
data. Servers can be local servers or network servers.
In Sybase Central, local servers and network servers are both called servers.

service
In the Windows NT operating system, applications set up as NT services can
run even when the user ID starting them logs off the machine.
Running a database server as a service under NT allows databases to keep
running while not tying up the machine on which they are running.

SQL
Structured Query Language (SQL) is the language used to communicate with
databases. SQL is very widely used in database applications, and in order to
ensure compatibility among databases, SQL is the subject of standards set by
several standards bodies.

SQL Remote
An asynchronous message-based replication system for two-way server-to-
laptop, server-to-desktop, and server-to-server replication between databases.

SQL statement
SQL allows several kinds of statement. Some statements modify the data in a
database (commands), others request information from the database
(queries), and others modify the database schema itself.

statement-level trigger
A trigger that executes after the entire triggering statement is completed.

statistic
Values that reflect the performance of the database system with respect to
disk and memory usage. The CURRREAD statistic, for example, represents
the number of file reads issued by the engine which have not yet completed.

stored procedure
Stored procedures are procedures kept in the database itself, which can be
called from client applications. Stored procedures provide a way of providing
uniform access to important functions automatically, as the procedure is held
in the database, not in each client application.
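A minimal sketch of creating and calling a stored procedure (the procedure, table, and column names are illustrative):

```sql
CREATE PROCEDURE customers_in_state (IN state_code CHAR(2))
BEGIN
   SELECT id, company_name
   FROM customer
   WHERE state = state_code;
END;

-- Any permitted client application can then call it:
CALL customers_in_state ('NY');
```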

Structured Query Language
See SQL.

subscriber
In SQL Remote replication, a remote user who is subscribed to one or more
of a database’s publications.

subscribing
In a Replication Server or SQL Remote installation, a database that has
subscribed to a replication or publication receives updates of changes to the
data in that replication or publication.

subscription
In SQL Remote replication, a link between a publication and a remote user,
allowing the user to exchange updates on that publication with the
consolidated database. The user’s subscription may include an argument
(value) for the publication’s SUBSCRIBE BY parameter (if any).

synchronization
In SQL Remote replication, the process by which SQL Remote deletes all
existing rows from those tables of a remote database that form part of a
publication, and copies the publication’s entire contents from the
consolidated database to the remote database. Synchronization is performed
during the initial extraction of the remote database from the consolidated
database, and may also be necessary later if a remote database becomes
corrupt or gets out of step with the consolidated database (and cannot be
repaired using passthrough mode). Synchronization can be accomplished by
bulk extraction (the recommended method), by manually loading from files,
or by sending synchronization messages through the message system.

system object
In a database, a table, view, stored procedure, or user-defined data type.
System tables store information about the database itself, while system
views, procedures, and user-defined data types largely support Sybase
Transact-SQL compatibility.

system tables
Every database includes a set of tables called the system tables, which hold
information about the database structure itself: descriptions of the tables,
users and their permissions, and so on.
The system tables are created and maintained automatically by the database
server. They are owned by the special user ID SYS, and cannot be modified
by database users.

system views
Every database includes a set of views, which present the information held in
the system tables in a more easily understood format.

table
All data in relational databases is stored in tables. Each table consists of rows
and columns. Each column carries a particular kind of information (a phone
number, a name, and so on), while each row specifies a particular entry. Each
row in a relational database table must be uniquely identifiable by a primary
key.

TCP/IP
Transmission Control Protocol/Internet Protocol (TCP/IP) is a network
protocol supported by Adaptive Server Anywhere.

template
Templates, located in the right panel of Sybase Central, are special icons that
perform a specific task. Most templates help you create objects of certain
types. For example, the Add Index template opens a wizard that helps you
create an index.

temporary table
Data in a temporary table is held for a single connection only. Global
temporary table definitions (but not data) are kept in the database until
dropped. Local temporary table definitions and data exist for the duration of
a single connection only.
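The two kinds of temporary table can be sketched as follows (the table names are illustrative):

```sql
-- The definition persists in the database; each connection
-- sees only its own rows.
CREATE GLOBAL TEMPORARY TABLE session_totals (
   order_id  INTEGER NOT NULL PRIMARY KEY,
   amount    NUMERIC(10, 2)
) ON COMMIT PRESERVE ROWS;

-- Both the definition and the rows last only for this connection.
DECLARE LOCAL TEMPORARY TABLE scratch (
   n INTEGER
) ON COMMIT PRESERVE ROWS;
```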

transaction
A transaction is a logical unit of work that should be processed in its entirety
by the database (though not necessarily at once) or not at all. Adaptive
Server Anywhere supports transaction processing, with locking features built
in to allow concurrent transactions to access the database without corrupting
the data. Transactions begin following a COMMIT or ROLLBACK
statement and end either with a COMMIT statement, which makes all the
changes to the database required by the transaction permanent, or a
ROLLBACK statement, which undoes all the changes made by the
transaction.
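For example, a transfer between two accounts forms a single transaction (the table and column names are illustrative):

```sql
-- Both updates succeed or fail together.
UPDATE account SET balance = balance - 100 WHERE account_id = 1;
UPDATE account SET balance = balance + 100 WHERE account_id = 2;
COMMIT;     -- make both changes permanent
-- A ROLLBACK statement here would instead undo both changes.
```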

transaction log
A log storing all changes made to a database, in the order in which they are
made. In the event of a media failure on a database file, the transaction log is
essential for database recovery. The transaction log should therefore be kept
on a different device from the database files for optimal security.

transaction log mirror
An identical copy of the transaction log file, maintained at the same time.
Every time a database change is written to the transaction log file, it is also
written to the transaction log mirror file.

A mirror file should be kept on a separate device from the transaction log, so
that if either device fails, the other copy of the log keeps the data safe for
recovery.

trigger
A trigger is a procedure stored in the database that is executed automatically
by the database server whenever a particular action occurs, such as a row
being updated. Triggers are used to enforce complex forms of referential
integrity, or to log activity on database tables.
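A sketch of a row-level trigger that logs salary changes (the salary_log table is a hypothetical example):

```sql
CREATE TRIGGER log_salary_change
AFTER UPDATE OF salary ON employee
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
   -- Record the before and after values of each changed row.
   INSERT INTO salary_log (emp_id, old_salary, new_salary)
   VALUES (new_row.emp_id, old_row.salary, new_row.salary);
END;
```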

uncompress
With the Uncompression utility, you can expand a compressed database file
created by the Compression utility. The Uncompression utility reads the
compressed file and creates an uncompressed database file.
The Uncompression utility does not uncompress files other than the main
database file (such as dbspace files).

unique constraint
A unique constraint identifies one or more columns that uniquely identify
each row in the table. A table may have several unique constraints.

unload
Unloading a database dumps the structure and/or data of the database to text
files (command files for the structure, ASCII comma-delimited files for the
data). This may be useful for creating extractions, creating a backup of your
database, or building new copies of your database with the same or slightly
modified structure. You can also unload the data (but not the structure) of a
particular table.

updates
In replication, each set of changes sent from one database to another is an
update to a publication or replication.

user-defined data type
A named combination of base data type, default value, check condition, and
nullability. Defining similar columns using the same user-defined data type
encourages consistency throughout the database.
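A sketch of defining and using a domain (the names are illustrative; in the CHECK condition, a variable such as @col stands for the column being defined):

```sql
-- Columns declared with this domain share the same default,
-- nullability, and CHECK condition.
CREATE DOMAIN status_code CHAR(8) NOT NULL
   DEFAULT 'open'
   CHECK ( @col IN ('open', 'closed', 'pending') );

CREATE TABLE tickets (
   ticket_id  INTEGER NOT NULL PRIMARY KEY,
   status     status_code
);
```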

user account
Every connection with a database requires a user account. The permissions
that a user has are tied to their user account. A user account consists of a user
ID and password.

user ID
A string of characters that identifies the user when connecting to a particular
database. The user ID, together with a password, constitutes a user account.

validate
When the information in a database, or a database table, is checked for
integrity it is validated.

view
A view is a computed table. Every time a user uses a view of a particular
table, or combination of tables, it is recomputed from the information stored
in those tables. Views can be useful for security purposes, and to tailor the
appearance of database information to make data access straightforward. As
a permanent part of the database schema, a view is a database object.
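A minimal sketch of creating and querying a view (the view and column names are illustrative):

```sql
CREATE VIEW employee_phones AS
   SELECT emp_fname, emp_lname, phone
   FROM employee;

-- Query the view as if it were a table:
SELECT * FROM employee_phones WHERE emp_lname LIKE 'S%';
```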

write file
If a database is used with a write file, changes made to the database do not
modify the database file itself, but instead are made to the write file. Write files
are useful in applications development, so the developer can have access to
the database without interfering with it. Also, write files are used in
conjunction with compressed databases and other read-only databases.

Index

&
&
UNIX command line, 18

*
* (asterisk)
SELECT statement, 153
*= (asterisk equals)
Transact-SQL outer join operator, 220

@
@@identity global variable, 974

=
=* (equals asterisk)
Transact-SQL outer join operator, 220

>
>>
Java methods, 535

A
access
Access
ODBC configuration for, 53
remote data access, 939
access modifiers
Java, 530
access plans
cache size, effect of, 862
reading, 839
accessing
Connect dialog, 38
actions
CASCADE, 377
RESTRICT, 377
SET DEFAULT, 377
SET NULL, 377
active connection
Connection window, 1055
Adaptive Server Enterprise
compatibility, 957
migrating databases, 715
adding
column data INSERT statement, 256
external logins, 903
JAR files, 560
Java classes, 559
remote procedures, 914
statistics to the Performance Monitor, 831
administrator role
Adaptive Server Enterprise, 963
ADO
connecting, 63
ADO applications
SQL statements, 264
security features, 772

Agent connection parameter
about, 64
aggregate functions
about, 174
Adaptive Server Enterprise compatibility, 192
ALL keyword, 174
data types, 175
DISTINCT keyword and, 174
GROUP BY clause, 178
Java columns, 573
NULL, 176
scalar aggregates, 175
vector aggregates, 178
Aggregate functions
order by clause and, 189
aliases
correlation names, 161
All connections
about, 1055
ALL keyword
aggregate functions, 174
UNION clause, 190
ALL permissions, 743
ALLOW_NULLS_BY_DEFAULT option, 969
Open Client, 1000
alphabetic characters
defined, 317
ALTER DATABASE statement
Java, 554, 555
ALTER permissions, 743
ALTER PROCEDURE statement
about, 1022
effects, 1023
ALTER statement
automatic commit, 383
ALTER TABLE statement
and concurrency, 427
CHECK conditions, 367
examples, 129
foreign keys, 134
primary keys, 132
REPLICATE ON, 1011
altering
columns, 128, 129
procedures, 441
remote servers, 900
tables, 128, 129
triggers, 454
views, 142
ANSI code pages
about, 293
choosing, 306
ANSI SQL/92 standard
typical inconsistencies, 386
ANSI_BLANKS option
Open Client, 1000
ANSI_INTEGER_OVERFLOW option
Open Client, 1000
anti-insert
locks, 414, 421, 422
write locks, 414
anti-insert locks, 415
APC_pw parameter, 1014
APC_user parameter, 1014, 1023
APCs
function APCs, 1022
Replication Server, 1005
application layer
about, 89
applications
deploying, 867, 878
deploying Embedded SQL, 884
SQL, 263
architectures
Adaptive Server, 961
arithmetic expressions
operator precedence, 158
arithmetic operations, 174
ASA 7.0 sample
data source, 44
asademo.db file
about, xviii
asajdbc server class, 927


asajdbc.zip
deploying, 887
asaodbc server class, 931
ASAProv OLE DB provider
about, 62
asasrv.ini file
server information, 74
ascending order
ORDER BY clause, 187
ASCII character set
about, 292
ASCII character sets
about, 293
ASCII format, 699
asejdbc server class, 928
aseodbc server class, 931
assigning columns
data types, 371
domains, 371
assumptions affecting optimization, 841
asterisk (*)
SELECT statement, 153
asynchronous procedures
about, 1023
Replication Server, 1005
AT clause
CREATE EXISTING TABLE statement, 905
atomic compound statements, 461
attributes
choosing, 344
definition of, 335
Java, 583
SQLCA.lock, 389
audit trail, 689
auditing
about, 778
comments, 780
dblog utility, 781
dbtran utility, 781
dbwrite utility, 781
example, 780
retrieving audit information, 779
security features, 772, 778
turning on, 778
AUTO_COMMIT option
Interactive SQL, 383
settings, 383
autocommit
transactions, 383
Autocommit
ODBC configuration, 53
autocommit mode
JDBC, 603
performance, 805
transactions, 283
autoexec.ncf
automatic loading, 6
autoincrement
IDENTITY column, 973
AUTOINCREMENT
default, 364
negative numbers, 365
signed data types, 365
UltraLite applications, 365
when to use, 426
AUTOMATIC_TIMESTAMP option, 969
Open Client, 1000
automation
administration tasks, 495
generating unique keys, 426
AutoStop connection parameter
ODBC configuration, 55
availability
database server, 18
high, 685
AVG function, 174

B
B+ trees
indexes, 817
background
running the database server, 18


backup plans
about, 661
backups
concepts, 651
databases not involved in replication, 657
dbltm and, 658
dbremote and, 658
dbsync and, 658
designing procedures, 654
external, 649
for remote databases, 660
full, 674
internal, 649
live, 685
MobiLink and, 658
MobiLink consolidated databases, 657
offline, 649
online, 649
planning, 655
replication, 1027, 1028
Replication Agent and, 658
scheduling, 655
SQL Remote and, 658
SQL statement, 649
strategies, 655
Sybase Central, 649
types of, 649
unfinished, 682
validating, 662
base table, 125
batch mode
for LTM, 1024
batches
about, 435, 457
control statements, 457
data definition statements, 457
SQL statements allowed, 487
Transact-SQL overview, 981
writing, 981
BEGIN TRANSACTION statement
remote data access, 917
BETWEEN keyword
range queries, 164
bi-directional replication, 428
bind parameters
prepared statements, 266
bindery
NetWare, 98
bit data type
outer joins and, 221
bitmaps, 355
BLOBS, 355
inserting, 258
block cursors, 277
blocking, 392, 405
deadlock, 393
transactions, 392
troubleshooting, 393
BLOCKING option, 392
bookmarks, 280
breakpoints
about, 1053
clearing, 1053
conditions, 1053
counts, 1053
debugging, 632
setting, 1053
Breakpoints window, 1053
browsing
table data, 131
views, 144
browsing databases
and isolation levels, 395
buffer size
ODBC configuration, 55
buffer space
ODBC configuration, 56
buffering
replication commands, 1024
bulk loading
performance, 713
switch, 12
bulk operations
performance, 806
byte code
Java classes, 517


C
cache
about, 807
dynamic sizing, 807
initial size, 807
Java, 589
maximum size, 807
minimum size, 807
monitoring size, 810
performance, 800
server names, 74
size switch, 10
UNIX, 809
Windows 95/98, 808
Windows NT, 808
cache size
effect on access plans, 862
caching
subqueries, 865
call stack
moving through, 1053
CALL statement
about, 437
examples, 442
parameters, 464
syntax, 459
Calls window, 1053
cancel
external functions, 492
cannot find database server, 70
cardinality
relationships and, 337
Cartesian product, 211, 214
CASCADE action
about, 377
case sensitivity
collations, 316
command line, 7
connection parameters, 65
creating databases, 968
database name, 10
databases, 970
domains, 970
identifiers, 971
international aspects, 295
Java, 535
Java data types, 563
passwords, 971
remote access, 923
server name, 10
sort order, 188
SQL, 151, 535
Transact-SQL compatibility, 970
CASE statement
syntax, 459
case-sensitivity
data, 971
domains, 971
user IDs, 971
catalog
Adaptive Server Enterprise compatibility, 963
catch
exceptions, 1054
catch block
Java, 531
Catch window, 1054
CBSize connection parameter
about, 64
CBSpace connection parameter
about, 64
CD-ROM
databases, 792
deploying, 888
chained mode, 283
CHAINED option
JDBC, 603
Open Client, 1000
changing
collations, 327
changing isolation levels within transactions, 390
changing the isolation level, 388
character data
searching for, 168
character LTM, 1025

character set
application, 299
determining, 299
server, 299
character set translation
about, 323
error messages, 311
character sets
about, 287
avoiding translation, 313
choosing, 319
definition, 291
encoding, 288
fixed width, 294
for Replication Server, 1019
Interactive SQL, 324
LTM, 1025, 1026
multibyte, 294, 310
single-byte, 292
Sybase Central, 324
translation, 323
Unicode, 310
UNIX default, 299
variable width, 294
Windows, 293
Windows default, 299
character strings
about, 168
quotes, 168
Character strings
select list using, 157
characters
alphabetic, 317
digits, 317
white space, 317
CHECK conditions
columns, 367
deleting, 370
domains, 368
modifying, 370
previous releases, 368
tables, 369
Transact-SQL, 962
CHECK constraint, 356
check constraints
using in domains, 372
checking referential integrity at commit, 420
checkpoint log, 668
CHECKPOINT_TIME option
using, 671
checkpoints
backups, 651
log, 668
scheduling, 672
urgency, 671
choosing
message plans, 813
choosing drivers, 39
choosing isolation levels, 394
class fields
about, 526
class methods
about, 526
Class.forName method
loading jConnect, 614
classes
about, 523
as data types, 563
compiling, 523
constructors, 525
creating, 558
DebugScript, 637
example, 565
importing, 617
installing, 517, 558
instances, 529
Java, 529
remote servers, 925
runtime, 533
supported, 520
updating, 561
versions, 561
Classes window, 1054
classes.zip
deploying, 887
CLASSPATH environment variable
about, 539
jConnect, 612


setting, 600
source code, 1057
clauses
about, 150
COMPUTE, 977
FOR BROWSE, 977
FOR READ ONLY, 977
FOR UPDATE, 977
GROUP BY ALL, 977
INTO, 467
ON EXCEPTION RESUME, 476, 480, 987
WITH HOLD, 276
Client Service for NetWare, 98
client side, 699
CLOSE statement
procedures, 471
code pages
ANSI, 293
definition, 291
Interactive SQL, 324
OEM, 293
overview, 292
Sybase Central, 324
Windows, 293
collation file
editing, 314
collations
about, 287, 294
changing, 327
choosing, 319
creating, 325
custom, 314, 325, 326
default, 794
definition, 291
file format, 314
for Replication Server, 1019
internals, 314
ISO_1, 306
LTM, 1025, 1026
multibyte, 310
OEM, 308
WIN_LATIN1, 306
column attributes
AUTOINCREMENT, 426
generating default values, 426
column names
international aspects, 295
joining tables using, 202
column permissions, 743
columns
allowing NULL values, 356
altering, 128, 129
assigning data types, 371
assigning domains, 371
constraints, 356
data types, 355
defaults, 362
GROUP BY clause, 178
IDENTITY, 973
Java data types, 563
joins and, 198, 199
joins and datatypes, 199
naming, 355
order in insert statements, 256
properties, 355
select list, 154
SELECT statements, 154
timestamp, 972
updating Java, 568
column-statistics registry, 857
com.sybase package
runtime classes, 553
command delimiter
setting, 485
command line
case sensitivity, 7
starting the server, 7
switches, 9
CommBufferSize connection parameter
about, 64
COMMBUFFERSIZE connection parameter
TCP/IP, 96
CommBufferSpace connection parameter
about, 64
COMMENT statement
automatic commit, 383
comments, 780


COMMIT statement
autocommit mode, 283
compound statements, 461
cursors, 284
JDBC, 603
LTM, 1016
procedures and triggers, 484
remote data access, 917
transactions, 383
verify referential integrity, 420
CommLinks connection parameter
about, 64
parentheses, 312
switches, 13
communications
about, 85
application layer, 89
compatibility, 89
complications, 91
data link layer, 88
multiple stacks, 92
network adapters, 106
network layer, 88
OSI Reference Model, 86
physical layer, 88
stack compatibility, 87
supported, 94
transport layer, 88
troubleshooting, 102
compareTo method
object comparisons, 573
comparing values
for joins, 199
comparison operators
NULL values, 170
symbols, 163
comparisons
NULL values, 170
sort orders, 163
trailing blanks, 163
compatibility
Adaptive Server Enterprise, 957
with Adaptive Server Enterprise, 700
completing transactions, 383
components
transaction attribute, 952
composite indexes, 849
compound statements
atomic, 461
declarations, 460
using, 460
Computations. See Computed columns, 157
COMPUTE clause
CREATE TABLE, 586
unsupported, 977
computed columns
creating, 586
INSERT statements, 587
Java, 586
limitations, 588
recalculation, 588
triggers, 587
UPDATE statements, 587
conceptual data modeling, 333
conceptual database models
definition of, 335
concurrency, 384
about, 386, 426
and data definition statements, 427
benefits of, 384
consistency, 386
data definition, 426
how locking works, 413
improving and indexes, 424
improving using indexes, 397
inconsistency, 386
ISO SQL/92 standard, 386
performance, 384
primary keys, 426
replication, 428
types of locks, 414
concurrent transactions
blocking, 392, 405
conditions
breakpoints, 1053


configuration file
creating LTM, 1023
format for LTM, 1023
LTM, 1014, 1023
using, 9
configuring
Adaptive Server Anywhere for Replication Server, 1017, 1019
interfaces file, 994
LTM, 1023
ODBC data sources, 52
sql.ini, 994
the Performance Monitor, 832
conflicts
cyclical blocking, 393
locking, 392
transaction blocking, 392, 405
conflicts between locks, 416
CONN connection parameter
about, 64
connect
permission, 741
Connect dialog
accessing, 38
Advanced tab, 1047
Database tab, 1046
Identification tab, 1046
overview, 40, 1045
connected users, 752
managing connected users, 752
connecting
ADO, 63
character sets, 312
Connect dialog overview, 40, 1045
database connection scenarios, 41
firewalls, 96
from Interactive SQL, 38
from Sybase Central, 38
integrated logins, 775
jConnect, 615
OLE DB, 62
RAS, 97
starting a database without connecting, 122
starting a local server, 42
to a local database, 42
using a data source, 44
utility database, 794
Windows CE, 58, 59, 60
connection
creating, 1013
connection limit
NetBIOS, 100
connection name
ODBC configuration, 56
connection parameters
about, 36, 64
case sensitivity, 65
conflicts, 66
data sources, 49
embedded databases, 66
introduction, 35
location of, 69
priority, 65
table of, 64
connection scenarios, 41
connection string
about, 69
connection strings
about, 36, 64
character sets, 312
introduction, 35
representing, 35
Connection window, 1055
connection_property function
about, 829
ConnectionName connection parameter
about, 64
connections
about, 33
active, 1055
debugging, 624
default parameters, 47
definition, 34
details, 67
embedded database, 43
from utilities, 48
Interactive SQL, 75
jConnect URL, 614
JDBC, 595, 597
JDBC client applications, 597
JDBC defaults, 603


JDBC example, 597, 601
JDBC in the server, 601
local database, 41
network, 45
overview, 34
performance, 74
problems, 67
programming interfaces, 35
remote, 917
simple, 41
troubleshooting, 67, 75
connectivity
jConnect, 39
JDBC-ODBC Bridge, 39
consistency
about, 382
assuring using locks, 413
correctness and scheduling, 394
dirty reads, 386, 398, 416
during transactions, 386
effects of unserializable schedules, 395
example of non-repeatable read, 403
ISO SQL/92 standard, 386
phantom rows, 386, 405, 408, 417, 424
repeatable reads, 386, 401, 416, 417
two-phase locking, 422
versus isolation levels, 387, 405, 408, 424
versus typical transactions, 395
consolidated databases
setting, 121
using, 117
constraints, 369
columns and tables, 356
unique constraints, 369
constructors
about, 525
inserting data, 566
Java, 530
CONTINUE_AFTER_RAISERROR option
Open Client, 1000
control statements
list, 459
conventions
documentation, xv
file names, 871
conversion errors, 704
copying
data with INSERT, 257
procedures, 442
tables, 136
views, 140
copying databases
replicating data and concurrency, 428
correctness, 394
correlation names
table names, 161, 209
Correlation names
self-joins, 209
cost estimation during optimization, 864
COUNT(*) function, 176
NULL, 176
counts
breakpoints, 1053
CPU
number used, 11
create connection statement, 1013
CREATE DATABASE statement
Adaptive Server Enterprise, 962
Java, 554, 555
permissions, 11, 796
utility database, 794
CREATE DBSPACE statement
using, 788
CREATE DEFAULT statement
unsupported, 962
CREATE DOMAIN statement
Transact-SQL compatibility, 962
using, 371
CREATE EXISTING TABLE statement
using, 907
CREATE FUNCTION statement
about, 446
CREATE INDEX statement
and concurrency, 427
example, 816


CREATE PROCEDURE statement
examples, 439
parameters, 463
create replication definition statement, 1014
CREATE RULE statement
unsupported, 962
CREATE statement
automatic commit, 383
create subscription statement, 1015
CREATE TABLE
example, 126
CREATE TABLE statement
and concurrency, 427
foreign keys, 134
primary keys, 132
proxy tables, 908
Transact-SQL, 975
CREATE TRIGGER statement
about, 451
CREATE VIEW statement
WITH CHECK OPTION clause, 140
creating
data types, 371, 372
database spaces, 788
domains, 371, 372
external logins, 903
groups, 753
indexes, 146
ODBC data sources, 49, 52
procedures, 439
proxy tables, 906, 907, 908
remote procedures, 439
remote servers, 898
replication definition, 1014
Replication Server connection, 1013
subscription, 1015
tables, 126
users, 741
views, 138
creating databases
security, 782
cross joins, 211
and self-joins, 211
CT-library, 990
current date and time defaults, 363
cursor positioning
troubleshooting, 276
cursors, 280
about, 269
and LOOP statement, 471
availability, 273
canceling, 280
choosing a type, 273
connection limit, 768
describing, 281
dynamic scroll, 272
DYNAMIC SCROLL, 276
fat, 277
fetching, 275
fetching multiple rows, 277
fetching rows, 277
in procedures, 471
insensitive, 272
introduction, 269
isolation level, 276
no scroll, 272
ODBC configuration, 53
on SELECT statements, 471
performance, 278
platforms, 273
positioning, 275
prepared statements, 271
procedures and triggers, 471
read only, 272
result sets, 269
savepoints, 284
scroll, 272
scrollable, 279
step-by-step, 270
transactions, 284
unique, 272
updating and deleting, 279
uses of, 270
using, 275
custom collations
about, 314
creating, 314
creating databases, 326
Windows CE, 305


D
daemon
database server as, 18
daemon database server, 18
data
case sensitivity, 971
consistency, 386
duplicated, 358
exporting, 693, 696, 705
formats, 699
importing, 693, 696, 701, 717
integrity and correctness, 394
invalid, 358
viewing, 131
data consistency
assuring using locks, 413
correctness, 394
dirty reads, 386, 398, 416
ISO SQL/92 standard, 386
phantom rows, 386, 405, 408, 417, 424
repeatable reads, 386, 401, 416, 417
two-phase locking, 422
data definition
concurrency, 426
data definition language
about, 112
data definition statements
and concurrency, 427
data entry
and isolation levels, 395
data integrity
about, 357
column constraints, 356
constraints, 360, 367
effects of unserializable schedules on, 395
overview, 358
rules in the system tables, 379
data link layer
about, 88
troubleshooting, 103
data model normalization, 346
data modification
permissions, 254
data organization
physical, 845
data source description
ODBC, 52
data source name
ODBC, 52
data sources
about, 36, 49
configuring, 52
connection strings, 35
creating, 49, 52
Embedded SQL, 49
example, 44
external servers, 930
file, 56
ODBC, 49
UNIX, 57
using with jConnect, 39
Windows CE, 58
data types, 371
aggregate functions, 175
assigning columns, 371
choosing, 355
creating, 371, 372
deleting, 373
Java, 563
joins and, 199
remote procedures, 915
SQL and C, 492
supported, 1021
timestamp, 972
UNION operation, 190
database
accessing the Connect dialog, 38
connecting from Interactive SQL, 38
connecting from Sybase Central, 38
connecting to a local database, 42
connection scenarios, 41
database access
controlling, 775
database administrator
defined, 736
roles, 964
database design
Java, 583
performance, 801


database file
backups, 651
location, 7
media failure, 685
ODBC configuration, 55
database files
fragmentation, 805
limit, 788
performance, 803
security, 773
database migration
importing, 715
database name
case sensitivity, 10
ODBC configuration, 54
switch, 9
database objects
editing properties, 120
database options
ALLOW_NULLS_BY_DEFAULT, 969
AUTOMATIC_TIMESTAMP, 969
Open Client, 1000
QUOTED_IDENTIFIER, 969
startup settings, 1000
database server
about, 3
automatic starting, 6
locating, 70
name caching, 74
name switch, 9
NetWare, 6
running in the background, 18
security, 782
starting, 5, 7
stopping, 15
switches, 9
UNIX, 6
Windows 95/98, 5
Windows NT, 5
Windows NT services, 18
database servers
deploying, 887
locating, 75
new, 25
database spaces
altering, 790
creating, 788
deleting, 790
database statistics
about, 834
database threads
blocked, 393
database utilities
and database connections, 48
DatabaseFile connection parameter
about, 64
DatabaseName connection parameter
about, 64
databases
allocating space, 790
altering database spaces, 790
backup, 645
case sensitivity, 968, 970
character set, 311
connecting to, 33, 67
creating, 115, 116, 117
creating database spaces, 788
creating for Windows CE, 116
custom collations, 326
deleting, 118
deleting database spaces, 790
deploying, 888
design concepts, 335
designing, 333
disconnecting from databases, 119
erasing, 118
exporting, 721
file compatibility, 115
importing, 715
initializing, 115, 116, 117
installing jConnect meta-data support, 123
Java, 583
Java classes, 334
Java-enabling, 553, 556
joins and design, 196
large databases, 788
multiple, 912
multiple-file, 788
normalizing, 346
permissions, 735
read-only, 12, 792

rebuilding, 697, 729
recovery, 645
reloading, 729
replicating entire, 1028
setting a consolidated database, 121
setting options, 120
showing system objects, 121
showing system tables, 136
starting, 17
starting without connecting, 122
stopping, 17
transaction log, 115
Transact-SQL compatibility, 967
unloading and reloading, 697, 729, 732
upgrading, 711
URL, 615
utility, 794
validating, 674
verifying design, 352
viewing and editing properties, 120
working with, 115
working with objects, 111
DatabaseSwitches connection parameter
about, 64
DataSourceName connection parameter
about, 64
DataWindows
remote data access, 895
dates
entry rules, 168
procedures and triggers, 486
searching for, 168
db_property function
about, 829
DB2
migrating databases, 715
DB2 remote data access, 933
db2odbc server class, 933
DBA
defined, 736
DBA authority
about, 736
granting, 742
not inheritable, 753
security tips, 773
DBASE format, 699
dbbackup utility
full backup, 674
live backup, 685
DBCOLLAT utility, 326
custom collations, 325
dbcon7.dll, 879, 884, 890
dbctrs7.dll
deploying, 887
dbdsn
using, 52
dbeng7.exe
deploying, 887
dbeng7.exe file, 4
DBERASE utility, 118
dbextf.dll
deploying, 887
DBF connection parameter
about, 64
embedded databases, 43
dbfile.dll, 890
DBG connection parameter
about, 64
dbinit utility
Java, 554, 555
dbipx7.dll, 879, 884
dbisql utility, 710
dbjava7.dll
deploying, 887
dblgen7.dll, 879, 884, 890
deploying, 887
dblib7.dll, 884
DB-Library, 990
dblog utility
auditing, 781
transaction log mirrors, 692
dbmapi.dll, 890


DBN connection parameter
about, 64
dbo user ID
Adaptive Server Enterprise, 963
dbodbc7.dll, 879
dbodtr7.dll, 879
dbping utility
using, 75
dbremote.exe, 890
DBS connection parameter
about, 64
dbserv7.dll
deploying, 887
dbsmtp.dll, 890
dbspaces
creating, 788
managing, 962
dbsrv7.exe
deploying, 887
dbstop utility
permissions, 11
using, 15
dbtool7.dll, 890
dbtran utility
auditing, 779, 781
transaction logs, 689
uncommitted changes, 689
DBUNLOAD
replication, 697
dbunload utility, 705, 710
dbupgrad utility
Java, 554, 555
dbvalid utility
using, 674
dbvim.dll, 890
dbwrite utility
auditing, 781
dbwtsp7.dll, 890
DDL
about, 112
deadlock
about, 392
transaction blocking, 393
deadlocks
reasons for, 393
Debug connection parameter
about, 64
debugging
about, 621
breakpoints, 632
compiling classes, 630
connecting, 624
event handlers, 504, 510
features, 622
getting started, 624
introduction, 622
Java, 630
local variables, 629, 633
permissions, 623
requirements, 623
stored procedures, 627
tutorial, 627, 630
DebugScript class, 637
decision support
and isolation levels, 395
DECLARE statement
compound statements, 460
procedures, 471, 475
default character set
about, 299
UNIX, 299
Windows, 299
DefaultCollation property
about, 794
defaults
AUTOINCREMENT, 364
column, 362
connection parameters, 47
constant expressions, 366
creating, 362
creating in Sybase Central, 363
current date and time, 363
INSERT statement and, 256


Java, 563
NULL, 365
string and number, 365
Transact-SQL, 962
user ID, 364
using in domains, 372
with transactions and locks, 426
definitions
isolation levels, 386
delaying referential integrity checks, 420
DELETE permissions, 743
DELETE statement, 1020
Java objects, 569
locking during, 421
positioned, 279
using, 261
DELETE_OLD_LOGS option, 1028
deleting
data types, 373
database spaces, 790
domains, 373
groups, 758
indexes, 147
integrated logins, 80
JAR files, 569
Java classes, 569
procedures, 443
remote procedures, 915
remote servers, 900
tables, 130
triggers, 455
user-defined data types, 373
users, 751
views, 143
deleting databases, 118
security, 782
Delphi
ODBC configuration for, 53
dependencies
of services, 27, 28
deploying applications, 867
deployment
about, 867
applications, 878
CD-ROM, 888
database servers, 887
databases, 888
embedded databases, 889
Embedded SQL, 884
file locations, 870
Interactive SQL, 886, 889
jConnect, 885
JDBC, 885
models, 868
ODBC, 878
ODBC driver, 879
ODBC settings, 878, 880
Open Client, 885
overview, 868
personal database server, 889
read only, 888
registry settings, 878, 880
silent installation, 875
SQL Remote, 890
System Management Server, 877
write files, 872
descending order
ORDER BY clause, 187
describing result sets, 281
descriptors, 281
designing databases
about, 333
concepts, 335
procedure, 341
destructors
Java, 530
devices
managing, 962
dialog boxes, 1033
File menu, 1035
overview, 1034
Tools menu, 1045
dialup networking
connections, 97
digit characters
defined, 317
directories
executable, xvii
installation, xvii


directory structure
UNIX, 870
dirty reads, 386, 398, 416
versus isolation levels, 387
DisableMultiRowFetch connection parameter
about, 64
DISCONNECT statement
using, 119
disconnecting
from databases, 119
other users from a database, 119
disk controllers
transaction log management, 664
disk crashes
about, 650
disk full
error writing to transaction log, 664
disk mirroring
transaction logs, 664
disk space
event example, 501
Java values, 580
DISK statements
unsupported, 962
disks
fragmentation and performance, 790
recovery from failure of, 685
DISTINCT clause
SELECT statement, 159
DISTINCT keyword
aggregate functions, 174, 176
Java columns, 573
distributed applications
about, 616
example, 618
requirements, 616
Distributed Transaction Coordinator
three-tier computing, 947
distributed transactions
about, 943, 944, 949
architecture, 946, 947
enlistment, 946
Enterprise Application Server, 951
recovery, 950
three-tier computing, 946
DLLs
calling from procedures, 488
DMRF connection parameter
about, 64
documentation
conventions, xv
documents
inserting, 258
domains
assigning columns, 371
case-sensitivity, 971
CHECK conditions, 368
creating, 371, 372
deleting, 373
examples of uses, 371
using, 371
double quotes
character strings, 168
Driver Not Capable error
ODBC configuration, 53
drivers
choosing a driver for a connection, 39
DROP CONNECTION statement
using, 119
DROP DATABASE statement
Adaptive Server Enterprise, 962
using, 118
DROP OPTIMIZER STATISTICS statement, 857
example, 813
DROP statement
and concurrency, 427
automatic commit, 383
DROP TABLE statement
example, 130
DROP TRIGGER statement
about, 455
DROP VIEW statement
example, 143


dropping
domains, 373
groups, 758
indexes, 147
procedures, 443
remote procedures, 915
remote servers, 900
tables, 130
triggers, 455
users, 751
views, 143
DSEdit
about, 992
DSEDIT utility
entries, 996
starting, 995
using, 994, 1008
DSN connection parameter
about, 49, 64
Windows CE, 58
DTC
three-tier computing, 947
DUMP DATABASE statement
unsupported, 962
DUMP TRANSACTION statement
unsupported, 962
duplicate
data, 358
duplicate results
eliminating, 159
duplicate rows
removing with UNION, 190
dynamic cache sizing
about, 807, 808, 809
dynamic scroll cursors, 272
DYNAMIC SCROLL cursors
troubleshooting, 276

E
early release of locks, 396, 423, 424
editing
properties of database objects, 120
effects of
transaction scheduling, 395
unserializable transaction scheduling, 395
efficiency
improving and locks, 397
improving using indexes, 424
embedded databases
connecting, 43
connection parameters, 66
deploying, 889
Java, 44
starting, 43
Embedded SQL
autocommit mode, 283
connections, 35
cursor types, 273
interface library, 68
SQL statements, 264
ENC connection parameter
about, 64
encoding
character sets, 291
definition, 291
multibyte character sets, 317
encrypted passwords
ODBC configuration, 54
EncryptedPassword connection parameter
about, 64
encryption
communications, 783
network packets, 55
passwords, 776
Encryption connection parameter
about, 64
ending transactions, 383
ENG connection parameter
about, 64
EngineName connection parameter
about, 64
character sets, 65


enlistment
distributed transactions, 946
ENP connection parameter
about, 64
Enterprise Application Server
component transaction attribute, 952
distributed transactions, 951
three-tier computing, 947
transaction coordinator, 951
entities
about, 335
attributes, 335
choosing, 341
definition of, 335
forcing integrity, 374
identifiers, 335
Java, 583
entity-relationship diagrams
definition of, 335
reading, 338
entity-relationship modeling, 333
enumeration of joins, 862
environment variables, 48
SQLCONNECT, 48
equality
Java objects, 572
equals operator
comparison operator, 163
erasing databases, 118
error handling
Java, 530
ON EXCEPTION RESUME, 476
procedures and triggers, 474
error messages
character set translation, 311
errors
conversion, 704
procedures and triggers, 474
Transact-SQL, 986, 987
escape characters
Java, 537
SQL, 537
estimates
providing, 827
estimates of cost during optimization, 864
ethernet, 105
euro symbol
1252LATIN1 collation, 306, 307
Evaluate window, 1055
event handlers
debugging, 504, 510
defined, 496
internals, 507
event handling
about, 495
event types
about, 506
events
about, 500
defined, 496
handling, 495, 496
internals, 506
examples
implications of locking, 408
non-repeatable read, 403
non-repeatable reads, 398, 401
phantom rows, 405, 408
Excel and remote access, 938
exception handlers
procedures and triggers, 479
exceptions
declaring, 475
Java, 530
trapping, 1054
exclusive locks, 414
exclusive versus shared locks, 415
executable directory
about, xvii
EXECUTE IMMEDIATE statement
procedures, 483
executeQuery method
about, 607
executeUpdate JDBC method, 268


executeUpdate method
about, 605
exporting
Adaptive Server Enterprise compatibility, 700
available file formats, 699
data, 696
issues, 705
NULL values, 727
procedures, 705
query results, 725
schema, 724
tables, 723
tools, 697, 705
exporting data, 724
file formats, 707
NULLS option, 707
overview, 694
tools, 705
UNLOAD TABLE statement, 702
exporting databases, 721
expressions
NULL values, 171
extended characters
about, 293
external, 699
external functions
canceling, 492
prototypes, 490
return types, 493
external logins
about, 903
creating, 903
dropping, 904
extracting, 697, 710, 734
procedures, 709

F
failure
recovery from, 645
fat cursors, 277
fetch operation
cursors, 277
multiple rows, 277
scrollable cursors, 279
FETCH statement
procedures, 471
fetching
limits, 275
fields
class, 526
instance, 526
Java, 525
private, 530
protected, 530
public, 530, 540
file data sources
creating, 56
file formats, 699
File menu, 1035
file names
conventions, 871
language, 871
version number, 871
FileDataSourceName connection parameter
about, 64
FileDSN connection parameter
about, 49, 64
Windows CE, 58
files
deployment location, 870
fragmentation, 805
naming conventions, 871
performance, 803
finally block
Java, 531
finishing transactions, 383
firewalls
connecting across, 96
FIRST clause
queries, 188
FIXED format, 699


fixed width character sets
about, 294
FLOAT_AS_DOUBLE option
Open Client, 1000
follow bytes
about, 294, 312
FOR BROWSE clause
unsupported, 977
FOR READ ONLY clause
ignored, 977
FOR statement
syntax, 459
FOR UPDATE clause
unsupported, 977
foreign key, 133
foreign key relationships
joining tables using, 200
foreign keys, 133, 134
as invalid data, 358
creating, 133, 134
deleting, 133
mandatory/optional, 376
performance, 811
referential integrity, 376
showing in Sybase Central, 133
format
Java objects, 580
forward log, 651
FORWARD TO statement, 913
FoxPro
format, 699
Foxpro and remote access, 940
fragmentation and performance, 790
frame type, 105
FROM clause, 161
UPDATE statement, 260
FROM keyword
joins, 198
full backups
performing, 674
functions
external, 488
replicating, 1022
support for APC format, 1022
TRACEBACK, 475
tsequal, 973
user-defined, 446

G
generated join conditions, 203
generating
physical data model, 349
generating unique keys, 426
getConnection method
instances, 603
getObject method
using, 618
global temporary table, 125
global temporary tables, 698
Globals window, 1055
-gn command-line option
threads, 575
go
batch statement delimiter, 457
GRANT statement
and concurrency, 427
creating groups, 753
DBA authority, 742
group membership, 754
JDBC, 611
new users, 741
passwords, 742
permissions, 743
procedures, 747
RESOURCE authority, 742
table permissions, 743
Transact-SQL, 965
WITH GRANT OPTION, 747
without password, 757
granting
remote permissions, 749


graph type
configuring the Performance Monitor, 832
graphing
setting the interval time, 832
using the Performance Monitor, 830
greater than
comparison operator, 163
range specification, 164
greater than or equal to
comparison operator, 163
GROUP BY ALL clause
unsupported, 977
GROUP BY clause
about, 178
Adaptive Server Enterprise compatibility, 192
aggregate functions, 178
execution, 180
Java columns, 573
order by and, 189
SQL/92, 192
WHERE clause, 182
GROUP permissions
not inheritable, 753
groups
Adaptive Server Enterprise, 964
creating, 753
deleting, 758
deleting integrated logins, 80
granting integrated logins, 78
leaving, 755
managing, 753
membership, 754
of services, 28
permissions, 738, 756
PUBLIC, 758
remote permissions, 749
revoking membership, 755
setting options, 740
SYS, 758
without passwords, 757
-gx option
threads, 924

H
handling events, 495, 496
hardware mirroring
transaction logs, 664
hash values, 848
HAVING clause
GROUP BY and, 184
logical operators, 185
without aggregates, 184
heap size
Java, 590
high availability
database server, 18
live backups, 685
HOLDLOCK keyword
Transact-SQL, 978
how locking is implemented, 424

I
I/O
estimates, 827
idle, 672
IBM
protocol stacks, 91
remote data access to DB2, 933
icons
for running services, 24
used in manuals, xvi
IDebugAPI interface, 638
IDebugWindow interface, 641
identifiers
case insensitivity, 295
case sensitivity, 971
definition of, 335
international aspects, 295
qualifying, 151
uniqueness, 971
using in domains, 372
IDENTITY column, 973
retrieving values, 974


idle I/O
task, 672
idle server
event example, 502
IdleTime event
polling, 506
IF statement
syntax, 459
images
inserting, 258
implementation of locking, 424
import statement
Java, 529, 538
jConnect, 613
importing
Adaptive Server Enterprise compatibility, 700
available file formats, 699
data, 696
file formats, 703
issues, 701
non-matching table structures, 703
NULL values, 703
procedures, 701
tools, 697, 701
using temporary tables, 698
Importing
databases, 715
importing data
conversion errors, 704
DEFAULTS option, 719
interactively, 717
LOAD TABLE statement, 718
overview, 694
performance, 713
temporary tables, 703, 719
importing databases
database migration, 715
IN keyword, 166
CREATE TABLE statement, 788
matching lists, 165
inconsistencies
avoiding using locks, 413
dirty reads, 386, 398, 416
effects of unserializable schedules, 395
ISO SQL/92 standard, 386
non-repeatable reads, 386, 401, 416, 417
phantom rows, 386, 405, 408, 417, 424
inconsistency
example of non-repeatable read, 403
indexes
about, 816
benefits and locking, 397
composite, 849
composite, hash values, 849
creating, 146, 816
deleting, 147
hash values, 848
how indexes work, 817
improving concurrency, 424
inspecting, 148
Java, 586
Java columns, 573
Java values, 580
optimization and, 848
performance, 802, 816
selection of during optimization, 863
temporary tables, 819
Transact-SQL, 971
validating, 146
working with, 145
initial cache size, 807
inner joins
FROM clause, 205
INOUT parameters
Java, 578
INPUT statement, 701, 717
insensitive cursors, 272
insert locks, 415
INSERT permissions, 743
INSERT statement, 701, 717, 1020
about, 255
IDENTITY columns and, 257
Java, 566
JDBC, 605, 606
locking during, 418
objects, 610


performance, 266
SELECT, 255
inserting
BLOBS, 258
Inspect window, 1056
INSTALL statement
class versions, 581
introduction, 534
using, 559, 560
installation
silent, 875
installation directory
about, xvii
installation programs
deploying, 869
installing
JAR files, 560
Java classes, 559
jConnect meta-data support, 123
InstallShield
silent installation, 875
instance fields
about, 526
instance methods
about, 526
instances
Java classes, 529
instantiated
definition, 529
INT connection parameter
about, 64
Integrated connection parameter
about, 64
integrated logins
about, 77
creating, 78
default user, 84
deleting, 80
network aspects, 83
ODBC configuration, 54
operating systems, 77
security, 81, 83
security features, 775
using, 78, 80
integrity
constraints, 360, 367
of data, 357
overview, 358
Interactive SQL
AUTO_COMMIT option, 383
command delimiter, 485
COMMIT_ON_EXIT option, 383
deploying, 886, 889
exiting, 383
Messages pane, 811
Interactive SQL export wizard, 705
Interactive SQL Import Wizard, 701
interface libraries
connections, 34
locating, 68
interfaces
IDebugAPI, 638
IDebugWindow, 641
Java, 530
interfaces file
configuring, 994
Open Servers, 1008
interference between transactions, 392, 405
interleaving transactions, 394
internal, 699
internals
event handlers, 507
event handling, 506
events, 506
schedules, 506
INTO clause
using, 467
invalid data, 358
IP address
about, 996
ping, 104
IPX
deprecated protocol, 94
Windows NT, 98


IPX protocol
starting, 13
IRQ settings
network adapters, 106
IS NULL keyword, 170
ISNULL function
about, 170
ISO SQL/92 standard
concurrency, 386
typical inconsistencies and, 386
ISO_1 collation
about, 306
Isolation level
ODBC configuration, 52
isolation levels
about, 386
applications, 284
changing, 388
changing within a transaction, 390
choosing, 394
choosing types of locking, 404
cursors, 276
definition, 386
implementation at level 0, 416
implementation at level 1, 416
implementation at level 2, 417
implementation at level 3, 417
ODBC, 389
setting, 388
tutorials, 398
versus typical inconsistencies, 387, 405, 408, 424
versus typical transactions, 395
viewing, 391
ISOLATION_LEVEL option
Open Client, 1000

J
Jaguar
Enterprise Application Server, 951
Jar files
Java, 529
JAR files
adding, 560
deleting, 569
installing, 558, 560
updating, 561
versions, 561
Java
about, 549, 621
adding to Version 7 databases, 556
API, 519, 533
catch block, 531
class versions, 580
classes, 529
compareTo method, 573
compiling classes, 523
computed columns, 586
connection parameters, 66
constructors, 530
creating columns, 563
data types, 563
database design, 583
debugging, 621, 630
defaults, 563
deleting rows, 569
destructors, 530
enabling a database, 553
error handling, 530
escape characters, 537
fields, 525
finally block, 531
heap size, 590
indexes, 573, 580
inserting, 566
inserting objects, 568
installing classes, 558
interfaces, 530
introduction, 514, 523, 541
JDBC, 591
main method, 537, 575
memory issues, 589
methods, 525
namespace, 590
NULL, 563
objects, 524
overview, 550
performance, 580
persistence, 537
primary keys, 573
Procedure Not Found error, 575
queries, 570

querying objects, 616 packages, 613


replicating objects, 581 system objects, 613
runtime classes, 553 TDS, 990
runtime environment, 551 URL, 614
sample tables, 550 versions, 612
storage, 580
supported classes, 520 jConnect driver, 39
supported platforms, 519 jConnect meta-data support
try block, 531 installing, 123
unloading and reloading objects, 581
updating values, 568 JDBC
version, 533 about, 591
virtual machine, 517, 518, 590 applications overview, 592
autocommit, 603
Java classes autocommit mode, 283
adding, 559 client connections, 597
database design, 334 client-side, 595
deleting, 569 connecting, 597
installing, 559 connecting to a database, 615
Java columns connection code, 597
updating, 569 connection defaults, 603
connections, 595
Java data types cursor types, 273
inserting, 610 data access, 604
retrieving, 610 deploying, 885
examples, 592, 597
Java objects INSERT statement, 605, 606
comparing, 572 jConnect, 612
java package non-standard classes, 593
runtime classes, 553 overview, 592
permissions, 611
Java stored procedures prepared statements, 609
about, 577 requirements, 592
example, 577 runtime classes, 553
JAVA_HEAP_SIZE option SELECT statement, 607
using, 590 server-side, 595
server-side connections, 601
JAVA_NAMESPACE_SIZE option SQL statements, 264
using, 590 version, 533, 593
version 2.0 features, 593
jcatalog.sql file
ways to use, 592
jConnect, 613
JDBC connectivity, 39
jConnect
about, 612 jdbcdrv.zip file
CLASSPATH environment variable, 612 jConnect, 612
connections, 597, 601
database setup, 613 JDBCExamples class
deploying, 885 about, 604
installation, 612 JDBCExamples.java file, 592
jdbcdrv.zip, 612
loading, 614 JDBC-ODBC Bridge driver, 39

jdemo.sql keywords
sample tables, 550 HOLDLOCK, 978
NOHOLDLOCK, 978
JDK remote servers, 923
definition, 519 SQL and Java, 538
version, 533
join operators
Transact-SQL, 979
L
joins
about, 196 LANalyzer, 105
Cartesian product, 211 language
column names in, 199 locale, 298
correlation names and, 209
cross joins, 211 language resource library
enumeration of, 862 messages file, 291
execution plans, 820
language support
from clause in, 198
about, 287
join conditions, 203
collations, 319
joining many tables to one, 216
multibyte character sets, 310
key joins, 200
overview, 288
many tables in, 216
natural joins, 202 languages
outer join conditions, 207 file names, 871
performance and, 215
process of, 214 laptop computers
relational model and, 196 and transactions, 428
restrictions, 199 leaf page
self-joins, 209 indexes, 817
self-joins and cross joins, 211
star joins, 216 LEAVE statement
Transact-SQL, 979 syntax, 459
Transact-SQL outer joins, 220 leaving
Transact-SQL outer, null values and, 222 groups, 755
Transact-SQL outer, restrictions on, 221
Transact-SQL outer, views and, 222 left-outer joins
updates using, 260 FROM clause, 205
using comparisons, 203
legend
where clause in, 204
resizing, 832
less than
comparison operator, 163
K range specification, 164
key joins, 200 less than or equal to
keyboard mapping comparison operator, 163
about, 291 levels
keys of isolation, 386
assigning, 347
performance, 811

levels of isolation Locals window, 1056


changing, 388
changing within a transaction, 390 LocalSystem account
setting default, 388 about, 19
options, 24
liveness
ODBC configuration, 55 locks
about, 381, 392, 413
libctl.cfg file anti-insert, 414, 415, 421, 422
dsedit, 995 blocking, 392, 405
choosing isolation levels, 404
LIKE operator conflict handling, 392, 405
wildcards, 167 conflicting types, 416
line breaks deadlock, 393
SQL, 151 early release of, 396, 423, 424
exclusive, 414
Links connection parameter how locking works, 413
about, 64 implementation at level 0, 416
list, matching in SELECT, 166 implementation at level 1, 416
implementation at level 2, 417
literal values implementation at level 3, 417
NULL, 171 inconsistencies versus typical isolation levels,
387, 424
live backups, 685
insert, 414, 415
regular backups and, 665
isolation levels, 386
LivenessTimeout connection parameter nonexclusive, 414, 415
about, 64 orphans and referential integrity, 419
phantom rows versus isolation levels, 405, 408
LOAD DATABASE statement procedure for deletes, 421
unsupported, 962 procedure for inserts, 418
LOAD TABLE statement, 710 procedure for updates, 420
read, 414, 415, 422
LOAD TABLE statement, 701, 718 reducing the impact through indexes, 397
security, 782 shared versus exclusive, 415
LOAD TRANSACTION statement two-phase locking, 422
unsupported, 962 types of, 414
typical transactions versus isolation levels, 395
loading, 694, 696 uses, 414
write, 414
loading data
security, 782 LOG connection parameter
about, 64
local temporary tables, 698
log files
locale
checkpoint, 668
character sets, 311
rollback, 670
language, 298
transaction, 651
locales
Log Transfer Manager, 991
about, 297
setting, 321 log translation utility, 689
localhost machine name, 996 Logfile connection parameter
about, 64

logging SQL statements, 122 LTO connection parameter


about, 64
logical operators
HAVING clauses, 185
LOGIN_MODE database option
integrated logins, 78 M
logins main method
integrated, 77, 78 Java, 537, 575

logs maintenance user


rollback log, 385 primary site, 1009
replicate site, 1012
LOOP statement user ID, 1017
in procedures, 471
syntax, 459 managing
transactions, 917
Lotus format, 699
managing connected users, 752
Lotus Notes
passwords, 940 mandatory
remote data access, 940 foreign keys, 376

lower code page mandatory relationships, 338


about, 293 manual commit mode, 283
LTM many-to-many relationships
character set configuration, 1026 definition of, 337
character sets, 1025 resolving, 350, 351
collations, 1025, 1026
configuration, 1008, 1023 master database
configuration file, 1014 unsupported, 961
Open Client/Open Server character sets, 1025
MAX function, 174
Open Client/Open Server collations, 1025
Java columns, 573
starting, 1014
supported operations, 1020 maximum cache size, 807
transaction log management, 1028
media failure
LTM configuration file protection, 663
about, 1023
character sets, 1026 media failures
creating, 1023 about, 650
format, 1023 recovery, 653
recovery of, 685
LTM_admin_pw parameter, 1014
membership
LTM_admin_user parameter, 1014 revoking group membership, 755
LTM_charset parameter, 1014 memory
LTM configuration file, 1026 connection limit, 768
Java, 589
LTM_language parameter
LTM configuration file, 1026 Message Agent
transaction log management, 660
LTM_sortorder parameter
LTM configuration file, 1026 message plans, 813

MESSAGE statement modifying


procedures, 475 remote servers, 900
views, 142
messages
language resource library, 291 monitor
adding statistics to the graph, 831
methods configuring, 832
>>, 535 opening the Performance Monitor, 830
class, 526 Performance Monitor overview, 830
declaring, 527 removing statistics from the graph, 831
dot operator, 534 resizing the legend, 832
instance, 526
Java, 525 monitoring
private, 530 cache, 810
protected, 530
public, 530 monitoring performance, 833
static, 526 more than one transaction at once, 384
void return type, 576
MSDASQL OLE DB provider
Methods window, 1056 about, 62
Microsoft Access msodbc server class, 936
ODBC configuration for, 53
remote data access, 939 multibyte character sets
about, 294
Microsoft Excel using, 310
remote data access, 938
multibyte characters
Microsoft Foxpro encodings, 317
remote data access, 940 properties, 317
Microsoft network software multiple databases
about, 91 dsedit entries, 996
Microsoft SQL Server and remote access, 936 joins, 912

Microsoft Transaction Server multiple record fetching


three-tier computing, 947 ODBC configuration, 56

Microsoft Visual Basic multiple transactions


ODBC configuration for, 53 benefits of concurrency, 384
concurrency, 384
migration
importing databases, 715 multi-processor machines
switch, 10
MIN function, 174
Java columns, 573 multi-threading
Java, 575
minimal column definitions, 1021
minimum cache size, 807
mirror N
transaction log, 651, 652, 691, 692 name spaces
MobiLink indexes, 971
backups, 658 triggers, 971

named pipes network adapters


about, 101 configuring, 106
drivers, 102
Named Pipes network communication, 88
supported protocols, 94
network communications, 30, 31
NamedPipes
starting protocol, 13 network connections
transport protocol, 88 switch, 13
namespace network drives
Java, 590 database files, 7
naming and nesting savepoints, 385 network layer
about, 88
national language support
about, 287 network protocol
collations, 319 introducing, 45
multibyte character sets, 310
overview, 288 network protocols
about, 85
natural joins, 202 application layer, 89
FROM clause, 202 compatibility, 89
complications, 91
NDIS data link layer, 88
compatibility, 92 layers, 86
drivers, 102 multiple stacks, 92
network communication, 88 network adapters, 106
negative permissions, 777 network layer, 88
ODBC configuration, 55
nesting and naming savepoints, 385 OSI Reference Model, 86
net.cfg file, 105 physical layer, 88
stack compatibility, 87
NetBEUI supported, 94
transport protocol, 88 transport layer, 88
troubleshooting, 102
NetBIOS
about, 100 network server
connection limit, 100 about, 4
supported protocols, 94 connecting to, 45
transport protocol, 88 encryption, 783
troubleshooting, 103
Windows 95/98, 99 new databases, 25
Windows NT, 100 NLMs
NetBIOS protocol calling from procedures, 488
starting, 13 no scroll cursors, 272
NetWare NOHOLDLOCK keyword
bindery, 98 ignored, 978
database server, 6
network adapter settings, 105 non-dirty reads, 398
protocol stacks, 91 nonexclusive locks, 414, 415

non-repeatable read
example, 403
O
object-oriented programming
non-repeatable reads, 386, 401, 416, 417
Java, 529
isolation levels, 387, 424
style, 540
normal forms
objects
first normal form, 348
class versions, 580
normalizing database designs, 346
inserting, 610
second normal form, 348
Java, 524
third normal form, 349
qualified names, 760
normalization, 196 querying, 616
replication, 581
not equal to retrieving, 610
comparison operator, 163 storage format, 561
not greater than storage of Java, 580
comparison operator, 163 types, 524
unloading and reloading, 581
NOT keyword
search conditions, 164 occasionally connected users
using, 171 replicating data and concurrency, 428

not less than ODBC


comparison operator, 163 Administrator, 49
applications, 389
Notes and remote access, 940 applications, and locking, 389
Novell client software, 102 autocommit mode, 283
connections, 35
NULL cursor types, 273
aggregate functions, 176 data sources, 49, 881
DISTINCT clause, 160 deploying, 878
Java, 563 driver deployment, 879
Transact-SQL compatibility, 976 driver location, 68
external servers, 930
NULL value
initialization file for UNIX, 57
allowing in columns, 356
registry entries, 881
column default, 969
SQL statements, 264
NULL values translation driver, 323
about, 169 UNIX support, 57
column definition, 171 Windows CE, 58
comparing, 170
ODBC Administrator
default, 365
using, 49
default parameters, 170
output, 707, 727 ODBC connectivity, 39
sort order, 188
Transact-SQL outer joins, 222 ODBC data sources
about, 36
NULLS option, 707, 727 configuring, 52
creating, 49, 52
null-supplying table
UNIX, 57
outer joins, 220
using with jConnect, 39
NWLink IPX/SPX Compatible Transport, 98
odbc server class, 938

ODBC server classes, 930 Open Client


autocommit mode, 283
ODBC settings configuring, 994
deploying, 878, 880 cursor types, 273
ODBC translation driver deploying, 885
ODBC configuration, 52 interface, 990
options, 1000
ODI SQL statements, 264
compatibility, 92
drivers, 102 Open Server
network communication, 88 adding, 994, 1008
address, 996
ODI drivers, 102 architecture, 990
ODI2NDI translation driver, 92 connecting to, 1009
starting, 992
ODIHLP translation driver, 92 system requirements, 992
ODINSUP translation driver, 92 OPEN statement
procedures, 471
OEM code pages
about, 293 operating system
choosing, 306 file names, 871
offline backups, 649 operators
defined, 654 arithmetic, 158
NOT keyword, 164
OLE DB
precedence, 158
ASAProv provider, 62
connecting, 62 optimization of queries
providers, 62 about, 835
assumptions, 841
OLE transactions
reading access plans, 839
three-tier computing, 946
rewriting subqueries as EXISTS predicates, 843
OmniConnect support, 991 steps in, 838
ON EXCEPTION RESUME clause optimizations
about, 476 using indexes, 424
not with exception handling, 480
optimizer
stored procedures, 474
about, 826, 835
Transact-SQL, 987
assumptions, 841
ON phrase predicate analysis, 850
joining tables, 203 role of, 836
selectivity estimation, 856
one-to-many relationships semantic subquery transformations, 852
definition of, 337
resolving, 350 optional foreign keys, 376
one-to-one relationships optional relationships, 338
definition of, 337
options
resolving, 350
BLOCKING, 392
online backups, 649 DEFAULTS, 719
defined, 654 ISOLATION_LEVEL, 388
NULLS, 707, 727

Open Client, 1000


setting database options, 120
P
setting user and group options, 740 packages
startup settings, 1000 installing, 560
Java, 529, 538
or (|) bitwise operator
jConnect, 613
using, 171
locating, 1057
Oracle
packets
migrating databases, 715
network communications, 88
Oracle and remote access, 935
page size
oraodbc server class, 935 performance, 802
switch, 11
ORDER BY clause
GROUP BY, 189 parentheses
Java columns, 573 in arithmetic statements, 158
limiting results, 188 UNION operators, 190
performance, 823
password
ordering default, 736
Java objects, 572
Password connection parameter
ordering of transactions, 394 about, 64
organization passwords
of data, physical, 845 case sensitivity, 971
changing, 742
orphans and referential integrity, 419 length, 773
OSI Reference Model Lotus Notes, 940
protocol stacks, 86 ODBC configuration, 54
security features, 776
OUT parameters security tips, 773
Java, 578 utility database, 796
outer joins performance
FROM clause, 205 about, 799
join conditions, 207 autocommit, 805
restrictions, 221 bulk loading, 713
Transact-SQL, 220, 979 bulk operations, 806
Transact-SQL, restrictions on, 221 cache, 800
Transact-SQL, views and, 222 cursors, 273, 278
outer reference database design, 801
definition, 185 disk fragmentation, 790
file fragmentation, 805
output redirection, 725 file management, 803
improving versus locks, 397
OUTPUT statement, 705, 725
indexes, 145, 802
owners Java values, 580
about, 737 JDBC, 609
joins and, 215
keys, 811
LTM, 1024
monitoring, 829, 833

multiple table queries, 820 procedures, 747


page size, 802, 817 procedures calling external functions, 488
prepared statements, 266 RESOURCE authority, 737, 742
primary keys, 665 revoking, 750
server options, 6 revoking remote, 749
statistics, 833 scheme, 775
switches, 10 security features, 775
TCP/IP, 96 setting table permissions, 743
temporary tables, 824 switches, 11
tips, 800 tables, 738, 743
transaction log, 651, 665, 800 the right to grant, 747
transaction log mirrors, 652 triggers, 455, 737, 749
using Interactive SQL to examine, 811 user-defined functions, 448
views, 738, 745, 762
Performance Monitor, 830 WITH GRANT OPTION, 747
adding statistics, 831
configuring, 832 persistence
graphing, 830 Java classes, 537
opening, 830
overview, 830 personal server
removing statistics, 831 about, 4
resizing the legend, 832 deploying, 889
setting the interval time, 832 phantom
Windows NT, 833 rows, 386, 405, 408, 417, 424
Performance Monitor (NT) phantom rows
starting, 833 versus isolation levels, 387, 405, 408, 424
permissions physical data model
about, 777 generating, 349
Adaptive Server Enterprise, 964
conflicts, 767 physical data organization, 845
connect, 741 physical layer
data modification, 254 about, 88
DBA authority, 736 troubleshooting, 105
debugging, 623
file administration statements, 796 ping
granting passwords, 741 TCP/IP, 104
granting remote, 749 testing networks, 30
group, 753 testing Open Client, 998
group membership, 754
planning
groups, 738, 756
backups, 655, 661
individual, 741
inheriting, 747, 753 plans
integrated login permission, 78 choosing message plans, 813
JDBC, 611
listing, 769 platforms
managing, 735 cursors, 273
negative, 777 Java support, 519
overview, 736 plus operator
passwords, 742 NULL values, 171
procedure result sets, 468

polling frequency, 26 primary keys, 132


AUTOINCREMENT, 364
port numbers concurrency, 426
TCP/IP, 993 creating, 132
portable computers entity integrity, 375
replicating databases, 428 generation, 426
Java columns, 573
portable SQL, 975 modifying, 131, 132
positioned delete operation, 279 performance, 811
transaction log, 665
positioned update operation, 279
primary site
positioned updates adding Replication Server information, 1009
about, 276 Replication Server, 1005
uses of LTM at, 1006
PowerBuilder
remote data access, 895 primary sites
creating, 1007
predicates
Replication Server, 1006
about, 162
analysis of, 850 print
selectivity estimation, 856 Java, 536
prefetch, 277, 278 println method
Java, 536
PREFETCH option, 278
private
PREPARE statement
Java access, 530
remote data access, 917
procedure language
prepared statements
overview, 980
bind parameters, 266
connection limit, 768 procedure not found error
cursors, 271 Java methods, 607
dropping, 266
Java objects, 568 procedures
JDBC, 609 about, 435, 437
using, 266 adding remote procedures, 914
altering, 441
PreparedStatement class and Adaptive Server Anywhere LTM, 1020
setObject method, 568 benefits of, 438
calling, 442
PreparedStatement interface
command delimiter, 485
about, 609
copying, 442
prepareStatement method, 268 creating, 439
cursors, 471
preparing dates, 486
to commit, 947 default error handling, 474
preserved table deleting, 443
outer joins, 220 deleting remote procedures, 915
error handling, 474, 986, 987
primary key, 131 exception handlers, 479
creating, 131 EXECUTE IMMEDIATE statement, 483
external functions, 488

multiple result sets from, 469 protocol stacks


parameters, 463, 464 layers, 86
permissions, 747
permissions for creating, 737 protocols
permissions for result sets, 468 about, 85
replicating, 1022 application layer, 89
result sets, 444, 468 compatibility, 89
return values, 986 complications, 91
returning results, 466 data link layer, 88
returning results from, 443 layers, 86
savepoints, 484 multiple stacks, 92
security, 438, 762 network adapters, 106
SQL statements allowed, 462 network layer, 88
structure, 462 OSI Reference Model, 86
table names, 485 physical layer, 88
times, 486 stack compatibility, 87
tips, 485 supported, 13, 94
Transact-SQL, 983 switch, 13
Transact-SQL overview, 980 transport layer, 88
translation, 983 troubleshooting, 102
using, 439 two-phase locking, 423
using cursors in, 471 prototypes
variable result sets from, 470 external functions, 490
verifying input, 486
warnings, 478 proxy tables, 905
writing, 485 about, 896, 905
creating, 896, 906, 907, 908
Procedures window, 1057 location, 905
processors public
multiple, 11 Java access, 530
number used, 11
public fields
programming interfaces issues, 540
connections, 35
PUBLIC group, 758
properties
setting all database object properties, 120 publications
data replication and concurrency, 428
property
definition of, 335 PUT operation, 279

property function PUT statement, 279


about, 829 PWD connection parameter
property sheets, 120, 1061 about, 64
overview, 1063
protected
Java access, 530 Q
protected keyword qualifications
Java, 529 about, 162

qualified names read-only


database objects, 151 databases, 12
tables, 757 deploying databases, 888
qualified object names, 760 read-only media
modifying databases, 792
qualifying
column names in joins, 198 rebuilding, 697, 709, 729
databases, 697
queries issues, 709
about, 149 procedures, 709
Java, 570 purpose, 711
JDBC, 607 replicating databases, 732
optimization, 826, 835 tools, 710
Transact-SQL, 976
recovery
query results about, 645
exporting, 725 distributed transactions, 950
Query window, 1057 media failure, 653, 685
rapid, 685
questions switch, 12
character sets, 289 system failure, 649
quotation marks transaction log, 651
Adaptive Server Enterprise, 168 transaction log mirrors, 652
in strings, 168 uncommitted changes, 689
specification, 168 urgency, 671

QUOTED_IDENTIFIER option, 969 RECOVERY_TIME option


introduction, 168 using, 671
Open Client, 1000 redirecting
quotes output to files, 725
Java strings, 536 references
showing references from other tables, 133
REFERENCES permissions, 743
R referential integrity
RAISERROR statement actions, 377
ON EXCEPTION RESUME, 987 breached by client application, 376
Transact-SQL, 987 checking, 378
enforcing, 374
range queries, 164 losing, 376
RAS orphans, 419
dialup networking, 97 permissions, 254
verification at commit, 420
read locks, 414, 415, 422
reflexive relationships, 339
read only cursors, 272
registry
reading access plans, 839 deploying, 878, 880
reading entity-relationship diagrams, 338 ODBC, 881
relational model
joins and, 196

relationships classes, 925, 926


about, 336 creating, 898
cardinality of, 337 deleting, 900
changing to an entity, 339 external logins, 903
choosing, 341 listing properties, 902
definition of, 335 transaction management, 917
mandatory versus optional, 338
many-to-many, 337 remote tables
one-to-many, 337 about, 896
one-to-one, 337 accessing, 893
reflexive, 339 listing, 901
resolving, 349 listing columns, 909
roles, 337 REMOTEPWD
RELEASE SAVEPOINT statement, 385 using, 615

reloading removing
databases, 697, 729 statistics from the Performance Monitor, 831
users from groups, 755
relocatable
defined, 589 repeatable reads, 386, 401, 416, 417

remote data replicate minimal columns


location, 905 support for, 1021

remote data access REPLICATE ON clause, 1011


about, 893 replicate site
case sensitivity, 923 adding Replication Server information, 1012
DataWindows, 895 Replication Server, 1005
internal operations, 919 uses of LTM at, 1006
introduction, 894
passthrough mode, 913 replicate sites
PowerBuilder, 895 creating, 1007
remote servers, 898 Replication Server, 1005
SQL Remote unsupported, 923 replication
troubleshooting, 923 backup procedures, 660, 1027, 1028
unsupported features, 923 benefits, 1004
remote databases buffering, 1024
replication, 428 concurrency, 428
concurrency issues, 426
remote permissions, 749 creating a replication definition, 1014
remote procedure calls defining, 1020
about, 914 entire databases, 1028
introduction, 1004
remote procedures, 914 Java objects, 581
adding, 914 of procedures, 1022
creating, 439 procedures, 1022
data types, 915 rebuilding, 697, 709
deleting, 915 rebuilding databases, 697
rebuilding databases involved in, 732
remote servers Replication Server, 1005
about, 898 setting flag, 1011
altering, 900

stored procedures, 1023 resource managers


transaction log management, 660, 1027, 1028 about, 944
three-tier computing, 946
Replication Agent
backups, 658 response file
definition, 875
Replication Server
Adaptive Server Anywhere character sets, 1019 RESTRICT action
Adaptive Server Anywhere collations, 1019 about, 377
backup procedures, 1027
creating a connection, 1013 restrictions
creating a replication definition, 1014 remote data access, 923
creating a subscription, 1015 result sets
preparing Adaptive Server Anywhere databases, cursors, 269
1012 Java methods, 577
primary site, 1006, 1013 Java stored procedures, 577
purposes, 1004 metadata, 281
replicate site, 1005, 1013 multiple, 469
replicating an entire database, 1028 permissions, 468
replicating procedures, 1022, 1023 procedures, 444, 468
rssetup.sql script, 1009 Transact-SQL, 984
starting an Adaptive Server Anywhere server, using, 275
1008 variable, 470
support, 991
supported versions, 1006 retrieving objects, 616
transaction log management, 1027 RETURN statement
Replication Server Adaptive Server Anywhere about, 466
configuration, 1017, 1019 return types
reserved words external functions, 493
remote servers, 923 return values
SQL and Java, 538 procedures, 986
RSSETUP.SQL script REVOKE statement
about, 1017 and concurrency, 427
preparing to run, 1017 Transact-SQL, 965
running, 1019
revoking
RESIGNAL statement permissions, 750
about, 480 remote permissions, 749
resizing revoking group membership, 755
Performance Monitor legend, 832
right-outer joins
resolving FROM clause, 205
relationships, 349
roles
RESOURCE authority Adaptive Server Enterprise, 963
about, 737 definition of, 337
granting, 742
not inheritable, 753 ROLLBACK
TO SAVEPOINT statement, 385
resource dispensers
three-tier computing, 946

rollback log SAVEPOINT statement


about, 670 and transactions, 385
savepoints, 385
savepoints
ROLLBACK statement, 385 cursors, 284
compound statements, 461 nesting and naming, 385
cursors, 284 procedures and triggers, 484
log, 670 within transactions, 385
procedures and triggers, 484
transactions, 383 saving transaction results, 383
triggers, 981 scalar aggregates, 175
ROLLBACK TO SAVEPOINT statement scan factor, 856
cursors, 284
scan_retry parameter, 1014
rolling back transactions, 383
schedules, 394
Row variables window, 1057 about, 498
rows defined, 496
copying with INSERT, 258 definition of serializable, 394
selecting, 162 effects of serializability, 395
effects of unserializable, 395
RS parameter, 1014 internals, 506
serializable versus early release of locks, 424
RS_pw parameter, 1014 two-phase locking, 422
RS_source_db parameter, 1014 scheduling
RS_source_ds parameter, 1014 about, 495, 496, 498
backups, 655, 661
RS_user parameter, 1014
scheduling of transactions, 394
rssetup.sql command file, 1009, 1012
schema
rules exporting, 724
Transact-SQL, 962
scope
runtime classes Java, 530
contents, 553
installing, 553 scripts
Java, 533 DebugScript class, 637
IDebugAPI interface, 638
runtime environment IDebugWindow interface, 641
Java, 551 writing, 637
runtime Java classes, 533 scroll cursors, 272
scrollable cursors, 279

S security
about, 735, 771
SA_DEBUG group auditing, 778, 779
debugger, 623 creating databases, 782
database server, 773, 782
sample database deleting databases, 782
about, xviii encryption, 783
Java, 550 event example, 502

integrated logins, 81, 83, 775 self-joins, 209


loading data, 782 cross joins and, 211
overview, 772
passwords, 776 semicolon
procedures, 438, 747, 762 command delimiter, 485
server command line, 772 serializable schedules
services, 24 definition, 394
system functions, 773 effect of, 395
tips, 773 two-phase locking, 422
unloading data, 782 versus early release of locks, 424
utility database, 796
views, 762 serialization
distributed computing, 618
serialization objects, 617
Java objects, 580 objects in tables, 561
select list serializable schedules
about, 153 correctness, 394
UNION operation, 190
UNION statements, 190 server address
DSEDIT, 996
SELECT permissions, 743
server classes
SELECT statement, 149 about, 897
about, 149 asajdbc, 927
character data, 168 asaodbc, 931
choosing rows, 150 asejdbc, 928
column headings, 155 aseodbc, 931
column order, 154 db2odbc, 933
cursors, 471 defining, 896
FROM clause, 198 msodbc, 936
INSERT from, 255 odbc, 930, 938
INTO clause, 467 oraodbc, 935
Java, 570
JDBC, 607 server information
keys and query access, 811 asasrv.ini, 74
objects, 610 server name
results, 150 case sensitivity, 10
specifying rows, 162 ODBC configuration, 54
strings in display, 156
Transact-SQL, 976 server side, 699
variables, 978
ServerName connection parameter
selectivity about, 64
default values, 859 character sets, 65
estimation of, 856
ServerPort option, 993
troubleshooting, 860
user estimates of, 858 servers
graphing with the Performance Monitor, 830
selectivity estimates
starting a database without connecting, 122
user supplied, 827
SELF_RECURSION option
Adaptive Server Enterprise, 981

services seven-bit characters


about, 19, 25, 26 about, 293
account, 24
adding, 19, 20 shared objects
adding new databases, 25 calling from procedures, 488
command-line switches, 23 shared versus exclusive locks, 415
configuring, 22
database server for Windows NT, 18 SIGNAL statement
dependencies, 27, 28 procedures, 475
eligible programs, 19 Transact-SQL, 987
executable file, 25 single-byte character sets
failure to start, 23 about, 292
groups, 28
icon on the desktop, 24 SMP
managing, 19 number of processors, 11
multiple, 27
sort order
options, 23
collations, 287
parameters, 22
ORDER BY clause, 187
pausing, 26
removing, 19, 21 sort orders
security, 24 definition, 291
service manager, 27
starting, 26 sorting
starting order, 28 collation file, 315
startup options, 22 with index, 823
stopping, 26 source code
Windows NT Control Panel, 27 locations, 1057
SET clause Source code window, 1058
UPDATE statement, 259
Source path window, 1057
SET DEFAULT action
about, 377 sp_addgroup procedure
Transact-SQL, 965
SET NULL action
about, 377 sp_addlogin
support, 961
SET OPTION statement
Transact-SQL, 969 sp_addlogin procedure
Transact-SQL, 965
setAutocommit method
about, 603 sp_adduser procedure
Transact-SQL, 965
setObject method
using, 618 sp_bindefault procedure
Transact-SQL, 962
setup program
silent installation, 875 sp_bindrule procedure
Transact-SQL, 963
setup script
about, 1017 sp_changegroup procedure
preparing to run, 1017 Transact-SQL, 965
running, 1019 sp_dboption procedure
Transact-SQL, 969

sp_dropgroup procedure sql.ini file, 1008


Transact-SQL, 965
SQL/92
sp_droplogin procedure GROUP BY clause, 192
Transact-SQL, 965
SQL_database parameter, 1014
sp_dropuser procedure
Transact-SQL, 965 SQL_pw parameter, 1014

sp_setreplicate procedure SQL_server parameter, 1014


about, 1011, 1022 SQL_user parameter, 1014
sp_setrepproc procedure SQLBindParameter ODBC function, 267
about, 1022
SQLCA.lock
specifying drivers, 39 selecting isolation levels, 389
SPX versus isolation levels, 387
about, 98 SQLCODE variable
supported protocols, 94 introduction, 474
transport protocol, 88
Windows 95/98, 99 SQLCONNECT environment variable
connections, 48
SPX protocol
starting, 13 SQLDA
descriptors, 282
SQL
ADO applications, 264 SQLExecute ODBC function, 267
applications, 263 SQLFreeStmt ODBC function, 267
Embedded SQL applications, 264
entering, 151 SQLLOCALE environment variable
JDBC applications, 264 about, 302, 311
ODBC applications, 264 setting, 321
Open Client applications, 264
SQLPrepare ODBC function, 267
SQL Remote
SQLSTATE variable
backup procedures, 660
introduction, 474
backups, 658
deploying, 890 standard output
Java objects, 581 Java, 536
purposes, 1004 redirecting to files, 725
rebuilding databases, 697
remote data access, 923 star joins, 216
replicating and concurrent transactions, 428 Start connection parameter
transaction log management, 660 about, 64
SQL Server START JAVA statement
migrating databases, 715 using, 590
SQL Server and remote access, 936 start line
SQL statements ODBC configuration, 54
logging in Sybase Central, 122 Start parameter
sql.ini embedded databases, 44
configuring, 994

starting databases
    jConnect, 615
starting transactions, 383
StartLine connection parameter
    about, 64
    switches, 9
statement-level triggers, 980
statements
    ALTER TABLE, 1011
    CALL, 437, 442, 459, 464
    CASE, 459
    CLOSE, 471
    COMMIT, 284, 461, 484, 1016
    compound, 460
    CREATE DATABASE, 794, 796, 962
    CREATE DEFAULT, 962
    CREATE DOMAIN, 962
    CREATE FUNCTION, 446
    CREATE PROCEDURE, 439, 463
    CREATE RULE, 962
    CREATE TABLE, 975
    CREATE TRIGGER, 451
    DECLARE, 460, 471, 475
    DELETE, 1020
    DELETE positioned, 279
    DISK, 962
    DROP DATABASE, 962
    DUMP DATABASE, 962
    DUMP TRANSACTION, 962
    EXECUTE IMMEDIATE, 483
    FETCH, 471
    FOR, 459
    GRANT, 965
    IF, 459
    INPUT, 717
    INSERT, 266, 717, 1020
    LEAVE, 459
    LOAD DATABASE, 962
    LOAD TABLE, 718
    LOAD TRANSACTION, 962
    logging, 122
    LOOP, 459, 471
    MESSAGE, 475
    OPEN, 471
    optimization, 835
    OUTPUT, 705, 725
    PUT, 279
    RAISERROR, 987
    RESIGNAL, 480
    RETURN, 466
    REVOKE, 965
    ROLLBACK, 284, 484, 981
    ROLLBACK TO SAVEPOINT, 284
    SELECT, 467, 976, 978
    SIGNAL, 475, 987
    UNLOAD, 705
    UNLOAD TABLE, 702, 705
    UPDATE, 1020
    UPDATE positioned, 279
    WHILE, 459
static methods
    about, 526
Statics window, 1058
statistics
    adding to the Performance Monitor, 831
    available, 834
    displaying, 834
    monitoring, 829
    performance, 833
    removing from the Performance Monitor, 831
STOP JAVA statement
    using, 590
storage
    Java objects, 580
stored procedure language
    overview, 980
stored procedures
    debugging, 627
    INOUT parameters and Java, 578
    Java, 577
    OUT parameters and Java, 578
    security features, 772
strings
    Java, 536
    matching, 166
    quotation marks, 168
subqueries
    caching of, 865
    IN keyword and, 165
    rewriting as EXISTS predicates, 843
subquery transformations during optimization, 852
subscriptions
    creating, 1015
    data replication and concurrency, 428
subtransactions
    procedures and triggers, 484
sub-transactions
    and savepoints, 385
SUM function, 174
summary values, 178
    about, 173
sun package
    runtime classes, 553
swap space
    database cache, 809
Sybase Central, 701, 710
    and column defaults, 363
    column constraints, 369
    dialog boxes, 1033
    dropping views, 143
    logging SQL statements, 122
    managing services, 19
    property sheets, 1061
    translating procedures, 983
SYBASE environment variable
    dsedit, 995
Sybase runtime Java classes
    about, 553
sybase.sql package
    runtime classes, 553
sybinit
    about, 992
sybping, 1009
symbols
    string comparisons, 166
SYS group, 758
SYSCOLAUTH view
    permissions, 770
    view information, 144
SYSCOLLATION table
    collation files, 314
SYSCOLPERM table
    permissions, 769
SYSCOLUMN table
    integrity, 379
SYSCOLUMNS view
    conflicting name, 968
    integrity, 379
SYSDUMMY table
    permissions, 769
SYSFKCOL table
    integrity, 379
SYSFOREIGNKEY table
    integrity, 379
SYSFOREIGNKEYS view
    integrity, 379
SYSGROUP table
    permissions, 769
SYSGROUPS view
    permissions, 770
SYSINDEX table
    index information, 148
SYSINDEXES view
    conflicting name, 968
    index information, 148
SYSINFO table
    collation files, 314
SYSIXCOL table
    index information, 148
SYSPROCAUTH view
    permissions, 770
SYSPROCPERM table
    permissions, 769
sysservers system table
    remote servers, 898
SYSTABAUTH view
    permissions, 770
SYSTABLE table
    integrity, 379
SYSTABLEPERM table
    permissions, 769
system administrator
    Adaptive Server Enterprise, 963
system catalog
    Adaptive Server Enterprise compatibility, 963
system events
    about, 500
    defined, 496
system failures
    about, 650
    recovery, 649
system functions
    tsequal, 973
System Management Server
    deployment, 877
system objects, 121, 136
system security officer
    Adaptive Server Enterprise, 963
system tables
    about, 136
    Adaptive Server Enterprise compatibility, 963
    and indexes, 148
    character sets, 314
    integrity, 379
    national languages, 314
    owner, 963
    permissions, 769
    SYSCOLLATION, 314
    SYSINFO, 314
    Transact-SQL name conflicts, 968
    users and groups, 769
    views, 144
system views
    and indexes, 148
    integrity, 379
    permissions, 769
SYSTRIGGER table
    integrity, 379
SYSUSERAUTH view
    permissions, 770
SYSUSERLIST view
    permissions, 770
SYSUSERPERM table
    permissions, 769
SYSUSERPERMS view
    permissions, 770
SYSVIEWS view
    view information, 144

T

Table Editor
    Advanced Table Properties dialog, 125
    setting the table type, 125
table names
    fully qualified, 485
    international aspects, 295
    local, 906
    procedures and triggers, 485
table permissions
    setting, 743
table types
    base table, 125
    global temporary table, 125
tables
    adding keys to, 131, 132, 133, 134
    advanced table properties, 125
    altering, 128, 129
    browsing data, 131
    column names, 355
    constraints, 356
    copying, 136
    copying rows, 258
    copying tables between databases, 136
    copying tables within a database, 136
    correlation names, 161, 209
    creating, 126
    defining proxy, 906, 907, 908
    deleting, 130
    dropping, 130
    exporting, 723
    group owners, 757
    importing, 718
    joining many, 216
    listing remote, 901
    managing foreign keys, 133
    managing primary keys, 132
    managing table constraints, 369
    managing the foreign key, 133, 134
    managing the primary key, 131
    names, in joins, 198, 209
    naming in queries, 161
    owner, 737
    permissions, 737, 738
    properties, 355
    proxy, 905
    qualified names, 757, 760
    remote access, 896
    replicating, 1011, 1020
    showing references from other tables, 133
    showing the primary key in Sybase Central, 131
    structure, 128
    system tables, 136
    Transact-SQL, 975
    Transact-SQL outer joins, 220
    unloading, 702
    working with, 124
tabular data stream, 990
tasks
    backup, 674
    events, 508
    recovery, 674
    schedules, 508
TCP/IP
    about, 95
    addresses, 996
    connecting across firewalls, 96
    Open Server, 992
    performance, 96
    port numbers, 993
    supported protocols, 94
    testing, 104
    transport protocol, 88
    troubleshooting, 104
    UNIX, 95
    Windows 95/98, 95
    Windows NT, 95
TCP/IP protocol
    starting, 13
TDS, 990
telnet
    TCP/IP testing, 104
    testing networks, 30
TEMP environment variable
    disk space, 30
temporary tables
    importing data, 703, 719
    indexes, 819
    local and global, 698
    query processing, 824
    Transact-SQL, 976
testing
    database design, 346, 352
theorems
    two-phase locking, 423
this
    Java methods, 576
threaded applications
    UNIX, 871
threads
    Java, 575
Threads window, 1059
three-tier computing
    about, 943
    architecture, 945
    Distributed Transaction Coordinator, 947
    distributed transactions, 946
    Enterprise Application Server, 947
    Microsoft Transaction Server, 947
    resource dispensers, 946
    resource managers, 946
times
    procedures and triggers, 486
TIMESTAMP data type
    skipping, 256
    Transact-SQL, 972
tools
    rebuild, 710
Tools menu, 1045
TOP clause
    queries, 188
TRACEBACK function, 475
trailing blanks
    comparisons, 163
    creating databases, 968
    Transact-SQL, 968
transaction attribute
    component, 952
transaction blocking
    about, 392
transaction coordinator
    Enterprise Application Server, 951
    distributed, 944, 949
transaction log
    about, 645
    allocating space, 790
    changing location, 690
    limiting size, 501
    location, 7
    Log Transfer Manager, 1006
    management, 1020
    media failure, 686
    mirror, 652, 691, 692, 1027
    overview, 651
    placing, 663
    primary keys, 665
    role in data replication, 429
    size, 665
    switch, 12
    uncommitted changes, 689
    validating, 672
transaction log mirror
    purpose, 664
    starting, 691, 692
transaction logs
    performance, 800
transaction management, 917
transaction processing, 384
    benefits of concurrency, 384
    concurrent, 384
    effects of scheduling, 395
    performance, 384
    scheduling, 394
    serializable scheduling, 394
    transaction log based replication, 429
    two-phase locking, 422
transaction scheduling
    effects of, 395
transactions, 283
    about, 381, 382, 383
    benefits of concurrency, 384
    blocking, 392, 393, 405
    completing, 383
    concurrency, 384
    concurrent, 384
    consistency, 382
    cursors, 284
    data modification, 254
    deadlock, 393
    definition of, 382
    interference between, 392, 405
    isolation level, 284
    managing, 917
    more than one at once, 384
    overview, 382
    procedures and triggers, 484
    remote data access, 917
    replicating concurrent, 428
    savepoints, 385
    starting, 383
    sub-transactions and savepoints, 385
    using, 383
transactions processing
    blocking, 392, 405
Transact-SQL
    about, 957
    batches, 981
    creating databases, 967
    IDENTITY column, 973
    joins, 979
    NULL, 976
    overview, 958
    procedures, 980
    result sets, 984
    timestamp column, 972
    trailing blanks, 968
    triggers, 980
    variables, 985
    writing portable SQL, 975
Transact-SQL compatibility
    databases, 970
translation driver
    ODI and NDIS, 88
Translation driver
    ODBC configuration, 52
translation drivers
    about, 92
    limitations, 93
    multiple protocol stacks, 92
    ODBC, 323
transport layer
    about, 88
trees
    indexes, 817
trigger condition
    defined, 500
triggers
    about, 435, 437
    altering, 454
    benefits of, 438
    command delimiter, 485
    creating, 451
    cursors, 471
    dates, 486
    deleting, 455
    error handling, 474
    exception handlers, 479
    executing, 453
    execution permissions, 455
    permissions, 737, 749
    permissions for creating, 737
    recursion, 981
    ROLLBACK statement, 981
    savepoints, 484
    SQL statements allowed in, 462
    statement-level, 980
    structure, 462
    times, 486
    Transact-SQL, 971, 980
    using, 450
    warnings, 478
troubleshooting
    backups, 682
    common problems, 105
    cursor positioning, 276
    database connections, 67
    deadlocks, 393
    debugging classes, 630
    protocols, 102
    remote data access, 923
    selectivity estimates, 860
    server address, 998
    server startup, 30, 31
    Windows CE connections, 60
    wiring problems, 105
TRUNCATE TABLE statement
    about, 262
try block
    Java, 531
tsequal function, 973
TSQL_HEX_CONSTANT option
    Open Client, 1000
TSQL_VARIABLES option
    Open Client, 1000
tutorials
    implications of locking, 408
    isolation levels, 398
    non-repeatable reads, 398, 401
    phantom rows, 405, 408
two-phase commit
    three-tier computing, 946, 947
two-phase locking, 422
    protocol, 423
two-phase locking theorem, 423
type
    objects, 524

U

UID connection parameter
    about, 64
UNC connection parameter
    about, 64
unchained mode, 283
Unconditional connection parameter
    about, 64
Unicode character sets
    about, 310
UNION operation
    about, 190
unique columns
    Java columns, 573
unique cursors, 272
unique keys
    generating and concurrency, 426
unique results
    limiting, 159
UNIX
    database server, 6
    default character set, 299
    deployment issues, 870
    directory structure, 870
    ODBC support, 57
    TCP/IP, 95
    threaded applications, 871
unknown values
    about, 169
UNLOAD statement, 705
    security, 782
UNLOAD table statement, 710
UNLOAD TABLE statement, 702, 705
    security, 782
unloading, 694, 696
    databases, 697
unloading and reloading
    databases, 729, 732
unloading data
    security, 782
unserializable transaction scheduling
    effects of, 395
UPDATE permissions, 743
UPDATE statement, 1020
    Java, 568
    locking during, 420
    positioned, 279
    set methods, 569
    using, 259
    using join operations, 260
upgrading databases, 711
upper code page
    about, 293
URL
    database, 615
    jConnect, 614
user estimates
    selectivity, 858
user ID
    ODBC configuration, 54
user IDs
    Adaptive Server Enterprise, 964
    case-sensitivity, 971
    default, 364, 736
    listing, 769
    managing, 735
    security features, 772
    security tip, 773
user interface
    dialog boxes, 1033
    property sheets, 1061
user-defined classes
    Java, 534
user-defined data types, 371
    CHECK conditions, 368
    creating, 371, 372
    deleting, 373
user-defined functions
    calling, 447
    creating, 446
    dropping, 448
    execution permissions, 448
    external functions, 488
    parameters, 464
    using, 446
Userid connection parameter
    about, 64
users
    connected users, 752
    creating, 741
    deleting, 751
    deleting integrated logins, 80
    granting integrated logins, 78
    occasionally connected, 428
    remote permissions, 749
    removing from groups, 755
    setting options, 740
uses for locks, 414
utilities. See database utilities
utility database
    about, 794
    connecting to, 794
    passwords, 796
    security and passwords, 796
V

validating
    backups, 662
    databases, 662
    indexes, 146
validation
    column constraints, 356
variable width character sets
    about, 294
variables
    assigning, 978
    debugging, 629, 633
    local, 978
    SELECT statement, 978
    SET statement, 978
    Transact-SQL, 985
vector aggregates, 178
verifying
    database design, 352
version
    Java, 533
    JDBC, 533
version number
    file names, 871
versions
    classes, 580
viewing
    table data, 131
    view data, 144
views
    browsing data, 144
    check option, 140
    copying, 140
    creating, 138
    deleting, 143
    FROM clause, 161
    joins, 195
    modifying, 142
    owner, 737
    permissions, 737, 738, 745
    security, 762
    security features, 772
    SYSCOLUMNS, 968
    SYSINDEXES, 968
    updating, 140
    using, 140
    working with, 138
Visual Basic
    ODBC configuration for, 53
VM
    Java virtual machine, 518
    starting, 590
    stopping, 590
void
    Java methods, 525, 576

W

waiting
    to access locked rows, 405
    to verify referential integrity, 420
waiting to access locked rows, 392
warnings
    procedures and triggers, 478
Watcom-SQL
    definition, 958
WHERE clause
    about, 162
    compared to HAVING, 184
    DELETE, 261
    GROUP BY clause, 182
    joins and, 204
    NULL values, 170
    string comparisons, 166
    UPDATE statement, 260
WHILE statement
    syntax, 459
white space characters
    defined, 317
wide fetches, 277
wildcards
    LIKE operator, 167
    string comparisons, 166
WIN_LATIN1 collation
    about, 306
windows
    Breakpoints, 1053
    Calls, 1053
    Catch, 1054
    Classes, 1054
    Connection, 1055
    Evaluate, 1055
    Globals, 1055
    Inspection, 1056
    Locals, 1056
    Methods, 1056
    Procedures, 1057
    Query, 1057
    Row variables, 1057
    Source, 1058
    Source path, 1057
    Statics, 1058
    Threads, 1059
Windows
    default character set, 299
    protocol stacks, 91
Windows 95/98
    database server, 5
Windows CE
    character set, 305
    connecting from the desktop, 59, 60
    creating databases, 305
    Java unsupported, 519
    ODBC, 58
Windows NT
    and NetBIOS, 100
    and TCP/IP, 95
    client Service for NetWare, 98
    database server, 5
    NetWare, 98
    services, 18
Windows NT Performance Monitor, 833
wiring
    troubleshooting, 105
WITH GRANT OPTION clause, 747
WITH HOLD clause
    cursors, 276
write files
    about, 792
    deployment, 872
write locks, 414

X

xp_cmdshell
    security features, 773
xp_sendmail
    security features, 773
xp_startmail
    security features, 773
xp_stopmail
    security features, 773

Z

-z option
    database server, 31
zip files
    Java, 529