SQLAlchemy 1.4 Documentation
Release: 1.4.39 (current release) | Release Date: June 24, 2022
Working with Engines and Connections
This section details direct usage of the Engine, Connection, and related objects. It's important to note that when using the SQLAlchemy ORM, these objects are not generally accessed; instead, the Session object is used as the interface to the database. However, for applications that are built around direct usage of textual SQL statements and/or SQL expression constructs without involvement by the ORM's higher level management services, the Engine and Connection are king (and queen?) - read on.
Basic Usage

The typical usage of create_engine() is once per particular database URL, held globally for the lifetime of a single application process. A single Engine manages many individual DBAPI connections on behalf of the process and is intended to be called upon in a concurrent fashion. The Engine is not synonymous to the DBAPI connect function, which represents just one connection resource - the Engine is most efficient when created just once at the module level of an application, not per-object or per-function call.
Tip

When using an Engine with multiple Python processes, such as when using os.fork or Python multiprocessing, it's important that the engine is initialized per process. See Using Connection Pools with Multiprocessing or os.fork() for details.
from sqlalchemy import create_engine, text

engine = create_engine("mysql://scott:tiger@hostname/dbname", echo=True)

with engine.connect() as connection:
    result = connection.execute(text("select username from users"))
    for row in result:
        print("username:", row["username"])
Above, the Engine.connect() method returns a Connection object, and by using it in a Python context manager (e.g. the with: statement) the Connection.close() method is automatically invoked at the end of the block. The Connection is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which Connection is created.

The object returned is known as CursorResult, which references a DBAPI cursor and provides methods for fetching rows similar to that of the DBAPI cursor. The DBAPI cursor will be closed by the CursorResult when all of its result rows (if any) are exhausted. A CursorResult that returns no rows, such as that of an UPDATE statement (without any returned rows), releases cursor resources immediately upon construction.

https://docs.sqlalchemy.org/en/14/core/connections.html
30/6/22, 17:10 Working with Engines and Connections — SQLAlchemy 1.4 Documentation
When the Connection is closed at the end of the with: block, the referenced DBAPI connection is released to the connection pool. From the perspective of the database itself, the connection pool will not actually "close" the connection, assuming the pool has room to store this connection for the next use. When the connection is returned to the pool for re-use, the pooling mechanism issues a rollback() call on the DBAPI connection so that any transactional state or locks are removed, and the connection is ready for its next use.
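The check-in behavior described above can be sketched with a toy pool. This is a rough illustration of the mechanism only; the class names are hypothetical and this is not SQLAlchemy's actual pool implementation:

```python
# Toy model of pool check-in: returning a connection does not close it;
# it is rolled back and stored for re-use.
class FakeDBAPIConnection:
    def __init__(self):
        self.in_transaction = False

    def execute(self, sql):
        # any statement implicitly begins a transaction, per PEP 249
        self.in_transaction = True

    def rollback(self):
        self.in_transaction = False


class FakePool:
    def __init__(self):
        self.idle = []

    def checkout(self):
        # re-use an idle connection when one is available
        return self.idle.pop() if self.idle else FakeDBAPIConnection()

    def checkin(self, conn):
        conn.rollback()          # clear transactional state and locks
        self.idle.append(conn)   # keep the connection open for next use


pool = FakePool()
conn = pool.checkout()
conn.execute("UPDATE mytable SET x=1")
pool.checkin(conn)

assert conn.in_transaction is False   # rolled back on check-in
assert pool.checkout() is conn        # same DBAPI connection re-used
```

The key point mirrored here is that "closing" a pooled connection is a logical operation: the DBAPI connection survives, cleaned of transactional state.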
Our example above illustrated the execution of a textual SQL string, which should be invoked by using the text() construct to indicate that we'd like to use textual SQL. The Connection.execute() method can of course accommodate more than that, including the variety of SQL expression constructs described in SQL Expression Language Tutorial (1.x API):
r1 = connection.execute(table1.select())
Nesting of Transaction Blocks

… and complicated, unless an application makes more of a first-class framework around the behavior. See the following subsection Arbitrary Transaction Nesting as an Antipattern.
The Transaction object also handles "nested" behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.
# method_a starts a transaction and calls method_b
def method_a(connection):
    with connection.begin():  # open a transaction
        method_b(connection)

# method_b also starts a transaction
def method_b(connection):
    with connection.begin():  # open a transaction - this runs in the
        # context of method_a's transaction
        connection.execute(text("insert into mytable values ('bat', 'lala')"))
        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# open a Connection and call method_a
with engine.connect() as conn:
    method_a(conn)
Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When method_b calls connection.begin(), it just increments a counter that is decremented when it calls commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method. This "nesting" behavior allows the creation of functions which "guarantee" that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
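The counter mechanism described above can be illustrated with a small standalone sketch. This emulates only the bookkeeping just described; it is not SQLAlchemy's actual Transaction class:

```python
# Emulates the begin/commit counter described above; illustrative only.
class NestingTransaction:
    def __init__(self):
        self.depth = 0
        self.committed = False
        self.rolled_back = False

    def begin(self):
        self.depth += 1

    def commit(self):
        self.depth -= 1
        # only the outermost commit actually commits
        if self.depth == 0 and not self.rolled_back:
            self.committed = True

    def rollback(self):
        # a rollback at any level cancels the whole transaction
        self.rolled_back = True


tx = NestingTransaction()
tx.begin()      # method_a's begin
tx.begin()      # method_b's begin
tx.commit()     # method_b's commit: just decrements the counter
assert not tx.committed
tx.commit()     # method_a's commit: the real COMMIT happens here
assert tx.committed
```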
Arbitrary Transaction Nesting as an Antipattern
With many years of experience, the above "nesting" pattern has not proven to be very popular, and where it has been observed in large projects such as Openstack, it tends to be complicated.
The most ideal way to organize an application would have a single, or at least very few, points at which the "beginning" and "commit" of all database transactions is demarcated. This is also the general idea discussed in terms of the ORM at When do I construct a Session, when do I commit it, and when do I close it?. To adapt the example from the previous section to this practice looks like:
# method_a calls method_b
def method_a(connection):
    method_b(connection)

# method_b uses the connection and assumes the transaction
# is external
def method_b(connection):
    connection.execute(text("insert into mytable values ('bat', 'lala')"))
    connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# open a Connection inside of a transaction and call method_a
with engine.begin() as conn:
    method_a(conn)
That is, method_a() and method_b() do not deal with the details of the transaction at all; the transactional scope of the connection is defined externally to the functions that have a SQL dialogue with the connection.
It may be observed that the above code has fewer lines and less indentation, which tends to correlate with lower cyclomatic complexity. The above code is organized such that method_a() and method_b() are always invoked from a point at which a transaction is begun. The previous version of the example features a method_a() and a method_b() that are trying to be agnostic of this fact, which suggests they are prepared for at least twice as many potential codepaths through them.
Migrating from the "nesting" pattern
As SQLAlchemy's intrinsic-nested pattern is considered legacy, an application that for either legacy or novel reasons still seeks to have a context that automatically frames transactions should seek to maintain this functionality through the use of a custom Python context manager. A similar example is also provided in terms of the ORM in the "seealso" section below.

To provide backwards compatibility for applications that make use of this pattern, the following context manager or a similar implementation based on a decorator may be used:
import contextlib

@contextlib.contextmanager
def transaction(connection):
    if not connection.in_transaction():
        with connection.begin():
            yield connection
    else:
        yield connection

The above contextmanager would be used as:
# method_a starts a transaction and calls method_b
def method_a(connection):
    with transaction(connection):  # open a transaction
        method_b(connection)

# method_b either starts a transaction, or uses the one already
# present
def method_b(connection):
    with transaction(connection):  # open a transaction
        connection.execute(text("insert into mytable values ('bat', 'lala')"))
        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# open a Connection and call method_a
with engine.connect() as conn:
    method_a(conn)
The pattern may be extended so that the context manager is also responsible for the Connection itself:

import contextlib

def connectivity(engine):
    connection = None

    @contextlib.contextmanager
    def connect():
        nonlocal connection

        if connection is None:
            connection = engine.connect()
            with connection:
                with connection.begin():
                    yield connection
        else:
            yield connection

    return connect
The above context would be used as:

# method_a passes along the connectivity context
def method_a(connectivity):
    method_b(connectivity)

# method_b also wants to use a connection from the context, so it
# also calls "with:", but also it actually uses the connection
def method_b(connectivity):
    with connectivity() as connection:
        connection.execute(text("select * from table"))

# create a new connection/transaction context object and call method_a
method_a(connectivity(engine))
See also

Migrating from the "subtransaction" pattern - ORM version

Library Level (e.g. emulated) Autocommit

… itself transactional. For true AUTOCOMMIT, see the next section Setting Transaction Isolation Levels including DBAPI Autocommit.
The previous transaction example illustrates how to use Transaction so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using Transaction? While some DBAPI implementations provide various special "non-transactional" modes, the core behavior of DBAPI per PEP-0249 is that a transaction is always in progress, providing only rollback() and commit() methods but no begin(). SQLAlchemy assumes this is the case for any given DBAPI.
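The PEP-0249 shape described above can be sketched as a minimal class: a DBAPI connection exposes commit() and rollback() but no begin(), with a transaction always implicitly in progress. The class below is a hypothetical illustration, not any real driver:

```python
# Minimal sketch of the PEP 249 transaction interface described above.
class PEP249Connection:
    def __init__(self):
        self.pending = []

    def execute(self, sql):
        # no begin() exists; executing a statement implicitly joins
        # the transaction that is always in progress
        self.pending.append(sql)

    def commit(self):
        committed, self.pending = self.pending, []
        return committed

    def rollback(self):
        self.pending = []


conn = PEP249Connection()
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("INSERT INTO t VALUES (2)")
assert len(conn.commit()) == 2   # both statements committed together

conn.execute("INSERT INTO t VALUES (3)")
conn.rollback()
assert conn.commit() == []       # rolled-back work is discarded
```

Note there is deliberately no begin() method, matching the assumption SQLAlchemy makes for any given DBAPI.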
Full control of the "autocommit" behavior is available using the generative Connection.execution_options() method provided on Connection and Engine, using the "autocommit" flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:
with engine.connect().execution_options(autocommit=True) as conn:
    conn.execute(text("SELECT my_mutating_procedure()"))
It is important to note, as will be discussed further in the section below at Understanding the DBAPI-Level Autocommit Isolation Level, that "autocommit" isolation level, like any other isolation level, does not affect the "transactional" behavior of the Connection object, which continues to call upon DBAPI .commit() and .rollback() methods (they just have no effect under autocommit), and for which the .begin() method assumes the DBAPI will start a transaction implicitly (which means that SQLAlchemy's "begin" does not actually change the connection state).
Setting Transaction Isolation Levels including DBAPI Autocommit

SQLAlchemy dialects should support these isolation levels as well as autocommit to as great a degree as possible. The levels are set via a family of "execution_options" parameters and methods that are available throughout the Core, such as the Connection.execution_options() method. The parameter is known as Connection.execution_options.isolation_level and the values are strings which are typically a subset of the following names:
# possible values for Connection.execution_options(isolation_level="<value>")

"AUTOCOMMIT"
"READ COMMITTED"
"READ UNCOMMITTED"
"REPEATABLE READ"
"SERIALIZABLE"
Not every DBAPI supports every value; if an unsupported value is used for a certain backend, an error is raised.

For example, to force REPEATABLE READ on a specific connection, then begin a transaction:
with engine.connect().execution_options(
    isolation_level="REPEATABLE READ"
) as connection:
    with connection.begin():
        connection.execute(<statement>)
The isolation level may also be established engine wide, by passing the create_engine.execution_options parameter to create_engine():

from sqlalchemy import create_engine

eng = create_engine(
    "postgresql://scott:tiger@localhost/test",
    execution_options={"isolation_level": "REPEATABLE READ"},
)

With the above setting, the DBAPI connection will be set to use a "REPEATABLE READ" isolation level setting for each new transaction begun.
An application that frequently chooses to run operations within different isolation levels may wish to create multiple "sub-engines" of a lead Engine, each of which will be configured to a different isolation level. One such use case is an application that has operations that break into "transactional" and "read-only" operations; a separate Engine that makes use of "AUTOCOMMIT" may be separated off from the main engine:
from sqlalchemy import create_engine

eng = create_engine("postgresql://scott:tiger@localhost/test")

autocommit_engine = eng.execution_options(isolation_level="AUTOCOMMIT")
The isolation level setting, regardless of which one it is, is unconditionally reverted when a connection is returned to the connection pool.

See also

Setting Transaction Isolation Levels / DBAPI AUTOCOMMIT - for the ORM
Understanding the DBAPI-Level Autocommit Isolation Level

In the parent section, we introduced the concept of the Connection.execution_options.isolation_level parameter and how it can be used to set database isolation levels, including DBAPI-level "autocommit" which is treated by SQLAlchemy as another transaction isolation level. In this section we will attempt to clarify the implications of this approach.
If we wanted to check out a Connection object and use it in "autocommit" mode, we would proceed as follows:

with engine.connect() as connection:
    connection.execution_options(isolation_level="AUTOCOMMIT")

    # this begin() does nothing, isolation stays at AUTOCOMMIT
    with connection.begin() as trans:
        connection.execute(<statement>)
        connection.execute(<statement>)
When we run a block like the above with logging turned on, the logging will attempt to indicate that while a DBAPI level .commit() is called, it probably will have no effect due to autocommit mode:

...
INFO sqlalchemy.engine.Engine COMMIT using DBAPI connection.commit(), DBAPI should ignore due to autocommit mode
The isolation level cannot be changed on a Connection where a transaction is already in progress:

with engine.connect() as connection:
    with connection.begin() as trans:
        # this will raise; "transaction" is already begun
        connection.execution_options(isolation_level="AUTOCOMMIT")
Isolation level settings, including autocommit mode, are reset automatically when the connection is released back to the connection pool. Therefore it is preferable to avoid trying to switch isolation levels on a single Connection object, as this leads to excess verbosity.
How much memory does the cache
To illustrate how to use “autocommit” in an ad-hoc mode within the scope of a
single
use?
Connection checkout, the
Connection.execution_options.isolation_level parameter
Disabling or using an alternate must be re-applied with the previous isolation level.
We can write our above block
“correctly” as (noting 2.0 style usage below):
dictionary to cache some (or all)
engine = create_engine(..., future=True)

with engine.connect() as connection:
    connection.execution_options(isolation_level="AUTOCOMMIT")

    # run statement(s) in autocommit mode
    connection.execute(text("<statement>"))

    # "commit" the autocommit'ed statements
    connection.commit()

    # switch to default isolation level
    connection.execution_options(isolation_level=connection.default_isolation_level)

    # use a begin block
    with connection.begin() as trans:
        connection.execute(text("<statement>"))
Above, to manually revert the isolation level we made use of Connection.default_isolation_level to restore the default isolation level (assuming that's what we want here). However, it's probably a better idea to work with the architecture of the Connection, which already handles resetting of isolation level automatically upon checkin. The preferred way to write the above is to use two blocks:
Nesting of Transaction Blocks
an Antipattern
# use an autocommit block
Autocommit
# use a regular block
https://docs.sqlalchemy.org/en/14/core/connections.html 10/96
30/6/22, 17:10 Working with Engines and Connections — SQLAlchemy 1.4 Documentation
Using Server Side Cursors (a.k.a. stream results)

Some DBAPIs, such as the cx_Oracle DBAPI, exclusively use server side cursors internally. All result sets are essentially unbuffered across the total span of a result set, utilizing only a smaller buffer that is of a fixed size such as 100 rows at a time.

For those dialects that have conditional support for buffered or unbuffered results, there are usually caveats to the use of the "unbuffered", or server side cursor mode. When using the psycopg2 dialect for example, an error is raised if a server side cursor is used with any kind of DML or DDL statement. When using MySQL drivers with a server side cursor, the DBAPI connection is in a more fragile state and does not recover as gracefully from error conditions, nor will it allow a rollback to proceed until the cursor is fully closed.
latest listings.
For this reason, SQLAlchemy’s dialects will always default to the less error
prone
ADS VIA CARBON
version of a cursor, which means for PostgreSQL and MySQL dialects
it defaults to a
SQLAlchemy Core buffered, “client side” cursor where the full set of results
is pulled into memory
before any fetch methods are called from the cursor.
This mode of operation is
SQL Expression Language Tutorial (1.x API) appropriate in the vast majority of cases;
unbuffered cursors are not generally useful
SQL Statements and Expressions API except in the uncommon case
of an application fetching a very large number of rows
in chunks, where
the processing of these rows can be complete before more rows
Schema Definition Language are fetched.
To make use of a server side cursor for a particular execution, the Connection.execution_options.stream_results option is used, which may be called on the Connection object, on the statement object, or in the ORM-level contexts mentioned below.
When using this option for a statement, it's usually appropriate to use a method like Result.partitions() to work on small sections of the result set at a time, while also fetching enough rows for each pull so that the operation is efficient:

with engine.connect() as conn:
    result = conn.execution_options(stream_results=True).execute(
        text("select * from table")
    )

    for partition in result.partitions(100):
        _process_rows(partition)
If the Result is iterated directly, rows are fetched internally using a default buffering scheme that buffers first a small set of rows, then a larger and larger buffer on each fetch up to a pre-configured limit of 1000 rows. This can be affected using the max_row_buffer execution option:

with engine.connect() as conn:
    result = conn.execution_options(
        stream_results=True, max_row_buffer=100
    ).execute(text("select * from table"))

    for row in result:
        _process_row(row)
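The growing-buffer scheme described above can be sketched numerically. The function below is a hypothetical illustration of "grow each refill up to a cap"; the actual growth sequence SQLAlchemy uses may differ:

```python
# Sketch of a fetch buffer that grows on each refill up to a fixed cap.
def buffer_sizes(total_rows, start=5, cap=1000):
    sizes = []
    size = start
    fetched = 0
    while fetched < total_rows:
        batch = min(size, total_rows - fetched)
        sizes.append(batch)
        fetched += batch
        size = min(size * 2, cap)   # grow the next buffer, up to the cap
    return sizes

sizes = buffer_sizes(100, start=5, cap=50)
# early fetches are small, later ones larger, none exceeding the cap
assert sum(sizes) == 100
assert max(sizes) <= 50
```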
The size of the buffer may also be set to a fixed size using the Result.yield_per() method. Calling this method with a number of rows will cause all result-fetching methods to work from buffers of the given size, only fetching new rows when the buffer is empty:

with engine.connect() as conn:
    result = conn.execution_options(stream_results=True).execute(
        text("select * from table")
    )

    for row in result.yield_per(100):
        _process_row(row)
The stream_results option is also available with the ORM. When using the ORM, either the Result.yield_per() or Result.partitions() methods should be used to set the number of ORM rows to be buffered each time while yielding:
with orm.Session(engine) as session:
    result = session.execute(
        select(User).order_by(User.id).execution_options(stream_results=True)
    )
    for partition in result.partitions(100):
        _process_rows(partition)
When using a 1.x style ORM query with Query, yield_per is available via Query.yield_per() - this also sets the stream_results execution option:

for row in session.query(User).yield_per(100):
    _process_row(row)
Connectionless Execution, Implicit Execution

Deprecated since version 1.4: The features of "connectionless" and "implicit" execution in SQLAlchemy are deprecated and will be removed in version 2.0. See "Implicit" and "Connectionless" execution, "bound metadata" removed for background.

"Connectionless" execution refers to the usage of the execute() method on an object that is not a Connection, such as the Engine.execute() method of Engine:

result = engine.execute(text("select username from users"))
for row in result:
    print("username:", row["username"])
In addition to "connectionless" execution, it is also possible to use the Executable.execute() method of any Executable construct, which is a marker for SQL expression objects that support execution. The SQL expression object itself references an Engine or Connection known as the bind, which it uses in order to provide so-called "implicit" execution services.
Given a table as below:

from sqlalchemy import MetaData, Table, Column, Integer, String

metadata_obj = MetaData()
users_table = Table(
    "users",
    metadata_obj,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

Explicit execution delivers the SQL text or constructed SQL expression to the Connection.execute() method of Connection:

engine = create_engine("sqlite:///file.db")
with engine.connect() as connection:
    result = connection.execute(users_table.select())
    for row in result:
        # ....

Implicit execution instead associates an Engine with the MetaData object:

engine = create_engine("sqlite:///file.db")
metadata_obj.bind = engine
result = users_table.select().execute()
for row in result:
    # ....
result.close()
Above, we associate an Engine with a MetaData object using the special attribute MetaData.bind. The select() construct produced from the Table object has a method Executable.execute(), which will search for an Engine that's "bound" to the Table.

Overall, the usage of "bound metadata" has three general effects:
- SQL statement objects gain an Executable.execute() method which automatically locates a "bind" with which to execute themselves.

- The ORM Session object supports using "bound metadata" in order to establish which Engine should be used to invoke SQL statements on behalf of a particular mapped class, though the Session also features its own explicit system of establishing complex Engine / mapped class configurations.

- The MetaData.create_all(), MetaData.drop_all(), Table.create(), Table.drop(), and "autoload" features all make usage of the bound Engine automatically without the need to pass it explicitly.
Note

In applications where multiple Engine objects are present, each one logically associated with a certain set of tables (i.e. vertical sharding), the "bound metadata" technique can be used so that individual Table can refer to the appropriate Engine automatically; in particular this is supported within the ORM via the Session object as a means to associate Table objects with an appropriate Engine, as an alternative to using the bind arguments accepted directly by the Session. However, the "implicit execution" technique is not at all appropriate for use with the ORM, as it bypasses the transactional context maintained by the Session.
Translation of Schema Names

Given a table:

user_table = Table(
    "user",
    metadata_obj,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

A Connection may be given a schema_translate_map execution option; for example, to render the "user" table under the schema "user_schema_one":

connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"}
)

The map may include several translations at once:

connection = engine.connect().execution_options(
    schema_translate_map={
        None: "user_schema_one",  # no schema name -> "user_schema_one"
        "special": "special_schema",  # schema="special" becomes "special_schema"
        "public": None,  # Table objects with schema="public" will render with no schema
    }
)
To use the schema translation feature with the ORM Session, set this option at the level of the Engine. It will not work if different schema translate maps are given on a per-statement basis, as the ORM Session does not take current schema translate values into account for individual objects.

To use a single Session with multiple schema_translate_map configurations, the Horizontal Sharding extension may be used. See the example at Horizontal Sharding.
SQL Compilation Caching

… it is important to note that the SQL compilation cache is caching the SQL string that is passed to the database only, and not the data returned by a query. It is in no way a data cache, and does not impact the results returned for a particular SQL statement, nor does it imply any memory use linked to fetching of result rows.
The size of the cache can grow to be a factor of 150% of the size given, before it's pruned back down to the target size. A cache of size 1200 above can therefore grow to be 1800 elements in size, at which point it will be pruned to 1200.
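The grow-and-prune behavior can be sketched numerically. The helper below is a hypothetical illustration of the sizing policy only, not SQLAlchemy's actual LRU cache:

```python
# Hypothetical sketch of the 150% grow-then-prune sizing policy.
def cache_put(cache, key, target_size):
    cache[key] = True
    if len(cache) > target_size * 1.5:
        # prune the oldest entries back down to the target size
        for old in list(cache)[: len(cache) - target_size]:
            del cache[old]

cache = {}
for stmt_key in range(2000):   # 2000 unique "statements"
    cache_put(cache, stmt_key, 1200)

# growth was capped near 1800 entries; after pruning back to 1200,
# the remaining 199 inserts brought the count to 1399
assert len(cache) == 1399
```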
The sizing of the cache is based on a single entry per unique SQL statement rendered, per engine. SQL statements generated from both the Core and the ORM are treated equally. DDL statements will usually not be cached. In order to determine what the cache is doing, engine logging will include details about the cache's behavior, described in the next section.
Estimating Cache Performance Using Logging
The above cache size of 1200 is actually fairly large. For small applications, a size of 100 is likely sufficient. To estimate the optimal size of the cache, assuming enough memory is present on the target host, the size of the cache should be based on the number of unique SQL strings that may be rendered for the target engine in use. The most expedient way to see this is to use SQL echoing, which is most directly enabled by using the create_engine.echo flag, or by using Python logging; see the section Configuring Logging for background on logging configuration.
As an example, we will examine the logging produced by the following program:

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)
    data = Column(String)
    bs = relationship("B")

class B(Base):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey("a.id"))
    data = Column(String)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add_all([A(bs=[B(), B(), B()]), A(bs=[B(), B(), B()]), A(bs=[B(), B(), B()])])
s.commit()

for a_rec in s.query(A):
    print(a_rec.bs)
When run, each SQL statement that's logged will include a bracketed cache statistics badge to the left of the parameters passed. The four types of message we may see are summarized as follows:
- [raw sql] - the driver or the end-user emitted raw SQL using Connection.exec_driver_sql() - caching does not apply

- [no key] - the statement object is a DDL statement that is not cached, or the statement object contains uncacheable elements such as user-defined constructs or arbitrarily large VALUES clauses.

- [generated in Xs] - the statement was a cache miss and had to be compiled, then stored in the cache. It took X seconds to produce the compiled construct. The number X will be in the small fractional seconds.

- [cached since Xs ago] - the statement was a cache hit and did not have to be recompiled. The statement has been stored in the cache since X seconds ago. The number X will be proportional to how long the application has been running and how long the statement has been cached, so for example would be 86400 for a 24 hour period.
Each badge is described in more detail below.
The first statements we see for the above program will be the SQLite dialect checking for the existence of the "a" and "b" tables. These are emitted as raw SQL strings: they already exist in string form, and there is nothing known about what kinds of result rows will be returned since SQLAlchemy does not parse SQL strings ahead of time.
The next statements we see are the CREATE TABLE statements:

INFO sqlalchemy.engine.Engine
CREATE TABLE a (
        id INTEGER NOT NULL,
        data VARCHAR,
        PRIMARY KEY (id)
)
second time and DDL
is also a database configurational step where performance is
SQLAlchemy 1.4 Documentation not as critical.
The [no key] badge is important for one other reason, as it can be produced for
SQL statements that are cacheable except for some particular sub-construct that
is not currently cacheable. Examples of this include custom user-defined SQL
elements that don't define caching parameters, as well as some constructs that
generate arbitrarily long and non-reproducible SQL strings, the main examples
being the Values construct as well as "multivalued inserts" made with the
Insert.values() method.

So far our cache is still empty. The next statements will be cached, however; a
segment looks like:

INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?)
INFO sqlalchemy.engine.Engine [cached since 0.0003533s ago] (N...
INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?)
Above, we see essentially two unique SQL strings: "INSERT INTO a (data) VALUES
(?)" and "INSERT INTO b (a_id, data) VALUES (?, ?)". Since SQLAlchemy uses
bound parameters for all literal values, even though these statements are
repeated many times for different objects, the parameters are separate and the
actual SQL string stays the same.

Note: the above two statements are generated by the ORM unit of work process,
which in fact caches them in a separate cache that is local to each mapper.
However, the mechanics and terminology are the same. The section Disabling or
using an alternate dictionary to cache some (or all) statements below describes
how user-facing code can also use an alternate caching container.
The key number to watch is the one in [cached since]: this is the total time
that a statement has been present in the cache. For an application that's been
running for six hours, this number may read [cached since 21600 seconds ago],
and that's a good thing. Seeing high numbers for "cached since" is an
indication that these statements have not been subject to cache misses for a
long time. Statements that frequently show a low "cached since" number even
when the application has been running a long time may indicate that these
statements are too frequently subject to cache misses, and that
create_engine.query_cache_size may need to be increased.
Our example program then performs some SELECTs where we can see the same
pattern of "generated" then "cached", for the SELECT of the "a" table as well
as for subsequent lazy loads of the "b" table:

INFO sqlalchemy.engine.Engine SELECT b.id AS b_id, b.a_id AS b...
FROM b
WHERE ? = b.a_id

From our above program, a full run shows a total of four distinct SQL strings
being cached, which indicates that a cache size of four would be sufficient.
This is obviously an extremely small size, and the default size of 500 is fine
to be left at its default.
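If tuning is needed, the cache size can be set directly on create_engine(); a
minimal sketch, using an in-memory SQLite URL purely for illustration:

```python
from sqlalchemy import create_engine, text

# query_cache_size sets the number of statements retained in the
# compilation cache; 500 is the default
engine = create_engine("sqlite://", query_cache_size=1200)

with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```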
How much memory does the cache use?

The previous section detailed some techniques to check if
create_engine.query_cache_size needs to be bigger. How do we know if the cache
is not too large? The reason we may want to set create_engine.query_cache_size
to not be higher than a certain number would be because we have an application
that may make use of a very large number of different statements, such as an
application that is building queries on the fly from a search UX, and we don't
want our host to run out of memory if, for example, a hundred thousand
different queries were run in the past 24 hours and they were all cached.
It is extremely difficult to measure how much memory is occupied by Python data
structures; however, using a process to measure growth in memory via top as a
successive series of 250 new statements are added to the cache suggests a
moderate Core statement takes up about 12K while a small ORM statement takes
about 20K, including result-fetching structures which for the ORM will be much
greater.
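Those per-statement figures allow a quick back-of-envelope bound: using the
~20K-per-ORM-statement estimate above, a full default-sized cache of 500
statements occupies on the order of ten megabytes:

```python
# rough upper bound using the ~20K-per-ORM-statement figure measured above
cache_size = 500            # default create_engine.query_cache_size
bytes_per_stmt = 20 * 1024  # ~20K per small ORM statement
approx_total = cache_size * bytes_per_stmt
print(approx_total)  # 10240000 bytes, roughly 10 MiB
```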
conn.execute(table.select())

The SQLAlchemy ORM uses the above technique to hold onto per-mapper caches
within the unit of work "flush" process that are separate from the default
cache configured on the Engine, as well as for some relationship loader
queries.

The cache can also be disabled with this argument by sending a value of None:
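As a self-contained sketch of disabling the cache per connection with the
compiled_cache execution option (the table definition here is hypothetical):

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine

engine = create_engine("sqlite://")
metadata = MetaData()
table = Table("t", metadata, Column("id", Integer, primary_key=True))
metadata.create_all(engine)

# sending None to the "compiled_cache" execution option disables the
# compilation cache for this connection only
with engine.connect().execution_options(compiled_cache=None) as conn:
    rows = conn.execute(table.select()).fetchall()
```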
Caching for Third Party Dialects

The caching feature requires that the dialect's compiler produces SQL strings
that are safe to reuse for many statement invocations, given a particular cache
key that is keyed to that SQL string. This means that any literal values in a
statement, such as the LIMIT/OFFSET values for a SELECT, can not be hardcoded
in the dialect's compilation scheme, as the compiled string will not be
re-usable. SQLAlchemy supports rendered bound parameters using the
BindParameter.render_literal_execute() method, which can be applied to the
existing Select._limit_clause and Select._offset_clause attributes by a custom
compiler, as illustrated later in this section.
As there are many third party dialects, many of which may be generating literal
values from SQL statements without the benefit of the newer "literal execute"
feature, SQLAlchemy as of version 1.4.5 has added an attribute to dialects
known as Dialect.supports_statement_cache. This attribute is checked at runtime
for its presence directly on a particular dialect's class, even if it's already
present on a superclass, so that even a third party dialect that subclasses an
existing cacheable SQLAlchemy dialect such as
sqlalchemy.dialects.postgresql.PGDialect must still explicitly include this
attribute for caching to be enabled. The attribute should only be enabled once
the dialect has been altered as needed and tested for reusability of compiled
SQL statements with differing parameters.

For all third party dialects that don't support this attribute, the logging for
such a dialect will indicate "dialect does not support caching".

When a dialect has been tested against caching, and in particular the SQL
compiler has been updated to not render any literal LIMIT / OFFSET within a SQL
string directly, dialect authors can apply the attribute as follows:
from sqlalchemy.engine.default import DefaultDialect

class MyDialect(DefaultDialect):
    supports_statement_cache = True

The flag needs to be applied to all subclasses of the dialect as well:
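A minimal sketch of that requirement (the dialect class names here are
hypothetical): because the attribute is checked directly on each class's own
namespace, a subclass does not inherit it for caching purposes and must repeat
it:

```python
from sqlalchemy.engine.default import DefaultDialect

class MyDialect(DefaultDialect):
    supports_statement_cache = True

class MyDialectVariant(MyDialect):
    # must be repeated here; presence on the superclass is not enough
    supports_statement_cache = True

# the attribute is present in the subclass's own namespace
print("supports_statement_cache" in MyDialectVariant.__dict__)
```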
As an example, suppose a dialect overrides the SQLCompiler.limit_clause()
method, which produces the "LIMIT / OFFSET" clause for a SQL statement, like
this:

# pre 1.4 style code
def limit_clause(self, select, **kw):
    text = ""
    if select._limit is not None:
        text += " \n LIMIT %d" % (select._limit,)
    if select._offset is not None:
        text += " \n OFFSET %d" % (select._offset,)
    return text

The above routine renders the LIMIT and OFFSET integer values directly into the
SQL string, which defeats caching. A 1.4 cache-compatible version instead
renders them as "literal execute" bound parameters:

# 1.4 cache-compatible code
def limit_clause(self, select, **kw):
    text = ""

    limit_clause = select._limit_clause
    offset_clause = select._offset_clause

    if select._simple_int_clause(limit_clause):
        text += " \n LIMIT %s" % (
            self.process(limit_clause.render_literal_execute(), **kw)
        )
    elif limit_clause is not None:
        # assuming the DB doesn't support SQL expressions for LIMIT.
        # Otherwise render here normally
        raise exc.CompileError(
            "dialect 'mydialect' can only render simple integers for LIMIT"
        )
    if select._simple_int_clause(offset_clause):
        text += " \n OFFSET %s" % (
            self.process(offset_clause.render_literal_execute(), **kw)
        )
    elif offset_clause is not None:
        # assuming the DB doesn't support SQL expressions for OFFSET.
        # Otherwise render here normally
        raise exc.CompileError(
            "dialect 'mydialect' can only render simple integers for OFFSET"
        )
    return text
The above approach generates a compiled SELECT statement in which the LIMIT /
OFFSET values are substituted at execution time rather than baked into the
cached SQL string:

SELECT x FROM y
...

After changes like the above have been made as appropriate, the
Dialect.supports_statement_cache flag should be set to True. It is strongly
recommended that third party dialects make use of the dialect third party test
suite, which will assert that operations like SELECTs with LIMIT/OFFSET are
correctly rendered and cached.

See also

Why is my application slow after upgrading to 1.4 and/or 2.x? - in the
Frequently Asked Questions section
stmt += lambda s: s.where(table.c.col == parameter)

with engine.connect() as conn:
    result = run_my_statement(conn, "some parameter")

Above, the three lambda callables that are used to define the structure of a
SELECT statement are invoked exactly once, and the resulting SQL string cached
in the compilation cache of the engine. From that point forward, the
run_my_statement() function may be invoked any number of times and the lambda
callables within it will not be called, only used as cache keys to retrieve the
already-compiled SQL.
Note

It is important to note that there is already SQL caching in place when the
lambda system is not used. The lambda system only adds an additional layer of
work reduction per SQL statement by caching the building up of the SQL
construct itself and also using a simpler cache key.
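Pulling the fragments above together, a minimal runnable sketch of the pattern
(the table and column names here are hypothetical):

```python
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, create_engine, lambda_stmt, select,
)

engine = create_engine("sqlite://")
metadata = MetaData()
tbl = Table(
    "t", metadata,
    Column("id", Integer, primary_key=True),
    Column("col", String),
)
metadata.create_all(engine)

def run_my_statement(connection, parameter):
    # the lambdas run only on the first call; on subsequent calls they act
    # purely as cache keys that retrieve the already-compiled SQL
    stmt = lambda_stmt(lambda: select(tbl))
    stmt += lambda s: s.where(tbl.c.col == parameter)
    stmt += lambda s: s.order_by(tbl.c.id)
    return connection.execute(stmt)

with engine.begin() as conn:
    conn.execute(tbl.insert(), [{"col": "a"}, {"col": "b"}])

with engine.connect() as conn:
    rows = run_my_statement(conn, "a").fetchall()
```

Note that `parameter`, a plain value in the lambda's closure, is tracked as a
bound parameter, so differing values do not produce differing SQL strings.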
Quick Guidelines for Lambdas

Above all, the emphasis within the lambda SQL system is ensuring that there is
never a mismatch between the cache key generated for a lambda and the SQL
string it will produce. The LambdaElement and related objects will run and
analyze the given lambda in order to calculate how it should be cached on each
run, trying to detect any potential problems. Basic guidelines include:

Any kind of statement is supported - while it's expected that select()
constructs are the prime use case for lambda_stmt(), DML statements such as
insert() and update() are equally usable:

def upd(id_, newname):
    ...
    stmt += lambda s: s.where(users.c.id == id_)
    ...
>>> engine = create_engine("sqlite://", echo=True)
>>> with engine.connect() as conn:
...     print(conn.scalar(my_stmt(5, 10)))
...
SELECT max(?, ?) AS max_1
...

The lambda should ideally produce an identical SQL structure in all cases -
avoid using conditionals or custom callables inside of lambdas that might make
it produce different SQL based on inputs; if a function might conditionally use
two different SQL fragments, use two separate lambdas:
# **Don't** do this:
stmt += (
    ...
)

Above, the use of get_x() and get_y(), if they are necessary, should occur
outside of the lambda and be assigned to a local closure variable.
Avoid referring to non-SQL constructs inside of lambdas as they are not
cacheable by default - this issue refers to how the LambdaElement creates a
cache key from other closure variables within the statement. In order to
provide the best guarantee of an accurate cache key, all objects located in the
closure of the lambda are considered to be significant, and none will be
assumed to be appropriate for a cache key by default. So the following example
will also raise a rather detailed error message:

>>> class Foo:
...     ...
...
Traceback (most recent call last):
...
The above error indicates that LambdaElement will not assume that the Foo
object passed in will continue to behave the same in all cases. It also won't
assume it can use Foo as part of the cache key by default; if it were to use
the Foo object as part of the cache key, and there were many different Foo
objects, this would fill up the cache with duplicate information, and would
also hold long-lasting references to all of these objects.

The best way to resolve the above situation is to not refer to foo inside of
the lambda, and to refer to it outside instead:

>>> def my_stmt(foo):
...     x_param, y_param = foo.x, foo.y
...

Tracking of closure variables, other than those used for bound parameters, can
also be disabled by passing track_closure_variables=False.
There is also the option to add objects to the element to explicitly form part
of the cache key, using the track_on parameter; using this parameter allows
specific values to serve as the cache key and will also prevent other closure
variables from being considered. This is useful for cases where part of the SQL
being constructed originates from a contextual object of some sort that may
have many different values. In the example below, the first segment of the
SELECT statement will disable tracking of the foo variable, whereas the second
segment will explicitly track self as part of the cache key:

... stmt = lambda_stmt(
...     ..., track_on=[self]
... )
>>> cache_key = stmt._generate_cache_key()
>>> print(cache_key)
CacheKey(key=(
  <class 'sqlalchemy.sql.selectable.Select'>,
  '_raw_columns',
  (
    ...
    '1',
  ),
  # a few more elements are here, and many more for a more
  # complicated statement
),)

The lambda construction system by contrast creates a different kind of cache
key:

>>> stmt = lambda_stmt(lambda: select(col))
>>> cache_key = stmt._generate_cache_key()
>>> print(cache_key)
CacheKey(key=(
  <code object <lambda> at 0x7f07323c5190, file "<stdin>", lin...
  <class 'sqlalchemy.sql.lambdas.LinkedLambdaElement'>,
  '0',
  (
    <class 'sqlalchemy.sql.elements.ColumnClause'>,
    'name',
    'q',
    (
      <class 'sqlalchemy.sql.sqltypes.NullType'>,
    ),
    'type',
    ...
  ),
),)
The Engine is intended to normally be a permanent fixture established up-front
and maintained throughout the lifespan of an application. It is not intended to
be created and disposed on a per-connection basis; it is instead a registry
that maintains both a pool of connections as well as configurational
information about the database and DBAPI in use, as well as some degree of
internal caching of per-database resources.

However, there are many cases where it is desirable that all connection
resources referred to by the Engine be completely closed out. It's generally
not a good idea to rely on Python garbage collection for this to occur;
instead, the Engine can be explicitly disposed using the Engine.dispose()
method. This disposes of the engine's underlying connection pool and replaces
it with a new one that's empty. Provided that the Engine is discarded at this
point and no longer used, all checked-in connections which it refers to will
also be fully closed.

Valid use cases for calling Engine.dispose() include:

When a program wants to release any remaining checked-in connections held by
the connection pool and expects to no longer be connected to that database at
all for any future operations.

When a program uses multiprocessing or fork(), and an Engine object is copied
to the child process, Engine.dispose() should be called so that the engine is
re-initialized with fresh connections local to that process. See Using
Connection Pools with Multiprocessing or os.fork().
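A minimal sketch of explicit disposal, using an in-memory SQLite URL purely for
illustration:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

# close all checked-in connections; the pool is replaced with a new,
# empty one, so the Engine object itself remains usable afterwards
engine.dispose()
```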
The text() construct provides an abstraction of textual SQL in that it
normalizes how bound parameters are passed, and also supports datatyping
behavior for parameters and result set rows. To invoke textual SQL directly
with the underlying driver (known as the DBAPI) without any intervention from
the text() construct, the Connection.exec_driver_sql() method may be used:
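For example, a sketch against SQLite, which uses the "qmark" paramstyle (the
table name here is hypothetical):

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

with engine.connect() as conn:
    # the string is passed to the DBAPI as-is; bound parameters use the
    # driver's own paramstyle ("?" for SQLite's qmark style)
    conn.exec_driver_sql("CREATE TABLE t (id INTEGER, value VARCHAR)")
    conn.exec_driver_sql("INSERT INTO t (id, value) VALUES (?, ?)", (1, "v1"))
    value = conn.exec_driver_sql(
        "SELECT value FROM t WHERE id = ?", (1,)
    ).scalar()
```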
Working with the DBAPI cursor directly

There are some cases where SQLAlchemy does not provide a genericized way of
accessing some DBAPI functions, such as calling stored procedures as well as
dealing with multiple result sets. In these cases, it's just as expedient to
deal with the raw DBAPI connection directly.

The most common way to access the raw DBAPI connection is to get it from an
already present Connection object directly, using the Connection.connection
attribute:

dbapi_conn = connection.connection
The DBAPI connection here is actually "proxied" in terms of the originating
connection pool; however, this is an implementation detail that in most cases
can be ignored. As this DBAPI connection is still contained within the scope of
an owning Connection object, it is best to make use of the Connection object
for most features such as transaction control, as well as calling the
Connection.close() method; if these operations are performed on the DBAPI
connection directly, the owning Connection will not be aware of these changes
in state.
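A short sketch of reaching the DBAPI cursor this way (SQLite for illustration):

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

with engine.connect() as conn:
    # a "proxied" DBAPI connection, scoped to the owning Connection
    dbapi_conn = conn.connection
    cursor_obj = dbapi_conn.cursor()
    cursor_obj.execute("SELECT 1")
    row = cursor_obj.fetchone()
    cursor_obj.close()
```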
To overcome the limitations imposed by the DBAPI connection that is maintained
by an owning Connection, a DBAPI connection is also available without the need
to procure a Connection first, using the Engine.raw_connection() method of
Engine:

dbapi_conn = engine.raw_connection()

This DBAPI connection is again a "proxied" form as was the case before. The
purpose of this proxying is now apparent, as when we call the .close() method
of this connection, the DBAPI connection is typically not actually closed, but
instead released back to the engine's connection pool:

dbapi_conn.close()
While SQLAlchemy may in the future add built-in patterns for more DBAPI use
cases, there are diminishing returns, as these cases tend to be rarely needed
and also vary highly depending on the type of DBAPI in use, so in any case the
direct DBAPI calling pattern is always there for those cases where it is
needed.

Some recipes for DBAPI connection use follow.
Calling Stored Procedures and User Defined Functions

SQLAlchemy supports calling stored procedures and user defined functions in
several ways. Please note that all DBAPIs have different practices, so you must
consult your underlying DBAPI's documentation for specifics in relation to your
particular usage. The following examples are hypothetical and may not work with
your underlying DBAPI.

For stored procedures or functions with special syntactical or parameter
concerns, DBAPI-level callproc may potentially be used with your DBAPI. An
example of this pattern is:

connection = engine.raw_connection()
try:
    cursor_obj = connection.cursor()
    cursor_obj.callproc("my_procedure", ["x", "y", "z"])
    results = list(cursor_obj.fetchall())
    cursor_obj.close()
    connection.commit()
finally:
    connection.close()
Note

Not all DBAPIs use callproc, and overall usage details will vary. The above
example is only an illustration of how it might look to use a particular DBAPI
function.
Your DBAPI may not have a callproc requirement, or may require a stored
procedure or user defined function to be invoked with another pattern, such as
normal SQLAlchemy connection usage. One example of this usage pattern is, at
the time of this documentation's writing, executing a stored procedure in the
PostgreSQL database with the psycopg2 DBAPI, which should be invoked with
normal connection usage:

connection.execute("CALL my_procedure();")
This above example is hypothetical. The underlying database is not guaranteed
to support "CALL" or "SELECT" in these situations, and the keyword may vary
depending on whether the function is a stored procedure or a user defined
function. You should consult your underlying DBAPI and database documentation
in these situations to determine the correct syntax and patterns to use.
Multiple Result Sets

Multiple result set support is available from a raw DBAPI cursor using the
nextset method:

connection = engine.raw_connection()
try:
    cursor_obj = connection.cursor()
    cursor_obj.execute("select * from table1; select * from table2")
    results_one = cursor_obj.fetchall()
    cursor_obj.nextset()
    results_two = cursor_obj.fetchall()
    cursor_obj.close()
finally:
    connection.close()

Third party dialects can also be registered as setuptools entry points under
the [sqlalchemy.dialects] group, for example:

[sqlalchemy.dialects]
mysql.foodialect = foodialect.dialect:FooDialect
NestedTransaction - Represent a "nested", or SAVEPOINT transaction.

class sqlalchemy.engine.Connection(engine, connection=None,
close_with_result=False, _branch_from=None, _execution_options=None,
_dispatch=None, _has_events=None, _allow_revalidate=True)

Provides high-level functionality for a wrapped DB-API connection.

This is the SQLAlchemy 1.x.x version of the Connection class. For the 2.0
style version, which features some API differences, see Connection.

The Connection object is procured by calling the Engine.connect() method of
the Engine object, and provides services for execution of SQL statements as
well as transaction control.

The Connection object is not thread-safe. While a Connection can be shared
among threads using properly synchronized access, it is still possible that
the underlying DBAPI connection may not support shared access between threads.
Check the DBAPI documentation for details.
The Connection.begin() method is also invoked when using the Engine.begin()
context manager method. All documentation that refers to behaviors specific to
the Connection.begin() method also applies to use of the Engine.begin() method.

Legacy use: nested calls to begin() on the same Connection will return new
Transaction objects that represent an emulated transaction within the scope of
the enclosing transaction, that is:

trans2.commit()  # does nothing

See also
The earlier behavior is restored as was the case in 1.3.x versions; in previous
1.4.x versions, an outer transaction would be "autobegun" but would not be
committed.

See also

Connection.begin()

Connection.begin_twophase()

Parameters: xid - the two phase transaction id. If not supplied, a random id
will be generated.

See also

Working with Driver SQL and Raw DBAPI Connections
This is the isolation level setting that the Connection has when first
procured via the Engine.connect() method. This level stays in place until the
Connection.execution_options.isolation_level parameter is used to change the
setting on a per-Connection basis.

See also

create_engine.isolation_level - set per Engine isolation level

Connection.execution_options.isolation_level - set per Connection isolation
level
conn.execute(text("SET search_path TO schema1, schema2"))

# connection is fully closed (since we used "with:", can also
# call .close())

This Connection instance will remain usable. When closed (or exited from a
context manager context as above), the DB-API connection will be literally
closed and not returned to its originating pool.

This method can be used to insulate the rest of an application from a modified
state on a connection (such as a transaction isolation level or similar).
Parameters:

statement - the statement str to be executed. Bound parameters must use the
underlying DBAPI's paramstyle, such as "qmark", "pyformat", "format", etc.

parameters - ...
The statement to be executed may be one of:

any ClauseElement construct that is also a subclass of Executable, such as a
select() construct

a DDLElement object

a DefaultGenerator object

a Compiled object

conn.execute(
    ...,
    {"id": 1, "value": "v1"},
)

...or individual key/values interpreted by **params:

conn.execute(
    ...
)

In the case that a plain SQL string is passed, and the underlying DBAPI accepts
positional bind parameters, a collection of tuples or individual values in
*multiparams may be passed:

conn.execute(
    ...
)

Note above, the usage of a question mark "?" or other symbol is contingent upon
the "paramstyle" accepted by the DBAPI in use, which may be any of "qmark",
"named", "pyformat", "format", "numeric". See pep-249 for details on
paramstyle.
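As a sketch of the positional-parameter, executemany-style pattern, spelled
explicitly with Connection.exec_driver_sql() against SQLite's "qmark"
paramstyle (the table name is hypothetical):

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

with engine.connect() as conn:
    conn.exec_driver_sql("CREATE TABLE t (id INTEGER, value VARCHAR)")
    # a list of tuples invokes the DBAPI executemany style
    conn.exec_driver_sql(
        "INSERT INTO t (id, value) VALUES (?, ?)",
        [(1, "v1"), (2, "v2")],
    )
    rows = conn.exec_driver_sql(
        "SELECT id, value FROM t ORDER BY id"
    ).fetchall()
```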
Setting Per-Connection / Sub-Engine Tokens - usage example

create_engine.logging_name - adds a name to the name used by the Python logger
object itself.

isolation_level - ...
Some DBAPIs such as psycopg2 and mysql-python consider percent signs as
significant only when parameters are present; this option allows code to
generate SQL containing percent signs (and possibly other characters) that is
neutral regarding whether it's executed by the DBAPI or piped into a script
that's later invoked by command line tools.

stream_results - ...
method sqlalchemy.engine.Connection.get_isolation_level()

Return the current isolation level assigned to this Connection.

This will typically be the default isolation level as determined
by the dialect, unless the
Connection.execution_options.isolation_level
feature has been used to alter the isolation level on a
per-Connection basis.
This method will typically perform a live SQL operation in order
to procure the current isolation level, so the value returned is the
actual level on the underlying DBAPI connection regardless of how
this state was set. Compare to the
Connection.default_isolation_level accessor
which returns the dialect-level setting without performing a SQL query.

New in version 0.9.9.

See also

create_engine.isolation_level - set per Engine isolation level

Connection.execution_options.isolation_level
- set per Connection isolation level
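Isolation settings ultimately map down to driver-level state. As a rough DBAPI analogue (using the stdlib sqlite3 connection's isolation_level attribute, an assumption for illustration only and unrelated to SQLAlchemy's own API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# the driver-level default transaction behavior for sqlite3
print(repr(conn.isolation_level))  # ''

# switch this one connection to autocommit-like behavior
conn.isolation_level = None
print(conn.in_transaction)  # False
```

Like Connection.execution_options(isolation_level=...), the change above affects only the one connection it is applied to, not a process-wide default.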
method sqlalchemy.engine.Connection.get_transaction()

Return the current root transaction in progress, if any.
If a transaction was in progress (e.g. the
Connection.begin() method has been called) when the
Connection.invalidate() method is called, at the DBAPI
level all state associated with this transaction is lost, as
the DBAPI connection is closed. The Connection
will not allow a reconnection to proceed until the
Transaction object is ended, by calling the
Transaction.rollback() method; until that point, any attempt
at continuing to use the Connection will raise an
InvalidRequestError. This is to
prevent applications from accidentally continuing ongoing transactional
operations despite the fact that the transaction has been lost due to an
invalidation.

The Connection.invalidate() method, just like auto-invalidation,
will at the connection pool level invoke the
PoolEvents.invalidate() event.

Parameters:

exception – an optional Exception instance that's the
reason for the invalidation. It is passed along to event handlers
and logging functions.
attribute sqlalchemy.engine.Connection.invalidated

Return True if this connection was invalidated.
method sqlalchemy.engine.Connection.run_callable(callable_, *args, **kwargs)

Deprecated since version 1.4: The Connection.run_callable() method is
deprecated and will be removed in a future release. Invoke the callable
function directly, passing the Connection.

The given *args and **kwargs are passed subsequent
to the Connection argument.

This function, along with Engine.run_callable(),
allows a function to be run with a Connection
or Engine object without the need to know
which one is being dealt with.
method sqlalchemy.engine.Connection.scalar(object_, *multiparams, **params)

Executes and returns the first column of the first row.

The underlying result/cursor is closed after execution.

method sqlalchemy.engine.Connection.scalars(object_, *multiparams, **params)

Executes and returns a scalar result set, which yields scalar values
from the first column of each row.

This method is equivalent to calling Connection.execute()
to receive a Result object, then invoking the
Result.scalars() method to produce a ScalarResult instance.
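The "first column of the first row" behavior can be emulated against any DBAPI cursor. A minimal sketch using the stdlib sqlite3 module (an illustrative assumption, not SQLAlchemy's implementation; the helper name `scalar` is made up here):

```python
import sqlite3

def scalar(conn, statement, params=()):
    # return the first column of the first row, or None when no rows
    row = conn.execute(statement, params).fetchone()
    return row[0] if row is not None else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t (x) VALUES (?)", [(10,), (20,)])

print(scalar(conn, "SELECT x FROM t ORDER BY x"))       # 10
print(scalar(conn, "SELECT x FROM t WHERE x > ?", (99,)))  # None
```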
Return the schema name for the given schema item taking into
account the current schema translate map.

method sqlalchemy.engine.Connection.transaction(callable_, *args, **kwargs)
Deprecated since version 1.4: The Connection.transaction() method is
deprecated and will be removed in a future release. Use the Engine.begin()
context manager instead.

    def do_something(conn, x, y):
        conn.execute(text("some statement"), {"x": x, "y": y})

    conn.transaction(do_something, 5, 10)

The operations inside the function are all invoked within the
context of a single Transaction.
Upon success, the transaction is committed. If an
exception is raised, the transaction is rolled back
before propagating the exception.

Note

The transaction() method is superseded by the usage of the
Python with: statement, which can be used with
Connection.begin():

    with conn.begin():
        conn.execute(text("some statement"), {"x": 5, "y": 10})
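The commit-on-success / rollback-on-error pattern that the with-block provides also exists at the DBAPI level; the stdlib sqlite3 connection (used here purely as an illustrative assumption) behaves analogously as a context manager:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# success: the block commits on exit
with conn:
    conn.execute("INSERT INTO t (x) VALUES (1)")

# failure: the block rolls back before re-raising
try:
    with conn:
        conn.execute("INSERT INTO t (x) VALUES (2)")
        raise RuntimeError("boom")
except RuntimeError:
    pass

print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```

Note that unlike Engine.begin(), the sqlite3 context manager does not close the connection on exit; it only manages the transaction.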
The purpose of CreateEnginePlugin is to allow third-party
systems to apply engine, pool and dialect level event listeners without
the need for the target application to be modified; instead, the plugin
names can be added to the database URL. Target applications for
CreateEnginePlugin include:

- connection and SQL performance tools, e.g. which use events to track
  number of checkouts and/or time spent with statements

A basic plugin that attaches a logger to cursor events, consuming a
parameter from the URL, looks like:

    import logging

    from sqlalchemy.engine import CreateEnginePlugin
    from sqlalchemy import event

    class LogCursorEventsPlugin(CreateEnginePlugin):
        def __init__(self, url, kwargs):
            # consume "log_cursor_logging_name" from the URL query
            logging_name = url.query.get("log_cursor_logging_name", "log_cursor")
            self.log = logging.getLogger(logging_name)

        def update_url(self, url):
            "update the URL to one that no longer includes our parameters"
            return url.difference_update_query(["log_cursor_logging_name"])

        def engine_created(self, engine):
            event.listen(engine, "before_cursor_execute", self._log_event)

        def _log_event(
            self, conn, cursor, statement, parameters, context, executemany
        ):
            self.log.info("Plugin logged cursor event: %s", statement)

Plugins are registered using entry points, under the
'sqlalchemy.plugins' group:

    entry_points={
        "sqlalchemy.plugins": [
            "log_cursor_plugin = myapp.plugins:LogCursorEventsPlugin"
        ]
    }

A plugin that uses the above names would be invoked from a database
URL as in:

    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql+pymysql://scott:tiger@localhost/test?"
        "plugin=log_cursor_plugin&log_cursor_logging_name=mylogger"
    )
The plugin URL parameter supports multiple instances, so that a URL
may specify multiple plugins; they are loaded in the order stated
in the URL:

    engine = create_engine(
        "mysql+pymysql://scott:tiger@localhost/test?"
        "plugin=plugin_one&plugin=plugin_two&plugin=plugin_three")

New in version 1.2.3: plugin names can also be specified
to create_engine() as a list:

    engine = create_engine(
        "mysql+pymysql://scott:tiger@localhost/test",
        plugins=["myplugin"])
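The ordered, repeatable plugin parameter can be illustrated with stdlib URL parsing alone; a sketch of the lookup concept, not SQLAlchemy's actual plugin loader:

```python
from urllib.parse import urlparse, parse_qsl

url = (
    "mysql+pymysql://scott:tiger@localhost/test?"
    "plugin=plugin_one&plugin=plugin_two&plugin=plugin_three"
)

# parse_qsl preserves order of appearance, so plugins load in URL order
query = parse_qsl(urlparse(url).query)
plugin_names = [value for key, value in query if key == "plugin"]
print(plugin_names)  # ['plugin_one', 'plugin_two', 'plugin_three']
```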
As of version 1.4 of SQLAlchemy, arguments should continue to be consumed
from the kwargs dictionary directly, by removing the values with a
method such as dict.pop. Arguments from the URL object
should be consumed by implementing the
CreateEnginePlugin.update_url() method, returning a new copy
of the URL with plugin-specific parameters removed:
    class MyPlugin(CreateEnginePlugin):
        def __init__(self, url, kwargs):
            self.my_argument_one = url.query["my_argument_one"]
            self.my_argument_two = url.query["my_argument_two"]
            self.my_argument_three = kwargs.pop("my_argument_three", None)

        def update_url(self, url):
            return url.difference_update_query(
                ["my_argument_one", "my_argument_two"]
            )

Arguments like those illustrated above would be consumed from a
create_engine() call such as:

    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql+pymysql://scott:tiger@localhost/test?"
        "plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
        my_argument_three="bat",
    )
Changed in version 1.4: The URL object is now immutable; a
CreateEnginePlugin that needs to alter the
URL should implement the newly added
CreateEnginePlugin.update_url() method, which
is invoked after the plugin is constructed.

For migration, construct the plugin in the following way, checking
for the existence of the CreateEnginePlugin.update_url()
method to detect which version is running:

    class MyPlugin(CreateEnginePlugin):
        def __init__(self, url, kwargs):
            if hasattr(CreateEnginePlugin, "update_url"):
                # detect the 1.4 API
                self.my_argument_one = url.query["my_argument_one"]
                self.my_argument_two = url.query["my_argument_two"]
            else:
                # detect the 1.3 and earlier API - mutate the
                # URL directly
                self.my_argument_one = url.query.pop("my_argument_one")
                self.my_argument_two = url.query.pop("my_argument_two")

        def update_url(self, url):
            # this method is only called in the 1.4 version
            return url.difference_update_query(
                ["my_argument_one", "my_argument_two"]
            )
Changed in version 1.4: The URL object is now immutable, so a
CreateEnginePlugin that needs to alter the
URL object should implement the
CreateEnginePlugin.update_url() method.
kwargs – The keyword arguments passed to create_engine().

method sqlalchemy.engine.CreateEnginePlugin.engine_created(engine)

Receive the Engine object when it is fully constructed.

The plugin may make additional changes to the engine, such as
registering engine or connection pool events.

method sqlalchemy.engine.CreateEnginePlugin.handle_dialect_kwargs(dialect_cls, dialect_args)

Parse and modify dialect kwargs.

Class signature

class sqlalchemy.engine.Engine(sqlalchemy.engine.Connectable, sqlalchemy.log.Identified)
method sqlalchemy.engine.Engine.begin(close_with_result=False)

Return a context manager delivering a Connection
with a Transaction established:

    with engine.begin() as conn:
        conn.execute(
            text("insert into table (x, y, z) values (1, 2, 3)")
        )
        conn.execute(text("my_special_procedure(5)"))

Upon successful operation, the Transaction
is committed. If an error is raised, the
Transaction is rolled back.

Legacy use only: the close_with_result flag is normally False,
and indicates that the Connection will be closed when
the operation is complete. When set to True, it
indicates the Connection is in "single use" mode, where the
CursorResult returned by the first call to
Connection.execute() will close the
Connection when that CursorResult has
exhausted all result rows.
method sqlalchemy.engine.Engine.clear_compiled_cache()

Clear the compiled cache associated with the dialect.

This applies only to the built-in cache that is established
via the create_engine.query_cache_size parameter.
It will not impact any dictionary caches that were passed via the
Connection.execution_options.query_cache parameter.

New in version 1.4.
method sqlalchemy.engine.Engine.connect(close_with_result=False)

Return a new Connection object.
method sqlalchemy.engine.Engine.dispose(close=True)

Dispose of the connection pool used by this Engine.

A new connection pool is created immediately after the old one has been
disposed. The previous connection pool is disposed either actively, by
closing out all currently checked-in connections in that pool, or
passively, by losing references to it but otherwise not closing any
connections. The latter strategy is more appropriate for an initializer
in a forked Python process.

Parameters:
close – if left at its default of True, has the
effect of fully closing all currently checked-in
database connections. Connections that are still checked out
will not be closed, however they will no longer be associated
with this Engine, so when they are closed individually,
eventually the Pool which they are associated with will
be garbage collected and they will be closed out
fully, if not already closed on checkin.

If set to False, the previous connection pool is de-referenced,
and otherwise not touched in any way.
New in version 1.4.33: Added the Engine.dispose.close
parameter to allow the replacement of a connection pool in a child
process without interfering with the connections used by the parent
process.
attribute sqlalchemy.engine.Engine.engine

The Engine instance referred to by this Connectable.
method sqlalchemy.engine.Engine.execute(statement, *multiparams, **params)

Executes the given construct and returns a CursorResult.

Deprecated since version 1.4: The Engine.execute() method is considered
legacy as of the 1.x series of SQLAlchemy and will be removed in 2.0. All
statement execution in SQLAlchemy 2.0 is performed by the
Connection.execute() method of Connection, or in the ORM by the
Session.execute() method of Session. (Background on SQLAlchemy 2.0 at:
Migrating to SQLAlchemy 2.0)
method sqlalchemy.engine.Engine.execution_options(**opt)

Return a new Engine that will provide
Connection objects with the given execution options.

The returned Engine remains related to the original
Engine in that it shares the same connection pool and other state:
- The Pool used by the new Engine
  is the same instance. The Engine.dispose()
  method will replace the connection pool instance for the parent engine
  as well as this one.

- Event listeners are "cascaded" - meaning, the new
  Engine inherits the events of the parent, and new events can be
  associated with the new Engine individually.
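The "shares the same pool, overlays new options" relationship can be sketched with a toy stand-in. This is plain Python, purely illustrative; ToyEngine and ToyPool are invented names, not SQLAlchemy classes:

```python
class ToyPool:
    pass

class ToyEngine:
    def __init__(self, pool, options=None):
        self.pool = pool
        self.options = dict(options or {})

    def execution_options(self, **opt):
        # new engine: same pool instance, parent options overlaid with opt
        merged = {**self.options, **opt}
        return ToyEngine(self.pool, merged)

primary = ToyEngine(ToyPool())
shard1 = primary.execution_options(shard_id="shard1")
shard2 = primary.execution_options(shard_id="shard2")

print(shard1.pool is primary.pool and shard2.pool is primary.pool)  # True
print(shard1.options)  # {'shard_id': 'shard1'}
```

The design point is that the derived objects are cheap views over shared connection state, differing only in their option dictionaries.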
Above, the shard1 engine serves as a factory for
Connection objects that will
contain the execution option shard_id=shard1, and shard2 will produce
Connection objects that contain the execution option shard_id=shard2.

An event handler can consume the above execution option to perform
a schema switch or other operation, given a connection. Below
we emit a MySQL use statement to switch databases, at the same
time keeping track of which database we've established using the
Connection.info dictionary, which gives us a persistent
storage space that follows the DBAPI connection:

    from sqlalchemy import event
    from sqlalchemy.engine import Engine

    shards = {"default": "base", "shard_1": "db1", "shard_2": "db2"}

    @event.listens_for(Engine, "before_cursor_execute")
    def _switch_shard(conn, cursor, stmt, params, context, executemany):
        shard_id = conn._execution_options.get("shard_id", "default")
        current_shard = conn.info.get("current_shard", None)

        if current_shard != shard_id:
            cursor.execute("use %s" % shards[shard_id])
            conn.info["current_shard"] = shard_id
See also

Engine.update_execution_options()
- update the execution options for a given Engine in place.

method sqlalchemy.engine.Engine.get_execution_options()

Get the non-SQL options which will take effect during execution.

See also

Engine.execution_options()
method sqlalchemy.engine.Engine.has_table(table_name, schema=None)

Deprecated since version 1.4: The Engine.has_table() method is
deprecated and will be removed in a future release. Please refer to
Inspector.has_table().

See also

quoted_name - used to pass quoting information along
with a schema identifier.
method sqlalchemy.engine.Engine.raw_connection()

Return a "raw" DBAPI connection from the connection pool.

The returned object is a proxied version of the DBAPI
connection object used by the underlying driver in use.
The object will have all the same behavior as the real
DBAPI connection, except that its close() method will result in the
connection being returned to the pool, rather than being closed for real.

This method provides direct DBAPI connection access for
special situations when the API provided by Connection
is not needed. When a Connection object is already
present, the DBAPI connection is available using
the Connection.connection accessor.

See also

Working with Driver SQL and Raw DBAPI Connections
method sqlalchemy.engine.Engine.run_callable(callable_, *args, **kwargs)

Given a callable object or function, execute it, passing
a Connection as the first argument.

Deprecated since version 1.4: The Engine.run_callable() method is
deprecated and will be removed in a future release. Use the Engine.begin()
context manager instead.
method sqlalchemy.engine.Engine.transaction(callable_, *args, **kwargs)

The operations inside the function are all invoked within the
context of a single Transaction.
Upon success, the transaction is committed. If an
exception is raised, the transaction is rolled back
before propagating the exception.
See also

Engine.begin() - engine-level transactional context

Connection.transaction()
- connection-level version of Engine.transaction()

class sqlalchemy.engine.ExceptionContext

Encapsulate information about an error condition in progress.

This object exists solely to be passed to the
ConnectionEvents.handle_error() event,
supporting an interface that can be extended without
backwards-incompatibility.

New in version 0.9.7.
attribute sqlalchemy.engine.ExceptionContext.connection = None

The Connection in use during the exception.

This member is present, except in the case of a failure when
first connecting.

attribute sqlalchemy.engine.ExceptionContext.engine = None

The Engine in use during the exception.

This member should always be present, even in the case of a failure
when first connecting.

New in version 1.0.0.
attribute sqlalchemy.engine.ExceptionContext.execution_context = None

The ExecutionContext corresponding to the execution
operation in progress.

This is present for statement execution operations, but not for
operations such as transaction begin/end. It also is not present when
the exception was raised before the ExecutionContext
could be constructed.

Note that the ExceptionContext.statement and
ExceptionContext.parameters
members may represent a
different value than that of the ExecutionContext,
potentially in the case where a
ConnectionEvents.before_cursor_execute() event or
similar modified the statement/parameters to be sent.
attribute sqlalchemy.engine.ExceptionContext.invalidate_pool_on_disconnect = False

Represent whether all connections in the pool should be invalidated
when a "disconnect" condition is in effect.

The purpose of this flag is for custom disconnect-handling schemes where
the invalidation of other connections in the pool is to be performed
based on other conditions, or even on a per-connection basis.

New in version 1.0.3.
attribute sqlalchemy.engine.ExceptionContext.is_disconnect = None

Represent whether the exception as occurred represents a "disconnect"
condition.

This flag will always be True or False within the scope of the
ConnectionEvents.handle_error() handler.

SQLAlchemy will defer to this flag in order to determine whether or not
the connection should be invalidated subsequently. That is, by
assigning to this flag, a "disconnect" event which then results in
a connection and pool invalidation can be invoked or prevented by
changing this flag.
attribute sqlalchemy.engine.ExceptionContext.statement = None

String SQL statement that was emitted directly to the DBAPI.

May be None.
class sqlalchemy.engine.NestedTransaction(connection)

Represent a "nested", or SAVEPOINT transaction.

The NestedTransaction object is created by calling the
Connection.begin_nested() method of Connection.

When using NestedTransaction, the semantics of "begin" /
"commit" / "rollback" are as follows:

- the "begin" operation corresponds to the "BEGIN SAVEPOINT" command,
  where the savepoint is given an explicit name that is part of the state
  of this object.

- The NestedTransaction.commit() method corresponds to a
  "RELEASE SAVEPOINT" operation, using the savepoint identifier
  associated with this NestedTransaction.

- The NestedTransaction.rollback() method corresponds to a
  "ROLLBACK TO SAVEPOINT" operation, using the savepoint identifier
  associated with this NestedTransaction.

See also

Using SAVEPOINT - ORM version of the SAVEPOINT API.
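The SAVEPOINT semantics above can be observed directly at the SQL level; a sketch using the stdlib sqlite3 driver in manual-transaction mode (an assumption used for illustration, not the NestedTransaction API itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manual transaction control
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t (x) VALUES (1)")

conn.execute("SAVEPOINT sp1")               # "begin" of the nested scope
conn.execute("INSERT INTO t (x) VALUES (2)")
conn.execute("ROLLBACK TO SAVEPOINT sp1")   # undo only the nested work

conn.execute("COMMIT")                      # outer transaction still commits

print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```

Rolling back to the savepoint discards only the inner INSERT; the enclosing transaction remains active and commits normally.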
For a simple database transaction (e.g. RootTransaction),
it corresponds to a ROLLBACK.
class sqlalchemy.engine.RootTransaction(sqlalchemy.engine.Transaction)

method sqlalchemy.engine.RootTransaction.close()

inherited from the Transaction.close() method of Transaction

method sqlalchemy.engine.RootTransaction.commit()

inherited from the Transaction.commit() method of Transaction

The implementation of this may vary based on the type of transaction in
use:

For a simple database transaction (e.g. RootTransaction),
it corresponds to a COMMIT.

class sqlalchemy.engine.Transaction(connection)

Represent a database transaction in progress:

    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    connection = engine.connect()
    trans = connection.begin()
    connection.execute(text("insert into x (a, b) values (1, 2)"))
    trans.commit()
The object provides rollback() and commit()
methods in order to control transaction boundaries. It
also implements a context manager interface so that the
Python with statement can be used with the
Connection.begin() method:

    with connection.begin():
        connection.execute(text("insert into x (a, b) values (1, 2)"))

See also

Connection.begin_twophase()
method sqlalchemy.engine.Transaction.close()

This is used to cancel a Transaction without affecting the scope of
an enclosing transaction.

method sqlalchemy.engine.Transaction.commit()

The implementation of this may vary based on the type of transaction in
use:

- For a simple database transaction (e.g. RootTransaction),
  it corresponds to a COMMIT.

- For a NestedTransaction, it corresponds to a
  "RELEASE SAVEPOINT" operation.

- For a TwoPhaseTransaction, DBAPI-specific methods for two
  phase transactions may be used.

method sqlalchemy.engine.Transaction.rollback()

The implementation of this may vary based on the type of transaction in
use:

- For a simple database transaction (e.g. RootTransaction),
  it corresponds to a ROLLBACK.

- For a NestedTransaction, it corresponds to a
  "ROLLBACK TO SAVEPOINT" operation.

- For a TwoPhaseTransaction, DBAPI-specific methods for two
  phase transactions may be used.
method sqlalchemy.engine.TwoPhaseTransaction.commit()

The implementation of this may vary based on the type of transaction in
use:

- For a simple database transaction (e.g. RootTransaction),
  it corresponds to a COMMIT.

- For a TwoPhaseTransaction, DBAPI-specific methods for two
  phase transactions may be used.

method sqlalchemy.engine.TwoPhaseTransaction.rollback()

The implementation of this may vary based on the type of transaction in
use:

- For a simple database transaction (e.g. RootTransaction),
  it corresponds to a ROLLBACK.

- For a NestedTransaction, it corresponds to a
  "ROLLBACK TO SAVEPOINT" operation.

- For a TwoPhaseTransaction, DBAPI-specific methods for two
  phase transactions may be used.
IteratorResult - A Result that gets data from a Python iterator of
Row objects.

LegacyRow - A subclass of Row that delivers 1.x SQLAlchemy
behaviors for Core.

MappingResult - A wrapper for a Result that returns dictionary
values rather than Row values.

ScalarResult - A wrapper for a Result that returns scalar values
rather than Row values.
attribute sqlalchemy.engine.BaseCursorResult.inserted_primary_key

Return the primary key for the row just inserted.

The return value is a Row object representing
a named tuple of primary key values in the order in which the
primary key columns are configured in the source Table.

attribute sqlalchemy.engine.BaseCursorResult.inserted_primary_key_rows

In current SQLAlchemy versions this accessor is only useful beyond
what's already supplied by
CursorResult.inserted_primary_key when using the
psycopg2 dialect. Future versions hope to generalize this feature to
more dialects.

When using all other dialects / backends that don't yet support this
feature: This accessor is only useful for single row INSERT
statements, and returns the same information as that of the
CursorResult.inserted_primary_key within a single-element list.

New in version 1.4.

See also

CursorResult.inserted_primary_key

method sqlalchemy.engine.BaseCursorResult.last_inserted_params()

Return the collection of inserted parameters from this execution.

Raises InvalidRequestError if the executed
statement is not a compiled expression construct
or is not an insert() construct.
method sqlalchemy.engine.BaseCursorResult.last_updated_params()

Return the collection of updated parameters from this execution.

Raises InvalidRequestError if the executed
statement is not a compiled expression construct
or is not an update() construct.

method sqlalchemy.engine.BaseCursorResult.prefetch_cols()

Return prefetch_cols() from the underlying ExecutionContext.
See also

ValuesBase.return_defaults()
attribute sqlalchemy.engine.BaseCursorResult. rowcount
Setting Transaction Isolation Levels
including DBAPI Autocommit Return the ‘rowcount’ for this result.
Understanding the DBAPI-Level The ‘rowcount’ reports the number of rows matched
by the WHERE criterion of an
Autocommit Isolation Level UPDATE or DELETE statement.
https://docs.sqlalchemy.org/en/14/core/connections.html 65/96
30/6/22, 17:10 Working with Engines and Connections — SQLAlchemy 1.4 Documentation
Note

CursorResult.rowcount is only useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does not return the number of rows available from the results of a SELECT statement, as DBAPIs cannot support this functionality when rows are unbuffered.

CursorResult.rowcount may not be fully implemented by all dialects. In particular, most DBAPIs do not support an aggregate rowcount result from an executemany call. The CursorResult.supports_sane_rowcount() and CursorResult.supports_sane_multi_rowcount() methods will report from the dialect if each usage is known to be supported.

Statements that use RETURNING may not return a correct rowcount.

See also

Getting Affected Row Count from UPDATE, DELETE - in the SQLAlchemy 1.4 / 2.0 Tutorial
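These behaviors can be exercised directly. The following is a minimal sketch using an in-memory SQLite database; the table name and data are invented for illustration:

```python
from sqlalchemy import create_engine, text

# in-memory SQLite database; table and data are illustrative only
engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(
        text("INSERT INTO users (name) VALUES (:name)"),
        [{"name": "spongebob"}, {"name": "sandy"}],
    )

    # an UPDATE that matches both rows but changes no values;
    # rowcount still reports the *matched* count on this backend
    result = conn.execute(text("UPDATE users SET name = name"))

    # consult the dialect before trusting rowcount
    if result.supports_sane_rowcount():
        matched = result.rowcount
    else:
        matched = None

print(matched)
```

On SQLite the matched count is 2 here; a dialect reporting supports_sane_rowcount() as False would leave the value unusable.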
class sqlalchemy.engine.ChunkedIteratorResult

An IteratorResult that works from an iterator-producing callable.

The given chunks argument is a function that is given a number of rows to return in each chunk, or None for all rows. The function should then return an un-consumed iterator of lists, each list of the requested size.

The function can be called at any time again, in which case it should continue from the same result set but adjust the chunk size as given.

Class signature

class sqlalchemy.engine.ChunkedIteratorResult (sqlalchemy.engine.IteratorResult)

method sqlalchemy.engine.ChunkedIteratorResult.yield_per(num)

Configure the row-fetching strategy to fetch num rows at a time.

This impacts the underlying behavior of the result when iterating over the result object, or otherwise making use of methods such as Result.fetchone() that return one row at a time. Data from the underlying cursor or other data source will be buffered up to this many rows in memory, and the buffered collection will then be yielded out one row at a time or as many rows are requested. Each time the buffer clears, it will be refreshed to this many rows, or as many rows as remain if fewer remain.

The Result.yield_per() method is generally used in conjunction with the Connection.execution_options.stream_results execution option, which will allow the database dialect in use to make use of a server side cursor, if the DBAPI supports it.

Most DBAPIs do not use server side cursors by default, which means all rows will be fetched upfront from the database regardless of the Result.yield_per() setting. However, Result.yield_per() may still be useful in that it batches the SQLAlchemy-side processing of the raw data from the database, and additionally when used for ORM scenarios will batch the conversion of database rows into ORM entity rows.
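A sketch of this combination, using an in-memory SQLite database; SQLite has no server side cursors, so stream_results is silently ignored there, but the SQLAlchemy-side batching still applies. Table and data are invented for illustration:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE nums (n INTEGER)"))
    conn.execute(
        text("INSERT INTO nums (n) VALUES (:n)"),
        [{"n": i} for i in range(10)],
    )

    # request a streaming cursor where supported, and buffer
    # three rows at a time on the SQLAlchemy side
    result = conn.execution_options(stream_results=True).execute(
        text("SELECT n FROM nums ORDER BY n")
    )
    result.yield_per(3)

    # consume the result in groups of three
    sizes = [len(chunk) for chunk in result.partitions(3)]

print(sizes)
```

Ten rows consumed in chunks of three yields group sizes of 3, 3, 3 and a final 1.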
class sqlalchemy.engine.FrozenResult

The FrozenResult object is returned from the Result.freeze() method of any Result object.

A new iterable Result object is generated from a fixed set of data each time the FrozenResult is invoked as a callable:

    result = connection.execute(stmt)

    frozen = result.freeze()

    unfrozen_result_one = frozen()

    for row in unfrozen_result_one:
        print(row)

    # ... etc
See also

Re-Executing Statements - example usage within the ORM to implement a result-set cache.
class sqlalchemy.engine.IteratorResult(cursor_metadata, iterator, raw=None, _source_supports_scalars=False)

A Result that gets data from a Python iterator of Row objects.

New in version 1.4.

class sqlalchemy.engine.LegacyRow(parent, processors, keymap, key_style, ...)

The LegacyRow class is where most of the Python mapping (i.e. dictionary-like) behaviors are implemented for the row object. The mapping behavior of Row going forward is accessible via the Row._mapping attribute.

New in version 1.4: added LegacyRow, which encapsulates most of the deprecated behaviors of Row.
Using the in operator against a LegacyRow, e.g. "some_col" in row, will return True if the row contains a column named "some_col", in the way that a Python mapping works.

However, it is planned that the 2.0 series of SQLAlchemy will reverse this behavior so that __contains__() will refer to a value being present in the row, in the way that a Python tuple works.

See also

RowProxy is no longer a "proxy"; is now called Row and behaves like an enhanced named tuple
method sqlalchemy.engine.LegacyRow.items()

Return a list of tuples, each tuple containing a key/value pair.

Deprecated since version 1.4: The LegacyRow.items() method is deprecated and will be removed in a future release. Use the Row._mapping attribute, i.e., 'row._mapping.items()'.

This method is analogous to the Python dictionary .items() method, except that it returns a list, not an iterator.
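The suggested replacement can be sketched as follows; the table and data here are invented for illustration:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO users (id, name) VALUES (1, 'spongebob')"))

    row = conn.execute(text("SELECT id, name FROM users")).first()

    # forward-compatible spelling of the deprecated LegacyRow.items()
    items = list(row._mapping.items())

print(items)
```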
method sqlalchemy.engine.LegacyRow.iterkeys()

Return an iterator against the Row.keys() method.

Deprecated since version 1.4: The LegacyRow.iterkeys() method is deprecated and will be removed in a future release. Use the Row._mapping attribute, i.e., 'row._mapping.keys()'.

This method is analogous to the Python-2-only dictionary .iterkeys() method.

method sqlalchemy.engine.LegacyRow.itervalues()

Return an iterator against the Row.values() method.

Deprecated since version 1.4: The LegacyRow.itervalues() method is deprecated and will be removed in a future release. Use the Row._mapping attribute, i.e., 'row._mapping.values()'.
class sqlalchemy.engine.MergedResult

A Result that is merged from any number of Result objects.

Returned by the Result.merge() method.

Class signature

class sqlalchemy.engine.MergedResult (sqlalchemy.engine.IteratorResult)

class sqlalchemy.engine.Result(cursor_metadata)

Represent a set of database results.
Note

In SQLAlchemy 1.4 and above, this object is used for ORM results returned by Session.execute(), which can yield instances of ORM mapped objects either individually or within tuple-like rows. Note that the Result object does not deduplicate instances or rows automatically as is the case with the legacy Query object. For in-Python de-duplication of instances or rows, use the Result.unique() modifier method.

method sqlalchemy.engine.Result.close()

close this Result.

The behavior of this method is implementation specific, and is not implemented by default. The method should generally end the resources in use by the result object and also cause any subsequent iteration or row fetching to raise ResourceClosedError.
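For the standard CursorResult implementation, the described behavior can be sketched like this; the table is invented for illustration:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.exc import ResourceClosedError

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1)"))

    result = conn.execute(text("SELECT x FROM t"))
    result.close()

    # row fetching after close() raises ResourceClosedError
    try:
        result.fetchone()
        raised = False
    except ResourceClosedError:
        raised = True

print(raised)
```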
method sqlalchemy.engine.Result.columns(*col_expressions)

Establish the columns that should be returned in each row.

Example of using the column objects from the statement itself:

    for z, y in result.columns(
        statement.selected_columns.c.z,
        statement.selected_columns.c.y
    ):
        # ...

method sqlalchemy.engine.Result.fetchmany(size=None)

Fetch many rows.

When all rows are exhausted, returns an empty list.

This method is provided for backwards compatibility with SQLAlchemy 1.x.x.

To fetch rows in groups, use the Result.partitions() method.

Returns:

a list of Row objects.
method sqlalchemy.engine.Result.first()

Fetch the first row or None if no row is present.

Closes the result set and discards remaining rows.

Note

This method returns one row, e.g. tuple, by default. To return exactly one single scalar value, that is, the first column of the first row, use the Result.scalar() method, or combine Result.scalars() and Result.first().

Additionally, in contrast to the behavior of the legacy ORM Query.first() method, no limit is applied to the SQL query which was invoked to produce this Result; for a DBAPI driver that buffers results in memory before yielding rows, all rows will be sent to the Python process and all but the first row will be discarded.

See also

ORM Query Unified with Core Select
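The distinction drawn in the note above can be sketched as follows; the table and data are invented for illustration:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (7), (8)"))

    # first() returns a Row (tuple-like); scalar() returns the
    # first column of the first row directly
    row = conn.execute(text("SELECT x FROM t ORDER BY x")).first()
    val = conn.execute(text("SELECT x FROM t ORDER BY x")).scalar()

print(tuple(row), val)
```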
method sqlalchemy.engine.Result.freeze()

Return a callable object that will produce copies of this Result when invoked.

See also

Re-Executing Statements - example usage within the ORM to implement a result-set cache.
method sqlalchemy.engine.Result.keys()

Return an iterable view which yields the string keys that would be represented by each Row.

The keys can represent the labels of the columns returned by a core statement or the names of the orm classes returned by an orm execution.

The view also can be tested for key containment using the Python in operator, which will test both for the string keys represented in the view, as well as for alternate keys such as column objects.
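A sketch of the view behavior, with invented column names:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (a INTEGER, b INTEGER)"))

    result = conn.execute(text("SELECT a, b FROM t"))
    keys = result.keys()

    ordered = list(keys)   # the string keys, in order
    has_a = "a" in keys    # containment test against the view

print(ordered, has_a)
```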
method sqlalchemy.engine.Result.mappings()

Apply a mappings filter to returned rows, returning an instance of MappingResult.

When this filter is applied, fetching rows will return RowMapping objects instead of Row objects.

New in version 1.4.
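A sketch of the filter in use; the table and data are invented for illustration:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER, name TEXT)"))
    conn.execute(text("INSERT INTO users (id, name) VALUES (1, 'sandy')"))

    # RowMapping objects support dictionary-style access by string key
    mapping = conn.execute(text("SELECT id, name FROM users")).mappings().one()
    name = mapping["name"]

print(dict(mapping))
```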
method sqlalchemy.engine.Result.one()

Return exactly one row or raise an exception.

Note

This method returns one row, e.g. tuple, by default. To return exactly one single scalar value, that is, the first column of the first row, use the Result.scalar_one() method, or combine Result.scalars() and Result.one().

Raises:

MultipleResultsFound, NoResultFound

New in version 1.4.

method sqlalchemy.engine.Result.one_or_none()

Return at most one result or raise an exception.

Returns None if the result has no rows. Raises MultipleResultsFound if multiple rows are returned.

New in version 1.4.
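The contrast between the two methods can be sketched as follows; the table and data are invented for illustration:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.exc import MultipleResultsFound

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1), (2)"))

    # one_or_none() tolerates an empty result, returning None
    empty = conn.execute(text("SELECT x FROM t WHERE x > 5")).one_or_none()

    # one() raises when more than one row is present
    try:
        conn.execute(text("SELECT x FROM t")).one()
        error = None
    except MultipleResultsFound:
        error = "multiple rows"

print(empty, error)
```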
method sqlalchemy.engine.Result.partitions(size=None)

Iterate through sub-lists of rows of the size given.

Each list will be of the size given, excluding the last list to be yielded, which may have a small number of rows. No empty lists will be yielded.

The result object is automatically closed when the iterator is fully consumed.

Note that the backend driver will usually buffer the entire result ahead of time unless the Connection.execution_options.stream_results execution option is used, indicating that the driver should not pre-buffer results, if possible. Not all drivers support this option and the option is silently ignored for those who do not.

When using the ORM, the Result.partitions() method is typically more effective from a memory perspective when it is combined with use of the Result.yield_per() method, which instructs the ORM loading internals to only build a certain amount of ORM objects from a result at a time before yielding them out.

See also

Yield Per - in the ORM Querying Guide
method sqlalchemy.engine.Result.scalar()

Fetch the first column of the first row, and close the result set.

Returns None if there are no rows to fetch.

No validation is performed to test if additional rows remain.

After calling this method, the object is fully closed, e.g. the CursorResult.close() method will have been called.

Returns:

a Python scalar value, or None if no rows remain.

method sqlalchemy.engine.Result.scalar_one_or_none()

Return exactly one or no scalar result.

This is equivalent to calling Result.scalars() and then Result.one_or_none().

See also

Result.one_or_none()

Result.scalars()
method sqlalchemy.engine.Result.scalars(index=0)

Return a ScalarResult filtering object which will return single elements rather than Row objects.

E.g.:

    >>> result = conn.execute(text("select int_id from table"))
    >>> result.scalars().all()
    [1, 2, 3]

When results are fetched from the ScalarResult filtering object, the single column-row that would be returned by the Result is instead returned as the column's value.
method sqlalchemy.engine.Result.unique(strategy=None)

Apply unique filtering to the objects returned by this Result.

When this filter is applied with no arguments, the rows or objects returned will be filtered such that each row is returned uniquely. The algorithm used to determine this uniqueness is by default the Python hashing identity of the whole tuple. In some cases a specialized per-entity hashing scheme may be used, such as when using the ORM, where a scheme is applied which works against the primary key identity of returned objects.

The unique filter is applied after all other filters, which means if the columns returned have been refined using a method such as the Result.columns() or Result.scalars() method, the uniquing is applied to only the column or columns returned. This occurs regardless of the order in which these methods have been called upon the Result object.

The unique filter also changes the calculus used for methods like Result.fetchmany() and Result.partitions(). When using Result.unique(), these methods will continue to yield the number of rows or objects requested, after uniquing has been applied. However, this necessarily impacts the buffering behavior of the underlying cursor or datasource, such that multiple underlying calls to cursor.fetchmany() may be necessary in order to accumulate enough objects in order to provide a unique collection of the requested size.

Parameters:

strategy – a callable that will be applied to rows or objects being iterated, which should return an object that represents the unique value of the row. A Python set() is used to store these identities. If not passed, a default uniqueness strategy is used which may have been assembled by the source of this Result object.
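Both the default filter and an explicit strategy can be sketched as follows; the table and data are invented for illustration:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # illustrative in-memory database
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1), (1), (2)"))

    # uniquing applied after the scalars() filter: duplicates collapse
    vals = conn.execute(
        text("SELECT x FROM t ORDER BY x")
    ).scalars().unique().all()

    # an explicit strategy callable supplies the identity of each row
    rows = conn.execute(text("SELECT x FROM t ORDER BY x")).unique(
        lambda row: row.x
    ).all()

print(vals, [tuple(r) for r in rows])
```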
method sqlalchemy.engine.Result.yield_per(num)

Configure the row-fetching strategy to fetch num rows at a time.

This impacts the underlying behavior of the result when iterating over the result object, or otherwise making use of methods such as Result.fetchone() that return one row at a time. Data from the underlying cursor or other data source will be buffered up to this many rows in memory, and the buffered collection will then be yielded out one row at a time or as many rows are requested. Each time the buffer clears, it will be refreshed to this many rows, or as many rows as remain if fewer remain.

Parameters:

num – number of rows to fetch each time the buffer is refilled. If set to a value below 1, fetches all rows for the next buffer.

class sqlalchemy.engine.ScalarResult

A wrapper for a Result that returns scalar values rather than Row values.

method sqlalchemy.engine.ScalarResult.all()

Return all scalar values in a list.

Equivalent to Result.all() except that scalar values, rather than Row objects, are returned.

method sqlalchemy.engine.ScalarResult.fetchall()

A synonym for the ScalarResult.all() method.
method sqlalchemy.engine.ScalarResult.one()

Return exactly one object or raise an exception.

Equivalent to Result.one() except that scalar values, rather than Row objects, are returned.

method sqlalchemy.engine.ScalarResult.one_or_none()

Return at most one object or raise an exception.

Equivalent to Result.one_or_none() except that scalar values, rather than Row objects, are returned.

class sqlalchemy.engine.MappingResult

A wrapper for a Result that returns dictionary values rather than Row values.

The MappingResult object is acquired by calling the Result.mappings() method.

Class signature

class sqlalchemy.engine.MappingResult (sqlalchemy.engine._WithKeys, sqlalchemy.engine.FilterResult)
method sqlalchemy.engine.MappingResult.fetchone()

Fetch one object.

Equivalent to Result.fetchone() except that mapping values, rather than Row objects, are returned.

method sqlalchemy.engine.MappingResult.one()

Return exactly one object or raise an exception.

Equivalent to Result.one() except that mapping values, rather than Row objects, are returned.

method sqlalchemy.engine.MappingResult.one_or_none()

Return at most one object or raise an exception.

Equivalent to Result.one_or_none() except that mapping values, rather than Row objects, are returned.

method sqlalchemy.engine.MappingResult.partitions(size=None)

Iterate through sub-lists of elements of the size given.
class sqlalchemy.engine.CursorResult(context, cursor_strategy, cursor_description)

A Result that is representative of state from a DBAPI cursor.

Within the scope of the 1.x series of SQLAlchemy, Core SQL results in version 1.4 return an instance of LegacyCursorResult, which takes the place of the CursorResult class used for the 1.3 series and previously. This object returns rows as LegacyRow objects, which maintain Python mapping (i.e. dictionary) like behaviors upon the object itself. Going forward, the Row._mapping attribute should be used for dictionary behaviors.

See also

Selecting - introductory material for accessing CursorResult and Row objects.
method sqlalchemy.engine.CursorResult.close()

Close this CursorResult.

This closes out the underlying DBAPI cursor corresponding to the statement execution, if one is still present. This method is generally an optional method except in the case when discarding a CursorResult that still has additional rows pending for fetch.

After this method is called, it is no longer valid to call upon the fetch methods, which will raise a ResourceClosedError on subsequent use.
method sqlalchemy.engine.CursorResult.columns(*col_expressions)

inherited from the Result.columns() method of Result

Establish the columns that should be returned in each row.

E.g.:

    statement = select(table.c.x, table.c.y, table.c.z)
    result = connection.execute(statement)

    for z, y in result.columns('z', 'y'):
        # ...

Example of using the column objects from the statement itself:

    for z, y in result.columns(
        statement.selected_columns.c.z,
        statement.selected_columns.c.y
    ):
        # ...

New in version 1.4.

Parameters:

*col_expressions – indicates columns to be returned. Elements may be integer row indexes, string column names, or appropriate ColumnElement objects corresponding to a select construct.

Returns:

this Result object with the modifications given.
method sqlalchemy.engine.CursorResult.fetchall()

inherited from the Result.fetchall() method of Result

A synonym for the Result.all() method.

method sqlalchemy.engine.CursorResult.fetchmany(size=None)

inherited from the Result.fetchmany() method of Result

Fetch many rows.

When all rows are exhausted, returns an empty list.

This method is provided for backwards compatibility with SQLAlchemy 1.x.x.

To fetch rows in groups, use the Result.partitions() method.

Returns:

a list of Row objects.

method sqlalchemy.engine.CursorResult.fetchone()

inherited from the Result.fetchone() method of Result

Fetch one row.

When all rows are exhausted, returns None.

This method is provided for backwards compatibility with SQLAlchemy 1.x.x.

To fetch the first row of a result only, use the Result.first() method. To iterate through all rows, iterate the Result object directly.
method sqlalchemy.engine.CursorResult.first()

inherited from the Result.first() method of Result

Fetch the first row or None if no row is present.

Closes the result set and discards remaining rows.

Note

This method returns one row, e.g. tuple, by default. To return exactly one single scalar value, that is, the first column of the first row, use the Result.scalar() method, or combine Result.scalars() and Result.first().

Additionally, in contrast to the behavior of the legacy ORM Query.first() method, no limit is applied to the SQL query which was invoked to produce this Result; for a DBAPI driver that buffers results in memory before yielding rows, all rows will be sent to the Python process and all but the first row will be discarded.

See also

ORM Query Unified with Core Select
method sqlalchemy.engine.CursorResult.freeze()

inherited from the Result.freeze() method of Result

Return a callable object that will produce copies of this Result when invoked.

The callable object returned is an instance of FrozenResult.

This is used for result set caching. The method must be called on the result when it has been unconsumed, and calling the method will consume the result fully.

When the FrozenResult is retrieved from a cache, it can be called any number of times, where it will produce a new Result object each time against its stored set of rows.

See also

Re-Executing Statements - example usage within the ORM to implement a result-set cache.
attribute sqlalchemy.engine.CursorResult.inserted_primary_key

inherited from the BaseCursorResult.inserted_primary_key attribute of BaseCursorResult

Return the primary key for the row just inserted.

The return value is a Row object representing a named tuple of primary key values in the order in which the primary key columns are configured in the source Table.

Changed in version 1.4.8: the CursorResult.inserted_primary_key value is now a named tuple via the Row class, rather than a plain tuple.
attribute sqlalchemy.engine.CursorResult.inserted_primary_key_rows

inherited from the BaseCursorResult.inserted_primary_key_rows attribute of BaseCursorResult

Return the value of CursorResult.inserted_primary_key as a row contained within a list; some dialects may support a multiple row form as well.

This accessor is added to support dialects that offer the feature that is currently implemented by the Psycopg2 Fast Execution Helpers feature, currently only the psycopg2 dialect, which provides for many rows to be INSERTed at once while still retaining the behavior of being able to return server-generated primary key values.

When using the psycopg2 dialect, or other dialects that may support "fast executemany" style inserts in upcoming releases: When invoking an INSERT statement while passing a list of rows as the second argument to Connection.execute(), this accessor will then provide a list of rows, where each row contains the primary key value for each row that was INSERTed.

When using all other dialects / backends that don't yet support this feature: This accessor is only useful for single row INSERT statements, and returns the same information as that of the CursorResult.inserted_primary_key within a single-element list. When an INSERT statement is executed in conjunction with a list of rows to be INSERTed, the list will contain one row per row inserted in the statement, however it will contain None for any server-generated values.

Future releases of SQLAlchemy will further generalize the "fast execution helper" feature of psycopg2 to suit other dialects, thus allowing this accessor to be of more general use.

New in version 1.4.

See also

CursorResult.inserted_primary_key
method sqlalchemy.engine.CursorResult.keys()

Return an iterable view which yields the string keys that would be represented by each Row.

The keys can represent the labels of the columns returned by a core statement or the names of the orm classes returned by an orm execution.

The view also can be tested for key containment using the Python in operator, which will test both for the string keys represented in the view, as well as for alternate keys such as column objects.

Changed in version 1.4: a key view object is returned rather than a plain list.
method sqlalchemy.engine.CursorResult.last_inserted_params()

Return the collection of inserted parameters from this execution.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() construct.

method sqlalchemy.engine.CursorResult.last_updated_params()

Return the collection of updated parameters from this execution.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an update() construct.

method sqlalchemy.engine.CursorResult.lastrow_has_defaults()

inherited from the BaseCursorResult.lastrow_has_defaults() method of BaseCursorResult

Return lastrow_has_defaults() from the underlying ExecutionContext.

attribute sqlalchemy.engine.CursorResult.lastrowid

Return the 'lastrowid' accessor on the DBAPI cursor.

method sqlalchemy.engine.CursorResult.mappings()

inherited from the Result.mappings() method of Result

Apply a mappings filter to returned rows, returning an instance of MappingResult.

When this filter is applied, fetching rows will return RowMapping objects instead of Row objects.

New in version 1.4.
method sqlalchemy.engine.CursorResult.one()

inherited from the Result.one() method of Result

Return exactly one row or raise an exception.

New in version 1.4.

See also

Result.first()

Result.one_or_none()

Result.scalar_one()
method sqlalchemy.engine.CursorResult.partitions(size=None)

inherited from the Result.partitions() method of Result

Iterate through sub-lists of rows of the size given.

Each list will be of the size given, excluding the last list to be yielded, which may have a small number of rows. No empty lists will be yielded.

The result object is automatically closed when the iterator is fully consumed.

Note that the backend driver will usually buffer the entire result ahead of time unless the Connection.execution_options.stream_results execution option is used, indicating that the driver should not pre-buffer results, if possible. Not all drivers support this option and the option is silently ignored for those who do not.

When using the ORM, the Result.partitions() method is typically more effective from a memory perspective when it is combined with use of the Result.yield_per() method, which instructs the ORM loading internals to only build a certain amount of ORM objects from a result at a time before yielding them out.

See also

Connection.execution_options.stream_results

Yield Per - in the ORM Querying Guide
method sqlalchemy.engine.CursorResult.postfetch_cols()

Return postfetch_cols() from the underlying ExecutionContext.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() or update() construct.

attribute sqlalchemy.engine.CursorResult.returned_defaults

Return the values of default columns that were fetched using the ValuesBase.return_defaults() feature.

New in version 0.9.0.
attribute sqlalchemy.engine.CursorResult.returned_defaults_rows

inherited from the BaseCursorResult.returned_defaults_rows attribute of BaseCursorResult

Return a list of rows each containing the values of default columns that were fetched using the ValuesBase.return_defaults() feature.

The return value is a list of Row objects.

New in version 1.4.
attribute sqlalchemy.engine.CursorResult.returns_rows

True if this CursorResult returns zero or more rows.

This attribute should be True for all results that are against SELECT statements, as well as for DML statements INSERT/UPDATE/DELETE that use RETURNING. For INSERT/UPDATE/DELETE statements that were not using RETURNING, the value will usually be False; however, there are some dialect-specific exceptions to this, such as when using the MSSQL / pyodbc dialect, where a SELECT is emitted inline in order to retrieve an inserted primary key value.
attribute sqlalchemy.engine.CursorResult.rowcount

inherited from the BaseCursorResult.rowcount attribute of BaseCursorResult

The 'rowcount' reports the number of rows matched by the WHERE criterion of an
UPDATE or DELETE statement.
Note

Notes regarding CursorResult.rowcount:

This attribute returns the number of rows matched, which is not necessarily the
same as the number of rows that were actually modified - an UPDATE statement,
for example, may have no net change on a given row if the SET values given are
the same as those present in the row already. Such a row would be matched but
not modified. On backends that feature both styles, such as MySQL, rowcount is
configured by default to return the match count in all cases.

CursorResult.rowcount is only useful in conjunction with an UPDATE or DELETE
statement. Contrary to what the Python DBAPI says, it does not return the
number of rows available from the results of a SELECT statement as DBAPIs
cannot support this functionality when rows are unbuffered.

See also

Getting Affected Row Count from UPDATE, DELETE - in the SQLAlchemy 1.4 / 2.0 Tutorial
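The matched-row semantics described above can be observed directly. The
following is a minimal sketch against an in-memory SQLite database; the
`user_account` table and its data are hypothetical, not from this document:

```python
# Hypothetical table; rowcount after an UPDATE reflects rows matched by WHERE.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text(
        "CREATE TABLE user_account (id INTEGER PRIMARY KEY, name TEXT)"
    ))
    conn.execute(text(
        "INSERT INTO user_account (name) VALUES ('spongebob'), ('sandy')"
    ))
    result = conn.execute(text(
        "UPDATE user_account SET name = 'patrick' WHERE name = 'sandy'"
    ))
    # rowcount reports the number of rows matched by the WHERE criterion
    matched = result.rowcount
    print(matched)  # 1
```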
Fetch the first column of the first row, and close the result set.

Returns None if there are no rows to fetch.

No validation is performed to test if additional rows remain.

After calling this method, the object is fully closed, e.g. the
CursorResult.close() method will have been called.

Returns:

a Python scalar value, or None if no rows remain.
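As a minimal sketch of the behavior above, using an in-memory SQLite database
and a hypothetical single-column table:

```python
# scalar() fetches the first column of the first row, then closes the result;
# it returns None when no rows remain.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (7), (8)"))
    value = conn.execute(text("SELECT x FROM t ORDER BY x")).scalar()
    print(value)  # 7
    # no matching rows: scalar() returns None
    none_value = conn.execute(text("SELECT x FROM t WHERE x > 100")).scalar()
    print(none_value)  # None
```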
This is equivalent to calling Result.scalars() and then Result.one_or_none().

See also

Result.scalars()
method sqlalchemy.engine.CursorResult.scalars(index=0)

inherited from the Result.scalars() method of Result

>>> result.scalars().all()

When results are fetched from the ScalarResult filtering object, the single
column-row that would be returned by the Result is instead returned as the
column's value.

New in version 1.4.

Parameters:

index – integer or row key indicating the column to be fetched from each row,
defaults to 0 indicating the first column.

Returns:

a new ScalarResult filtering object referring to this Result object.
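A minimal sketch of the column-value behavior described above, against an
in-memory SQLite database with a hypothetical table:

```python
# scalars() yields the first column of each row rather than Row objects.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1), (2), (3)"))
    result = conn.execute(text("SELECT x FROM t ORDER BY x"))
    values = result.scalars().all()
    print(values)  # [1, 2, 3]
```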
method sqlalchemy.engine.CursorResult.supports_sane_rowcount()

inherited from the BaseCursorResult.supports_sane_rowcount() method of BaseCursorResult

Return supports_sane_rowcount from the dialect.
When this filter is applied with no arguments, the rows or objects returned
will be filtered such that each row is returned uniquely. The algorithm used to
determine this uniqueness is by default the Python hashing identity of the
whole tuple. In some cases a specialized per-entity hashing scheme may be used,
such as when using the ORM, a scheme is applied which works against the primary
key identity of returned objects.

The unique filter is applied after all other filters, which means if the
columns returned have been refined using a method such as the Result.columns()
or Result.scalars() method, the uniquing is applied to only the column or
columns returned. This occurs regardless of the order in which these methods
have been called upon the Result object.

The unique filter also changes the calculus used for methods like
Result.fetchmany() and Result.partitions(). When using Result.unique(), these
methods will continue to yield the number of rows or objects requested, after
uniquing has been applied. However, this necessarily impacts the buffering
behavior of the underlying cursor or datasource, such that multiple underlying
calls to cursor.fetchmany() may be necessary in order to accumulate enough
objects in order to provide a unique collection of the requested size.
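The interaction with Result.scalars() described above can be sketched against
an in-memory SQLite database with a hypothetical table containing duplicates:

```python
# unique() filters duplicate results; combined with scalars(), the uniquing
# applies to the single column value returned by each row.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1), (1), (2), (2), (3)"))
    result = conn.execute(text("SELECT x FROM t ORDER BY x"))
    values = result.scalars().unique().all()
    print(values)  # [1, 2, 3]
```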
See also

Yield Per - in the ORM Querying Guide

Result.partitions()
This class includes connection "autoclose" behavior for use with
"connectionless" execution, as well as delivers rows using the LegacyRow row
implementation.
Class signature

class sqlalchemy.engine.LegacyCursorResult (sqlalchemy.engine.CursorResult)

method sqlalchemy.engine.LegacyCursorResult.close()
This method has the same behavior as that of CursorResult.close(), but it also
may close the underlying Connection for the case of "connectionless"
execution.

See also

Connectionless Execution, Implicit Execution
See also

Selecting Rows with Core or ORM - includes examples of selecting rows from
SELECT statements.

LegacyRow - Compatibility interface introduced in SQLAlchemy 1.4.
The keys can represent the labels of the columns returned by a core statement
or the names of the orm classes returned by an orm execution.

This attribute is analogous to the Python named tuple ._fields attribute.

See also

Row._fields

New in version 1.4.
method sqlalchemy.engine.Row.keys()

Return the list of keys as strings represented by this Row.

Deprecated since version 1.4: The Row.keys() method is considered legacy as of
the 1.x series of SQLAlchemy and will be removed in 2.0. Use the namedtuple
standard accessor Row._fields, or for full mapping behavior use
row._mapping.keys() (Background on SQLAlchemy 2.0 at: Migrating to SQLAlchemy 2.0)

This method is analogous to the Python dictionary .keys() method, except that
it returns a list, not an iterator.
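The replacement accessors named in the deprecation notice can be sketched as
follows, using an in-memory SQLite database and a hypothetical two-column row:

```python
# Row._fields is the namedtuple-style accessor; row._mapping.keys() gives
# full mapping behavior, replacing the legacy Row.keys().
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    row = conn.execute(text("SELECT 1 AS id, 'spongebob' AS name")).first()
    fields = row._fields                       # tuple of string keys
    mapping_keys = list(row._mapping.keys())   # mapping-style key view
    print(fields)        # ('id', 'name')
    print(mapping_keys)  # ['id', 'name']
```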
RowMapping supplies Python mapping (i.e. dictionary) access to the contents of
the row. This includes support for testing of containment of specific keys
(string column names or objects), as well as iteration of keys, values, and
items.

New in version 1.4: The RowMapping object replaces the mapping-like access
previously provided by a database result row, which now seeks to behave mostly
like a named tuple.
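The containment and iteration behaviors described above can be sketched
against an in-memory SQLite database with a hypothetical two-column row:

```python
# RowMapping, obtained via Row._mapping, supports key containment,
# dictionary-style access, and iteration of keys/values/items.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    row = conn.execute(text("SELECT 1 AS id, 'sandy' AS name")).first()
    mapping = row._mapping
    print("name" in mapping)      # True
    print(mapping["name"])        # sandy
    print(dict(mapping.items()))  # {'id': 1, 'name': 'sandy'}
```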
Class signature

class sqlalchemy.engine.RowMapping (sqlalchemy.engine.BaseRow, collections.abc.Mapping)

method sqlalchemy.engine.RowMapping.items()

Return a view of key/value tuples for the elements in the underlying Row.
flambé! the dragon and The Alchemist image designs created and generously
donated by Rotem Yaari.

Website content copyright © by SQLAlchemy authors and contributors. SQLAlchemy
and its documentation are licensed under the MIT license.

SQLAlchemy is a trademark of Michael Bayer. mike(&)zzzcomputing.com. All rights
reserved. Website generation by zeekofile, with huge thanks to the Blogofile
project.