1============================== 2What's New in SQLAlchemy 0.9? 3============================== 4 5.. admonition:: About this Document 6 7 This document describes changes between SQLAlchemy version 0.8, 8 undergoing maintenance releases as of May, 2013, 9 and SQLAlchemy version 0.9, which had its first production 10 release on December 30, 2013. 11 12 Document last updated: June 10, 2015 13 14Introduction 15============ 16 17This guide introduces what's new in SQLAlchemy version 0.9, 18and also documents changes which affect users migrating 19their applications from the 0.8 series of SQLAlchemy to 0.9. 20 21Please carefully review 22:ref:`behavioral_changes_orm_09` and :ref:`behavioral_changes_core_09` for 23potentially backwards-incompatible changes. 24 25Platform Support 26================ 27 28Targeting Python 2.6 and Up Now, Python 3 without 2to3 29------------------------------------------------------- 30 31The first achievement of the 0.9 release is to remove the dependency 32on the 2to3 tool for Python 3 compatibility. To make this 33more straightforward, the lowest Python release targeted now 34is 2.6, which features a wide degree of cross-compatibility with 35Python 3. All SQLAlchemy modules and unit tests are now interpreted 36equally well with any Python interpreter from 2.6 forward, including 37the 3.1 and 3.2 interpreters. 38 39:ticket:`2671` 40 41C Extensions Supported on Python 3 42----------------------------------- 43 44The C extensions have been ported to support Python 3 and now build 45in both Python 2 and Python 3 environments. 46 47:ticket:`2161` 48 49.. _behavioral_changes_orm_09: 50 51Behavioral Changes - ORM 52======================== 53 54.. _migration_2824: 55 56Composite attributes are now returned as their object form when queried on a per-attribute basis 57------------------------------------------------------------------------------------------------ 58 59Using a :class:`_query.Query` in conjunction with a composite attribute now returns the object 60type maintained by that composite, rather than being broken out into individual 61columns. Using the mapping setup at :ref:`mapper_composite`:: 62 63 >>> session.query(Vertex.start, Vertex.end).\ 64 ... filter(Vertex.start == Point(3, 4)).all() 65 [(Point(x=3, y=4), Point(x=5, y=6))] 66 67This change is backwards-incompatible with code that expects the individual attribute 68to be expanded into individual columns. To get that behavior, use the ``.clauses`` 69accessor:: 70 71 72 >>> session.query(Vertex.start.clauses, Vertex.end.clauses).\ 73 ... filter(Vertex.start == Point(3, 4)).all() 74 [(3, 4, 5, 6)] 75 76.. seealso:: 77 78 :ref:`change_2824` 79 80:ticket:`2824` 81 82 83.. _migration_2736: 84 85:meth:`_query.Query.select_from` no longer applies the clause to corresponding entities 86---------------------------------------------------------------------------------------- 87The :meth:`_query.Query.select_from` method has been popularized in recent versions 88as a means of controlling the first thing that a :class:`_query.Query` object 89"selects from", typically for the purposes of controlling how a JOIN will 90render. 91 92Consider the following example against the usual ``User`` mapping:: 93 94 select_stmt = select([User]).where(User.id == 7).alias() 95 96 q = session.query(User).\ 97 join(select_stmt, User.id == select_stmt.c.id).\ 98 filter(User.name == 'ed') 99 100The above statement predictably renders SQL like the following:: 101 102 SELECT "user".id AS user_id, "user".name AS user_name 103 FROM "user" JOIN (SELECT "user".id AS id, "user".name AS name 104 FROM "user" 105 WHERE "user".id = :id_1) AS anon_1 ON "user".id = anon_1.id 106 WHERE "user".name = :name_1 107 108If we wanted to reverse the order of the left and right elements of the 109JOIN, the documentation would lead us to believe we could use 110:meth:`_query.Query.select_from` to do so:: 111 112 q = session.query(User).\ 113 select_from(select_stmt).\ 114 join(User, User.id == select_stmt.c.id).\ 115 filter(User.name == 'ed') 116 117However, in version 0.8 and earlier, the above use of :meth:`_query.Query.select_from` 118would apply the ``select_stmt`` to **replace** the ``User`` entity, as it 119selects from the ``user`` table which is compatible with ``User``:: 120 121 -- SQLAlchemy 0.8 and earlier... 122 SELECT anon_1.id AS anon_1_id, anon_1.name AS anon_1_name 123 FROM (SELECT "user".id AS id, "user".name AS name 124 FROM "user" 125 WHERE "user".id = :id_1) AS anon_1 JOIN "user" ON anon_1.id = anon_1.id 126 WHERE anon_1.name = :name_1 127 128The above statement is a mess, the ON clause refers ``anon_1.id = anon_1.id``, 129our WHERE clause has been replaced with ``anon_1`` as well. 130 131This behavior is quite intentional, but has a different use case from that 132which has become popular for :meth:`_query.Query.select_from`. The above behavior 133is now available by a new method known as :meth:`_query.Query.select_entity_from`. 134This is a lesser used behavior that in modern SQLAlchemy is roughly equivalent 135to selecting from a customized :func:`.aliased` construct:: 136 137 select_stmt = select([User]).where(User.id == 7) 138 user_from_stmt = aliased(User, select_stmt.alias()) 139 140 q = session.query(user_from_stmt).filter(user_from_stmt.name == 'ed') 141 142So with SQLAlchemy 0.9, our query that selects from ``select_stmt`` produces 143the SQL we expect:: 144 145 -- SQLAlchemy 0.9 146 SELECT "user".id AS user_id, "user".name AS user_name 147 FROM (SELECT "user".id AS id, "user".name AS name 148 FROM "user" 149 WHERE "user".id = :id_1) AS anon_1 JOIN "user" ON "user".id = id 150 WHERE "user".name = :name_1 151 152The :meth:`_query.Query.select_entity_from` method will be available in SQLAlchemy 153**0.8.2**, so applications which rely on the old behavior can transition 154to this method first, ensure all tests continue to function, then upgrade 155to 0.9 without issue. 156 157:ticket:`2736` 158 159 160.. _migration_2833: 161 162``viewonly=True`` on ``relationship()`` prevents history from taking effect 163--------------------------------------------------------------------------- 164 165The ``viewonly`` flag on :func:`_orm.relationship` is applied to prevent changes 166to the target attribute from having any effect within the flush process. 167This is achieved by eliminating the attribute from being considered during 168the flush. However, up until now, changes to the attribute would still 169register the parent object as "dirty" and trigger a potential flush. The change 170is that the ``viewonly`` flag now prevents history from being set for the 171target attribute as well. Attribute events like backrefs and user-defined events 172still continue to function normally. 173 174The change is illustrated as follows:: 175 176 from sqlalchemy import Column, Integer, ForeignKey, create_engine 177 from sqlalchemy.orm import backref, relationship, Session 178 from sqlalchemy.ext.declarative import declarative_base 179 from sqlalchemy import inspect 180 181 Base = declarative_base() 182 183 class A(Base): 184 __tablename__ = 'a' 185 id = Column(Integer, primary_key=True) 186 187 class B(Base): 188 __tablename__ = 'b' 189 190 id = Column(Integer, primary_key=True) 191 a_id = Column(Integer, ForeignKey('a.id')) 192 a = relationship("A", backref=backref("bs", viewonly=True)) 193 194 e = create_engine("sqlite://") 195 Base.metadata.create_all(e) 196 197 a = A() 198 b = B() 199 200 sess = Session(e) 201 sess.add_all([a, b]) 202 sess.commit() 203 204 b.a = a 205 206 assert b in sess.dirty 207 208 # before 0.9.0 209 # assert a in sess.dirty 210 # assert inspect(a).attrs.bs.history.has_changes() 211 212 # after 0.9.0 213 assert a not in sess.dirty 214 assert not inspect(a).attrs.bs.history.has_changes() 215 216:ticket:`2833` 217 218.. _migration_2751: 219 220Association Proxy SQL Expression Improvements and Fixes 221------------------------------------------------------- 222 223The ``==`` and ``!=`` operators as implemented by an association proxy 224that refers to a scalar value on a scalar relationship now produces 225a more complete SQL expression, intended to take into account 226the "association" row being present or not when the comparison is against 227``None``. 228 229Consider this mapping:: 230 231 class A(Base): 232 __tablename__ = 'a' 233 234 id = Column(Integer, primary_key=True) 235 236 b_id = Column(Integer, ForeignKey('b.id'), primary_key=True) 237 b = relationship("B") 238 b_value = association_proxy("b", "value") 239 240 class B(Base): 241 __tablename__ = 'b' 242 id = Column(Integer, primary_key=True) 243 value = Column(String) 244 245Up through 0.8, a query like the following:: 246 247 s.query(A).filter(A.b_value == None).all() 248 249would produce:: 250 251 SELECT a.id AS a_id, a.b_id AS a_b_id 252 FROM a 253 WHERE EXISTS (SELECT 1 254 FROM b 255 WHERE b.id = a.b_id AND b.value IS NULL) 256 257In 0.9, it now produces:: 258 259 SELECT a.id AS a_id, a.b_id AS a_b_id 260 FROM a 261 WHERE (EXISTS (SELECT 1 262 FROM b 263 WHERE b.id = a.b_id AND b.value IS NULL)) OR a.b_id IS NULL 264 265The difference being, it not only checks ``b.value``, it also checks 266if ``a`` refers to no ``b`` row at all. This will return different 267results versus prior versions, for a system that uses this type of 268comparison where some parent rows have no association row. 269 270More critically, a correct expression is emitted for ``A.b_value != None``. 271In 0.8, this would return ``True`` for ``A`` rows that had no ``b``:: 272 273 SELECT a.id AS a_id, a.b_id AS a_b_id 274 FROM a 275 WHERE NOT (EXISTS (SELECT 1 276 FROM b 277 WHERE b.id = a.b_id AND b.value IS NULL)) 278 279Now in 0.9, the check has been reworked so that it ensures 280the A.b_id row is present, in addition to ``B.value`` being 281non-NULL:: 282 283 SELECT a.id AS a_id, a.b_id AS a_b_id 284 FROM a 285 WHERE EXISTS (SELECT 1 286 FROM b 287 WHERE b.id = a.b_id AND b.value IS NOT NULL) 288 289In addition, the ``has()`` operator is enhanced such that you can 290call it against a scalar column value with no criterion only, 291and it will produce criteria that checks for the association row 292being present or not:: 293 294 s.query(A).filter(A.b_value.has()).all() 295 296output:: 297 298 SELECT a.id AS a_id, a.b_id AS a_b_id 299 FROM a 300 WHERE EXISTS (SELECT 1 301 FROM b 302 WHERE b.id = a.b_id) 303 304This is equivalent to ``A.b.has()``, but allows one to query 305against ``b_value`` directly. 306 307:ticket:`2751` 308 309.. _migration_2810: 310 311Association Proxy Missing Scalar returns None 312--------------------------------------------- 313 314An association proxy from a scalar attribute to a scalar will now return 315``None`` if the proxied object isn't present. This is consistent with the 316fact that missing many-to-ones return None in SQLAlchemy, so should the 317proxied value. E.g.:: 318 319 from sqlalchemy import * 320 from sqlalchemy.orm import * 321 from sqlalchemy.ext.declarative import declarative_base 322 from sqlalchemy.ext.associationproxy import association_proxy 323 324 Base = declarative_base() 325 326 class A(Base): 327 __tablename__ = 'a' 328 329 id = Column(Integer, primary_key=True) 330 b = relationship("B", uselist=False) 331 332 bname = association_proxy("b", "name") 333 334 class B(Base): 335 __tablename__ = 'b' 336 337 id = Column(Integer, primary_key=True) 338 a_id = Column(Integer, ForeignKey('a.id')) 339 name = Column(String) 340 341 a1 = A() 342 343 # this is how m2o's always have worked 344 assert a1.b is None 345 346 # but prior to 0.9, this would raise AttributeError, 347 # now returns None just like the proxied value. 348 assert a1.bname is None 349 350:ticket:`2810` 351 352 353.. _change_2787: 354 355attributes.get_history() will query from the DB by default if value not present 356------------------------------------------------------------------------------- 357 358A bugfix regarding :func:`.attributes.get_history` allows a column-based attribute 359to query out to the database for an unloaded value, assuming the ``passive`` 360flag is left at its default of ``PASSIVE_OFF``. Previously, this flag would 361not be honored. Additionally, a new method :meth:`.AttributeState.load_history` 362is added to complement the :attr:`.AttributeState.history` attribute, which 363will emit loader callables for an unloaded attribute. 364 365This is a small change demonstrated as follows:: 366 367 from sqlalchemy import Column, Integer, String, create_engine, inspect 368 from sqlalchemy.orm import Session, attributes 369 from sqlalchemy.ext.declarative import declarative_base 370 371 Base = declarative_base() 372 373 class A(Base): 374 __tablename__ = 'a' 375 id = Column(Integer, primary_key=True) 376 data = Column(String) 377 378 e = create_engine("sqlite://", echo=True) 379 Base.metadata.create_all(e) 380 381 sess = Session(e) 382 383 a1 = A(data='a1') 384 sess.add(a1) 385 sess.commit() # a1 is now expired 386 387 # history doesn't emit loader callables 388 assert inspect(a1).attrs.data.history == (None, None, None) 389 390 # in 0.8, this would fail to load the unloaded state. 391 assert attributes.get_history(a1, 'data') == ((), ['a1',], ()) 392 393 # load_history() is now equivalent to get_history() with 394 # passive=PASSIVE_OFF ^ INIT_OK 395 assert inspect(a1).attrs.data.load_history() == ((), ['a1',], ()) 396 397:ticket:`2787` 398 399.. _behavioral_changes_core_09: 400 401Behavioral Changes - Core 402========================= 403 404Type objects no longer accept ignored keyword arguments 405------------------------------------------------------- 406 407Up through the 0.8 series, most type objects accepted arbitrary keyword 408arguments which were silently ignored:: 409 410 from sqlalchemy import Date, Integer 411 412 # storage_format argument here has no effect on any backend; 413 # it needs to be on the SQLite-specific type 414 d = Date(storage_format="%(day)02d.%(month)02d.%(year)04d") 415 416 # display_width argument here has no effect on any backend; 417 # it needs to be on the MySQL-specific type 418 i = Integer(display_width=5) 419 420This was a very old bug for which a deprecation warning was added to the 4210.8 series, but because nobody ever runs Python with the "-W" flag, it 422was mostly never seen:: 423 424 425 $ python -W always::DeprecationWarning ~/dev/sqlalchemy/test.py 426 /Users/classic/dev/sqlalchemy/test.py:5: SADeprecationWarning: Passing arguments to 427 type object constructor <class 'sqlalchemy.types.Date'> is deprecated 428 d = Date(storage_format="%(day)02d.%(month)02d.%(year)04d") 429 /Users/classic/dev/sqlalchemy/test.py:9: SADeprecationWarning: Passing arguments to 430 type object constructor <class 'sqlalchemy.types.Integer'> is deprecated 431 i = Integer(display_width=5) 432 433As of the 0.9 series the "catch all" constructor is removed from 434:class:`.TypeEngine`, and these meaningless arguments are no longer accepted. 435 436The correct way to make use of dialect-specific arguments such as 437``storage_format`` and ``display_width`` is to use the appropriate 438dialect-specific types:: 439 440 from sqlalchemy.dialects.sqlite import DATE 441 from sqlalchemy.dialects.mysql import INTEGER 442 443 d = DATE(storage_format="%(day)02d.%(month)02d.%(year)04d") 444 445 i = INTEGER(display_width=5) 446 447What about the case where we want the dialect-agnostic type also? We 448use the :meth:`.TypeEngine.with_variant` method:: 449 450 from sqlalchemy import Date, Integer 451 from sqlalchemy.dialects.sqlite import DATE 452 from sqlalchemy.dialects.mysql import INTEGER 453 454 d = Date().with_variant( 455 DATE(storage_format="%(day)02d.%(month)02d.%(year)04d"), 456 "sqlite" 457 ) 458 459 i = Integer().with_variant( 460 INTEGER(display_width=5), 461 "mysql" 462 ) 463 464:meth:`.TypeEngine.with_variant` isn't new, it was added in SQLAlchemy 4650.7.2. So code that is running on the 0.8 series can be corrected to use 466this approach and tested before upgrading to 0.9. 467 468``None`` can no longer be used as a "partial AND" constructor 469-------------------------------------------------------------- 470 471``None`` can no longer be used as the "backstop" to form an AND condition piecemeal. 472This pattern was not a documented pattern even though some SQLAlchemy internals 473made use of it:: 474 475 condition = None 476 477 for cond in conditions: 478 condition = condition & cond 479 480 if condition is not None: 481 stmt = stmt.where(condition) 482 483The above sequence, when ``conditions`` is non-empty, will on 0.9 produce 484``SELECT .. WHERE <condition> AND NULL``. The ``None`` is no longer implicitly 485ignored, and is instead consistent with when ``None`` is interpreted in other 486contexts besides that of a conjunction. 487 488The correct code for both 0.8 and 0.9 should read:: 489 490 from sqlalchemy.sql import and_ 491 492 if conditions: 493 stmt = stmt.where(and_(*conditions)) 494 495Another variant that works on all backends on 0.9, but on 0.8 only works on 496backends that support boolean constants:: 497 498 from sqlalchemy.sql import true 499 500 condition = true() 501 502 for cond in conditions: 503 condition = cond & condition 504 505 stmt = stmt.where(condition) 506 507On 0.8, this will produce a SELECT statement that always has ``AND true`` 508in the WHERE clause, which is not accepted by backends that don't support 509boolean constants (MySQL, MSSQL). On 0.9, the ``true`` constant will be dropped 510within an ``and_()`` conjunction. 511 512.. seealso:: 513 514 :ref:`migration_2804` 515 516.. _migration_2873: 517 518The "password" portion of a ``create_engine()`` no longer considers the ``+`` sign as an encoded space 519------------------------------------------------------------------------------------------------------ 520 521For whatever reason, the Python function ``unquote_plus()`` was applied to the 522"password" field of a URL, which is an incorrect application of the 523encoding rules described in `RFC 1738 <http://www.ietf.org/rfc/rfc1738.txt>`_ 524in that it escaped spaces as plus signs. The stringification of a URL 525now only encodes ":", "@", or "/" and nothing else, and is now applied to both the 526``username`` and ``password`` fields (previously it only applied to the 527password). On parsing, encoded characters are converted, but plus signs and 528spaces are passed through as is:: 529 530 # password: "pass word + other:words" 531 dbtype://user:pass word + other%3Awords@host/dbname 532 533 # password: "apples/oranges" 534 dbtype://username:apples%2Foranges@hostspec/database 535 536 # password: "apples@oranges@@" 537 dbtype://username:apples%40oranges%40%40@hostspec/database 538 539 # password: '', username is "username@" 540 dbtype://username%40:@hostspec/database 541 542 543:ticket:`2873` 544 545.. _migration_2879: 546 547The precedence rules for COLLATE have been changed 548-------------------------------------------------- 549 550Previously, an expression like the following:: 551 552 print((column('x') == 'somevalue').collate("en_EN")) 553 554would produce an expression like this:: 555 556 -- 0.8 behavior 557 (x = :x_1) COLLATE en_EN 558 559The above is misunderstood by MSSQL and is generally not the syntax suggested 560for any database. The expression will now produce the syntax illustrated 561by that of most database documentation:: 562 563 -- 0.9 behavior 564 x = :x_1 COLLATE en_EN 565 566The potentially backwards incompatible change arises if the 567:meth:`.ColumnOperators.collate` operator is being applied to the right-hand 568column, as follows:: 569 570 print(column('x') == literal('somevalue').collate("en_EN")) 571 572In 0.8, this produces:: 573 574 x = :param_1 COLLATE en_EN 575 576However in 0.9, will now produce the more accurate, but probably not what you 577want, form of:: 578 579 x = (:param_1 COLLATE en_EN) 580 581The :meth:`.ColumnOperators.collate` operator now works more appropriately within an 582``ORDER BY`` expression as well, as a specific precedence has been given to the 583``ASC`` and ``DESC`` operators which will again ensure no parentheses are 584generated:: 585 586 >>> # 0.8 587 >>> print(column('x').collate('en_EN').desc()) 588 (x COLLATE en_EN) DESC 589 590 >>> # 0.9 591 >>> print(column('x').collate('en_EN').desc()) 592 x COLLATE en_EN DESC 593 594:ticket:`2879` 595 596 597 598.. _migration_2878: 599 600PostgreSQL CREATE TYPE <x> AS ENUM now applies quoting to values 601---------------------------------------------------------------- 602 603The :class:`_postgresql.ENUM` type will now apply escaping to single quote 604signs within the enumerated values:: 605 606 >>> from sqlalchemy.dialects import postgresql 607 >>> type = postgresql.ENUM('one', 'two', "three's", name="myenum") 608 >>> from sqlalchemy.dialects.postgresql import base 609 >>> print(base.CreateEnumType(type).compile(dialect=postgresql.dialect())) 610 CREATE TYPE myenum AS ENUM ('one','two','three''s') 611 612Existing workarounds which already escape single quote signs will need to be 613modified, else they will now double-escape. 614 615:ticket:`2878` 616 617New Features 618============ 619 620.. _feature_2268: 621 622Event Removal API 623----------------- 624 625Events established using :func:`.event.listen` or :func:`.event.listens_for` 626can now be removed using the new :func:`.event.remove` function. The ``target``, 627``identifier`` and ``fn`` arguments sent to :func:`.event.remove` need to match 628exactly those which were sent for listening, and the event will be removed 629from all locations in which it had been established:: 630 631 @event.listens_for(MyClass, "before_insert", propagate=True) 632 def my_before_insert(mapper, connection, target): 633 """listen for before_insert""" 634 # ... 635 636 event.remove(MyClass, "before_insert", my_before_insert) 637 638In the example above, the ``propagate=True`` flag is set. This 639means ``my_before_insert()`` is established as a listener for ``MyClass`` 640as well as all subclasses of ``MyClass``. 641The system tracks everywhere that the ``my_before_insert()`` 642listener function had been placed as a result of this call and removes it as 643a result of calling :func:`.event.remove`. 644 645The removal system uses a registry to associate arguments passed to 646:func:`.event.listen` with collections of event listeners, which are in many 647cases wrapped versions of the original user-supplied function. This registry 648makes heavy use of weak references in order to allow all the contained contents, 649such as listener targets, to be garbage collected when they go out of scope. 650 651:ticket:`2268` 652 653.. _feature_1418: 654 655New Query Options API; ``load_only()`` option 656--------------------------------------------- 657 658The system of loader options such as :func:`_orm.joinedload`, 659:func:`_orm.subqueryload`, :func:`_orm.lazyload`, :func:`_orm.defer`, etc. 660all build upon a new system known as :class:`_orm.Load`. :class:`_orm.Load` provides 661a "method chained" (a.k.a. :term:`generative`) approach to loader options, so that 662instead of joining together long paths using dots or multiple attribute names, 663an explicit loader style is given for each path. 664 665While the new way is slightly more verbose, it is simpler to understand 666in that there is no ambiguity in what options are being applied to which paths; 667it simplifies the method signatures of the options and provides greater flexibility 668particularly for column-based options. The old systems are to remain functional 669indefinitely as well and all styles can be mixed. 670 671**Old Way** 672 673To set a certain style of loading along every link in a multi-element path, the ``_all()`` 674option has to be used:: 675 676 query(User).options(joinedload_all("orders.items.keywords")) 677 678**New Way** 679 680Loader options are now chainable, so the same ``joinedload(x)`` method is applied 681equally to each link, without the need to keep straight between 682:func:`_orm.joinedload` and :func:`_orm.joinedload_all`:: 683 684 query(User).options(joinedload("orders").joinedload("items").joinedload("keywords")) 685 686**Old Way** 687 688Setting an option on path that is based on a subclass requires that all 689links in the path be spelled out as class bound attributes, since the 690:meth:`.PropComparator.of_type` method needs to be called:: 691 692 session.query(Company).\ 693 options( 694 subqueryload_all( 695 Company.employees.of_type(Engineer), 696 Engineer.machines 697 ) 698 ) 699 700**New Way** 701 702Only those elements in the path that actually need :meth:`.PropComparator.of_type` 703need to be set as a class-bound attribute, string-based names can be resumed 704afterwards:: 705 706 session.query(Company).\ 707 options( 708 subqueryload(Company.employees.of_type(Engineer)). 709 subqueryload("machines") 710 ) 711 ) 712 713**Old Way** 714 715Setting the loader option on the last link in a long path uses a syntax 716that looks a lot like it should be setting the option for all links in the 717path, causing confusion:: 718 719 query(User).options(subqueryload("orders.items.keywords")) 720 721**New Way** 722 723A path can now be spelled out using :func:`.defaultload` for entries in the 724path where the existing loader style should be unchanged. More verbose 725but the intent is clearer:: 726 727 query(User).options(defaultload("orders").defaultload("items").subqueryload("keywords")) 728 729 730The dotted style can still be taken advantage of, particularly in the case 731of skipping over several path elements:: 732 733 query(User).options(defaultload("orders.items").subqueryload("keywords")) 734 735**Old Way** 736 737The :func:`.defer` option on a path needed to be spelled out with the full 738path for each column:: 739 740 query(User).options(defer("orders.description"), defer("orders.isopen")) 741 742**New Way** 743 744A single :class:`_orm.Load` object that arrives at the target path can have 745:meth:`_orm.Load.defer` called upon it repeatedly:: 746 747 query(User).options(defaultload("orders").defer("description").defer("isopen")) 748 749The Load Class 750^^^^^^^^^^^^^^^ 751 752The :class:`_orm.Load` class can be used directly to provide a "bound" target, 753especially when multiple parent entities are present:: 754 755 from sqlalchemy.orm import Load 756 757 query(User, Address).options(Load(Address).joinedload("entries")) 758 759Load Only 760^^^^^^^^^ 761 762A new option :func:`.load_only` achieves a "defer everything but" style of load, 763loading only the given columns and deferring the rest:: 764 765 from sqlalchemy.orm import load_only 766 767 query(User).options(load_only("name", "fullname")) 768 769 # specify explicit parent entity 770 query(User, Address).options(Load(User).load_only("name", "fullname")) 771 772 # specify path 773 query(User).options(joinedload(User.addresses).load_only("email_address")) 774 775Class-specific Wildcards 776^^^^^^^^^^^^^^^^^^^^^^^^^ 777 778Using :class:`_orm.Load`, a wildcard may be used to set the loading for all 779relationships (or perhaps columns) on a given entity, without affecting any 780others:: 781 782 # lazyload all User relationships 783 query(User).options(Load(User).lazyload("*")) 784 785 # undefer all User columns 786 query(User).options(Load(User).undefer("*")) 787 788 # lazyload all Address relationships 789 query(User).options(defaultload(User.addresses).lazyload("*")) 790 791 # undefer all Address columns 792 query(User).options(defaultload(User.addresses).undefer("*")) 793 794 795:ticket:`1418` 796 797 798.. _feature_2877: 799 800New ``text()`` Capabilities 801--------------------------- 802 803The :func:`_expression.text` construct gains new methods: 804 805* :meth:`_expression.TextClause.bindparams` allows bound parameter types and values 806 to be set flexibly:: 807 808 # setup values 809 stmt = text("SELECT id, name FROM user " 810 "WHERE name=:name AND timestamp=:timestamp").\ 811 bindparams(name="ed", timestamp=datetime(2012, 11, 10, 15, 12, 35)) 812 813 # setup types and/or values 814 stmt = text("SELECT id, name FROM user " 815 "WHERE name=:name AND timestamp=:timestamp").\ 816 bindparams( 817 bindparam("name", value="ed"), 818 bindparam("timestamp", type_=DateTime() 819 ).bindparam(timestamp=datetime(2012, 11, 10, 15, 12, 35)) 820 821* :meth:`_expression.TextClause.columns` supersedes the ``typemap`` option 822 of :func:`_expression.text`, returning a new construct :class:`.TextAsFrom`:: 823 824 # turn a text() into an alias(), with a .c. collection: 825 stmt = text("SELECT id, name FROM user").columns(id=Integer, name=String) 826 stmt = stmt.alias() 827 828 stmt = select([addresses]).select_from( 829 addresses.join(stmt), addresses.c.user_id == stmt.c.id) 830 831 832 # or into a cte(): 833 stmt = text("SELECT id, name FROM user").columns(id=Integer, name=String) 834 stmt = stmt.cte("x") 835 836 stmt = select([addresses]).select_from( 837 addresses.join(stmt), addresses.c.user_id == stmt.c.id) 838 839:ticket:`2877` 840 841.. _feature_722: 842 843INSERT from SELECT 844------------------ 845 846After literally years of pointless procrastination this relatively minor 847syntactical feature has been added, and is also backported to 0.8.3, 848so technically isn't "new" in 0.9. A :func:`_expression.select` construct or other 849compatible construct can be passed to the new method :meth:`_expression.Insert.from_select` 850where it will be used to render an ``INSERT .. SELECT`` construct:: 851 852 >>> from sqlalchemy.sql import table, column 853 >>> t1 = table('t1', column('a'), column('b')) 854 >>> t2 = table('t2', column('x'), column('y')) 855 >>> print(t1.insert().from_select(['a', 'b'], t2.select().where(t2.c.y == 5))) 856 INSERT INTO t1 (a, b) SELECT t2.x, t2.y 857 FROM t2 858 WHERE t2.y = :y_1 859 860The construct is smart enough to also accommodate ORM objects such as classes 861and :class:`_query.Query` objects:: 862 863 s = Session() 864 q = s.query(User.id, User.name).filter_by(name='ed') 865 ins = insert(Address).from_select((Address.id, Address.email_address), q) 866 867rendering:: 868 869 INSERT INTO addresses (id, email_address) 870 SELECT users.id AS users_id, users.name AS users_name 871 FROM users WHERE users.name = :name_1 872 873:ticket:`722` 874 875.. _feature_github_42: 876 877New FOR UPDATE support on ``select()``, ``Query()`` 878--------------------------------------------------- 879 880An attempt is made to simplify the specification of the ``FOR UPDATE`` 881clause on ``SELECT`` statements made within Core and ORM, and support is added 882for the ``FOR UPDATE OF`` SQL supported by PostgreSQL and Oracle. 883 884Using the core :meth:`_expression.GenerativeSelect.with_for_update`, options like ``FOR SHARE`` and 885``NOWAIT`` can be specified individually, rather than linking to arbitrary 886string codes:: 887 888 stmt = select([table]).with_for_update(read=True, nowait=True, of=table) 889 890On Posgtresql the above statement might render like:: 891 892 SELECT table.a, table.b FROM table FOR SHARE OF table NOWAIT 893 894The :class:`_query.Query` object gains a similar method :meth:`_query.Query.with_for_update` 895which behaves in the same way. This method supersedes the existing 896:meth:`_query.Query.with_lockmode` method, which translated ``FOR UPDATE`` clauses 897using a different system. At the moment, the "lockmode" string argument is still 898accepted by the :meth:`.Session.refresh` method. 899 900 901.. _feature_2867: 902 903Floating Point String-Conversion Precision Configurable for Native Floating Point Types 904--------------------------------------------------------------------------------------- 905 906The conversion which SQLAlchemy does whenever a DBAPI returns a Python 907floating point type which is to be converted into a Python ``Decimal()`` 908necessarily involves an intermediary step which converts the floating point 909value to a string. The scale used for this string conversion was previously 910hardcoded to 10, and is now configurable. The setting is available on 911both the :class:`.Numeric` as well as the :class:`.Float` 912type, as well as all SQL- and dialect-specific descendant types, using the 913parameter ``decimal_return_scale``. If the type supports a ``.scale`` parameter, 914as is the case with :class:`.Numeric` and some float types such as 915:class:`.mysql.DOUBLE`, the value of ``.scale`` is used as the default 916for ``.decimal_return_scale`` if it is not otherwise specified. If both 917``.scale`` and ``.decimal_return_scale`` are absent, then the default of 91810 takes place. E.g.:: 919 920 from sqlalchemy.dialects.mysql import DOUBLE 921 import decimal 922 923 data = Table('data', metadata, 924 Column('double_value', 925 mysql.DOUBLE(decimal_return_scale=12, asdecimal=True)) 926 ) 927 928 conn.execute( 929 data.insert(), 930 double_value=45.768392065789, 931 ) 932 result = conn.scalar(select([data.c.double_value])) 933 934 # previously, this would typically be Decimal("45.7683920658"), 935 # e.g. trimmed to 10 decimal places 936 937 # now we get 12, as requested, as MySQL can support this 938 # much precision for DOUBLE 939 assert result == decimal.Decimal("45.768392065789") 940 941 942:ticket:`2867` 943 944 945.. _change_2824: 946 947Column Bundles for ORM queries 948------------------------------ 949 950The :class:`.Bundle` allows for querying of sets of columns, which are then 951grouped into one name under the tuple returned by the query. The initial 952purposes of :class:`.Bundle` are 1. to allow "composite" ORM columns to be 953returned as a single value in a column-based result set, rather than expanding 954them out into individual columns and 2. to allow the creation of custom result-set 955constructs within the ORM, using ad-hoc columns and return types, without involving 956the more heavyweight mechanics of mapped classes. 957 958.. seealso:: 959 960 :ref:`migration_2824` 961 962 :ref:`bundles` 963 964:ticket:`2824` 965 966 967Server Side Version Counting 968----------------------------- 969 970The versioning feature of the ORM (now also documented at :ref:`mapper_version_counter`) 971can now make use of server-side version counting schemes, such as those produced 972by triggers or database system columns, as well as conditional programmatic schemes outside 973of the version_id_counter function itself. By providing the value ``False`` 974to the ``version_id_generator`` parameter, the ORM will use the already-set version 975identifier, or alternatively fetch the version identifier 976from each row at the same time the INSERT or UPDATE is emitted. When using a 977server-generated version identifier, it is strongly 978recommended that this feature be used only on a backend with strong RETURNING 979support (PostgreSQL, SQL Server; Oracle also supports RETURNING but the cx_oracle 980driver has only limited support), else the additional SELECT statements will 981add significant performance 982overhead. The example provided at :ref:`server_side_version_counter` illustrates 983the usage of the PostgreSQL ``xmin`` system column in order to integrate it with 984the ORM's versioning feature. 985 986.. seealso:: 987 988 :ref:`server_side_version_counter` 989 990:ticket:`2793` 991 992.. _feature_1535: 993 994``include_backrefs=False`` option for ``@validates`` 995---------------------------------------------------- 996 997The :func:`.validates` function now accepts an option ``include_backrefs=True``, 998which will bypass firing the validator for the case where the event initiated 999from a backref:: 1000 1001 from sqlalchemy import Column, Integer, ForeignKey 1002 from sqlalchemy.orm import relationship, validates 1003 from sqlalchemy.ext.declarative import declarative_base 1004 1005 Base = declarative_base() 1006 1007 class A(Base): 1008 __tablename__ = 'a' 1009 1010 id = Column(Integer, primary_key=True) 1011 bs = relationship("B", backref="a") 1012 1013 @validates("bs") 1014 def validate_bs(self, key, item): 1015 print("A.bs validator") 1016 return item 1017 1018 class B(Base): 1019 __tablename__ = 'b' 1020 1021 id = Column(Integer, primary_key=True) 1022 a_id = Column(Integer, ForeignKey('a.id')) 1023 1024 @validates("a", include_backrefs=False) 1025 def validate_a(self, key, item): 1026 print("B.a validator") 1027 return item 1028 1029 a1 = A() 1030 a1.bs.append(B()) # prints only "A.bs validator" 1031 1032 1033:ticket:`1535` 1034 1035 1036PostgreSQL JSON Type 1037-------------------- 1038 1039The PostgreSQL dialect now features a :class:`_postgresql.JSON` type to 1040complement the :class:`_postgresql.HSTORE` type. 1041 1042.. seealso:: 1043 1044 :class:`_postgresql.JSON` 1045 1046:ticket:`2581` 1047 1048.. _feature_automap: 1049 1050Automap Extension 1051----------------- 1052 1053A new extension is added in **0.9.1** known as :mod:`sqlalchemy.ext.automap`. This is an 1054**experimental** extension which expands upon the functionality of Declarative 1055as well as the :class:`.DeferredReflection` class. Essentially, the extension 1056provides a base class :class:`.AutomapBase` which automatically generates 1057mapped classes and relationships between them based on given table metadata. 1058 1059The :class:`_schema.MetaData` in use normally might be produced via reflection, but 1060there is no requirement that reflection is used. The most basic usage 1061illustrates how :mod:`sqlalchemy.ext.automap` is able to deliver mapped 1062classes, including relationships, based on a reflected schema:: 1063 1064 from sqlalchemy.ext.automap import automap_base 1065 from sqlalchemy.orm import Session 1066 from sqlalchemy import create_engine 1067 1068 Base = automap_base() 1069 1070 # engine, suppose it has two tables 'user' and 'address' set up 1071 engine = create_engine("sqlite:///mydatabase.db") 1072 1073 # reflect the tables 1074 Base.prepare(engine, reflect=True) 1075 1076 # mapped classes are now created with names matching that of the table 1077 # name. 1078 User = Base.classes.user 1079 Address = Base.classes.address 1080 1081 session = Session(engine) 1082 1083 # rudimentary relationships are produced 1084 session.add(Address(email_address="foo@bar.com", user=User(name="foo"))) 1085 session.commit() 1086 1087 # collection-based relationships are by default named "<classname>_collection" 1088 print(u1.address_collection) 1089 1090Beyond that, the :class:`.AutomapBase` class is a declarative base, and supports 1091all the features that declarative does. The "automapping" feature can be used 1092with an existing, explicitly declared schema to generate relationships and 1093missing classes only. Naming schemes and relationship-production routines 1094can be dropped in using callable functions. 1095 1096It is hoped that the :class:`.AutomapBase` system provides a quick 1097and modernized solution to the problem that the very famous 1098`SQLSoup <https://sqlsoup.readthedocs.io/en/latest/>`_ 1099also tries to solve, that of generating a quick and rudimentary object 1100model from an existing database on the fly. By addressing the issue strictly 1101at the mapper configuration level, and integrating fully with existing 1102Declarative class techniques, :class:`.AutomapBase` seeks to provide 1103a well-integrated approach to the issue of expediently auto-generating ad-hoc 1104mappings. 1105 1106.. seealso:: 1107 1108 :ref:`automap_toplevel` 1109 1110Behavioral Improvements 1111======================= 1112 1113Improvements that should produce no compatibility issues except in exceedingly 1114rare and unusual hypothetical cases, but are good to be aware of in case there are 1115unexpected issues. 1116 1117.. _feature_joins_09: 1118 1119Many JOIN and LEFT OUTER JOIN expressions will no longer be wrapped in (SELECT * FROM ..) AS ANON_1 1120--------------------------------------------------------------------------------------------------- 1121 1122For many years, the SQLAlchemy ORM has been held back from being able to nest 1123a JOIN inside the right side of an existing JOIN (typically a LEFT OUTER JOIN, 1124as INNER JOINs could always be flattened):: 1125 1126 SELECT a.*, b.*, c.* FROM a LEFT OUTER JOIN (b JOIN c ON b.id = c.id) ON a.id 1127 1128This was due to the fact that SQLite up until version **3.7.16** cannot parse a statement of the above format:: 1129 1130 SQLite version 3.7.15.2 2013-01-09 11:53:05 1131 Enter ".help" for instructions 1132 Enter SQL statements terminated with a ";" 1133 sqlite> create table a(id integer); 1134 sqlite> create table b(id integer); 1135 sqlite> create table c(id integer); 1136 sqlite> select a.id, b.id, c.id from a left outer join (b join c on b.id=c.id) on b.id=a.id; 1137 Error: no such column: b.id 1138 1139Right-outer-joins are of course another way to work around right-side 1140parenthesization; this would be significantly complicated and visually unpleasant 1141to implement, but fortunately SQLite doesn't support RIGHT OUTER JOIN either :):: 1142 1143 sqlite> select a.id, b.id, c.id from b join c on b.id=c.id 1144 ...> right outer join a on b.id=a.id; 1145 Error: RIGHT and FULL OUTER JOINs are not currently supported 1146 1147Back in 2005, it wasn't clear if other databases had trouble with this form, 1148but today it seems clear every database tested except SQLite now supports it 1149(Oracle 8, a very old database, doesn't support the JOIN keyword at all, 1150but SQLAlchemy has always had a simple rewriting scheme in place for Oracle's syntax). 1151To make matters worse, SQLAlchemy's usual workaround of applying a 1152SELECT often degrades performance on platforms like PostgreSQL and MySQL:: 1153 1154 SELECT a.*, anon_1.* FROM a LEFT OUTER JOIN ( 1155 SELECT b.id AS b_id, c.id AS c_id 1156 FROM b JOIN c ON b.id = c.id 1157 ) AS anon_1 ON a.id=anon_1.b_id 1158 1159A JOIN like the above form is commonplace when working with joined-table inheritance structures; 1160any time :meth:`_query.Query.join` is used to join from some parent to a joined-table subclass, or 1161when :func:`_orm.joinedload` is used similarly, SQLAlchemy's ORM would always make sure a nested 1162JOIN was never rendered, lest the query wouldn't be able to run on SQLite. Even though 1163the Core has always supported a JOIN of the more compact form, the ORM had to avoid it. 1164 1165An additional issue would arise when producing joins across many-to-many relationships 1166where special criteria is present in the ON clause. Consider an eager load join like the following:: 1167 1168 session.query(Order).outerjoin(Order.items) 1169 1170Assuming a many-to-many from ``Order`` to ``Item`` which actually refers to a subclass 1171like ``Subitem``, the SQL for the above would look like:: 1172 1173 SELECT order.id, order.name 1174 FROM order LEFT OUTER JOIN order_item ON order.id = order_item.order_id 1175 LEFT OUTER JOIN item ON order_item.item_id = item.id AND item.type = 'subitem' 1176 1177What's wrong with the above query? Basically, that it will load many ``order`` / 1178``order_item`` rows where the criteria of ``item.type == 'subitem'`` is not true. 1179 1180As of SQLAlchemy 0.9, an entirely new approach has been taken. The ORM no longer 1181worries about nesting JOINs in the right side of an enclosing JOIN, and it now will 1182render these as often as possible while still returning the correct results. When 1183the SQL statement is passed to be compiled, the **dialect compiler** will **rewrite the join** 1184to suit the target backend, if that backend is known to not support a right-nested 1185JOIN (which currently is only SQLite - if other backends have this issue please 1186let us know!). 1187 1188So a regular ``query(Parent).join(Subclass)`` will now usually produce a simpler 1189expression:: 1190 1191 SELECT parent.id AS parent_id 1192 FROM parent JOIN ( 1193 base_table JOIN subclass_table 1194 ON base_table.id = subclass_table.id) ON parent.id = base_table.parent_id 1195 1196Joined eager loads like ``query(Parent).options(joinedload(Parent.subclasses))`` 1197will alias the individual tables instead of wrapping in an ``ANON_1``:: 1198 1199 SELECT parent.*, base_table_1.*, subclass_table_1.* FROM parent 1200 LEFT OUTER JOIN ( 1201 base_table AS base_table_1 JOIN subclass_table AS subclass_table_1 1202 ON base_table_1.id = subclass_table_1.id) 1203 ON parent.id = base_table_1.parent_id 1204 1205Many-to-many joins and eagerloads will right nest the "secondary" and "right" tables:: 1206 1207 SELECT order.id, order.name 1208 FROM order LEFT OUTER JOIN 1209 (order_item JOIN item ON order_item.item_id = item.id AND item.type = 'subitem') 1210 ON order_item.order_id = order.id 1211 1212All of these joins, when rendered with a :class:`_expression.Select` statement that specifically 1213specifies ``use_labels=True``, which is true for all the queries the ORM emits, 1214are candidates for "join rewriting", which is the process of rewriting all those right-nested 1215joins into nested SELECT statements, while maintaining the identical labeling used by 1216the :class:`_expression.Select`. So SQLite, the one database that won't support this very 1217common SQL syntax even in 2013, shoulders the extra complexity itself, 1218with the above queries rewritten as:: 1219 1220 -- sqlite only! 1221 SELECT parent.id AS parent_id 1222 FROM parent JOIN ( 1223 SELECT base_table.id AS base_table_id, 1224 base_table.parent_id AS base_table_parent_id, 1225 subclass_table.id AS subclass_table_id 1226 FROM base_table JOIN subclass_table ON base_table.id = subclass_table.id 1227 ) AS anon_1 ON parent.id = anon_1.base_table_parent_id 1228 1229 -- sqlite only! 1230 SELECT parent.id AS parent_id, anon_1.subclass_table_1_id AS subclass_table_1_id, 1231 anon_1.base_table_1_id AS base_table_1_id, 1232 anon_1.base_table_1_parent_id AS base_table_1_parent_id 1233 FROM parent LEFT OUTER JOIN ( 1234 SELECT base_table_1.id AS base_table_1_id, 1235 base_table_1.parent_id AS base_table_1_parent_id, 1236 subclass_table_1.id AS subclass_table_1_id 1237 FROM base_table AS base_table_1 1238 JOIN subclass_table AS subclass_table_1 ON base_table_1.id = subclass_table_1.id 1239 ) AS anon_1 ON parent.id = anon_1.base_table_1_parent_id 1240 1241 -- sqlite only! 1242 SELECT "order".id AS order_id 1243 FROM "order" LEFT OUTER JOIN ( 1244 SELECT order_item_1.order_id AS order_item_1_order_id, 1245 order_item_1.item_id AS order_item_1_item_id, 1246 item.id AS item_id, item.type AS item_type 1247 FROM order_item AS order_item_1 1248 JOIN item ON item.id = order_item_1.item_id AND item.type IN (?) 1249 ) AS anon_1 ON "order".id = anon_1.order_item_1_order_id 1250 1251.. note:: 1252 1253 As of SQLAlchemy 1.1, the workarounds present in this feature for SQLite 1254 will automatically disable themselves when SQLite version **3.7.16** 1255 or greater is detected, as SQLite has repaired support for right-nested joins. 1256 1257The :meth:`_expression.Join.alias`, :func:`.aliased` and :func:`.with_polymorphic` functions now 1258support a new argument, ``flat=True``, which is used to construct aliases of joined-table 1259entities without embedding into a SELECT. This flag is not on by default, to help with 1260backwards compatibility - but now a "polymorphic" selectable can be joined as a target 1261without any subqueries generated:: 1262 1263 employee_alias = with_polymorphic(Person, [Engineer, Manager], flat=True) 1264 1265 session.query(Company).join( 1266 Company.employees.of_type(employee_alias) 1267 ).filter( 1268 or_( 1269 Engineer.primary_language == 'python', 1270 Manager.manager_name == 'dilbert' 1271 ) 1272 ) 1273 1274Generates (everywhere except SQLite):: 1275 1276 SELECT companies.company_id AS companies_company_id, companies.name AS companies_name 1277 FROM companies JOIN ( 1278 people AS people_1 1279 LEFT OUTER JOIN engineers AS engineers_1 ON people_1.person_id = engineers_1.person_id 1280 LEFT OUTER JOIN managers AS managers_1 ON people_1.person_id = managers_1.person_id 1281 ) ON companies.company_id = people_1.company_id 1282 WHERE engineers.primary_language = %(primary_language_1)s 1283 OR managers.manager_name = %(manager_name_1)s 1284 1285:ticket:`2369` :ticket:`2587` 1286 1287.. _feature_2976: 1288 1289Right-nested inner joins available in joined eager loads 1290--------------------------------------------------------- 1291 1292As of version 0.9.4, the above mentioned right-nested joining can be enabled 1293in the case of a joined eager load where an "outer" join is linked to an "inner" 1294on the right side. 1295 1296Normally, a joined eager load chain like the following:: 1297 1298 query(User).options(joinedload("orders", innerjoin=False).joinedload("items", innerjoin=True)) 1299 1300Would not produce an inner join; because of the LEFT OUTER JOIN from user->order, 1301joined eager loading could not use an INNER join from order->items without changing 1302the user rows that are returned, and would instead ignore the "chained" ``innerjoin=True`` 1303directive. How 0.9.0 should have delivered this would be that instead of:: 1304 1305 FROM users LEFT OUTER JOIN orders ON <onclause> LEFT OUTER JOIN items ON <onclause> 1306 1307the new "right-nested joins are OK" logic would kick in, and we'd get:: 1308 1309 FROM users LEFT OUTER JOIN (orders JOIN items ON <onclause>) ON <onclause> 1310 1311Since we missed the boat on that, to avoid further regressions we've added the above 1312functionality by specifying the string ``"nested"`` to :paramref:`_orm.joinedload.innerjoin`:: 1313 1314 query(User).options(joinedload("orders", innerjoin=False).joinedload("items", innerjoin="nested")) 1315 1316This feature is new in 0.9.4. 1317 1318:ticket:`2976` 1319 1320 1321 1322ORM can efficiently fetch just-generated INSERT/UPDATE defaults using RETURNING 1323------------------------------------------------------------------------------- 1324 1325The :class:`_orm.Mapper` has long supported an undocumented flag known as 1326``eager_defaults=True``. The effect of this flag is that when an INSERT or UPDATE 1327proceeds, and the row is known to have server-generated default values, 1328a SELECT would immediately follow it in order to "eagerly" load those new values. 1329Normally, the server-generated columns are marked as "expired" on the object, 1330so that no overhead is incurred unless the application actually accesses these 1331columns soon after the flush. The ``eager_defaults`` flag was therefore not 1332of much use as it could only decrease performance, and was present only to support 1333exotic event schemes where users needed default values to be available 1334immediately within the flush process. 1335 1336In 0.9, as a result of the version id enhancements, ``eager_defaults`` can now 1337emit a RETURNING clause for these values, so on a backend with strong RETURNING 1338support in particular PostgreSQL, the ORM can fetch newly generated default 1339and SQL expression values inline with the INSERT or UPDATE. ``eager_defaults``, 1340when enabled, makes use of RETURNING automatically when the target backend 1341and :class:`_schema.Table` supports "implicit returning". 1342 1343.. _change_2836: 1344 1345Subquery Eager Loading will apply DISTINCT to the innermost SELECT for some queries 1346------------------------------------------------------------------------------------ 1347 1348In an effort to reduce the number of duplicate rows that can be generated 1349by subquery eager loading when a many-to-one relationship is involved, a 1350DISTINCT keyword will be applied to the innermost SELECT when the join is 1351targeting columns that do not comprise the primary key, as in when loading 1352along a many to one. 1353 1354That is, when subquery loading on a many-to-one from A->B:: 1355 1356 SELECT b.id AS b_id, b.name AS b_name, anon_1.b_id AS a_b_id 1357 FROM (SELECT DISTINCT a_b_id FROM a) AS anon_1 1358 JOIN b ON b.id = anon_1.a_b_id 1359 1360Since ``a.b_id`` is a non-distinct foreign key, DISTINCT is applied so that 1361redundant ``a.b_id`` are eliminated. The behavior can be turned on or off 1362unconditionally for a particular :func:`_orm.relationship` using the flag 1363``distinct_target_key``, setting the value to ``True`` for unconditionally 1364on, ``False`` for unconditionally off, and ``None`` for the feature to take 1365effect when the target SELECT is against columns that do not comprise a full 1366primary key. In 0.9, ``None`` is the default. 1367 1368The option is also backported to 0.8 where the ``distinct_target_key`` 1369option defaults to ``False``. 1370 1371While the feature here is designed to help performance by eliminating 1372duplicate rows, the ``DISTINCT`` keyword in SQL itself can have a negative 1373performance impact. If columns in the SELECT are not indexed, ``DISTINCT`` 1374will likely perform an ``ORDER BY`` on the rowset which can be expensive. 1375By keeping the feature limited just to foreign keys which are hopefully 1376indexed in any case, it's expected that the new defaults are reasonable. 1377 1378The feature also does not eliminate every possible dupe-row scenario; if 1379a many-to-one is present elsewhere in the chain of joins, dupe rows may still 1380be present. 1381 1382:ticket:`2836` 1383 1384.. _migration_2789: 1385 1386Backref handlers can now propagate more than one level deep 1387----------------------------------------------------------- 1388 1389The mechanism by which attribute events pass along their "initiator", that is 1390the object associated with the start of the event, has been changed; instead 1391of a :class:`.AttributeImpl` being passed, a new object :class:`.attributes.Event` 1392is passed instead; this object refers to the :class:`.AttributeImpl` as well as 1393to an "operation token", representing if the operation is an append, remove, 1394or replace operation. 1395 1396The attribute event system no longer looks at this "initiator" object in order to halt a 1397recursive series of attribute events. Instead, the system of preventing endless 1398recursion due to mutually-dependent backref handlers has been moved 1399to the ORM backref event handlers specifically, which now take over the role 1400of ensuring that a chain of mutually-dependent events (such as append to collection 1401A.bs, set many-to-one attribute B.a in response) doesn't go into an endless recursion 1402stream. The rationale here is that the backref system, given more detail and control 1403over event propagation, can finally allow operations more than one level deep 1404to occur; the typical scenario is when a collection append results in a many-to-one 1405replacement operation, which in turn should cause the item to be removed from a 1406previous collection:: 1407 1408 class Parent(Base): 1409 __tablename__ = 'parent' 1410 1411 id = Column(Integer, primary_key=True) 1412 children = relationship("Child", backref="parent") 1413 1414 class Child(Base): 1415 __tablename__ = 'child' 1416 1417 id = Column(Integer, primary_key=True) 1418 parent_id = Column(ForeignKey('parent.id')) 1419 1420 p1 = Parent() 1421 p2 = Parent() 1422 c1 = Child() 1423 1424 p1.children.append(c1) 1425 1426 assert c1.parent is p1 # backref event establishes c1.parent as p1 1427 1428 p2.children.append(c1) 1429 1430 assert c1.parent is p2 # backref event establishes c1.parent as p2 1431 assert c1 not in p1.children # second backref event removes c1 from p1.children 1432 1433Above, prior to this change, the ``c1`` object would still have been present 1434in ``p1.children``, even though it is also present in ``p2.children`` at the 1435same time; the backref handlers would have stopped at replacing ``c1.parent`` with 1436``p2`` instead of ``p1``. In 0.9, using the more detailed :class:`.Event` 1437object as well as letting the backref handlers make more detailed decisions about 1438these objects, the propagation can continue onto removing ``c1`` from ``p1.children`` 1439while maintaining a check against the propagation from going into an endless 1440recursive loop. 1441 1442End-user code which a. makes use of the :meth:`.AttributeEvents.set`, 1443:meth:`.AttributeEvents.append`, or :meth:`.AttributeEvents.remove` events, 1444and b. initiates further attribute modification operations as a result of these 1445events may need to be modified to prevent recursive loops, as the attribute system 1446no longer stops a chain of events from propagating endlessly in the absence of the backref 1447event handlers. Additionally, code which depends upon the value of the ``initiator`` 1448will need to be adjusted to the new API, and furthermore must be ready for the 1449value of ``initiator`` to change from its original value within a string of 1450backref-initiated events, as the backref handlers may now swap in a 1451new ``initiator`` value for some operations. 1452 1453:ticket:`2789` 1454 1455.. _change_2838: 1456 1457The typing system now handles the task of rendering "literal bind" values 1458------------------------------------------------------------------------- 1459 1460A new method is added to :class:`.TypeEngine` :meth:`.TypeEngine.literal_processor` 1461as well as :meth:`.TypeDecorator.process_literal_param` for :class:`.TypeDecorator` 1462which take on the task of rendering so-called "inline literal parameters" - parameters 1463that normally render as "bound" values, but are instead being rendered inline 1464into the SQL statement due to the compiler configuration. This feature is used 1465when generating DDL for constructs such as :class:`.CheckConstraint`, as well 1466as by Alembic when using constructs such as ``op.inline_literal()``. Previously, 1467a simple "isinstance" check checked for a few basic types, and the "bind processor" 1468was used unconditionally, leading to such issues as strings being encoded into utf-8 1469prematurely. 1470 1471Custom types written with :class:`.TypeDecorator` should continue to work in 1472"inline literal" scenarios, as the :meth:`.TypeDecorator.process_literal_param` 1473falls back to :meth:`.TypeDecorator.process_bind_param` by default, as these methods 1474usually handle a data manipulation, not as much how the data is presented to the 1475database. :meth:`.TypeDecorator.process_literal_param` can be specified to 1476specifically produce a string representing how a value should be rendered 1477into an inline DDL statement. 1478 1479:ticket:`2838` 1480 1481 1482.. _change_2812: 1483 1484Schema identifiers now carry along their own quoting information 1485--------------------------------------------------------------------- 1486 1487This change simplifies the Core's usage of so-called "quote" flags, such 1488as the ``quote`` flag passed to :class:`_schema.Table` and :class:`_schema.Column`. The flag 1489is now internalized within the string name itself, which is now represented 1490as an instance of :class:`.quoted_name`, a string subclass. The 1491:class:`.IdentifierPreparer` now relies solely on the quoting preferences 1492reported by the :class:`.quoted_name` object rather than checking for any 1493explicit ``quote`` flags in most cases. The issue resolved here includes 1494that various case-sensitive methods such as :meth:`_engine.Engine.has_table` as well 1495as similar methods within dialects now function with explicitly quoted names, 1496without the need to complicate or introduce backwards-incompatible changes 1497to those APIs (many of which are 3rd party) with the details of quoting flags - 1498in particular, a wider range of identifiers now function correctly with the 1499so-called "uppercase" backends like Oracle, Firebird, and DB2 (backends that 1500store and report upon table and column names using all uppercase for case 1501insensitive names). 1502 1503The :class:`.quoted_name` object is used internally as needed; however if 1504other keywords require fixed quoting preferences, the class is available 1505publicly. 1506 1507:ticket:`2812` 1508 1509.. _migration_2804: 1510 1511Improved rendering of Boolean constants, NULL constants, conjunctions 1512---------------------------------------------------------------------- 1513 1514New capabilities have been added to the :func:`.true` and :func:`.false` 1515constants, in particular in conjunction with :func:`.and_` and :func:`.or_` 1516functions as well as the behavior of the WHERE/HAVING clauses in conjunction 1517with these types, boolean types overall, and the :func:`.null` constant. 1518 1519Starting with a table such as this:: 1520 1521 from sqlalchemy import Table, Boolean, Integer, Column, MetaData 1522 1523 t1 = Table('t', MetaData(), Column('x', Boolean()), Column('y', Integer)) 1524 1525A select construct will now render the boolean column as a binary expression 1526on backends that don't feature ``true``/``false`` constant behavior:: 1527 1528 >>> from sqlalchemy import select, and_, false, true 1529 >>> from sqlalchemy.dialects import mysql, postgresql 1530 1531 >>> print(select([t1]).where(t1.c.x).compile(dialect=mysql.dialect())) 1532 SELECT t.x, t.y FROM t WHERE t.x = 1 1533 1534The :func:`.and_` and :func:`.or_` constructs will now exhibit quasi 1535"short circuit" behavior, that is truncating a rendered expression, when a 1536:func:`.true` or :func:`.false` constant is present:: 1537 1538 >>> print(select([t1]).where(and_(t1.c.y > 5, false())).compile( 1539 ... dialect=postgresql.dialect())) 1540 SELECT t.x, t.y FROM t WHERE false 1541 1542:func:`.true` can be used as the base to build up an expression:: 1543 1544 >>> expr = true() 1545 >>> expr = expr & (t1.c.y > 5) 1546 >>> print(select([t1]).where(expr)) 1547 SELECT t.x, t.y FROM t WHERE t.y > :y_1 1548 1549The boolean constants :func:`.true` and :func:`.false` themselves render as 1550``0 = 1`` and ``1 = 1`` for a backend with no boolean constants:: 1551 1552 >>> print(select([t1]).where(and_(t1.c.y > 5, false())).compile( 1553 ... dialect=mysql.dialect())) 1554 SELECT t.x, t.y FROM t WHERE 0 = 1 1555 1556Interpretation of ``None``, while not particularly valid SQL, is at least 1557now consistent:: 1558 1559 >>> print(select([t1.c.x]).where(None)) 1560 SELECT t.x FROM t WHERE NULL 1561 1562 >>> print(select([t1.c.x]).where(None).where(None)) 1563 SELECT t.x FROM t WHERE NULL AND NULL 1564 1565 >>> print(select([t1.c.x]).where(and_(None, None))) 1566 SELECT t.x FROM t WHERE NULL AND NULL 1567 1568:ticket:`2804` 1569 1570.. _migration_1068: 1571 1572Label constructs can now render as their name alone in an ORDER BY 1573------------------------------------------------------------------ 1574 1575For the case where a :class:`.Label` is used in both the columns clause 1576as well as the ORDER BY clause of a SELECT, the label will render as 1577just its name in the ORDER BY clause, assuming the underlying dialect 1578reports support of this feature. 1579 1580E.g. an example like:: 1581 1582 from sqlalchemy.sql import table, column, select, func 1583 1584 t = table('t', column('c1'), column('c2')) 1585 expr = (func.foo(t.c.c1) + t.c.c2).label("expr") 1586 1587 stmt = select([expr]).order_by(expr) 1588 1589 print(stmt) 1590 1591Prior to 0.9 would render as:: 1592 1593 SELECT foo(t.c1) + t.c2 AS expr 1594 FROM t ORDER BY foo(t.c1) + t.c2 1595 1596And now renders as:: 1597 1598 SELECT foo(t.c1) + t.c2 AS expr 1599 FROM t ORDER BY expr 1600 1601The ORDER BY only renders the label if the label isn't further 1602embedded into an expression within the ORDER BY, other than a simple 1603``ASC`` or ``DESC``. 1604 1605The above format works on all databases tested, but might have 1606compatibility issues with older database versions (MySQL 4? Oracle 8? 1607etc.). Based on user reports we can add rules that will disable the 1608feature based on database version detection. 1609 1610:ticket:`1068` 1611 1612.. _migration_2848: 1613 1614``RowProxy`` now has tuple-sorting behavior 1615------------------------------------------- 1616 1617The :class:`.RowProxy` object acts much like a tuple, but up until now 1618would not sort as a tuple if a list of them were sorted using ``sorted()``. 1619The ``__eq__()`` method now compares both sides as a tuple and also 1620an ``__lt__()`` method has been added:: 1621 1622 users.insert().execute( 1623 dict(user_id=1, user_name='foo'), 1624 dict(user_id=2, user_name='bar'), 1625 dict(user_id=3, user_name='def'), 1626 ) 1627 1628 rows = users.select().order_by(users.c.user_name).execute().fetchall() 1629 1630 eq_(rows, [(2, 'bar'), (3, 'def'), (1, 'foo')]) 1631 1632 eq_(sorted(rows), [(1, 'foo'), (2, 'bar'), (3, 'def')]) 1633 1634:ticket:`2848` 1635 1636.. _migration_2850: 1637 1638A bindparam() construct with no type gets upgraded via copy when a type is available 1639------------------------------------------------------------------------------------ 1640 1641The logic which "upgrades" a :func:`.bindparam` construct to take on the 1642type of the enclosing expression has been improved in two ways. First, the 1643:func:`.bindparam` object is **copied** before the new type is assigned, so that 1644the given :func:`.bindparam` is not mutated in place. Secondly, this same 1645operation occurs when an :class:`_expression.Insert` or :class:`_expression.Update` construct is compiled, 1646regarding the "values" that were set in the statement via the :meth:`.ValuesBase.values` 1647method. 1648 1649If given an untyped :func:`.bindparam`:: 1650 1651 bp = bindparam("some_col") 1652 1653If we use this parameter as follows:: 1654 1655 expr = mytable.c.col == bp 1656 1657The type for ``bp`` remains as ``NullType``, however if ``mytable.c.col`` 1658is of type ``String``, then ``expr.right``, that is the right side of the 1659binary expression, will take on the ``String`` type. Previously, ``bp`` itself 1660would have been changed in place to have ``String`` as its type. 1661 1662Similarly, this operation occurs in an :class:`_expression.Insert` or :class:`_expression.Update`:: 1663 1664 stmt = mytable.update().values(col=bp) 1665 1666Above, ``bp`` remains unchanged, but the ``String`` type will be used when 1667the statement is executed, which we can see by examining the ``binds`` dictionary:: 1668 1669 >>> compiled = stmt.compile() 1670 >>> compiled.binds['some_col'].type 1671 String 1672 1673The feature allows custom types to take their expected effect within INSERT/UPDATE 1674statements without needing to explicitly specify those types within every 1675:func:`.bindparam` expression. 1676 1677The potentially backwards-compatible changes involve two unlikely 1678scenarios. Since the bound parameter is 1679**cloned**, users should not be relying upon making in-place changes to a 1680:func:`.bindparam` construct once created. Additionally, code which uses 1681:func:`.bindparam` within an :class:`_expression.Insert` or :class:`_expression.Update` statement 1682which is relying on the fact that the :func:`.bindparam` is not typed according 1683to the column being assigned towards will no longer function in that way. 1684 1685:ticket:`2850` 1686 1687 1688.. _migration_1765: 1689 1690Columns can reliably get their type from a column referred to via ForeignKey 1691---------------------------------------------------------------------------- 1692 1693There's a long standing behavior which says that a :class:`_schema.Column` can be 1694declared without a type, as long as that :class:`_schema.Column` is referred to 1695by a :class:`_schema.ForeignKeyConstraint`, and the type from the referenced column 1696will be copied into this one. The problem has been that this feature never 1697worked very well and wasn't maintained. The core issue was that the 1698:class:`_schema.ForeignKey` object doesn't know what target :class:`_schema.Column` it 1699refers to until it is asked, typically the first time the foreign key is used 1700to construct a :class:`_expression.Join`. So until that time, the parent :class:`_schema.Column` 1701would not have a type, or more specifically, it would have a default type 1702of :class:`.NullType`. 1703 1704While it's taken a long time, the work to reorganize the initialization of 1705:class:`_schema.ForeignKey` objects has been completed such that this feature can 1706finally work acceptably. At the core of the change is that the :attr:`_schema.ForeignKey.column` 1707attribute no longer lazily initializes the location of the target :class:`_schema.Column`; 1708the issue with this system was that the owning :class:`_schema.Column` would be stuck 1709with :class:`.NullType` as its type until the :class:`_schema.ForeignKey` happened to 1710be used. 1711 1712In the new version, the :class:`_schema.ForeignKey` coordinates with the eventual 1713:class:`_schema.Column` it will refer to using internal attachment events, so that the 1714moment the referencing :class:`_schema.Column` is associated with the 1715:class:`_schema.MetaData`, all :class:`_schema.ForeignKey` objects that 1716refer to it will be sent a message that they need to initialize their parent 1717column. This system is more complicated but works more solidly; as a bonus, 1718there are now tests in place for a wide variety of :class:`_schema.Column` / 1719:class:`_schema.ForeignKey` configuration scenarios and error messages have been 1720improved to be very specific to no less than seven different error conditions. 1721 1722Scenarios which now work correctly include: 1723 17241. The type on a :class:`_schema.Column` is immediately present as soon as the 1725 target :class:`_schema.Column` becomes associated with the same :class:`_schema.MetaData`; 1726 this works no matter which side is configured first:: 1727 1728 >>> from sqlalchemy import Table, MetaData, Column, Integer, ForeignKey 1729 >>> metadata = MetaData() 1730 >>> t2 = Table('t2', metadata, Column('t1id', ForeignKey('t1.id'))) 1731 >>> t2.c.t1id.type 1732 NullType() 1733 >>> t1 = Table('t1', metadata, Column('id', Integer, primary_key=True)) 1734 >>> t2.c.t1id.type 1735 Integer() 1736 17372. The system now works with :class:`_schema.ForeignKeyConstraint` as well:: 1738 1739 >>> from sqlalchemy import Table, MetaData, Column, Integer, ForeignKeyConstraint 1740 >>> metadata = MetaData() 1741 >>> t2 = Table('t2', metadata, 1742 ... Column('t1a'), Column('t1b'), 1743 ... ForeignKeyConstraint(['t1a', 't1b'], ['t1.a', 't1.b'])) 1744 >>> t2.c.t1a.type 1745 NullType() 1746 >>> t2.c.t1b.type 1747 NullType() 1748 >>> t1 = Table('t1', metadata, 1749 ... Column('a', Integer, primary_key=True), 1750 ... Column('b', Integer, primary_key=True)) 1751 >>> t2.c.t1a.type 1752 Integer() 1753 >>> t2.c.t1b.type 1754 Integer() 1755 17563. It even works for "multiple hops" - that is, a :class:`_schema.ForeignKey` that refers to a 1757 :class:`_schema.Column` that refers to another :class:`_schema.Column`:: 1758 1759 >>> from sqlalchemy import Table, MetaData, Column, Integer, ForeignKey 1760 >>> metadata = MetaData() 1761 >>> t2 = Table('t2', metadata, Column('t1id', ForeignKey('t1.id'))) 1762 >>> t3 = Table('t3', metadata, Column('t2t1id', ForeignKey('t2.t1id'))) 1763 >>> t2.c.t1id.type 1764 NullType() 1765 >>> t3.c.t2t1id.type 1766 NullType() 1767 >>> t1 = Table('t1', metadata, Column('id', Integer, primary_key=True)) 1768 >>> t2.c.t1id.type 1769 Integer() 1770 >>> t3.c.t2t1id.type 1771 Integer() 1772 1773:ticket:`1765` 1774 1775 1776Dialect Changes 1777=============== 1778 1779Firebird ``fdb`` is now the default Firebird dialect. 1780----------------------------------------------------- 1781 1782The ``fdb`` dialect is now used if an engine is created without a dialect 1783specifier, i.e. ``firebird://``. ``fdb`` is a ``kinterbasdb`` compatible 1784DBAPI which per the Firebird project is now their official Python driver. 1785 1786:ticket:`2504` 1787 1788Firebird ``fdb`` and ``kinterbasdb`` set ``retaining=False`` by default 1789----------------------------------------------------------------------- 1790 1791Both the ``fdb`` and ``kinterbasdb`` DBAPIs support a flag ``retaining=True`` 1792which can be passed to the ``commit()`` and ``rollback()`` methods of its 1793connection. The documented rationale for this flag is so that the DBAPI 1794can re-use internal transaction state for subsequent transactions, for the 1795purposes of improving performance. However, newer documentation refers 1796to analyses of Firebird's "garbage collection" which expresses that this flag 1797can have a negative effect on the database's ability to process cleanup 1798tasks, and has been reported as *lowering* performance as a result. 1799 1800It's not clear how this flag is actually usable given this information, 1801and as it appears to be only a performance enhancing feature, it now defaults 1802to ``False``. The value can be controlled by passing the flag ``retaining=True`` 1803to the :func:`_sa.create_engine` call. This is a new flag which is added as of 18040.8.2, so applications on 0.8.2 can begin setting this to ``True`` or ``False`` 1805as desired. 1806 1807.. seealso:: 1808 1809 :mod:`sqlalchemy.dialects.firebird.fdb` 1810 1811 :mod:`sqlalchemy.dialects.firebird.kinterbasdb` 1812 1813 http://pythonhosted.org/fdb/usage-guide.html#retaining-transactions - information 1814 on the "retaining" flag. 1815 1816:ticket:`2763` 1817 1818 1819 1820 1821 1822