=============================
What's New in SQLAlchemy 1.1?
=============================

.. admonition:: About this Document

    This document describes changes between SQLAlchemy version 1.0
    and SQLAlchemy version 1.1.

Introduction
============

This guide introduces what's new in SQLAlchemy version 1.1,
and also documents changes which affect users migrating
their applications from the 1.0 series of SQLAlchemy to 1.1.

Please carefully review the sections on behavioral changes for
potentially backwards-incompatible changes in behavior.

Platform / Installer Changes
============================

Setuptools is now required for install
--------------------------------------

SQLAlchemy's ``setup.py`` file has for many years supported operation
both with Setuptools installed and without, supporting a "fallback" mode
that uses straight Distutils.  As a Setuptools-less Python environment is
now unheard of, and in order to support the feature set of Setuptools
more fully, in particular to support py.test's integration with it as well
as things like "extras", ``setup.py`` now depends on Setuptools fully.

.. seealso::

    :ref:`installation`

:ticket:`3489`

Enabling / Disabling C Extension builds is only via environment variable
-------------------------------------------------------------------------

The C extensions are built by default during installation whenever possible.
To disable C extension builds, the ``DISABLE_SQLALCHEMY_CEXT`` environment
variable was made available as of SQLAlchemy 0.8.6 / 0.9.4.  The previous
approach of using the ``--without-cextensions`` argument has been removed,
as it relies on deprecated features of setuptools.

.. seealso::

    :ref:`c_extensions`

:ticket:`3500`


New Features and Improvements - ORM
===================================

.. _change_2677:

New Session lifecycle events
----------------------------

The :class:`.Session` has long supported events that allow some degree
of tracking of state changes to objects, including
:meth:`.SessionEvents.before_attach`, :meth:`.SessionEvents.after_attach`,
and :meth:`.SessionEvents.before_flush`.  The Session documentation also
documents major object states at :ref:`session_object_states`.  However,
there has never been a system for tracking objects specifically as they
pass through these transitions.  Additionally, the status of "deleted" objects
has historically been murky, as the objects act somewhere between
the "persistent" and "detached" states.

To clean up this area and allow the realm of session state transitions
to be fully transparent, a new series of events has been added that
is intended to cover every possible way that an object might transition
between states, and additionally the "deleted" status has been given
its own official state name within the realm of session object states.

New State Transition Events
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Transitions between all states of an object such as :term:`persistent`,
:term:`pending` and others can now be intercepted in terms of a
session-level event intended to cover a specific transition.
Transitions as objects move into a :class:`.Session`, move out of a
:class:`.Session`, and even all the transitions which occur when the
transaction is rolled back using :meth:`.Session.rollback`
are explicitly present in the interface of :class:`.SessionEvents`.

In total, there are **ten new events**.  A summary of these events is in a
newly written documentation section :ref:`session_lifecycle_events`.
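
As a brief sketch of the listening style (a hypothetical handler, hooking the
:class:`.Session` class globally; a specific :class:`.Session` instance or
``sessionmaker`` may also be used as the target)::

    from sqlalchemy import event
    from sqlalchemy.orm import Session

    @event.listens_for(Session, "pending_to_persistent")
    def on_pending_to_persistent(session, instance):
        # invoked within the flush, after the INSERT for a
        # pending object has proceeded
        print("object is now persistent: %s" % instance)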


New Object State "deleted" is added, deleted objects no longer "persistent"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :term:`persistent` state of an object in the :class:`.Session` has
always been documented as an object that has a valid database identity;
however in the case of objects that were deleted within a flush, they
have always been in a grey area where they are not really "detached"
from the :class:`.Session` yet, because they can still be restored
within a rollback, but are not really "persistent" because their database
identity has been deleted and they aren't present in the identity map.

To resolve this grey area given the new events, a new object state
:term:`deleted` is introduced.  This state exists between the "persistent" and
"detached" states.  An object that is marked for deletion via
:meth:`.Session.delete` remains in the "persistent" state until a flush
proceeds; at that point, it is removed from the identity map, moves
to the "deleted" state, and the :meth:`.SessionEvents.persistent_to_deleted`
hook is invoked.  If the :class:`.Session` object's transaction is rolled
back, the object is restored as persistent; the
:meth:`.SessionEvents.deleted_to_persistent` transition is called.  Otherwise,
if the :class:`.Session` object's transaction is committed,
the :meth:`.SessionEvents.deleted_to_detached` transition is invoked.

Additionally, the :attr:`.InstanceState.persistent` accessor **no longer returns
True** for an object that is in the new "deleted" state; instead, the
:attr:`.InstanceState.deleted` accessor has been enhanced to reliably
report on this new state.   When the object is detached, the :attr:`.InstanceState.deleted`
accessor returns False and the :attr:`.InstanceState.detached` accessor returns True
instead.  To determine if an object was deleted either in the current
transaction or in a previous transaction, use the
:attr:`.InstanceState.was_deleted` accessor.
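
As a sketch, the new state can be observed via :func:`.inspect`, assuming a
mapped class ``SomeClass`` and an open :class:`.Session`::

    from sqlalchemy import inspect

    obj = session.query(SomeClass).first()
    session.delete(obj)
    session.flush()

    state = inspect(obj)
    assert state.deleted            # the new "deleted" state
    assert not state.persistent     # no longer counts as persistent

    session.rollback()              # the DELETE is rolled back
    assert inspect(obj).persistent  # object is restored as persistent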

Strong Identity Map is Deprecated
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

One of the inspirations for the new series of transition events was to enable
leak-proof tracking of objects as they move in and out of the identity map,
so that a "strong reference" may be maintained mirroring the object
moving in and out of this map.  With this new capability, there is no longer
any need for the :paramref:`.Session.weak_identity_map` parameter and the
corresponding :class:`.StrongIdentityMap` object.  This option has remained
in SQLAlchemy for many years as the "strong-referencing" behavior used to be
the only behavior available, and many applications were written to assume
this behavior.   It has long been recommended that strong-reference tracking
of objects not be an intrinsic job of the :class:`.Session` and instead
be an application-level construct built as needed by the application; the
new event model allows even the exact behavior of the strong identity map
to be replicated.   See :ref:`session_referencing_behavior` for a new
recipe illustrating how to replace the strong identity map.
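
An abbreviated sketch of that recipe, using the new lifecycle events to hold
strong references to all persistent objects within :attr:`.Session.info`::

    from sqlalchemy import event

    def strong_reference_session(session):
        @event.listens_for(session, "pending_to_persistent")
        @event.listens_for(session, "deleted_to_persistent")
        @event.listens_for(session, "detached_to_persistent")
        @event.listens_for(session, "loaded_as_persistent")
        def strong_ref_object(sess, instance):
            # hold a strong reference while the object is persistent
            sess.info.setdefault('refs', set()).add(instance)

        @event.listens_for(session, "persistent_to_detached")
        @event.listens_for(session, "persistent_to_deleted")
        @event.listens_for(session, "persistent_to_transient")
        def deref_object(sess, instance):
            # release the reference as the object leaves persistence
            sess.info.setdefault('refs', set()).discard(instance)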

:ticket:`2677`

.. _change_1311:

New init_scalar() event intercepts default values at ORM level
--------------------------------------------------------------

The ORM produces a value of ``None`` when an attribute that has not been
set is first accessed, for a non-persistent object::

    >>> obj = MyObj()
    >>> obj.some_value
    None

There's a use case for this in-Python value to correspond to that of a
Core-generated default value, even before the object is persisted.
To suit this use case, a new event :meth:`.AttributeEvents.init_scalar`
is added.   The new example ``active_column_defaults.py`` at
:ref:`examples_instrumentation` illustrates a sample use, so the effect
can instead be::

    >>> obj = MyObj()
    >>> obj.some_value
    "my default"

:ticket:`1311`

.. _change_3499:

Changes regarding "unhashable" types, impacts deduping of ORM rows
------------------------------------------------------------------

The :class:`_query.Query` object has a well-known behavior of "deduping"
returned rows that contain at least one ORM-mapped entity (e.g., a
full mapped object, as opposed to individual column values). The
primary purpose of this is so that the handling of entities works
smoothly in conjunction with the identity map, including to
accommodate for the duplicate entities normally represented within
joined eager loading, as well as when joins are used for the purposes
of filtering on additional columns.

This deduplication relies upon the hashability of the elements within
the row.  With the introduction of PostgreSQL's special types like
:class:`_postgresql.ARRAY`, :class:`_postgresql.HSTORE` and
:class:`_postgresql.JSON`, the experience of types within rows being
unhashable and encountering problems here is more prevalent than
it was previously.

In fact, SQLAlchemy has since version 0.8 included a flag on datatypes that
are noted as "unhashable"; however, this flag was not used consistently
on built-in types.  As described in :ref:`change_3499_postgresql`, this
flag is now set consistently for all of PostgreSQL's "structural" types.

The "unhashable" flag is also set on the :class:`.NullType` type,
as :class:`.NullType` is used to refer to any expression of unknown
type.

Since :class:`.NullType` is applied to most
usages of :attr:`.func`, as :attr:`.func` doesn't actually know anything
about the function names given in most cases, **using func() will
often disable row deduping unless explicit typing is applied**.
The following examples illustrate ``func.substr()`` applied to a string
expression, and ``func.date()`` applied to a datetime expression; both
examples will return duplicate rows due to the joined eager load unless
explicit typing is applied::

    result = session.query(
        func.substr(A.some_thing, 0, 4), A
    ).options(joinedload(A.bs)).all()

    users = session.query(
        func.date(
            User.date_created, 'start of month'
        ).label('month'),
        User,
    ).options(joinedload(User.orders)).all()

The above examples, in order to retain deduping, should be specified as::

    result = session.query(
        func.substr(A.some_thing, 0, 4, type_=String), A
    ).options(joinedload(A.bs)).all()

    users = session.query(
        func.date(
            User.date_created, 'start of month', type_=DateTime
        ).label('month'),
        User,
    ).options(joinedload(User.orders)).all()

Additionally, the treatment of a so-called "unhashable" type is slightly
different than it has been in previous releases; internally, we are using
the ``id()`` function to get a "hash value" from these structures, just
as we would for any ordinary mapped object.   This replaces the previous
approach which applied a counter to the object.

:ticket:`3499`

.. _change_3321:

Specific checks added for passing mapped classes, instances as SQL literals
-----------------------------------------------------------------------------

The typing system now has specific checks for the passing of SQLAlchemy
"inspectable" objects in contexts where they would otherwise be handled as
literal values.   Any SQLAlchemy built-in object that is legal to pass as a
SQL value (which is not already a :class:`_expression.ClauseElement` instance)
includes a method ``__clause_element__()`` which provides a
valid SQL expression for that object.  For SQLAlchemy objects that
don't provide this, such as mapped classes, mappers, and mapped
instances, a more informative error message is emitted rather than
allowing the DBAPI to receive the object and fail later.  An example
is illustrated below, where a string-based attribute ``User.name`` is
compared to a full instance of ``User()``, rather than against a
string value::

    >>> some_user = User()
    >>> q = s.query(User).filter(User.name == some_user)
    ...
    sqlalchemy.exc.ArgumentError: Object <__main__.User object at 0x103167e90> is not legal as a SQL literal value

The exception is now immediate when the comparison
``User.name == some_user`` is made.  Previously, a comparison like the above
would produce a SQL expression that would only fail once resolved
into a DBAPI execution call; the mapped ``User`` object would
ultimately become a bound parameter that would be rejected by the
DBAPI.

Note that in the above example, the expression fails because
``User.name`` is a string-based (e.g. column-oriented) attribute.
The change does *not* impact the usual case of comparing a many-to-one
relationship attribute to an object, which is handled distinctly::

    >>> # Address.user refers to the User mapper, so
    >>> # this is of course still OK!
    >>> q = s.query(Address).filter(Address.user == some_user)


:ticket:`3321`

.. _feature_indexable:

New Indexable ORM extension
---------------------------

The :ref:`indexable_toplevel` extension is an extension to the hybrid
attribute feature which allows the construction of attributes which
refer to specific elements of an "indexable" data type, such as an array
or JSON field::

    class Person(Base):
        __tablename__ = 'person'

        id = Column(Integer, primary_key=True)
        data = Column(JSON)

        name = index_property('data', 'name')

Above, the ``name`` attribute will read/write the field ``"name"``
from the JSON column ``data``, after initializing it to an
empty dictionary::

    >>> person = Person(name='foobar')
    >>> person.name
    foobar

The extension also triggers a change event when the attribute is modified,
so that there's no need to use :class:`~.mutable.MutableDict` in order
to track this change.
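
For example, a sketch of how the change event plays out during a mutation,
assuming an open :class:`.Session`::

    person = Person()
    person.name = 'foobar'   # initializes data to {} and sets data['name']
    session.add(person)
    session.commit()

    person.name = 'updated'  # change event fires; "data" is marked dirty
    session.commit()         # UPDATE for the JSON column proceeds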

.. seealso::

    :ref:`indexable_toplevel`

.. _change_3250:

New options allowing explicit persistence of NULL over a default
----------------------------------------------------------------

Related to the new JSON-NULL support added to PostgreSQL as part of
:ref:`change_3514`, the base :class:`.TypeEngine` class now supports
a method :meth:`.TypeEngine.evaluates_none` which allows a positively set
``None`` value on an attribute to be persisted as NULL, rather than
omitting the column from the INSERT statement, which has the effect of using
the column-level default.  This allows a mapper-level
configuration of the existing object-level technique of assigning
:func:`_expression.null` to the attribute.
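
A sketch of the type-level configuration, assuming a declarative ``Base``::

    from sqlalchemy import Column, Integer, String

    class MyObject(Base):
        __tablename__ = 'my_table'

        id = Column(Integer, primary_key=True)

        # a None value assigned to this attribute will be persisted
        # as NULL, rather than the column being omitted from the
        # INSERT in favor of the default
        data = Column(String(50).evaluates_none(), default="some default")

    obj = MyObject(id=1, data=None)
    session.add(obj)
    session.commit()   # the INSERT sends NULL for "data"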

.. seealso::

    :ref:`session_forcing_null`

:ticket:`3250`


.. _change_3582:

Further Fixes to single-table inheritance querying
--------------------------------------------------

Continuing from 1.0's :ref:`migration_3177`, the :class:`_query.Query` should
no longer inappropriately add the "single inheritance" criteria when the
query is against a subquery expression such as an EXISTS::

    class Widget(Base):
        __tablename__ = 'widget'
        id = Column(Integer, primary_key=True)
        type = Column(String)
        data = Column(String)
        __mapper_args__ = {'polymorphic_on': type}


    class FooWidget(Widget):
        __mapper_args__ = {'polymorphic_identity': 'foo'}

    q = session.query(FooWidget).filter(FooWidget.data == 'bar').exists()

    session.query(q).all()

Produces::

    SELECT EXISTS (SELECT 1
    FROM widget
    WHERE widget.data = :data_1 AND widget.type IN (:type_1)) AS anon_1

The IN clause on the inside is appropriate, in order to limit to FooWidget
objects; however, previously the IN clause would also be generated a second
time on the outside of the subquery.

:ticket:`3582`

.. _change_3680:

Improved Session state when a SAVEPOINT is cancelled by the database
--------------------------------------------------------------------

A common case with MySQL is that a SAVEPOINT is cancelled when a deadlock
occurs within the transaction.  The :class:`.Session` has been modified
to deal with this failure mode slightly more gracefully, such that the
outer, non-savepoint transaction still remains usable::

    s = Session()
    s.begin_nested()

    s.add(SomeObject())

    try:
        # assume the flush fails, flush goes to rollback to the
        # savepoint and that also fails
        s.flush()
    except Exception as err:
        print("Something broke, and our SAVEPOINT vanished too")

    # this is the SAVEPOINT transaction, marked as
    # DEACTIVE so the rollback() call succeeds
    s.rollback()

    # this is the outermost transaction, remains ACTIVE
    # so rollback() or commit() can succeed
    s.rollback()

This issue is a continuation of :ticket:`2696` where we emit a warning
so that the original error can be seen when running on Python 2, even though
the SAVEPOINT exception takes precedence.  On Python 3, exceptions are chained,
so both failures are reported individually.


:ticket:`3680`

.. _change_3677:

Erroneous "new instance X conflicts with persistent instance Y" flush errors fixed
-----------------------------------------------------------------------------------

The :meth:`.Session.rollback` method is responsible for removing objects
that were INSERTed into the database, e.g. moved from pending to persistent,
within that now rolled-back transaction.   Objects that make this state
change are tracked in a weak-referencing collection, and if an object is
garbage collected from that collection, the :class:`.Session` no longer worries
about it (it would otherwise not scale for operations that insert many new
objects within a transaction).  However, an issue arises if the application
re-loads that same garbage-collected row within the transaction, before the
rollback occurs; if a strong reference to this object remains into the next
transaction, the fact that this object was not inserted and should be
removed would be lost, and the flush would incorrectly raise an error::

    from sqlalchemy import Column, Integer, create_engine
    from sqlalchemy.orm import Session
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)

    e = create_engine("sqlite://", echo=True)
    Base.metadata.create_all(e)

    s = Session(e)

    # persist an object
    s.add(A(id=1))
    s.flush()

    # rollback buffer loses reference to A

    # load it again, rollback buffer knows nothing
    # about it
    a1 = s.query(A).first()

    # roll back the transaction; all state is expired but the
    # "a1" reference remains
    s.rollback()

    # previous "a1" conflicts with the new one because we aren't
    # checking that it never got committed
    s.add(A(id=1))
    s.commit()

The above program would raise::

    FlushError: New instance <A at 0x7f0287eca4d0> with identity key
    (<class '__main__.A'>, (1,)) conflicts
    with persistent instance <A at 0x7f02889c70d0>

The bug is that when the above exception is raised, the unit of work
is operating upon the original object assuming it's a live row, when in
fact the object is expired and, upon testing, reveals that it's gone.  The
fix tests this condition now, so in the SQL log we see:

.. sourcecode:: sql

    BEGIN (implicit)

    INSERT INTO a (id) VALUES (?)
    (1,)

    SELECT a.id AS a_id FROM a LIMIT ? OFFSET ?
    (1, 0)

    ROLLBACK

    BEGIN (implicit)

    SELECT a.id AS a_id FROM a WHERE a.id = ?
    (1,)

    INSERT INTO a (id) VALUES (?)
    (1,)

    COMMIT

Above, the unit of work now does a SELECT for the row we're about to report
as a conflict for, sees that it doesn't exist, and proceeds normally.
The expense of this SELECT is incurred only in the case where we would
otherwise have erroneously raised an exception.


:ticket:`3677`

.. _change_2349:

passive_deletes feature for joined-inheritance mappings
-------------------------------------------------------

A joined-table inheritance mapping may now allow a DELETE to proceed
as a result of :meth:`.Session.delete`, which only emits DELETE for the
base table, and not the subclass table, allowing configured ON DELETE CASCADE
to take place for the configured foreign keys.  This is configured using
the :paramref:`.orm.mapper.passive_deletes` option::

    from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
    from sqlalchemy.orm import Session
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class A(Base):
        __tablename__ = "a"
        id = Column('id', Integer, primary_key=True)
        type = Column(String)

        __mapper_args__ = {
            'polymorphic_on': type,
            'polymorphic_identity': 'a',
            'passive_deletes': True
        }


    class B(A):
        __tablename__ = 'b'
        b_table_id = Column('b_table_id', Integer, primary_key=True)
        bid = Column('bid', Integer, ForeignKey('a.id', ondelete="CASCADE"))
        data = Column('data', String)

        __mapper_args__ = {
            'polymorphic_identity': 'b'
        }

With the above mapping, the :paramref:`.orm.mapper.passive_deletes` option
is configured on the base mapper; it takes effect for all non-base mappers
that are descendants of the mapper with the option set.  A DELETE for
an object of type ``B`` no longer needs to retrieve the primary key value
of ``b_table_id`` if unloaded, nor does it need to emit a DELETE statement
for the table itself::

    session.delete(some_b)
    session.commit()

Will emit SQL as::

    DELETE FROM a WHERE a.id = %(id)s
    {'id': 1}
    COMMIT

As always, the target database must have foreign key support with
ON DELETE CASCADE enabled.

:ticket:`2349`

.. _change_3630:

Same-named backrefs will not raise an error when applied to concrete inheritance subclasses
--------------------------------------------------------------------------------------------

The following mapping has always been possible without issue::

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)
        b = relationship("B", foreign_keys="B.a_id", backref="a")

    class A1(A):
        __tablename__ = 'a1'
        id = Column(Integer, primary_key=True)
        b = relationship("B", foreign_keys="B.a1_id", backref="a1")
        __mapper_args__ = {'concrete': True}

    class B(Base):
        __tablename__ = 'b'
        id = Column(Integer, primary_key=True)

        a_id = Column(ForeignKey('a.id'))
        a1_id = Column(ForeignKey('a1.id'))

Above, even though class ``A`` and class ``A1`` have a relationship
named ``b``, no conflict warning or error occurs because class ``A1`` is
marked as "concrete".

However, if the relationships were configured the other way, an error
would occur::

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)


    class A1(A):
        __tablename__ = 'a1'
        id = Column(Integer, primary_key=True)
        __mapper_args__ = {'concrete': True}


    class B(Base):
        __tablename__ = 'b'
        id = Column(Integer, primary_key=True)

        a_id = Column(ForeignKey('a.id'))
        a1_id = Column(ForeignKey('a1.id'))

        a = relationship("A", backref="b")
        a1 = relationship("A1", backref="b")

The fix enhances the backref feature so that an error is not emitted, and
adds an additional check within the mapper logic to bypass the warning
for an attribute being replaced.

:ticket:`3630`

.. _change_3749:

Same-named relationships on inheriting mappers no longer warn
-------------------------------------------------------------

When creating two mappers in an inheritance scenario, placing a relationship
on both with the same name would emit the warning
"relationship '<name>' on mapper <name> supersedes the same relationship
on inherited mapper '<name>'; this can cause dependency issues during flush".
An example is as follows::

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)
        bs = relationship("B")


    class ASub(A):
        __tablename__ = 'a_sub'
        id = Column(Integer, ForeignKey('a.id'), primary_key=True)
        bs = relationship("B")


    class B(Base):
        __tablename__ = 'b'
        id = Column(Integer, primary_key=True)
        a_id = Column(ForeignKey('a.id'))


This warning dates back to the 0.4 series in 2007 and is based on a version of
the unit of work code that has since been entirely rewritten. Currently, there
is no known issue with same-named relationships being placed on a base
class and a descendant class, so the warning is lifted.   However, note that
this use case is likely not prevalent in real-world use due to the warning.
While rudimentary test support is added for this use case, it is possible that
some new issue with this pattern may be identified.

.. versionadded:: 1.1.0b3

:ticket:`3749`

.. _change_3653:

Hybrid properties and methods now propagate the docstring as well as .info
--------------------------------------------------------------------------

A hybrid method or property will now reflect the ``__doc__`` value
present in the original docstring::

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)

        name = Column(String)

        @hybrid_property
        def some_name(self):
            """The name field"""
            return self.name

The above value of ``A.some_name.__doc__`` is now honored::

    >>> A.some_name.__doc__
    The name field

However, to accomplish this, the mechanics of hybrid properties necessarily
become more complex.  Previously, the class-level accessor for a hybrid
would be a simple pass-through; that is, this test would succeed::

    >>> assert A.name is A.some_name

With the change, the expression returned by ``A.some_name`` is wrapped inside
of its own ``QueryableAttribute`` wrapper::

    >>> A.some_name
    <sqlalchemy.orm.attributes.hybrid_propertyProxy object at 0x7fde03888230>

A lot of testing went into making sure this wrapper works correctly, including
for elaborate schemes like that of the
`Custom Value Object <http://techspot.zzzeek.org/2011/10/21/hybrids-and-value-agnostic-types/>`_
recipe; however, we'll be looking to see that no other regressions occur for
users.

As part of this change, the :attr:`.hybrid_property.info` collection is now
also propagated from the hybrid descriptor itself, rather than from the underlying
expression.  That is, accessing ``A.some_name.info`` now returns the same
dictionary that you'd get from ``inspect(A).all_orm_descriptors['some_name'].info``::

    >>> A.some_name.info['foo'] = 'bar'
    >>> from sqlalchemy import inspect
    >>> inspect(A).all_orm_descriptors['some_name'].info
    {'foo': 'bar'}

Note that this ``.info`` dictionary is **separate** from that of a mapped attribute
which the hybrid descriptor may be proxying directly; this is a behavioral
change from 1.0.   The wrapper will still proxy other useful attributes
of a mirrored attribute such as :attr:`.QueryableAttribute.property` and
:attr:`.QueryableAttribute.class_`.

:ticket:`3653`

.. _change_3601:

Session.merge resolves pending conflicts the same as persistent
---------------------------------------------------------------

The :meth:`.Session.merge` method will now track the identities of objects given
within a graph to maintain primary key uniqueness before emitting an INSERT.
When duplicate objects of the same identity are encountered, non-primary-key
attributes are **overwritten** as the objects are encountered, which is
essentially non-deterministic.   This behavior matches how persistent
objects, that is, objects already located in the database via
primary key, are already treated, so the behavior is now more internally
consistent.

Given::

    u1 = User(id=7, name='x')
    u1.orders = [
        Order(description='o1', address=Address(id=1, email_address='a')),
        Order(description='o2', address=Address(id=1, email_address='b')),
        Order(description='o3', address=Address(id=1, email_address='c'))
    ]

    sess = Session()
    sess.merge(u1)

Above, we merge a ``User`` object with three new ``Order`` objects, each referring to
a distinct ``Address`` object; however, each is given the same primary key.
The current behavior of :meth:`.Session.merge` is to look in the identity
map for this ``Address`` object, and use that as the target.   If the object
is present, meaning that the database already has a row for ``Address`` with
primary key "1", we can see that the ``email_address`` field of the ``Address``
will be overwritten three times, in this case with the values a, b and finally
c.

However, if the ``Address`` row for primary key "1" were not present, :meth:`.Session.merge`
would instead create three separate ``Address`` instances, and we'd then get
a primary key conflict upon INSERT.  The new behavior is that the proposed
primary keys for these ``Address`` objects are tracked in a separate dictionary
so that we merge the state of the three proposed ``Address`` objects onto
one ``Address`` object to be inserted.

It may have been preferable if the original case emitted some kind of warning
that conflicting data were present in a single merge-tree; however, the
non-deterministic merging of values has been the behavior for many
years for the persistent case, and it now matches for the pending case.   A
feature that warns for conflicting values could still be feasible for both
cases but would add considerable performance overhead, as each column value
would have to be compared during the merge.


:ticket:`3601`

.. _change_3708:

Fix involving many-to-one object moves with user-initiated foreign key manipulations
--------------------------------------------------------------------------------------

A bug has been fixed involving the mechanics of replacing a many-to-one
reference to an object with another object.   During the attribute operation,
locating the object that was previously referred to now makes use of the
database-committed foreign key value, rather than the current foreign key
value.  The main effect of the fix is that a backref event towards a collection
will fire off more accurately when a many-to-one change is made, even if the
foreign key attribute was manually moved to the new value beforehand.  Assume a
mapping of the classes ``Parent`` and ``SomeClass``, where ``SomeClass.parent``
refers to ``Parent`` and ``Parent.items`` refers to the collection of
``SomeClass`` objects::

    some_object = SomeClass()
    session.add(some_object)
    some_object.parent_id = some_parent.id
    some_object.parent = some_parent

Above, we've made a pending object ``some_object``, manipulated its foreign key
towards ``Parent`` to refer to it, *then* we actually set up the relationship.
Before the bug fix, the backref would not have fired off::

    # before the fix
    assert some_object not in some_parent.items

The fix now is that when we seek to locate the previous value of
``some_object.parent``, we disregard the parent id that's been manually set,
and we look for the database-committed value.  In this case, it's None because
the object is pending, so the event system logs ``some_object.parent``
as a net change::

    # after the fix, backref fired off for some_object.parent = some_parent
    assert some_object in some_parent.items

While it is discouraged to manipulate foreign key attributes that are managed
by relationships, there is limited support for this use case.  Applications
that manipulate foreign keys in order to allow loads to proceed will often make
use of the :meth:`.Session.enable_relationship_loading` and
:attr:`.RelationshipProperty.load_on_pending` features, which cause
relationships to emit lazy loads based on in-memory foreign key values that
aren't persisted.   Whether or not these features are in use, this behavioral
improvement will now be apparent.

:ticket:`3708`

.. _change_3662:

Improvements to the Query.correlate method with polymorphic entities
--------------------------------------------------------------------

In recent SQLAlchemy versions, the SQL generated by many forms of
"polymorphic" queries has a more "flat" form than it used to, where
a JOIN of several tables is no longer bundled into a subquery unconditionally.
To accommodate this, the :meth:`_query.Query.correlate` method now extracts the
individual tables from such a polymorphic selectable and ensures that all
are part of the "correlate" for the subquery.  Assuming the
``Person/Manager/Engineer->Company`` setup from the mapping documentation,
using with_polymorphic::

    sess.query(Person.name)
                .filter(
                    sess.query(Company.name).
                    filter(Company.company_id == Person.company_id).
                    correlate(Person).as_scalar() == "Elbonia, Inc.")

The above query now produces::

    SELECT people.name AS people_name
    FROM people
    LEFT OUTER JOIN engineers ON people.person_id = engineers.person_id
    LEFT OUTER JOIN managers ON people.person_id = managers.person_id
    WHERE (SELECT companies.name
    FROM companies
    WHERE companies.company_id = people.company_id) = ?

Before the fix, the call to ``correlate(Person)`` would inadvertently
attempt to correlate to the join of ``Person``, ``Engineer`` and ``Manager``
as a single unit, so ``Person`` wouldn't be correlated::

    -- old, incorrect query
    SELECT people.name AS people_name
    FROM people
    LEFT OUTER JOIN engineers ON people.person_id = engineers.person_id
    LEFT OUTER JOIN managers ON people.person_id = managers.person_id
    WHERE (SELECT companies.name
    FROM companies, people
    WHERE companies.company_id = people.company_id) = ?

Using correlated subqueries against polymorphic mappings still has some
unpolished edges.  If for example ``Person`` is polymorphically linked
to a so-called "concrete polymorphic union" query, the above subquery
may not correctly refer to this subquery.  In all cases, a way to refer
to the "polymorphic" entity fully is to create an :func:`.aliased` object
from it first::

    # works with all SQLAlchemy versions and all types of polymorphic
    # aliasing.

    paliased = aliased(Person)
    sess.query(paliased.name)
                .filter(
                    sess.query(Company.name).
                    filter(Company.company_id == paliased.company_id).
                    correlate(paliased).as_scalar() == "Elbonia, Inc.")

The :func:`.aliased` construct guarantees that the "polymorphic selectable"
is wrapped in a subquery.  By referring to it explicitly in the correlated
subquery, the polymorphic form is correctly used.

:ticket:`3662`

.. _change_3081:

Stringify of Query will consult the Session for the correct dialect
-------------------------------------------------------------------

Calling ``str()`` on a :class:`_query.Query` object will consult the :class:`.Session`
for the correct "bind" to use, in order to render the SQL that would be
passed to the database.  In particular this allows a :class:`_query.Query` that
refers to dialect-specific SQL constructs to be renderable, assuming the
:class:`_query.Query` is associated with an appropriate :class:`.Session`.
Previously, this behavior would only take effect if the :class:`_schema.MetaData`
to which the mappings were associated were itself bound to the target
:class:`_engine.Engine`.

If neither the underlying :class:`_schema.MetaData` nor the :class:`.Session` are
associated with any bound :class:`_engine.Engine`, then the fallback to the
"default" dialect is used to generate the SQL string.

.. seealso::

    :ref:`change_3631`

:ticket:`3081`

.. _change_3431:

Joined eager loading where the same entity is present multiple times in one row
--------------------------------------------------------------------------------

A fix has been made for the case whereby an attribute will be
loaded via joined eager loading, even if the entity was already loaded from the
row on a different "path" that doesn't include the attribute.  This is a
deep use case that's hard to reproduce, but the general idea is as follows::

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)
        b_id = Column(ForeignKey('b.id'))
        c_id = Column(ForeignKey('c.id'))

        b = relationship("B")
        c = relationship("C")


    class B(Base):
        __tablename__ = 'b'
        id = Column(Integer, primary_key=True)
        c_id = Column(ForeignKey('c.id'))

        c = relationship("C")


    class C(Base):
        __tablename__ = 'c'
        id = Column(Integer, primary_key=True)
        d_id = Column(ForeignKey('d.id'))
        d = relationship("D")


    class D(Base):
        __tablename__ = 'd'
        id = Column(Integer, primary_key=True)


    c_alias_1 = aliased(C)
    c_alias_2 = aliased(C)

    q = s.query(A)
    q = q.join(A.b).join(c_alias_1, B.c).join(c_alias_1.d)
    q = q.options(contains_eager(A.b).contains_eager(B.c, alias=c_alias_1).contains_eager(C.d))
    q = q.join(c_alias_2, A.c)
    q = q.options(contains_eager(A.c, alias=c_alias_2))

The above query emits SQL like this::

    SELECT
        d.id AS d_id,
        c_1.id AS c_1_id, c_1.d_id AS c_1_d_id,
        b.id AS b_id, b.c_id AS b_c_id,
        c_2.id AS c_2_id, c_2.d_id AS c_2_d_id,
        a.id AS a_id, a.b_id AS a_b_id, a.c_id AS a_c_id
    FROM
        a
        JOIN b ON b.id = a.b_id
        JOIN c AS c_1 ON c_1.id = b.c_id
        JOIN d ON d.id = c_1.d_id
        JOIN c AS c_2 ON c_2.id = a.c_id

We can see that the ``c`` table is selected from twice; once in the context
of ``A.b.c -> c_alias_1`` and another in the context of ``A.c -> c_alias_2``.
Also, we can see that it is quite possible that the ``C`` identity for a
single row is the **same** for both ``c_alias_1`` and ``c_alias_2``, meaning
two sets of columns in one row result in only one new object being added
to the identity map.

The query options above only call for the attribute ``C.d`` to be loaded
in the context of ``c_alias_1``, and not ``c_alias_2``.  So whether or not
the final ``C`` object we get in the identity map has the ``C.d`` attribute
loaded depends on how the mappings are traversed, which, while not completely
random, is essentially non-deterministic.   The fix is that even if the
loader for ``c_alias_1`` is processed after that of ``c_alias_2`` for a
single row where they both refer to the same identity, the ``C.d``
element will still be loaded.  Previously, the loader did not seek to
modify the load of an entity that was already loaded via a different path.
The loader that reaches the entity first has always been non-deterministic,
so this fix may be detectable as a behavioral change in some situations and
not others.

The fix includes tests for two variants of the "multiple paths to one entity"
case, and the fix should hopefully cover all other scenarios of this nature.

:ticket:`3431`


New MutableList and MutableSet helpers added to the mutation tracking extension
--------------------------------------------------------------------------------

New helper classes :class:`.MutableList` and :class:`.MutableSet` have been
added to the :ref:`mutable_toplevel` extension, to complement the existing
:class:`.MutableDict` helper.
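
A sketch of :class:`.MutableList` in use, assuming a declarative ``Base`` and
a PostgreSQL backend for the ARRAY type::

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.dialects.postgresql import ARRAY
    from sqlalchemy.ext.mutable import MutableList

    class MyModel(Base):
        __tablename__ = 'my_model'

        id = Column(Integer, primary_key=True)
        tags = Column(MutableList.as_mutable(ARRAY(String)))

    obj = session.query(MyModel).first()
    obj.tags.append('new-tag')   # in-place mutation is now tracked
    session.commit()             # an UPDATE for "tags" is emitted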

:ticket:`3297`

.. _change_3512:

New "raise" / "raise_on_sql" loader strategies
----------------------------------------------

To assist with the use case of preventing unwanted lazy loads from occurring
after a series of objects are loaded, the new "lazy='raise'" and
"lazy='raise_on_sql'" strategies and
corresponding loader option :func:`_orm.raiseload` may be applied to a
relationship attribute, which will cause it to raise ``InvalidRequestError``
when a non-eagerly-loaded attribute is accessed for read.  The two variants
test for either a lazy load of any variety, including those that would
only return None or retrieve from the identity map::

    >>> from sqlalchemy.orm import raiseload
    >>> a1 = s.query(A).options(raiseload(A.some_b)).first()
    >>> a1.some_b
    Traceback (most recent call last):
    ...
    sqlalchemy.exc.InvalidRequestError: 'A.some_b' is not available due to lazy='raise'

Or a lazy load only where SQL would be emitted::

    >>> from sqlalchemy.orm import raiseload
    >>> a1 = s.query(A).options(raiseload(A.some_b, sql_only=True)).first()
    >>> a1.some_b
    Traceback (most recent call last):
    ...
    sqlalchemy.exc.InvalidRequestError: 'A.some_b' is not available due to lazy='raise_on_sql'
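
The strategies may equivalently be configured at mapping time; a sketch,
assuming mapped classes ``A`` and ``B``::

    from sqlalchemy.orm import relationship

    class A(Base):
        __tablename__ = 'a'

        id = Column(Integer, primary_key=True)

        # raise on any lazy load of A.some_b; lazy="raise_on_sql"
        # would raise only when the load would emit SQL
        some_b = relationship("B", lazy="raise")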

:ticket:`3512`

.. _change_3394:

Mapper.order_by is deprecated
-----------------------------

This old parameter from the very first versions of SQLAlchemy was part of
the original design of the ORM, which featured the :class:`_orm.Mapper` object
as a public-facing query structure.   This role has long since been replaced
by the :class:`_query.Query` object, where we use :meth:`_query.Query.order_by` to
indicate the ordering of results in a way that works consistently for any
combination of SELECT statements, entities and SQL expressions.   There are
many areas in which :paramref:`_orm.Mapper.order_by` doesn't work as expected
(or where what would be expected is not clear), such as when queries are combined
into unions; these cases are not supported.
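
A sketch of the migration path, assuming a ``User`` mapping that ordered on
its ``name`` column via the deprecated parameter::

    # previously, ordering might be baked into the mapping:
    #
    #     __mapper_args__ = {"order_by": name}
    #
    # instead, state the ordering on the Query itself:
    session.query(User).order_by(User.name).all()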


:ticket:`3394`

New Features and Improvements - Core
====================================

.. _change_3803:

Engines now invalidate connections, run error handlers for BaseException
-------------------------------------------------------------------------

.. versionadded:: 1.1 this change is a late add to the 1.1 series just
   prior to 1.1 final, and is not present in the 1.1 beta releases.

The Python ``BaseException`` class sits above ``Exception`` in the exception
hierarchy and is the identifiable base for system-level exceptions that
deliberately do not subclass ``Exception``, such as ``KeyboardInterrupt``,
``SystemExit``, and notably the ``GreenletExit`` exception that's used by
eventlet and gevent. This exception class is now intercepted by the
exception-handling routines of :class:`_engine.Connection`, and includes handling by the
:meth:`_events.ConnectionEvents.handle_error` event.  The :class:`_engine.Connection` is now
**invalidated** by default in the case of a system-level exception that is not
a subclass of ``Exception``, as it is assumed an operation was interrupted and
the connection may be in an unusable state.  The MySQL drivers are most
targeted by this change; however, the change is across all DBAPIs.

Note that upon invalidation, the immediate DBAPI connection used by
:class:`_engine.Connection` is disposed, and the :class:`_engine.Connection`, if still
being used subsequent to the exception raise, will use a new
DBAPI connection for subsequent operations upon next use; however, the state of
any transaction in progress is lost and the appropriate ``.rollback()`` method
must be called if applicable before this re-use can proceed.

In developing this change, it was straightforward to demonstrate a pymysql or
mysqlclient / MySQL-Python connection moving into a corrupted state when
these exceptions occur in the middle of the connection doing its work;
the connection would then be returned to the connection pool where subsequent
uses would fail, or even before returning to the pool would cause secondary
failures in context managers that call ``.rollback()`` upon the exception
catch.   The behavior here is expected to reduce
the incidence of the MySQL error "commands out of sync", as well as the
``ResourceClosedError`` which can occur when the MySQL driver fails to
report ``cursor.description`` correctly, when running under greenlet
conditions where greenlets are killed, or where ``KeyboardInterrupt`` exceptions
are handled without exiting the program entirely.

The behavior is distinct from the usual auto-invalidation feature, in that it
does not assume that the backend database itself has been shut down or
restarted; it does not recycle the entire connection pool as is the case
for usual DBAPI disconnect exceptions.

This change should be a net improvement for all users, with the exception of
any application that currently intercepts ``KeyboardInterrupt`` or
``GreenletExit`` and wishes to continue working within the same transaction.
Such an operation is theoretically possible with other DBAPIs that do not appear to be
impacted by ``KeyboardInterrupt``, such as psycopg2.  For these DBAPIs,
the following workaround will disable the connection from being recycled
for specific exceptions::

    engine = create_engine("postgresql+psycopg2://")

    @event.listens_for(engine, "handle_error")
    def cancel_disconnect(ctx):
        if isinstance(ctx.original_exception, KeyboardInterrupt):
            ctx.is_disconnect = False

:ticket:`3803`


.. _change_2551:

CTE Support for INSERT, UPDATE, DELETE
--------------------------------------

One of the most widely requested features, support for common table
expressions (CTEs) that work with INSERT, UPDATE, and DELETE, is now
implemented.  An INSERT/UPDATE/DELETE can both draw from a WITH clause
that's stated at the top of the SQL, and can also be used as a CTE itself
in the context of a larger statement.

As part of this change, an INSERT from SELECT that includes a CTE will now
render the CTE at the top of the entire statement, rather than nested
in the SELECT statement as was the case in 1.0.

Below is an example that renders UPDATE, INSERT and SELECT all in one
statement::

    >>> from sqlalchemy import table, column, select, literal, exists
    >>> orders = table(
    ...     'orders',
    ...     column('region'),
    ...     column('amount'),
    ...     column('product'),
    ...     column('quantity')
    ... )
    >>>
    >>> upsert = (
    ...     orders.update()
    ...     .where(orders.c.region == 'Region1')
    ...     .values(amount=1.0, product='Product1', quantity=1)
    ...     .returning(*(orders.c._all_columns)).cte('upsert'))
    >>>
    >>> insert = orders.insert().from_select(
    ...     orders.c.keys(),
    ...     select([
    ...         literal('Region1'), literal(1.0),
    ...         literal('Product1'), literal(1)
    ...     ]).where(~exists(upsert.select()))
    ... )
    >>>
    >>> print(insert)  # note formatting added for clarity
    WITH upsert AS
    (UPDATE orders SET amount=:amount, product=:product, quantity=:quantity
     WHERE orders.region = :region_1
     RETURNING orders.region, orders.amount, orders.product, orders.quantity
    )
    INSERT INTO orders (region, amount, product, quantity)
    SELECT
        :param_1 AS anon_1, :param_2 AS anon_2,
        :param_3 AS anon_3, :param_4 AS anon_4
    WHERE NOT (
        EXISTS (
            SELECT upsert.region, upsert.amount,
                   upsert.product, upsert.quantity
            FROM upsert))

:ticket:`2551`
1190
1191.. _change_3049:
1192
1193Support for RANGE and ROWS specification within window functions
1194----------------------------------------------------------------
1195
1196New :paramref:`.expression.over.range_` and :paramref:`.expression.over.rows` parameters allow
1197RANGE and ROWS expressions for window functions::
1198
1199    >>> from sqlalchemy import func
1200
1201    >>> print(func.row_number().over(order_by='x', range_=(-5, 10)))
1202    row_number() OVER (ORDER BY x RANGE BETWEEN :param_1 PRECEDING AND :param_2 FOLLOWING)
1203
1204    >>> print(func.row_number().over(order_by='x', rows=(None, 0)))
1205    row_number() OVER (ORDER BY x ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
1206
1207    >>> print(func.row_number().over(order_by='x', range_=(-2, None)))
1208    row_number() OVER (ORDER BY x RANGE BETWEEN :param_1 PRECEDING AND UNBOUNDED FOLLOWING)
1209
1210:paramref:`.expression.over.range_` and :paramref:`.expression.over.rows` are specified as
12112-tuples and indicate negative and positive values for specific ranges,
12120 for "CURRENT ROW", and None for UNBOUNDED.
1213
1214.. seealso::
1215
1216    :ref:`window_functions`
1217
1218:ticket:`3049`
1219
1220.. _change_2857:
1221
1222Support for the SQL LATERAL keyword
1223-----------------------------------
1224
1225The LATERAL keyword is currently known to only be supported by PostgreSQL 9.3
1226and greater, however as it is part of the SQL standard support for this keyword
1227is added to Core.   The implementation of :meth:`_expression.Select.lateral` employs
1228special logic beyond just rendering the LATERAL keyword to allow for
1229correlation of tables that are derived from the same FROM clause as the
1230selectable, e.g. lateral correlation::
1231
1232    >>> from sqlalchemy import table, column, select, true
1233    >>> people = table('people', column('people_id'), column('age'), column('name'))
1234    >>> books = table('books', column('book_id'), column('owner_id'))
1235    >>> subq = select([books.c.book_id]).\
1236    ...      where(books.c.owner_id == people.c.people_id).lateral("book_subq")
1237    >>> print(select([people]).select_from(people.join(subq, true())))
1238    SELECT people.people_id, people.age, people.name
1239    FROM people JOIN LATERAL (SELECT books.book_id AS book_id
1240    FROM books WHERE books.owner_id = people.people_id)
1241    AS book_subq ON true
1242
1243.. seealso::
1244
1245    :ref:`lateral_selects`
1246
1247    :class:`_expression.Lateral`
1248
1249    :meth:`_expression.Select.lateral`
1250
1251
1252:ticket:`2857`
1253
1254.. _change_3718:
1255
1256Support for TABLESAMPLE
1257-----------------------
1258
1259The SQL standard TABLESAMPLE can be rendered using the
1260:meth:`_expression.FromClause.tablesample` method, which returns a :class:`_expression.TableSample`
1261construct similar to an alias::
1262
1263    from sqlalchemy import func
1264
1265    selectable = people.tablesample(
1266                func.bernoulli(1),
1267                name='alias',
1268                seed=func.random())
1269    stmt = select([selectable.c.people_id])
1270
1271Assuming ``people`` with a column ``people_id``, the above
1272statement would render as::
1273
1274    SELECT alias.people_id FROM
1275    people AS alias TABLESAMPLE bernoulli(:bernoulli_1)
1276    REPEATABLE (random())
1277
1278:ticket:`3718`
1279
1280.. _change_3216:
1281
1282The ``.autoincrement`` directive is no longer implicitly enabled for a composite primary key column
1283---------------------------------------------------------------------------------------------------
1284
1285SQLAlchemy has always had the convenience feature of enabling the backend database's
1286"autoincrement" feature for a single-column integer primary key; by "autoincrement"
1287we mean that the database column will include whatever DDL directives the
1288database provides in order to indicate an auto-incrementing integer identifier,
1289such as the SERIAL keyword on PostgreSQL or AUTO_INCREMENT on MySQL, and additionally
1290that the dialect will receive these generated values from the execution
1291of a :meth:`_schema.Table.insert` construct using techniques appropriate to that
1292backend.
1293
1294What's changed is that this feature no longer turns on automatically for a
1295*composite* primary key; previously, a table definition such as::
1296
1297    Table(
1298        'some_table', metadata,
1299        Column('x', Integer, primary_key=True),
1300        Column('y', Integer, primary_key=True)
1301    )
1302
1303Would have "autoincrement" semantics applied to the ``'x'`` column, only
1304because it's first in the list of primary key columns.  In order to
1305disable this, one would have to turn off ``autoincrement`` on all columns::
1306
1307    # old way
1308    Table(
1309        'some_table', metadata,
1310        Column('x', Integer, primary_key=True, autoincrement=False),
1311        Column('y', Integer, primary_key=True, autoincrement=False)
1312    )
1313
1314With the new behavior, the composite primary key will not have autoincrement
1315semantics unless a column is marked explicitly with ``autoincrement=True``::
1316
1317    # column 'y' will be SERIAL/AUTO_INCREMENT/ auto-generating
1318    Table(
1319        'some_table', metadata,
1320        Column('x', Integer, primary_key=True),
1321        Column('y', Integer, primary_key=True, autoincrement=True)
1322    )
1323
1324In order to anticipate some potential backwards-incompatible scenarios,
1325the :meth:`_schema.Table.insert` construct will perform more thorough checks
1326for missing primary key values on composite primary key columns that don't
1327have autoincrement set up; given a table such as::
1328
1329    Table(
1330        'b', metadata,
1331        Column('x', Integer, primary_key=True),
1332        Column('y', Integer, primary_key=True)
1333    )
1334
1335An INSERT emitted with no values for this table will produce this warning::
1336
1337    SAWarning: Column 'b.x' is marked as a member of the primary
1338    key for table 'b', but has no Python-side or server-side default
1339    generator indicated, nor does it indicate 'autoincrement=True',
1340    and no explicit value is passed.  Primary key columns may not
1341    store NULL. Note that as of SQLAlchemy 1.1, 'autoincrement=True'
1342    must be indicated explicitly for composite (e.g. multicolumn)
1343    primary keys if AUTO_INCREMENT/SERIAL/IDENTITY behavior is
1344    expected for one of the columns in the primary key. CREATE TABLE
1345    statements are impacted by this change as well on most backends.
1346
1347For a column that is receiving primary key values from a server-side
1348default or something less common such as a trigger, the presence of a
1349value generator can be indicated using :class:`.FetchedValue`::
1350
1351    Table(
1352        'b', metadata,
1353        Column('x', Integer, primary_key=True, server_default=FetchedValue()),
1354        Column('y', Integer, primary_key=True, server_default=FetchedValue())
1355    )
1356
1357For the very unlikely case where a composite primary key is actually intended
1358to store NULL in one or more of its columns (only supported on SQLite and MySQL),
1359specify the column with ``nullable=True``::
1360
1361    Table(
1362        'b', metadata,
1363        Column('x', Integer, primary_key=True),
1364        Column('y', Integer, primary_key=True, nullable=True)
1365    )
1366
1367In a related change, the ``autoincrement`` flag may be set to True
1368on a column that has a client-side or server-side default.  This typically
1369will not have much impact on the behavior of the column during an INSERT.
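
For example, a combination such as the following is accepted, shown as a
brief sketch reusing :class:`.FetchedValue` from the earlier example as the
server-side marker::

    Table(
        'b', metadata,
        Column('x', Integer, primary_key=True,
               autoincrement=True, server_default=FetchedValue())
    )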
1370
1371
1372.. seealso::
1373
1374    :ref:`change_mysql_3216`
1375
1376:ticket:`3216`
1377
1378.. _change_is_distinct_from:
1379
1380Support for IS DISTINCT FROM and IS NOT DISTINCT FROM
1381-----------------------------------------------------
1382
New operators :meth:`.ColumnOperators.is_distinct_from` and
:meth:`.ColumnOperators.isnot_distinct_from` allow the IS DISTINCT
FROM and IS NOT DISTINCT FROM SQL operations::
1386
1387    >>> print(column('x').is_distinct_from(None))
1388    x IS DISTINCT FROM NULL
1389
1390Handling is provided for NULL, True and False::
1391
1392    >>> print(column('x').isnot_distinct_from(False))
1393    x IS NOT DISTINCT FROM false
1394
For SQLite, which doesn't have this operator, "IS" / "IS NOT" is rendered
instead, which on SQLite works for NULL unlike on other backends::
1397
1398    >>> from sqlalchemy.dialects import sqlite
1399    >>> print(column('x').is_distinct_from(None).compile(dialect=sqlite.dialect()))
1400    x IS NOT NULL
1401
1402.. _change_1957:
1403
1404Core and ORM support for FULL OUTER JOIN
1405----------------------------------------
1406
1407The new flag :paramref:`.FromClause.outerjoin.full`, available at the Core
1408and ORM level, instructs the compiler to render ``FULL OUTER JOIN``
1409where it would normally render ``LEFT OUTER JOIN``::
1410
1411    stmt = select([t1]).select_from(t1.outerjoin(t2, full=True))
1412
1413The flag also works at the ORM level::
1414
1415    q = session.query(MyClass).outerjoin(MyOtherClass, full=True)
1416
1417:ticket:`1957`
1418
1419.. _change_3501:
1420
1421ResultSet column matching enhancements; positional column setup for textual SQL
1422-------------------------------------------------------------------------------
1423
1424A series of improvements were made to the :class:`_engine.ResultProxy` system
1425in the 1.0 series as part of :ticket:`918`, which reorganizes the internals
1426to match cursor-bound result columns with table/ORM metadata positionally,
1427rather than by matching names, for compiled SQL constructs that contain full
1428information about the result rows to be returned.   This allows a dramatic savings
1429on Python overhead as well as much greater accuracy in linking ORM and Core
1430SQL expressions to result rows.  In 1.1, this reorganization has been taken
1431further internally, and also has been made available to pure-text SQL
1432constructs via the use of the recently added :meth:`_expression.TextClause.columns` method.
1433
1434TextAsFrom.columns() now works positionally
1435^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1436
1437The :meth:`_expression.TextClause.columns` method, added in 0.9, accepts column-based arguments
1438positionally; in 1.1, when all columns are passed positionally, the correlation
1439of these columns to the ultimate result set is also performed positionally.
The key advantage here is that textual SQL can now be linked to an
ORM-level result set without the need to deal with ambiguous or duplicate column
1442names, or with having to match labeling schemes to ORM-level labeling schemes.  All
1443that's needed now is the same ordering of columns within the textual SQL
1444and the column arguments passed to :meth:`_expression.TextClause.columns`::
1445
1446
1447    from sqlalchemy import text
1448    stmt = text("SELECT users.id, addresses.id, users.id, "
1449         "users.name, addresses.email_address AS email "
1450         "FROM users JOIN addresses ON users.id=addresses.user_id "
1451         "WHERE users.id = 1").columns(
1452            User.id,
1453            Address.id,
1454            Address.user_id,
1455            User.name,
1456            Address.email_address
1457         )
1458
1459    query = session.query(User).from_statement(stmt).\
1460        options(contains_eager(User.addresses))
1461    result = query.all()
1462
1463Above, the textual SQL contains the column "id" three times, which would
1464normally be ambiguous.  Using the new feature, we can apply the mapped
1465columns from the ``User`` and ``Address`` class directly, even linking
1466the ``Address.user_id`` column to the ``users.id`` column in textual SQL
1467for fun, and the :class:`_query.Query` object will receive rows that are correctly
1468targetable as needed, including for an eager load.
1469
1470This change is **backwards incompatible** with code that passes the columns
1471to the method with a different ordering than is present in the textual statement.
It is hoped that the impact of this change will be low, because this
method has always been documented illustrating the columns being passed
in the same order as that of the textual SQL statement, as would seem
intuitive, even though the internals weren't checking for this.  The
method itself was only added as of 0.9 in
1476any case and may not yet have widespread use.  Notes on exactly how to handle
1477this behavioral change for applications using it are at :ref:`behavior_change_3501`.
1478
1479.. seealso::
1480
1481    :ref:`sqlexpression_text_columns` - in the Core tutorial
1482
1483    :ref:`behavior_change_3501` - backwards compatibility remarks
1484
1485Positional matching is trusted over name-based matching for Core/ORM SQL constructs
1486^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1487
Another aspect of this change is that the rules for matching columns have also
been modified to rely upon "positional" matching more fully for compiled
SQL constructs.   Given a statement like the following::
1491
1492    ua = users.alias('ua')
1493    stmt = select([users.c.user_id, ua.c.user_id])
1494
1495The above statement will compile to::
1496
1497    SELECT users.user_id, ua.user_id FROM users, users AS ua
1498
1499In 1.0, the above statement when executed would be matched to its original
1500compiled construct using positional matching, however because the statement
1501contains the ``'user_id'`` label duplicated, the "ambiguous column" rule
1502would still get involved and prevent the columns from being fetched from a row.
1503As of 1.1, the "ambiguous column" rule does not affect an exact match from
1504a column construct to the SQL column, which is what the ORM uses to
1505fetch columns::
1506
1507    result = conn.execute(stmt)
1508    row = result.first()
1509
1510    # these both match positionally, so no error
1511    user_id = row[users.c.user_id]
1512    ua_id = row[ua.c.user_id]
1513
1514    # this still raises, however
1515    user_id = row['user_id']
1516
1517Much less likely to get an "ambiguous column" error message
1518^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1519
1520As part of this change, the wording of the error message ``Ambiguous column
1521name '<name>' in result set! try 'use_labels' option on select statement.``
1522has been dialed back; as this message should now be extremely rare when using
1523the ORM or Core compiled SQL constructs, it merely states
1524``Ambiguous column name '<name>' in result set column descriptions``, and
1525only when a result column is retrieved using the string name that is actually
1526ambiguous, e.g. ``row['user_id']`` in the above example.  It also now refers
1527to the actual ambiguous name from the rendered SQL statement itself,
1528rather than indicating the key or name that was local to the construct being
1529used for the fetch.
1530
1531:ticket:`3501`
1532
1533.. _change_3292:
1534
1535Support for Python's native ``enum`` type and compatible forms
1536--------------------------------------------------------------
1537
1538The :class:`.Enum` type can now be constructed using any
1539PEP-435 compliant enumerated type.   When using this mode, input values
1540and return values are the actual enumerated objects, not the
1541string/integer/etc values::
1542
1543    import enum
1544    from sqlalchemy import Table, MetaData, Column, Enum, create_engine
1545
1546
1547    class MyEnum(enum.Enum):
1548        one = 1
1549        two = 2
1550        three = 3
1551
1552
1553    t = Table(
1554        'data', MetaData(),
1555        Column('value', Enum(MyEnum))
1556    )
1557
1558    e = create_engine("sqlite://")
1559    t.create(e)
1560
1561    e.execute(t.insert(), {"value": MyEnum.two})
1562    assert e.scalar(t.select()) is MyEnum.two
1563
1564The ``Enum.enums`` collection is now a list instead of a tuple
1565^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1566
1567As part of the changes to :class:`.Enum`, the :attr:`.Enum.enums` collection
of elements is now a list instead of a tuple.  This is because lists
1569are appropriate for variable length sequences of homogeneous items where
1570the position of the element is not semantically significant.
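
For example, a brief illustration::

    >>> from sqlalchemy import Enum
    >>> Enum("one", "two", "three").enums
    ['one', 'two', 'three']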
1571
1572:ticket:`3292`
1573
1574.. _change_gh_231:
1575
1576Negative integer indexes accommodated by Core result rows
1577---------------------------------------------------------
1578
1579The :class:`.RowProxy` object now accommodates single negative integer indexes
1580like a regular Python sequence, both in the pure Python and C-extension
1581version.  Previously, negative values would only work in slices::
1582
1583    >>> from sqlalchemy import create_engine
1584    >>> e = create_engine("sqlite://")
1585    >>> row = e.execute("select 1, 2, 3").first()
1586    >>> row[-1], row[-2], row[1], row[-2:2]
    (3, 2, 2, (2,))
1588
1589.. _change_3095:
1590
1591The ``Enum`` type now does in-Python validation of values
1592---------------------------------------------------------
1593
To accommodate Python native enumerated objects, as well as edge
cases such as where a non-native ENUM type is used within an ARRAY
1596and a CHECK constraint is infeasible, the :class:`.Enum` datatype now adds
1597in-Python validation of input values when the :paramref:`.Enum.validate_strings`
1598flag is used (1.1.0b2)::
1599
1600
1601    >>> from sqlalchemy import Table, MetaData, Column, Enum, create_engine
1602    >>> t = Table(
1603    ...     'data', MetaData(),
1604    ...     Column('value', Enum("one", "two", "three", validate_strings=True))
1605    ... )
1606    >>> e = create_engine("sqlite://")
1607    >>> t.create(e)
1608    >>> e.execute(t.insert(), {"value": "four"})
1609    Traceback (most recent call last):
1610      ...
1611    sqlalchemy.exc.StatementError: (exceptions.LookupError)
1612    "four" is not among the defined enum values
1613    [SQL: u'INSERT INTO data (value) VALUES (?)']
1614    [parameters: [{'value': 'four'}]]
1615
1616This validation is turned off by default as there are already use cases
1617identified where users don't want such validation (such as string comparisons).
1618For non-string types, it necessarily takes place in all cases.  The
check also occurs unconditionally on the result-handling side, when
1620values coming from the database are returned.
1621
1622This validation is in addition to the existing behavior of creating a
1623CHECK constraint when a non-native enumerated type is used.  The creation of
1624this CHECK constraint can now be disabled using the new
1625:paramref:`.Enum.create_constraint` flag.
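
For example, a brief sketch combining validation with the constraint
disabled::

    Column('value', Enum("one", "two", "three",
                         validate_strings=True, create_constraint=False))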
1626
1627:ticket:`3095`
1628
1629.. _change_3730:
1630
1631Non-native boolean integer values coerced to zero/one/None in all cases
1632-----------------------------------------------------------------------
1633
1634The :class:`.Boolean` datatype coerces Python booleans to integer values
1635for backends that don't have a native boolean type, such as SQLite and
1636MySQL.  On these backends, a CHECK constraint is normally set up which
1637ensures the values in the database are in fact one of these two values.
1638However, MySQL ignores CHECK constraints, the constraint is optional, and
1639an existing database might not have this constraint.  The :class:`.Boolean`
1640datatype has been repaired such that an incoming Python-side value that is
1641already an integer value is coerced to zero or one, not just passed as-is;
1642additionally, the C-extension version of the int-to-boolean processor for
1643results now uses the same Python boolean interpretation of the value,
1644rather than asserting an exact one or zero value.  This is now consistent
1645with the pure-Python int-to-boolean processor and is more forgiving of
1646existing data already within the database.   Values of None/NULL are as before
1647retained as None/NULL.
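
For example, a brief sketch on SQLite of the more forgiving result-side
behavior, assuming a table that already contains an integer value of ``5``::

    from sqlalchemy import Table, MetaData, Column, Boolean, create_engine

    e = create_engine("sqlite://")
    e.execute("CREATE TABLE t (x INTEGER)")
    e.execute("INSERT INTO t (x) VALUES (5)")  # pre-existing non-zero/one data

    t = Table('t', MetaData(), Column('x', Boolean))

    # the result processor applies Python boolean interpretation,
    # rather than asserting an exact one or zero value
    assert e.scalar(t.select()) is True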
1648
1649.. note::
1650
   This change had an unintended side effect whereby the interpretation of
   non-integer values, such as strings, also changed in behavior such that the
1653   string value ``"0"`` would be interpreted as "true", but only on backends
1654   that don't have a native boolean datatype - on "native boolean" backends
1655   like PostgreSQL, the string value ``"0"`` is passed directly to the driver
1656   and is interpreted as "false".  This is an inconsistency that did not occur
1657   with the previous implementation. It should be noted that passing strings or
1658   any other value outside of ``None``, ``True``, ``False``, ``1``, ``0`` to
1659   the :class:`.Boolean` datatype is **not supported** and version 1.2 will
1660   raise an error for this scenario (or possibly just emit a warning, TBD).
1661   See also :ticket:`4102`.
1662
1663
1664:ticket:`3730`
1665
1666.. _change_2837:
1667
1668Large parameter and row values are now truncated in logging and exception displays
1669----------------------------------------------------------------------------------
1670
1671A large value present as a bound parameter for a SQL statement, as well as a
1672large value present in a result row, will now be truncated during display
1673within logging, exception reporting, as well as ``repr()`` of the row itself::
1674
1675    >>> from sqlalchemy import create_engine
1676    >>> import random
1677    >>> e = create_engine("sqlite://", echo='debug')
1678    >>> some_value = ''.join(chr(random.randint(52, 85)) for i in range(5000))
1679    >>> row = e.execute("select ?", [some_value]).first()
1680    ... (lines are wrapped for clarity) ...
1681    2016-02-17 13:23:03,027 INFO sqlalchemy.engine.base.Engine select ?
1682    2016-02-17 13:23:03,027 INFO sqlalchemy.engine.base.Engine
1683    ('E6@?>9HPOJB<<BHR:@=TS:5ILU=;JLM<4?B9<S48PTNG9>:=TSTLA;9K;9FPM4M8M@;NM6GU
1684    LUAEBT9QGHNHTHR5EP75@OER4?SKC;D:TFUMD:M>;C6U:JLM6R67GEK<A6@S@C@J7>4=4:P
1685    GJ7HQ6 ... (4702 characters truncated) ... J6IK546AJMB4N6S9L;;9AKI;=RJP
1686    HDSSOTNBUEEC9@Q:RCL:I@5?FO<9K>KJAGAO@E6@A7JI8O:J7B69T6<8;F:S;4BEIJS9HM
1687    K:;5OLPM@JR;R:J6<SOTTT=>Q>7T@I::OTDC:CC<=NGP6C>BC8N',)
1688    2016-02-17 13:23:03,027 DEBUG sqlalchemy.engine.base.Engine Col ('?',)
1689    2016-02-17 13:23:03,027 DEBUG sqlalchemy.engine.base.Engine
1690    Row (u'E6@?>9HPOJB<<BHR:@=TS:5ILU=;JLM<4?B9<S48PTNG9>:=TSTLA;9K;9FPM4M8M@;
1691    NM6GULUAEBT9QGHNHTHR5EP75@OER4?SKC;D:TFUMD:M>;C6U:JLM6R67GEK<A6@S@C@J7
1692    >4=4:PGJ7HQ ... (4703 characters truncated) ... J6IK546AJMB4N6S9L;;9AKI;=
1693    RJPHDSSOTNBUEEC9@Q:RCL:I@5?FO<9K>KJAGAO@E6@A7JI8O:J7B69T6<8;F:S;4BEIJS9HM
1694    K:;5OLPM@JR;R:J6<SOTTT=>Q>7T@I::OTDC:CC<=NGP6C>BC8N',)
1695    >>> print(row)
1696    (u'E6@?>9HPOJB<<BHR:@=TS:5ILU=;JLM<4?B9<S48PTNG9>:=TSTLA;9K;9FPM4M8M@;NM6
1697    GULUAEBT9QGHNHTHR5EP75@OER4?SKC;D:TFUMD:M>;C6U:JLM6R67GEK<A6@S@C@J7>4
1698    =4:PGJ7HQ ... (4703 characters truncated) ... J6IK546AJMB4N6S9L;;9AKI;
1699    =RJPHDSSOTNBUEEC9@Q:RCL:I@5?FO<9K>KJAGAO@E6@A7JI8O:J7B69T6<8;F:S;4BEIJS9H
1700    MK:;5OLPM@JR;R:J6<SOTTT=>Q>7T@I::OTDC:CC<=NGP6C>BC8N',)
1701
1702
1703:ticket:`2837`
1704
1705
1706.. _change_3619:
1707
1708JSON support added to Core
1709--------------------------
1710
1711As MySQL now has a JSON datatype in addition to the PostgreSQL JSON datatype,
1712the core now gains a :class:`sqlalchemy.types.JSON` datatype that is the basis
1713for both of these.  Using this type allows access to the "getitem" operator
1714as well as the "getpath" operator in a way that is agnostic across PostgreSQL
1715and MySQL.
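
For example, a brief sketch of cross-backend use (the table here is
illustrative)::

    from sqlalchemy import Table, MetaData, Column, Integer, JSON

    data_table = Table(
        'data_table', MetaData(),
        Column('id', Integer, primary_key=True),
        Column('data', JSON)
    )

    # "getitem" access of a single key
    expr = data_table.c.data['some_key']

    # "getpath" access along a path of keys / indexes
    expr = data_table.c.data[('key_1', 'key_2', 5)]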
1716
1717The new datatype also has a series of improvements to the handling of
1718NULL values as well as expression handling.
1719
1720.. seealso::
1721
1722    :ref:`change_3547`
1723
1724    :class:`_types.JSON`
1725
1726    :class:`_postgresql.JSON`
1727
1728    :class:`.mysql.JSON`
1729
1730:ticket:`3619`
1731
1732.. _change_3514:
1733
1734JSON "null" is inserted as expected with ORM operations, omitted when not present
1735^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1736
1737The :class:`_types.JSON` type and its descendant types :class:`_postgresql.JSON`
1738and :class:`.mysql.JSON` have a flag :paramref:`.types.JSON.none_as_null` which
1739when set to True indicates that the Python value ``None`` should translate
1740into a SQL NULL rather than a JSON NULL value.  This flag defaults to False,
1741which means that the Python value ``None`` should result in a JSON NULL value.
1742
1743This logic would fail, and is now corrected, in the following circumstances:
1744
17451. When the column also contained a default or server_default value,
1746a positive value of ``None`` on the mapped attribute that expects to persist
1747JSON "null" would still result in the column-level default being triggered,
1748replacing the ``None`` value::
1749
1750    class MyObject(Base):
1751        # ...
1752
1753        json_value = Column(JSON(none_as_null=False), default="some default")
1754
1755    # would insert "some default" instead of "'null'",
1756    # now will insert "'null'"
1757    obj = MyObject(json_value=None)
1758    session.add(obj)
1759    session.commit()
1760
17612. When the column *did not* contain a default or server_default value, a missing
1762value on a JSON column configured with none_as_null=False would still render
1763JSON NULL rather than falling back to not inserting any value, behaving
1764inconsistently vs. all other datatypes::
1765
1766    class MyObject(Base):
1767        # ...
1768
1769        some_other_value = Column(String(50))
1770        json_value = Column(JSON(none_as_null=False))
1771
1772    # would result in NULL for some_other_value,
1773    # but json "'null'" for json_value.  Now results in NULL for both
1774    # (the json_value is omitted from the INSERT)
1775    obj = MyObject()
1776    session.add(obj)
1777    session.commit()
1778
1779This is a behavioral change that is backwards incompatible for an application
1780that was relying upon this to default a missing value as JSON null.  This
1781essentially establishes that a **missing value is distinguished from a present
1782value of None**.  See :ref:`behavior_change_3514` for further detail.
1783
3. When the :meth:`.Session.bulk_insert_mappings` method was used, ``None``
1785would be ignored in all cases::
1786
1787    # would insert SQL NULL and/or trigger defaults,
1788    # now inserts "'null'"
1789    session.bulk_insert_mappings(
1790        MyObject,
1791        [{"json_value": None}])
1792
1793The :class:`_types.JSON` type now implements the
1794:attr:`.TypeEngine.should_evaluate_none` flag,
1795indicating that ``None`` should not be ignored here; it is configured
1796automatically based on the value of :paramref:`.types.JSON.none_as_null`.
1797Thanks to :ticket:`3061`, we can differentiate when the value ``None`` is actively
1798set by the user versus when it was never set at all.
1799
1800The feature applies as well to the new base :class:`_types.JSON` type
1801and its descendant types.
1802
1803:ticket:`3514`
1804
1805.. _change_3514_jsonnull:
1806
1807New JSON.NULL Constant Added
1808^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1809
1810To ensure that an application can always have full control at the value level
1811of whether a :class:`_types.JSON`, :class:`_postgresql.JSON`, :class:`.mysql.JSON`,
1812or :class:`_postgresql.JSONB` column
1813should receive a SQL NULL or JSON ``"null"`` value, the constant
1814:attr:`.types.JSON.NULL` has been added, which in conjunction with
1815:func:`.null` can be used to determine fully between SQL NULL and
1816JSON ``"null"``, regardless of what :paramref:`.types.JSON.none_as_null` is set
1817to::
1818
1819    from sqlalchemy import null
1820    from sqlalchemy.dialects.postgresql import JSON
1821
1822    obj1 = MyObject(json_value=null())  # will *always* insert SQL NULL
1823    obj2 = MyObject(json_value=JSON.NULL)  # will *always* insert JSON string "null"
1824
1825    session.add_all([obj1, obj2])
1826    session.commit()
1827
1828The feature applies as well to the new base :class:`_types.JSON` type
1829and its descendant types.
1830
1831:ticket:`3514`
1832
1833.. _change_3516:
1834
1835Array support added to Core; new ANY and ALL operators
1836------------------------------------------------------
1837
1838Along with the enhancements made to the PostgreSQL :class:`_postgresql.ARRAY`
1839type described in :ref:`change_3503`, the base class of :class:`_postgresql.ARRAY`
1840itself has been moved to Core in a new class :class:`_types.ARRAY`.
1841
1842Arrays are part of the SQL standard, as are several array-oriented functions
1843such as ``array_agg()`` and ``unnest()``.  In support of these constructs
1844for not just PostgreSQL but also potentially for other array-capable backends
1845in the future such as DB2, the majority of array logic for SQL expressions
is now in Core.   The :class:`_types.ARRAY` type still **only works on
PostgreSQL**, however it can be used directly, supporting special array
use cases such as indexed access, as well as the ANY and ALL operators::
1849
    mytable = Table(
        "mytable", metadata,
        Column("data", ARRAY(Integer, dimensions=2))
    )
1853
1854    expr = mytable.c.data[5][6]
1855
1856    expr = mytable.c.data[5].any(12)
1857
1858In support of ANY and ALL, the :class:`_types.ARRAY` type retains the same
1859:meth:`.types.ARRAY.Comparator.any` and :meth:`.types.ARRAY.Comparator.all` methods
1860from the PostgreSQL type, but also exports these operations to new
1861standalone operator functions :func:`_expression.any_` and
1862:func:`_expression.all_`.  These two functions work in more
1863of the traditional SQL way, allowing a right-side expression form such
1864as::
1865
1866    from sqlalchemy import any_, all_
1867
1868    select([mytable]).where(12 == any_(mytable.c.data[5]))
1869
1870For the PostgreSQL-specific operators "contains", "contained_by", and
1871"overlaps", one should continue to use the :class:`_postgresql.ARRAY`
1872type directly, which provides all functionality of the :class:`_types.ARRAY`
1873type as well.
1874
1875The :func:`_expression.any_` and :func:`_expression.all_` operators
1876are open-ended at the Core level, however their interpretation by backend
databases is limited.  On the PostgreSQL backend, the two operators
**only accept array values**, whereas on the MySQL backend they
**only accept subquery values**.  On MySQL, one can use an expression
1880such as::
1881
1882    from sqlalchemy import any_, all_
1883
1884    subq = select([mytable.c.value])
1885    select([mytable]).where(12 > any_(subq))
1886
1887
1888:ticket:`3516`
1889
1890.. _change_3132:
1891
1892New Function features, "WITHIN GROUP", array_agg and set aggregate functions
1893----------------------------------------------------------------------------
1894
1895With the new :class:`_types.ARRAY` type we can also implement a pre-typed
1896function for the ``array_agg()`` SQL function that returns an array,
1897which is now available using :class:`_functions.array_agg`::
1898
1899    from sqlalchemy import func
1900    stmt = select([func.array_agg(table.c.value)])
1901
1902A PostgreSQL element for an aggregate ORDER BY is also added via
1903:class:`_postgresql.aggregate_order_by`::
1904
1905    from sqlalchemy.dialects.postgresql import aggregate_order_by
1906    expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
1907    stmt = select([expr])
1908
1909Producing::
1910
1911    SELECT array_agg(table1.a ORDER BY table1.b DESC) AS array_agg_1 FROM table1
1912
1913The PG dialect itself also provides an :func:`_postgresql.array_agg` wrapper to
1914ensure the :class:`_postgresql.ARRAY` type::
1915
1916    from sqlalchemy.dialects.postgresql import array_agg
1917    stmt = select([array_agg(table.c.value).contains('foo')])
1918
1919
1920Additionally, functions like ``percentile_cont()``, ``percentile_disc()``,
1921``rank()``, ``dense_rank()`` and others that require an ordering via
1922``WITHIN GROUP (ORDER BY <expr>)`` are now available via the
1923:meth:`.FunctionElement.within_group` modifier::
1924
1925    from sqlalchemy import func
1926    stmt = select([
1927        department.c.id,
1928        func.percentile_cont(0.5).within_group(
1929            department.c.salary.desc()
1930        )
1931    ])
1932
1933The above statement would produce SQL similar to::
1934
1935  SELECT department.id, percentile_cont(0.5)
1936  WITHIN GROUP (ORDER BY department.salary DESC)
1937
1938Placeholders with correct return types are now provided for these functions,
1939and include :class:`.percentile_cont`, :class:`.percentile_disc`,
1940:class:`.rank`, :class:`.dense_rank`, :class:`.mode`, :class:`.percent_rank`,
1941and :class:`.cume_dist`.
1942
1943:ticket:`3132` :ticket:`1370`
1944
1945.. _change_2919:
1946
1947TypeDecorator now works with Enum, Boolean, "schema" types automatically
1948------------------------------------------------------------------------
1949
:class:`.SchemaType` types such as :class:`.Enum`
and :class:`.Boolean`, which in addition to corresponding to a database
type also generate either a CHECK constraint or, in the case of PostgreSQL
ENUM, a new CREATE TYPE statement, now work automatically with
:class:`.TypeDecorator` recipes.  Previously, a :class:`.TypeDecorator` for
1955an :class:`_postgresql.ENUM` had to look like this::
1956
1957    # old way
1958    class MyEnum(TypeDecorator, SchemaType):
1959        impl = postgresql.ENUM('one', 'two', 'three', name='myenum')
1960
1961        def _set_table(self, table):
1962            self.impl._set_table(table)
1963
1964The :class:`.TypeDecorator` now propagates those additional events so it
1965can be done like any other type::
1966
1967    # new way
1968    class MyEnum(TypeDecorator):
1969        impl = postgresql.ENUM('one', 'two', 'three', name='myenum')
1970
1971
1972:ticket:`2919`
1973
1974.. _change_2685:
1975
1976Multi-Tenancy Schema Translation for Table objects
1977--------------------------------------------------
1978
1979To support the use case of an application that uses the same set of
1980:class:`_schema.Table` objects in many schemas, such as schema-per-user, a new
1981execution option :paramref:`.Connection.execution_options.schema_translate_map`
1982is added.  Using this mapping, a set of :class:`_schema.Table`
1983objects can be made on a per-connection basis to refer to any set of schemas
1984instead of the :paramref:`_schema.Table.schema` to which they were assigned.  The
1985translation works for DDL and SQL generation, as well as with the ORM.
1986
1987For example, if the ``User`` class were assigned the schema "per_user"::
1988
1989    class User(Base):
1990        __tablename__ = 'user'
1991        id = Column(Integer, primary_key=True)
1992
1993        __table_args__ = {'schema': 'per_user'}
1994
1995On each request, the :class:`.Session` can be set up to refer to a
1996different schema each time::
1997
1998    session = Session()
1999    session.connection(execution_options={
2000        "schema_translate_map": {"per_user": "account_one"}})
2001
2002    # will query from the ``account_one.user`` table
2003    session.query(User).get(5)
2004
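At the Core level, the same option may be applied directly to a
:class:`_engine.Connection`, as a brief sketch::

    connection = engine.connect().execution_options(
        schema_translate_map={"per_user": "account_one"})
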
2005.. seealso::
2006
2007    :ref:`schema_translating`
2008
2009:ticket:`2685`
2010
2011.. _change_3631:
2012
2013"Friendly" stringification of Core SQL constructs without a dialect
2014-------------------------------------------------------------------
2015
2016Calling ``str()`` on a Core SQL construct will now produce a string
2017in more cases than before, supporting various SQL constructs not normally
2018present in default SQL such as RETURNING, array indexes, and non-standard
2019datatypes::
2020
2021    >>> from sqlalchemy import table, column
    >>> t = table('x', column('a'), column('b'))
2023    >>> print(t.insert().returning(t.c.a, t.c.b))
2024    INSERT INTO x (a, b) VALUES (:a, :b) RETURNING x.a, x.b
2025
2026The ``str()`` function now calls upon an entirely separate dialect / compiler
2027intended just for plain string printing without a specific dialect set up,
2028so as more "just show me a string!" cases come up, these can be added
2029to this dialect/compiler without impacting behaviors on real dialects.
2030
2031.. seealso::
2032
2033    :ref:`change_3081`
2034
2035:ticket:`3631`
2036
2037.. _change_3531:
2038
2039The type_coerce function is now a persistent SQL element
2040--------------------------------------------------------
2041
2042The :func:`_expression.type_coerce` function previously would return
2043an object either of type :class:`.BindParameter` or :class:`.Label`, depending
on the input.  One effect of this was that in the case where expression
2045transformations were used, such as the conversion of an element from a
2046:class:`_schema.Column` to a :class:`.BindParameter` that's critical to ORM-level
2047lazy loading, the type coercion information would not be used since it would
2048have been lost already.
2049
2050To improve this behavior, the function now returns a persistent
2051:class:`.TypeCoerce` container around the given expression, which itself
2052remains unaffected; this construct is evaluated explicitly by the
2053SQL compiler.  This allows for the coercion of the inner expression
2054to be maintained no matter how the statement is modified, including if
2055the contained element is replaced with a different one, as is common
2056within the ORM's lazy loading feature.
2057
The test case illustrating the effect makes use of a heterogeneous
primaryjoin condition in conjunction with custom types and lazy loading.
Given a custom type that applies CAST as both a "column expression" and
a "bind expression"::
2061
2062    class StringAsInt(TypeDecorator):
2063        impl = String
2064
2065        def column_expression(self, col):
2066            return cast(col, Integer)
2067
2068        def bind_expression(self, value):
2069            return cast(value, String)
2070
2071Then, a mapping where we are equating a string "id" column on one
2072table to an integer "id" column on the other::
2073
2074    class Person(Base):
2075        __tablename__ = 'person'
2076        id = Column(StringAsInt, primary_key=True)
2077
2078        pets = relationship(
2079            'Pets',
2080            primaryjoin=(
2081                'foreign(Pets.person_id)'
2082                '==cast(type_coerce(Person.id, Integer), Integer)'
2083            )
2084        )
2085
2086    class Pets(Base):
2087        __tablename__ = 'pets'
2088        id = Column('id', Integer, primary_key=True)
2089        person_id = Column('person_id', Integer)
2090
2091Above, in the :paramref:`_orm.relationship.primaryjoin` expression, we are
2092using :func:`.type_coerce` to handle bound parameters passed via
2093lazyloading as integers, since we already know these will come from
2094our ``StringAsInt`` type which maintains the value as an integer in
Python. We are then using :func:`.cast` so that as a SQL expression,
the VARCHAR "id" column will be CAST to an integer for a regular
non-converted join as with :meth:`_query.Query.join` or :func:`_orm.joinedload`.
2098That is, a joinedload of ``.pets`` looks like::
2099
2100    SELECT person.id AS person_id, pets_1.id AS pets_1_id,
2101           pets_1.person_id AS pets_1_person_id
2102    FROM person
2103    LEFT OUTER JOIN pets AS pets_1
2104    ON pets_1.person_id = CAST(person.id AS INTEGER)
2105
2106Without the CAST in the ON clause of the join, strongly-typed databases
2107such as PostgreSQL will refuse to implicitly compare the integer and fail.
2108
2109The lazyload case of ``.pets`` relies upon replacing
2110the ``Person.id`` column at load time with a bound parameter, which receives
2111a Python-loaded value.  This replacement is specifically where the intent
2112of our :func:`.type_coerce` function would be lost.  Prior to the change,
2113this lazy load comes out as::
2114
2115    SELECT pets.id AS pets_id, pets.person_id AS pets_person_id
2116    FROM pets
2117    WHERE pets.person_id = CAST(CAST(%(param_1)s AS VARCHAR) AS INTEGER)
2118    {'param_1': 5}
2119
2120Where above, we see that our in-Python value of ``5`` is CAST first
2121to a VARCHAR, then back to an INTEGER in SQL; a double CAST which works,
2122but is nevertheless not what we asked for.
2123
2124With the change, the :func:`.type_coerce` function maintains a wrapper
2125even after the column is swapped out for a bound parameter, and the query now
2126looks like::
2127
2128    SELECT pets.id AS pets_id, pets.person_id AS pets_person_id
2129    FROM pets
2130    WHERE pets.person_id = CAST(%(param_1)s AS INTEGER)
2131    {'param_1': 5}
2132
2133Where our outer CAST that's in our primaryjoin still takes effect, but the
2134needless CAST that's in part of the ``StringAsInt`` custom type is removed
2135as intended by the :func:`.type_coerce` function.
2136
2137
2138:ticket:`3531`
2139
2140Key Behavioral Changes - ORM
2141============================
2142
2143.. _behavior_change_3514:
2144
2145JSON Columns will not insert JSON NULL if no value is supplied and no default is established
2146--------------------------------------------------------------------------------------------
2147
2148As detailed in :ref:`change_3514`, :class:`_types.JSON` will not render
2149a JSON "null" value if the value is missing entirely.  To prevent SQL NULL,
2150a default should be set up.  Given the following mapping::
2151
2152    class MyObject(Base):
2153        # ...
2154
2155        json_value = Column(JSON(none_as_null=False), nullable=False)
2156
2157The following flush operation will fail with an integrity error::
2158
2159    obj = MyObject()  # note no json_value
2160    session.add(obj)
2161    session.commit()  # will fail with integrity error
2162
2163If the default for the column should be JSON NULL, set this on the
2164Column::
2165
2166    class MyObject(Base):
2167        # ...
2168
2169        json_value = Column(
2170            JSON(none_as_null=False), nullable=False, default=JSON.NULL)
2171
2172Or, ensure the value is present on the object::
2173
2174    obj = MyObject(json_value=None)
2175    session.add(obj)
2176    session.commit()  # will insert JSON NULL
2177
2178Note that setting ``None`` for the default is the same as omitting it entirely;
2179the :paramref:`.types.JSON.none_as_null` flag does not impact the value of ``None``
2180passed to :paramref:`_schema.Column.default` or :paramref:`_schema.Column.server_default`::
2181
2182    # default=None is the same as omitting it entirely, does not apply JSON NULL
2183    json_value = Column(JSON(none_as_null=False), nullable=False, default=None)
2184
2185
2186.. seealso::
2187
2188    :ref:`change_3514`
2189
2190.. _change_3641:
2191
2192Columns no longer added redundantly with DISTINCT + ORDER BY
2193------------------------------------------------------------
2194
2195A query such as the following will now augment only those columns
2196that are missing from the SELECT list, without duplicates::
2197
2198    q = session.query(User.id, User.name.label('name')).\
2199        distinct().\
2200        order_by(User.id, User.name, User.fullname)
2201
2202Produces::
2203
    SELECT DISTINCT user.id AS user_id, user.name AS name,
     user.fullname AS user_fullname
    FROM user ORDER BY user.id, user.name, user.fullname
2207
2208Previously, it would produce::
2209
    SELECT DISTINCT user.id AS user_id, user.name AS name, user.name AS user_name,
      user.fullname AS user_fullname
    FROM user ORDER BY user.id, user.name, user.fullname
2213
2214Where above, the ``user.name`` column is added unnecessarily.  The results
2215would not be affected, as the additional columns are not included in the
2216result in any case, but the columns are unnecessary.
2217
2218Additionally, when the PostgreSQL DISTINCT ON format is used by passing
2219expressions to :meth:`_query.Query.distinct`, the above "column adding" logic
2220is disabled entirely.
2221
2222When the query is being bundled into a subquery for the purposes of
2223joined eager loading, the "augment column list" rules are necessarily
2224more aggressive so that the ORDER BY can still be satisfied, so this case
2225remains unchanged.
2226
2227:ticket:`3641`
2228
2229.. _change_3776:
2230
2231Same-named @validates decorators will now raise an exception
2232------------------------------------------------------------
2233
2234The :func:`_orm.validates` decorator is only intended to be created once
2235per class for a particular attribute name.   Creating more than one
2236now raises an error, whereas previously it would silently pick only the
2237last defined validator::
2238
2239    class A(Base):
2240        __tablename__ = 'a'
2241        id = Column(Integer, primary_key=True)
2242
2243        data = Column(String)
2244
        @validates("data")
        def _validate_data_one(self, key, value):
            assert "x" in value
            return value

        @validates("data")
        def _validate_data_two(self, key, value):
            assert "y" in value
            return value
2252
2253    configure_mappers()
2254
2255Will raise::
2256
2257    sqlalchemy.exc.InvalidRequestError: A validation function for mapped attribute 'data' on mapper Mapper|A|a already exists.
2258
2259:ticket:`3776`
2260
2261Key Behavioral Changes - Core
2262=============================
2263
2264.. _behavior_change_3501:
2265
2266TextClause.columns() will match columns positionally, not by name, when passed positionally
2267-------------------------------------------------------------------------------------------
2268
2269The new behavior of the :meth:`_expression.TextClause.columns` method, which itself
2270was recently added as of the 0.9 series, is that when
2271columns are passed positionally without any additional keyword arguments,
2272they are linked to the ultimate result set
columns positionally, and no longer by name.   It is hoped that the impact
of this change will be low because the method has always been documented
2275illustrating the columns being passed in the same order as that of the
2276textual SQL statement, as would seem intuitive, even though the internals
2277weren't checking for this.
2278
2279An application that is using this method by passing :class:`_schema.Column` objects
2280to it positionally must ensure that the position of those :class:`_schema.Column`
2281objects matches the position in which these columns are stated in the
2282textual SQL.
2283
2284E.g., code like the following::
2285
2286    stmt = text("SELECT id, name, description FROM table")
2287
2288    # no longer matches by name
2289    stmt = stmt.columns(my_table.c.name, my_table.c.description, my_table.c.id)
2290
2291Would no longer work as expected; the order of the columns given is now
2292significant::
2293
2294    # correct version
2295    stmt = stmt.columns(my_table.c.id, my_table.c.name, my_table.c.description)
2296
2297Possibly more likely, a statement that worked like this::
2298
2299    stmt = text("SELECT * FROM table")
2300    stmt = stmt.columns(my_table.c.id, my_table.c.name, my_table.c.description)
2301
2302is now slightly risky, as the "*" specification will generally deliver columns
2303in the order in which they are present in the table itself.  If the structure
2304of the table changes due to schema changes, this ordering may no longer be the same.
2305Therefore when using :meth:`_expression.TextClause.columns`, it's advised to list out
2306the desired columns explicitly in the textual SQL, though it's no longer
2307necessary to worry about the names themselves in the textual SQL.
2308
2309.. seealso::
2310
2311    :ref:`change_3501`
2312
2313.. _change_3809:
2314
2315String server_default now literal quoted
2316----------------------------------------
2317
2318A server default passed to :paramref:`_schema.Column.server_default` as a plain
2319Python string that has quotes embedded is now
2320passed through the literal quoting system::
2321
2322    >>> from sqlalchemy.schema import MetaData, Table, Column, CreateTable
2323    >>> from sqlalchemy.types import String
2324    >>> t = Table('t', MetaData(), Column('x', String(), server_default="hi ' there"))
2325    >>> print(CreateTable(t))
2326
2327    CREATE TABLE t (
2328        x VARCHAR DEFAULT 'hi '' there'
2329    )
2330
Previously the quote would render directly.  This change may be backwards
incompatible for applications with such a use case that were working around
the issue.
2334
2335
2336:ticket:`3809`
2337
2338.. _change_2528:
2339
2340A UNION or similar of SELECTs with LIMIT/OFFSET/ORDER BY now parenthesizes the embedded selects
2341-----------------------------------------------------------------------------------------------
2342
An issue that, like others, was long driven by SQLite's lack of capabilities
has now been fixed so that the construct works on all supporting backends.   We refer to a query that
2345is a UNION of SELECT statements that themselves contain row-limiting or ordering
2346features which include LIMIT, OFFSET, and/or ORDER BY::
2347
2348    (SELECT x FROM table1 ORDER BY y LIMIT 1) UNION
2349    (SELECT x FROM table2 ORDER BY y LIMIT 2)
2350
2351The above query requires parenthesis within each sub-select in order to
2352group the sub-results correctly.  Production of the above statement in
2353SQLAlchemy Core looks like::
2354
2355    stmt1 = select([table1.c.x]).order_by(table1.c.y).limit(1)
    stmt2 = select([table2.c.x]).order_by(table2.c.y).limit(2)
2357
2358    stmt = union(stmt1, stmt2)
2359
2360Previously, the above construct would not produce parenthesization for the
2361inner SELECT statements, producing a query that fails on all backends.
2362
2363The above formats will **continue to fail on SQLite**; additionally, the format
that includes ORDER BY but no LIMIT/OFFSET will **continue to fail on Oracle**.
2365This is not a backwards-incompatible change, because the queries fail without
2366the parentheses as well; with the fix, the queries at least work on all other
2367databases.
2368
2369In all cases, in order to produce a UNION of limited SELECT statements that
2370also works on SQLite and in all cases on Oracle, the
2371subqueries must be a SELECT of an ALIAS::
2372
2373    stmt1 = select([table1.c.x]).order_by(table1.c.y).limit(1).alias().select()
2374    stmt2 = select([table2.c.x]).order_by(table2.c.y).limit(2).alias().select()
2375
2376    stmt = union(stmt1, stmt2)
2377
2378This workaround works on all SQLAlchemy versions.  In the ORM, it looks like::
2379
2380    stmt1 = session.query(Model1).order_by(Model1.y).limit(1).subquery().select()
2381    stmt2 = session.query(Model2).order_by(Model2.y).limit(1).subquery().select()
2382
2383    stmt = session.query(Model1).from_statement(stmt1.union(stmt2))
2384
2385The behavior here has many parallels to the "join rewriting" behavior
2386introduced in SQLAlchemy 0.9 in :ref:`feature_joins_09`; however in this case
2387we have opted not to add new rewriting behavior to accommodate this
2388case for SQLite.
2389The existing rewriting behavior is very complicated already, and the case of
2390UNIONs with parenthesized SELECT statements is much less common than the
2391"right-nested-join" use case of that feature.
2392
2393:ticket:`2528`
2394
2395
2396Dialect Improvements and Changes - PostgreSQL
2397=============================================
2398
2399.. _change_3529:
2400
2401Support for INSERT..ON CONFLICT (DO UPDATE | DO NOTHING)
2402--------------------------------------------------------
2403
2404The ``ON CONFLICT`` clause of ``INSERT`` added to PostgreSQL as of
2405version 9.5 is now supported using a PostgreSQL-specific version of the
2406:class:`_expression.Insert` object, via :func:`sqlalchemy.dialects.postgresql.dml.insert`.
2407This :class:`_expression.Insert` subclass adds two new methods :meth:`_expression.Insert.on_conflict_do_update`
2408and :meth:`_expression.Insert.on_conflict_do_nothing` which implement the full syntax
2409supported by PostgreSQL 9.5 in this area::
2410
2411    from sqlalchemy.dialects.postgresql import insert
2412
    insert_stmt = insert(my_table).values(
        id='some_id', data='some data to insert')
2415
2416    do_update_stmt = insert_stmt.on_conflict_do_update(
2417        index_elements=[my_table.c.id],
2418        set_=dict(data='some data to update')
2419    )
2420
2421    conn.execute(do_update_stmt)
2422
2423The above will render::
2424
2425    INSERT INTO my_table (id, data)
2426    VALUES (:id, :data)
    ON CONFLICT (id) DO UPDATE SET data = :data_2
2428
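The :meth:`_expression.Insert.on_conflict_do_nothing` method works similarly,
skipping the conflicting row entirely; a brief sketch::

    do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
        index_elements=[my_table.c.id]
    )

    conn.execute(do_nothing_stmt)
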
2429.. seealso::
2430
2431    :ref:`postgresql_insert_on_conflict`
2432
2433:ticket:`3529`
2434
2435.. _change_3499_postgresql:
2436
2437ARRAY and JSON types now correctly specify "unhashable"
2438-------------------------------------------------------
2439
2440As described in :ref:`change_3499`, the ORM relies upon being able to
2441produce a hash function for column values when a query's selected entities
2442mixes full ORM entities with column expressions.   The ``hashable=False``
2443flag is now correctly set on all of PG's "data structure" types, including
2444:class:`_postgresql.ARRAY` and :class:`_postgresql.JSON`.
2445The :class:`_postgresql.JSONB` and :class:`.HSTORE`
2446types already included this flag.  For :class:`_postgresql.ARRAY`,
2447this is conditional based on the :paramref:`.postgresql.ARRAY.as_tuple`
2448flag, however it should no longer be necessary to set this flag
2449in order to have an array value present in a composed ORM row.
2450
2451.. seealso::
2452
2453    :ref:`change_3499`
2454
2455    :ref:`change_3503`
2456
2457:ticket:`3499`
2458
2459.. _change_3503:
2460
2461Correct SQL Types are Established from Indexed Access of ARRAY, JSON, HSTORE
2462----------------------------------------------------------------------------
2463
2464For all three of :class:`_postgresql.ARRAY`, :class:`_postgresql.JSON` and :class:`.HSTORE`,
2465the SQL type assigned to the expression returned by indexed access, e.g.
2466``col[someindex]``, should be correct in all cases.
2467
2468This includes:
2469
2470* The SQL type assigned to indexed access of an :class:`_postgresql.ARRAY` takes into
2471  account the number of dimensions configured.   An :class:`_postgresql.ARRAY` with three
2472  dimensions will return a SQL expression with a type of :class:`_postgresql.ARRAY` of
2473  one less dimension.  Given a column with type ``ARRAY(Integer, dimensions=3)``,
2474  we can now perform this expression::
2475
2476      int_expr = col[5][6][7]   # returns an Integer expression object
2477
2478  Previously, the indexed access to ``col[5]`` would return an expression of
2479  type :class:`.Integer` where we could no longer perform indexed access
2480  for the remaining dimensions, unless we used :func:`.cast` or :func:`.type_coerce`.
2481
2482* The :class:`_postgresql.JSON` and :class:`_postgresql.JSONB` types now mirror what PostgreSQL
2483  itself does for indexed access.  This means that all indexed access for
2484  a :class:`_postgresql.JSON` or :class:`_postgresql.JSONB` type returns an expression that itself
2485  is *always* :class:`_postgresql.JSON` or :class:`_postgresql.JSONB` itself, unless the
2486  :attr:`~.postgresql.JSON.Comparator.astext` modifier is used.   This means that whether
2487  the indexed access of the JSON structure ultimately refers to a string,
2488  list, number, or other JSON structure, PostgreSQL always considers it
2489  to be JSON itself unless it is explicitly cast differently.   Like
2490  the :class:`_postgresql.ARRAY` type, this means that it is now straightforward
2491  to produce JSON expressions with multiple levels of indexed access::
2492
2493    json_expr = json_col['key1']['attr1'][5]
2494
* The "textual" type that is returned by indexed access of :class:`.HSTORE`
  as well as the "textual" type that is returned by indexed access of
  :class:`_postgresql.JSON` and :class:`_postgresql.JSONB` in conjunction with the
  :attr:`~.postgresql.JSON.Comparator.astext` modifier is now configurable; it defaults
  to :class:`.Text` in both cases but can be set to a user-defined
  type using the :paramref:`.postgresql.JSON.astext_type` or
  :paramref:`.postgresql.HSTORE.text_type` parameters; a brief sketch
  follows below.
2502
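As a brief sketch of configuring the "astext" return type (the ``data``
column here is illustrative)::

    from sqlalchemy import Column, Text
    from sqlalchemy.dialects.postgresql import JSON

    data = Column('data', JSON(astext_type=Text()))

    # the expression returned by .astext is now of type Text
    expr = data['some_key'].astext
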
2503.. seealso::
2504
2505  :ref:`change_3503_cast`
2506
2507:ticket:`3499`
2508:ticket:`3487`
2509
2510.. _change_3503_cast:
2511
2512The JSON cast() operation now requires ``.astext`` is called explicitly
2513-----------------------------------------------------------------------
2514
2515As part of the changes in :ref:`change_3503`, the workings of the
2516:meth:`_expression.ColumnElement.cast` operator on :class:`_postgresql.JSON` and
2517:class:`_postgresql.JSONB` no longer implicitly invoke the
2518:attr:`.postgresql.JSON.Comparator.astext` modifier; PostgreSQL's JSON/JSONB types
2519support CAST operations to each other without the "astext" aspect.
2520
2521This means that in most cases, an application that was doing this::
2522
2523    expr = json_col['somekey'].cast(Integer)
2524
2525Will now need to change to this::
2526
2527    expr = json_col['somekey'].astext.cast(Integer)
2528
2529
2530.. _change_2729:
2531
2532ARRAY with ENUM will now emit CREATE TYPE for the ENUM
2533------------------------------------------------------
2534
2535A table definition like the following will now emit CREATE TYPE
2536as expected::
2537
2538    enum = Enum(
2539        'manager', 'place_admin', 'carwash_admin',
2540        'parking_admin', 'service_admin', 'tire_admin',
2541        'mechanic', 'carwasher', 'tire_mechanic', name="work_place_roles")
2542
2543    class WorkPlacement(Base):
2544        __tablename__ = 'work_placement'
2545        id = Column(Integer, primary_key=True)
2546        roles = Column(ARRAY(enum))
2547
2548
2549    e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
2550    Base.metadata.create_all(e)
2551
2552emits::
2553
2554    CREATE TYPE work_place_roles AS ENUM (
2555        'manager', 'place_admin', 'carwash_admin', 'parking_admin',
2556        'service_admin', 'tire_admin', 'mechanic', 'carwasher',
2557        'tire_mechanic')
2558
2559    CREATE TABLE work_placement (
2560        id SERIAL NOT NULL,
2561        roles work_place_roles[],
2562        PRIMARY KEY (id)
2563    )
2564
2565
2566:ticket:`2729`
2567
2568Check constraints now reflect
2569-----------------------------
2570
2571The PostgreSQL dialect now supports reflection of CHECK constraints
2572both within the method :meth:`_reflection.Inspector.get_check_constraints` as well
2573as within :class:`_schema.Table` reflection within the :attr:`_schema.Table.constraints`
2574collection.
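
For example, a brief sketch using the inspector (engine URL and table name
are illustrative)::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    insp = inspect(engine)

    # each constraint is represented as a dictionary
    # with "name" and "sqltext" keys
    check_constraints = insp.get_check_constraints("my_table")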
2575
2576"Plain" and "Materialized" views can be inspected separately
2577------------------------------------------------------------
2578
2579The new argument :paramref:`.PGInspector.get_view_names.include`
2580allows specification of which sub-types of views should be returned::
2581
2582    from sqlalchemy import inspect
2583    insp = inspect(engine)
2584
2585    plain_views = insp.get_view_names(include='plain')
2586    all_views = insp.get_view_names(include=('plain', 'materialized'))
2587
2588:ticket:`3588`
2589
2590
2591Added tablespace option to Index
2592--------------------------------
2593
2594The :class:`.Index` object now accepts the argument ``postgresql_tablespace``
2595in order to specify TABLESPACE, the same way as accepted by the
2596:class:`_schema.Table` object.
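
For example, a brief sketch (the index and tablespace names are illustrative)::

    from sqlalchemy import Index

    Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')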
2597
2598.. seealso::
2599
2600    :ref:`postgresql_index_storage`
2601
2602:ticket:`3720`
2603
2604Support for PyGreSQL
2605--------------------
2606
2607The `PyGreSQL <https://pypi.python.org/pypi/PyGreSQL>`_ DBAPI is now supported.
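
A brief sketch of an engine URL for this DBAPI (credentials illustrative)::

    from sqlalchemy import create_engine

    engine = create_engine("postgresql+pygresql://scott:tiger@localhost/test")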
2608
2609.. seealso::
2610
2611    :ref:`dialect-postgresql-pygresql`
2612
2613The "postgres" module is removed
2614--------------------------------
2615
2616The ``sqlalchemy.dialects.postgres`` module, long deprecated, is
2617removed; this has emitted a warning for many years and projects
2618should be calling upon ``sqlalchemy.dialects.postgresql``.
2619Engine URLs of the form ``postgres://`` will still continue to function,
2620however.
2621
2622Support for FOR UPDATE SKIP LOCKED  / FOR NO KEY UPDATE / FOR KEY SHARE
2623-----------------------------------------------------------------------
2624
2625The new parameters :paramref:`.GenerativeSelect.with_for_update.skip_locked`
2626and :paramref:`.GenerativeSelect.with_for_update.key_share`
2627in both Core and ORM apply a modification to a "SELECT...FOR UPDATE"
2628or "SELECT...FOR SHARE" query on the PostgreSQL backend:
2629
2630* SELECT FOR NO KEY UPDATE::
2631
2632    stmt = select([table]).with_for_update(key_share=True)
2633
2634* SELECT FOR UPDATE SKIP LOCKED::
2635
2636    stmt = select([table]).with_for_update(skip_locked=True)
2637
2638* SELECT FOR KEY SHARE::
2639
2640    stmt = select([table]).with_for_update(read=True, key_share=True)
2641
2642Dialect Improvements and Changes - MySQL
2643========================================
2644
2645.. _change_3547:
2646
2647MySQL JSON Support
2648------------------
2649
2650A new type :class:`.mysql.JSON` is added to the MySQL dialect supporting
2651the JSON type newly added to MySQL 5.7.   This type provides both persistence
of JSON and rudimentary indexed access using the ``JSON_EXTRACT``
2653function internally.  An indexable JSON column that works across MySQL
2654and PostgreSQL can be achieved by using the :class:`_types.JSON` datatype
2655common to both MySQL and PostgreSQL.
2656
2657.. seealso::
2658
2659    :ref:`change_3619`
2660
2661:ticket:`3547`
2662
2663.. _change_3332:
2664
2665Added support for AUTOCOMMIT "isolation level"
2666----------------------------------------------
2667
2668The MySQL dialect now accepts the value "AUTOCOMMIT" for the
2669:paramref:`_sa.create_engine.isolation_level` and
2670:paramref:`.Connection.execution_options.isolation_level`
2671parameters::
2672
2673    connection = engine.connect()
2674    connection = connection.execution_options(
2675        isolation_level="AUTOCOMMIT"
2676    )
2677
2678The isolation level makes use of the various "autocommit" attributes
2679provided by most MySQL DBAPIs.
2680
:ticket:`3332`

.. _change_mysql_3216:

No more generation of an implicit KEY for composite primary key w/ AUTO_INCREMENT
---------------------------------------------------------------------------------

The MySQL dialect had the behavior that, if a composite primary key
on an InnoDB table featured AUTO_INCREMENT on a column that was
not the first column, e.g.::

    t = Table(
        'some_table', metadata,
        Column('x', Integer, primary_key=True, autoincrement=False),
        Column('y', Integer, primary_key=True, autoincrement=True),
        mysql_engine='InnoDB'
    )

DDL such as the following would be generated::

    CREATE TABLE some_table (
        x INTEGER NOT NULL,
        y INTEGER NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (x, y),
        KEY idx_autoinc_y (y)
    )ENGINE=InnoDB

Note the above "KEY" with an auto-generated name; this is a change that
found its way into the dialect many years ago in response to the issue that
the AUTO_INCREMENT would otherwise fail on InnoDB without this additional KEY.

This workaround has been removed and replaced with the much better system
of just stating the AUTO_INCREMENT column *first* within the primary key::

    CREATE TABLE some_table (
        x INTEGER NOT NULL,
        y INTEGER NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (y, x)
    )ENGINE=InnoDB

To maintain explicit control of the ordering of primary key columns,
use the :class:`.PrimaryKeyConstraint` construct explicitly (1.1.0b2)
(along with a KEY for the autoincrement column as required by MySQL), e.g.::

    t = Table(
        'some_table', metadata,
        Column('x', Integer, primary_key=True),
        Column('y', Integer, primary_key=True, autoincrement=True),
        PrimaryKeyConstraint('x', 'y'),
        UniqueConstraint('y'),
        mysql_engine='InnoDB'
    )

Along with the change :ref:`change_3216`, composite primary keys with
or without autoincrement are now easier to specify;
:paramref:`_schema.Column.autoincrement`
now defaults to the value ``"auto"`` and the ``autoincrement=False``
directives are no longer needed::

    t = Table(
        'some_table', metadata,
        Column('x', Integer, primary_key=True),
        Column('y', Integer, primary_key=True, autoincrement=True),
        mysql_engine='InnoDB'
    )

Dialect Improvements and Changes - SQLite
=========================================

.. _change_3634:

Right-nested join workaround lifted for SQLite version 3.7.16
-------------------------------------------------------------

In version 0.9, the feature introduced by :ref:`feature_joins_09` went
to great lengths to rewrite joins on SQLite to always use subqueries in
order to achieve a "right-nested-join" effect, as SQLite had not supported
this syntax for many years.  Ironically, the version of SQLite noted in that
migration note, 3.7.15.2, was the *last* version of SQLite to actually have
this limitation!  The next release was 3.7.16, and support for right nested
joins was quietly added.  For 1.1, the specific SQLite version and source
commit where this change was made were identified (SQLite's changelog refers
to it only with the cryptic phrase "Enhance the query optimizer to exploit
transitive join constraints", without linking to any issue number, change
number, or further explanation), and the workarounds present in this change
are now lifted for SQLite when the DBAPI reports that version 3.7.16 or
greater is in effect.

:ticket:`3634`

.. _change_3633:

Dotted column names workaround lifted for SQLite version 3.10.0
---------------------------------------------------------------

The SQLite dialect has long had a workaround for an issue where the database
driver does not report the correct column names for some SQL result sets, in
particular when UNION is used.  The workaround is detailed at
:ref:`sqlite_dotted_column_names`, and requires that SQLAlchemy assume that any
column name with a dot in it is actually a ``tablename.columnname`` combination
delivered via this buggy behavior, with an option to turn it off via the
``sqlite_raw_colnames`` execution option.

As of SQLite version 3.10.0, the bug in UNION and other queries has been fixed;
like the change described in :ref:`change_3634`, SQLite's changelog only
identifies it cryptically as "Added the colUsed field to sqlite3_index_info for
use by the sqlite3_module.xBestIndex method".  SQLAlchemy's translation of
these dotted column names is no longer required with this version, and is
turned off when version 3.10.0 or greater is detected.

Overall, the SQLAlchemy :class:`_engine.ResultProxy` as of the 1.0 series relies much
less on column names in result sets when delivering results for Core and ORM
SQL constructs, so the importance of this issue was already lessened in any
case.

:ticket:`3633`

.. _change_sqlite_schemas:

Improved Support for Remote Schemas
-----------------------------------

The SQLite dialect now implements :meth:`_reflection.Inspector.get_schema_names`
and additionally has improved support for tables and indexes that are
created and reflected from a remote schema, which in SQLite is a
database that is assigned a name via the ``ATTACH`` statement; previously,
the ``CREATE INDEX`` DDL didn't work correctly for a schema-bound table.
In addition, the :meth:`_reflection.Inspector.get_foreign_keys` method now
indicates the given schema in the results.  Cross-schema foreign keys are
still not supported.
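
As a rough sketch of the reflection side, where the database file names
are illustrative and a second database file ``other.db`` is assumed to
exist::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("sqlite:///main.db")
    conn = engine.connect()
    conn.execute("ATTACH DATABASE 'other.db' AS other")

    insp = inspect(conn)
    print(insp.get_schema_names())  # includes "other"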

.. _change_3629:

Reflection of the name of PRIMARY KEY constraints
-------------------------------------------------

The SQLite backend now takes advantage of the "sqlite_master" view
of SQLite in order to extract the name of the primary key constraint
of a table from the original DDL, in the same way that is achieved for
foreign key constraints in recent SQLAlchemy versions.

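A quick sketch, assuming an in-memory database and an illustrative table
with a named constraint::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("sqlite://")
    engine.execute(
        "CREATE TABLE t (id INTEGER, CONSTRAINT pk_t PRIMARY KEY (id))"
    )

    insp = inspect(engine)
    print(insp.get_pk_constraint("t")["name"])  # "pk_t"
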
:ticket:`3629`

Check constraints now reflect
-----------------------------

The SQLite dialect now supports reflection of CHECK constraints,
both via the :meth:`_reflection.Inspector.get_check_constraints` method
as well as within :class:`_schema.Table` reflection, via the
:attr:`_schema.Table.constraints` collection.
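
A minimal sketch of the inspector form, assuming an existing ``engine``
and a table named ``some_table`` that has a CHECK constraint::

    insp = inspect(engine)
    for check in insp.get_check_constraints("some_table"):
        print(check["name"], check["sqltext"])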

ON DELETE and ON UPDATE foreign key phrases now reflect
-------------------------------------------------------

The :class:`_reflection.Inspector` will now include ON DELETE and ON UPDATE
phrases from foreign key constraints on the SQLite dialect, and a
:class:`_schema.ForeignKeyConstraint` object reflected as part of a
:class:`_schema.Table` will also indicate these phrases.
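
At the inspector level, these phrases would appear within the ``options``
dictionary of each foreign key entry; a sketch, assuming an existing
``engine`` and a ``child_table`` carrying such a constraint::

    insp = inspect(engine)
    for fk in insp.get_foreign_keys("child_table"):
        # either key may be absent if no phrase was given in the DDL
        print(fk["options"].get("ondelete"), fk["options"].get("onupdate"))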

Dialect Improvements and Changes - SQL Server
=============================================

.. _change_3534:

Added transaction isolation level support for SQL Server
--------------------------------------------------------

All SQL Server dialects support transaction isolation level settings
via the :paramref:`_sa.create_engine.isolation_level` and
:paramref:`.Connection.execution_options.isolation_level`
parameters.  The four standard levels are supported as well as
``SNAPSHOT``::

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@ms_2008",
        isolation_level="REPEATABLE READ"
    )
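
The level may also be set per-connection; a brief sketch, using the
engine above::

    connection = engine.connect().execution_options(
        isolation_level="SNAPSHOT"
    )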

.. seealso::

    :ref:`mssql_isolation_level`

:ticket:`3534`

.. _change_3504:

String / varlength types no longer represent "max" explicitly on reflection
---------------------------------------------------------------------------

When reflecting a type such as :class:`.String`, :class:`.Text`, etc.
which includes a length, an "un-lengthed" type under SQL Server would
copy the "length" parameter as the value ``"max"``::

    >>> from sqlalchemy import create_engine, inspect
    >>> engine = create_engine('mssql+pyodbc://scott:tiger@ms_2008', echo=True)
    >>> engine.execute("create table s (x varchar(max), y varbinary(max))")
    >>> insp = inspect(engine)
    >>> for col in insp.get_columns("s"):
    ...     print(col['type'].__class__, col['type'].length)
    ...
    <class 'sqlalchemy.sql.sqltypes.VARCHAR'> max
    <class 'sqlalchemy.dialects.mssql.base.VARBINARY'> max

The "length" parameter in the base types is expected to be an integer value
or None only; None indicates unbounded length, which the SQL Server dialect
interprets as "max".  The fix is that these lengths now come
out as None, so that the type objects work in non-SQL Server contexts::

    >>> for col in insp.get_columns("s"):
    ...     print(col['type'].__class__, col['type'].length)
    ...
    <class 'sqlalchemy.sql.sqltypes.VARCHAR'> None
    <class 'sqlalchemy.dialects.mssql.base.VARBINARY'> None

Applications which may have been relying on a direct comparison of the "length"
value to the string "max" should consider the value of ``None`` to mean
the same thing.

:ticket:`3504`

Support for "non clustered" on primary key to allow clustered elsewhere
-----------------------------------------------------------------------

The ``mssql_clustered`` flag available on :class:`.UniqueConstraint`,
:class:`.PrimaryKeyConstraint`, and :class:`.Index` now defaults to ``None``,
and can be set to ``False``, which will render the NONCLUSTERED keyword, in
particular for a primary key, allowing a different index to be used as the
"clustered" index.
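
A minimal sketch of a table whose primary key is non-clustered while a
separate index is clustered (the table and index names here are
illustrative)::

    t = Table(
        'my_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('x', Integer),
        PrimaryKeyConstraint('id', mssql_clustered=False),
        Index('ix_my_table_x', 'x', mssql_clustered=True)
    )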

.. seealso::

    :ref:`mssql_indexes`

.. _change_3434:

The legacy_schema_aliasing flag is now set to False
---------------------------------------------------

SQLAlchemy 1.0.5 introduced the ``legacy_schema_aliasing`` flag to the
MSSQL dialect, allowing so-called "legacy mode" aliasing to be turned off.
This aliasing attempts to turn schema-qualified tables into aliases;
given a table such as::

    account_table = Table(
        'account', metadata,
        Column('id', Integer, primary_key=True),
        Column('info', String(100)),
        schema="customer_schema"
    )

The legacy mode of behavior will attempt to turn a schema-qualified table
name into an alias::

    >>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
    >>> print(account_table.select().compile(eng))
    SELECT account_1.id, account_1.info
    FROM customer_schema.account AS account_1

However, this aliasing has been shown to be unnecessary and in many cases
produces incorrect SQL.

In SQLAlchemy 1.1, the ``legacy_schema_aliasing`` flag now defaults to
``False``, disabling this mode of behavior and allowing the MSSQL dialect
to behave normally with schema-qualified tables.  For applications which
may rely on this behavior, set the flag back to ``True``.

:ticket:`3434`

Dialect Improvements and Changes - Oracle
=========================================

Support for SKIP LOCKED
-----------------------

The new parameter :paramref:`.GenerativeSelect.with_for_update.skip_locked`
in both Core and ORM will generate the "SKIP LOCKED" suffix for a
"SELECT...FOR UPDATE" or "SELECT...FOR SHARE" query.