1==============================
2What's New in SQLAlchemy 0.8?
3==============================
4
5.. admonition:: About this Document
6
7    This document describes changes between SQLAlchemy version 0.7,
8    undergoing maintenance releases as of October, 2012,
9    and SQLAlchemy version 0.8, which is expected for release
10    in early 2013.
11
12    Document date: October 25, 2012
13    Updated: March 9, 2013
14
15Introduction
16============
17
18This guide introduces what's new in SQLAlchemy version 0.8,
19and also documents changes which affect users migrating
20their applications from the 0.7 series of SQLAlchemy to 0.8.
21
22SQLAlchemy releases are closing in on 1.0, and each new
23version since 0.5 features fewer major usage changes.   Most
24applications that are settled into modern 0.7 patterns
25should be movable to 0.8 with no changes. Applications that
26use 0.6 and even 0.5 patterns should be directly migratable
27to 0.8 as well, though larger applications may want to test
28with each interim version.
29
30Platform Support
31================
32
33Targeting Python 2.5 and Up Now
34-------------------------------
35
36SQLAlchemy 0.8 will target Python 2.5 and forward;
37compatibility for Python 2.4 is being dropped.
38
The internals will now be able to make use of Python ternaries
(that is, ``x if y else z``), an improvement over the
``y and x or z`` idiom, which has naturally been the source of
some bugs, as well as context managers (that is, ``with:``) and,
in some cases, ``try:/except:/else:`` blocks, all of which will
help with code readability.
46
47SQLAlchemy will eventually drop 2.5 support as well - when
482.6 is reached as the baseline, SQLAlchemy will move to use
492.6/3.3 in-place compatibility, removing the usage of the
50``2to3`` tool and maintaining a source base that works with
51Python 2 and 3 at the same time.
52
53New ORM Features
54================
55
56.. _feature_relationship_08:
57
58Rewritten :func:`.relationship` mechanics
59-----------------------------------------
60
0.8 features a much improved and more capable system for
determining how :func:`.relationship` should join between two
entities.  The new system includes these features:
64
65* The ``primaryjoin`` argument is **no longer needed** when
66  constructing a :func:`.relationship`   against a class that
67  has multiple foreign key paths to the target.  Only the
68  ``foreign_keys``   argument is needed to specify those
69  columns which should be included:
70
71  ::
72
73
74        class Parent(Base):
75            __tablename__ = 'parent'
76            id = Column(Integer, primary_key=True)
77            child_id_one = Column(Integer, ForeignKey('child.id'))
78            child_id_two = Column(Integer, ForeignKey('child.id'))
79
80            child_one = relationship("Child", foreign_keys=child_id_one)
81            child_two = relationship("Child", foreign_keys=child_id_two)
82
83        class Child(Base):
84            __tablename__ = 'child'
85            id = Column(Integer, primary_key=True)
86
* Relationships against self-referential, composite foreign
  keys where **a column points to itself** are now
  supported.   The canonical case is as follows:
90
91  ::
92
93        class Folder(Base):
94            __tablename__ = 'folder'
95            __table_args__ = (
96              ForeignKeyConstraint(
97                  ['account_id', 'parent_id'],
98                  ['folder.account_id', 'folder.folder_id']),
99            )
100
101            account_id = Column(Integer, primary_key=True)
102            folder_id = Column(Integer, primary_key=True)
103            parent_id = Column(Integer)
104            name = Column(String)
105
106            parent_folder = relationship("Folder",
107                                backref="child_folders",
108                                remote_side=[account_id, folder_id]
109                          )
110
  Above, the ``Folder`` refers to its parent ``Folder``
  joining from ``account_id`` to itself, and ``parent_id``
  to ``folder_id``.  When SQLAlchemy constructs an
  auto-join, it can no longer assume all columns on the
  "remote" side are aliased, and all columns on the "local"
  side are not - the ``account_id`` column is **on both
  sides**.   So the internal relationship mechanics were
  totally rewritten to support an entirely different system
  whereby two copies of ``account_id`` are generated, each
  containing different *annotations* to determine their role
  within the statement.  Note the join condition within a
  basic eager load:
123
124  ::
125
126        SELECT
127            folder.account_id AS folder_account_id,
128            folder.folder_id AS folder_folder_id,
129            folder.parent_id AS folder_parent_id,
130            folder.name AS folder_name,
131            folder_1.account_id AS folder_1_account_id,
132            folder_1.folder_id AS folder_1_folder_id,
133            folder_1.parent_id AS folder_1_parent_id,
134            folder_1.name AS folder_1_name
135        FROM folder
136            LEFT OUTER JOIN folder AS folder_1
137            ON
138                folder_1.account_id = folder.account_id
139                AND folder.folder_id = folder_1.parent_id
140
141        WHERE folder.folder_id = ? AND folder.account_id = ?
142
143* Previously difficult custom join conditions, like those involving
144  functions and/or CASTing of types, will now function as
145  expected in most cases::
146
147    class HostEntry(Base):
148        __tablename__ = 'host_entry'
149
150        id = Column(Integer, primary_key=True)
151        ip_address = Column(INET)
152        content = Column(String(50))
153
154        # relationship() using explicit foreign_keys, remote_side
155        parent_host = relationship("HostEntry",
156                            primaryjoin=ip_address == cast(content, INET),
157                            foreign_keys=content,
158                            remote_side=ip_address
159                        )
160
161  The new :func:`.relationship` mechanics make use of a
162  SQLAlchemy concept known as :term:`annotations`.  These annotations
163  are also available to application code explicitly via
164  the :func:`.foreign` and :func:`.remote` functions, either
165  as a means to improve readability for advanced configurations
166  or to directly inject an exact configuration, bypassing
167  the usual join-inspection heuristics::
168
169    from sqlalchemy.orm import foreign, remote
170
171    class HostEntry(Base):
172        __tablename__ = 'host_entry'
173
174        id = Column(Integer, primary_key=True)
175        ip_address = Column(INET)
176        content = Column(String(50))
177
178        # relationship() using explicit foreign() and remote() annotations
179        # in lieu of separate arguments
180        parent_host = relationship("HostEntry",
181                            primaryjoin=remote(ip_address) == \
182                                    cast(foreign(content), INET),
183                        )
184
185
186.. seealso::
187
188    :ref:`relationship_configure_joins` - a newly revised section on :func:`.relationship`
189    detailing the latest techniques for customizing related attributes and collection
190    access.
191
192:ticket:`1401` :ticket:`610`
193
194.. _feature_orminspection_08:
195
196New Class/Object Inspection System
197----------------------------------
198
199Lots of SQLAlchemy users are writing systems that require
200the ability to inspect the attributes of a mapped class,
201including being able to get at the primary key columns,
202object relationships, plain attributes, and so forth,
203typically for the purpose of building data-marshalling
204systems, like JSON/XML conversion schemes and of course form
205libraries galore.
206
Originally, the :class:`.Table` and :class:`.Column` model were the
only well-documented inspection points.  While SQLAlchemy ORM
models are also fully introspectable, this has never been a fully
stable and supported feature, and users tended to not have a clear
idea of how to get at this information.
213
2140.8 now provides a consistent, stable and fully
215documented API for this purpose, including an inspection
216system which works on mapped classes, instances, attributes,
217and other Core and ORM constructs.  The entrypoint to this
218system is the core-level :func:`.inspect` function.
In most cases, the object returned
is one that's already part of SQLAlchemy's system,
such as :class:`.Mapper`, :class:`.InstanceState`, or
:class:`.Inspector`.  In some cases, new objects have been
added with the job of providing the inspection API in
certain contexts, such as :class:`.AliasedInsp` and
:class:`.AttributeState`.
226
227A walkthrough of some key capabilities follows::
228
229    >>> class User(Base):
230    ...     __tablename__ = 'user'
231    ...     id = Column(Integer, primary_key=True)
232    ...     name = Column(String)
    ...     name_syn = synonym("name")
234    ...     addresses = relationship("Address")
235    ...
236
237    >>> # universal entry point is inspect()
238    >>> b = inspect(User)
239
240    >>> # b in this case is the Mapper
241    >>> b
242    <Mapper at 0x101521950; User>
243
244    >>> # Column namespace
245    >>> b.columns.id
246    Column('id', Integer(), table=<user>, primary_key=True, nullable=False)
247
248    >>> # mapper's perspective of the primary key
249    >>> b.primary_key
250    (Column('id', Integer(), table=<user>, primary_key=True, nullable=False),)
251
252    >>> # MapperProperties available from .attrs
253    >>> b.attrs.keys()
254    ['name_syn', 'addresses', 'id', 'name']
255
256    >>> # .column_attrs, .relationships, etc. filter this collection
257    >>> b.column_attrs.keys()
258    ['id', 'name']
259
260    >>> list(b.relationships)
261    [<sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>]
262
263    >>> # they are also namespaces
264    >>> b.column_attrs.id
265    <sqlalchemy.orm.properties.ColumnProperty object at 0x101525090>
266
267    >>> b.relationships.addresses
268    <sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>
269
270    >>> # point inspect() at a mapped, class level attribute,
271    >>> # returns the attribute itself
272    >>> b = inspect(User.addresses)
273    >>> b
274    <sqlalchemy.orm.attributes.InstrumentedAttribute object at 0x101521fd0>
275
276    >>> # From here we can get the mapper:
277    >>> b.mapper
278    <Mapper at 0x101525810; Address>
279
280    >>> # the parent inspector, in this case a mapper
281    >>> b.parent
282    <Mapper at 0x101521950; User>
283
284    >>> # an expression
285    >>> print(b.expression)
286    "user".id = address.user_id
287
288    >>> # inspect works on instances
289    >>> u1 = User(id=3, name='x')
290    >>> b = inspect(u1)
291
292    >>> # it returns the InstanceState
293    >>> b
294    <sqlalchemy.orm.state.InstanceState object at 0x10152bed0>
295
    >>> # a similar .attrs accessor refers to per-attribute state
297    >>> b.attrs.keys()
298    ['id', 'name_syn', 'addresses', 'name']
299
300    >>> # attribute interface - from attrs, you get a state object
301    >>> b.attrs.id
302    <sqlalchemy.orm.state.AttributeState object at 0x10152bf90>
303
    >>> # this object can give you the current value...
305    >>> b.attrs.id.value
306    3
307
308    >>> # ... current history
309    >>> b.attrs.id.history
310    History(added=[3], unchanged=(), deleted=())
311
312    >>> # InstanceState can also provide session state information
    >>> # let's assume the object is persistent
314    >>> s = Session()
315    >>> s.add(u1)
316    >>> s.commit()
317
    >>> # now we can get the primary key identity, which
    >>> # always works with query.get()
320    >>> b.identity
321    (3,)
322
323    >>> # the mapper level key
324    >>> b.identity_key
325    (<class '__main__.User'>, (3,))
326
327    >>> # state within the session
328    >>> b.persistent, b.transient, b.deleted, b.detached
329    (True, False, False, False)
330
331    >>> # owning session
332    >>> b.session
333    <sqlalchemy.orm.session.Session object at 0x101701150>
334
335.. seealso::
336
337    :ref:`core_inspection_toplevel`
338
339:ticket:`2208`
340
341New with_polymorphic() feature, can be used anywhere
342----------------------------------------------------
343
The :meth:`.Query.with_polymorphic` method allows the user to
specify which tables should be present when querying against
a joined-table entity.   Unfortunately the method is awkward:
it only applies to the first entity in the list, and
otherwise has surprising behaviors both in usage as well as
within the internals.  A new enhancement to the
:func:`.aliased` construct has been added, called
:func:`.with_polymorphic`, which allows any entity to be
"aliased" into a "polymorphic" version of itself, freely
usable anywhere:
354
355::
356
357    from sqlalchemy.orm import with_polymorphic
358    palias = with_polymorphic(Person, [Engineer, Manager])
359    session.query(Company).\
360                join(palias, Company.employees).\
361                filter(or_(Engineer.language=='java', Manager.hair=='pointy'))
362
363.. seealso::
364
365    :ref:`with_polymorphic` - newly updated documentation for polymorphic
366    loading control.
367
368:ticket:`2333`
369
370of_type() works with alias(), with_polymorphic(), any(), has(), joinedload(), subqueryload(), contains_eager()
371--------------------------------------------------------------------------------------------------------------
372
373The :meth:`.PropComparator.of_type` method is used to specify
374a specific subtype to use when constructing SQL expressions along
375a :func:`.relationship` that has a :term:`polymorphic` mapping as its target.
376This method can now be used to target *any number* of target subtypes,
377by combining it with the new :func:`.with_polymorphic` function::
378
379    # use eager loading in conjunction with with_polymorphic targets
380    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
381    q = s.query(DataContainer).\
382                join(DataContainer.jobs.of_type(Job_P)).\
383                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))
384
385The method now works equally well in most places a regular relationship
386attribute is accepted, including with loader functions like
387:func:`.joinedload`, :func:`.subqueryload`, :func:`.contains_eager`,
388and comparison methods like :meth:`.PropComparator.any`
389and :meth:`.PropComparator.has`::
390
391    # use eager loading in conjunction with with_polymorphic targets
392    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
393    q = s.query(DataContainer).\
394                join(DataContainer.jobs.of_type(Job_P)).\
395                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))
396
397    # pass subclasses to eager loads (implicitly applies with_polymorphic)
398    q = s.query(ParentThing).\
399                    options(
400                        joinedload_all(
401                            ParentThing.container,
402                            DataContainer.jobs.of_type(SubJob)
403                    ))
404
405    # control self-referential aliasing with any()/has()
406    Job_A = aliased(Job)
407    q = s.query(Job).join(DataContainer.jobs).\
408                    filter(
409                        DataContainer.jobs.of_type(Job_A).\
410                            any(and_(Job_A.id < Job.id, Job_A.type=='fred')
411                        )
412                    )
413
414.. seealso::
415
416    :ref:`of_type`
417
418:ticket:`2438` :ticket:`1106`
419
420Events Can Be Applied to Unmapped Superclasses
421----------------------------------------------
422
423Mapper and instance events can now be associated with an unmapped
424superclass, where those events will be propagated to subclasses
425as those subclasses are mapped.   The ``propagate=True`` flag
426should be used.  This feature allows events to be associated
427with a declarative base class::
428
    from sqlalchemy import event
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    @event.listens_for(Base, "load", propagate=True)
    def on_load(target, context):
        print("New instance loaded:", target)
436
437    # on_load() will be applied to SomeClass
438    class SomeClass(Base):
439        __tablename__ = 'sometable'
440
441        # ...
442
443:ticket:`2585`
444
445Declarative Distinguishes Between Modules/Packages
446--------------------------------------------------
447
448A key feature of Declarative is the ability to refer
449to other mapped classes using their string name.   The
450registry of class names is now sensitive to the owning
451module and package of a given class.   The classes
452can be referred to via dotted name in expressions::
453
454    class Snack(Base):
455        # ...
456
457        peanuts = relationship("nuts.Peanut",
458                primaryjoin="nuts.Peanut.snack_id == Snack.id")
459
The resolution allows any full or partial
disambiguating package name to be used.   If the
path to a particular class is still ambiguous,
an error is raised.
464
465:ticket:`2338`
466
467
468New DeferredReflection Feature in Declarative
469---------------------------------------------
470
The "deferred reflection" example has been moved to a
supported feature within Declarative.  This feature allows
the construction of declarative mapped classes with only
placeholder ``Table`` metadata, until a ``prepare()`` step
is called, given an ``Engine`` with which to fully reflect
all tables and establish actual mappings.   The system
supports overriding of columns, single and joined
inheritance, as well as distinct bases-per-engine.  A full
declarative configuration can now be created against
existing tables and assembled in one step, once the engine
becomes available:
482
483::
484
485    class ReflectedOne(DeferredReflection, Base):
486        __abstract__ = True
487
488    class ReflectedTwo(DeferredReflection, Base):
489        __abstract__ = True
490
491    class MyClass(ReflectedOne):
492        __tablename__ = 'mytable'
493
494    class MyOtherClass(ReflectedOne):
495        __tablename__ = 'myothertable'
496
497    class YetAnotherClass(ReflectedTwo):
498        __tablename__ = 'yetanothertable'
499
500    ReflectedOne.prepare(engine_one)
501    ReflectedTwo.prepare(engine_two)
502
503.. seealso::
504
505    :class:`.DeferredReflection`
506
507:ticket:`2485`
508
509ORM Classes Now Accepted by Core Constructs
510-------------------------------------------
511
512While the SQL expressions used with :meth:`.Query.filter`,
513such as ``User.id == 5``, have always been compatible for
514use with core constructs such as :func:`.select`, the mapped
515class itself would not be recognized when passed to :func:`.select`,
516:meth:`.Select.select_from`, or :meth:`.Select.correlate`.
517A new SQL registration system allows a mapped class to be
518accepted as a FROM clause within the core::
519
520    from sqlalchemy import select
521
522    stmt = select([User]).where(User.id == 5)
523
524Above, the mapped ``User`` class will expand into
525the :class:`.Table` to which ``User`` is mapped.
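
The same coercion applies to methods like :meth:`.Select.select_from`;
a short sketch, assuming the same mapped ``User`` class::

    from sqlalchemy import select, func

    # the mapped User class is accepted directly as a FROM clause
    stmt = select([func.count()]).select_from(User).where(User.id == 5)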
526
527:ticket:`2245`
528
529Query.update() supports UPDATE..FROM
530-------------------------------------
531
532The new UPDATE..FROM mechanics work in query.update().
533Below, we emit an UPDATE against ``SomeEntity``, adding
534a FROM clause (or equivalent, depending on backend)
535against ``SomeOtherEntity``::
536
537    query(SomeEntity).\
538        filter(SomeEntity.id==SomeOtherEntity.id).\
539        filter(SomeOtherEntity.foo=='bar').\
540        update({"data":"x"})
541
542In particular, updates to joined-inheritance
543entities are supported, provided the target of the UPDATE is local to the
544table being filtered on, or if the parent and child tables
545are mixed, they are joined explicitly in the query.  Below,
546given ``Engineer`` as a joined subclass of ``Person``:
547
548::
549
550    query(Engineer).\
551            filter(Person.id==Engineer.id).\
552            filter(Person.name=='dilbert').\
553            update({"engineer_data":"java"})
554
555would produce:
556
557::
558
559    UPDATE engineer SET engineer_data='java' FROM person
560    WHERE person.id=engineer.id AND person.name='dilbert'
561
562:ticket:`2365`
563
564rollback() will only roll back "dirty" objects from a begin_nested()
565--------------------------------------------------------------------
566
A behavioral change that should improve efficiency for those
users using SAVEPOINT via ``Session.begin_nested()`` - upon
``rollback()``, only those objects that were made dirty
since the last flush will be expired; the rest of the
``Session`` remains intact.  This is because a ROLLBACK to a
SAVEPOINT does not terminate the containing transaction's
isolation, so no expiry is needed except for those changes
that were not flushed in the current transaction.
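
For example, a minimal sketch of the pattern this benefits, assuming a
``Session`` in which ``u1`` and ``u2`` are both persistent objects::

    session.begin_nested()   # establishes a SAVEPOINT

    u1.name = 'newname'      # u1 becomes dirty

    session.rollback()       # ROLLBACK TO SAVEPOINT; only u1 is expired,
                             # u2 and the rest of the Session remain intact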
575
576:ticket:`2452`
577
578Caching Example now uses dogpile.cache
579---------------------------------------
580
581The caching example now uses `dogpile.cache <https://dogpilecache.readthedocs.io/>`_.
582Dogpile.cache is a rewrite of the caching portion
583of Beaker, featuring vastly simpler and faster operation,
584as well as support for distributed locking.
585
Note that the SQLAlchemy APIs used by the Dogpile example as well
as the previous Beaker example have changed slightly; in particular,
this change is needed, as illustrated in the Beaker example::
589
590    --- examples/beaker_caching/caching_query.py
591    +++ examples/beaker_caching/caching_query.py
592    @@ -222,7 +222,8 @@
593
594             """
595             if query._current_path:
596    -            mapper, key = query._current_path[-2:]
597    +            mapper, prop = query._current_path[-2:]
598    +            key = prop.key
599
600                 for cls in mapper.class_.__mro__:
601                     if (cls, key) in self._relationship_options:
602
603.. seealso::
604
605    :mod:`dogpile_caching`
606
607:ticket:`2589`
608
609New Core Features
610==================
611
612Fully extensible, type-level operator support in Core
613-----------------------------------------------------
614
The Core has to date never had any system for adding support
for new SQL operators to Column and other expression
constructs, other than the :meth:`.ColumnOperators.op` method
which is "just enough" to make things work. There has also
never been any system in place for Core which allows the
behavior of existing operators to be overridden.   Up until
now, the only way operators could be flexibly redefined was
in the ORM layer, using :func:`.column_property` given a
``comparator_factory`` argument.   Third party libraries
like GeoAlchemy were therefore forced to be ORM-centric and
rely upon an array of hacks to apply new operations as well
as to get them to propagate correctly.
627
The new operator system in Core adds the one hook that's
been missing all along, which is to associate new and
overridden operators with *types*.   After all, it's
not really a column, CAST operator, or SQL function that
drives what kinds of operations are present; it's the
*type* of the expression.   The implementation details are
minimal - only a few extra methods are added to the core
:class:`.ColumnElement` type so that it consults its
:class:`.TypeEngine` object for an optional set of operators.
New or revised operations can be associated with any type,
either by subclassing an existing type, by using
:class:`.TypeDecorator`, or "globally across-the-board" by
attaching a new :class:`.TypeEngine.Comparator` object to an
existing type class.
642
643For example, to add logarithm support to :class:`.Numeric` types:
644
645::
646
647
648    from sqlalchemy.types import Numeric
649    from sqlalchemy.sql import func
650
651    class CustomNumeric(Numeric):
652        class comparator_factory(Numeric.Comparator):
653            def log(self, other):
654                return func.log(self.expr, other)
655
656The new type is usable like any other type:
657
658::
659
660
661    data = Table('data', metadata,
662              Column('id', Integer, primary_key=True),
663              Column('x', CustomNumeric(10, 5)),
664              Column('y', CustomNumeric(10, 5))
665         )
666
667    stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
668    print(conn.execute(stmt).fetchall())
669
670
671New features which have come from this immediately include
672support for Postgresql's HSTORE type, as well as new
673operations associated with Postgresql's ARRAY
674type.    It also paves the way for existing types to acquire
675lots more operators that are specific to those types, such
676as more string, integer and date operators.
677
678.. seealso::
679
680    :ref:`types_operators`
681
682    :class:`.HSTORE`
683
684:ticket:`2547`
685
686.. _feature_2623:
687
688Multiple-VALUES support for Insert
689-----------------------------------
690
691The :meth:`.Insert.values` method now supports a list of dictionaries,
692which will render a multi-VALUES statement such as
693``VALUES (<row1>), (<row2>), ...``.  This is only relevant to backends which
694support this syntax, including Postgresql, SQLite, and MySQL.  It is
695not the same thing as the usual ``executemany()`` style of INSERT which
696remains unchanged::
697
698    users.insert().values([
699                        {"name": "some name"},
700                        {"name": "some other name"},
701                        {"name": "yet another name"},
702                    ])
703
704.. seealso::
705
706    :meth:`.Insert.values`
707
708:ticket:`2623`
709
710Type Expressions
711-----------------
712
SQL expressions can now be associated with types.  Historically,
:class:`.TypeEngine` has always allowed Python-side conversion
functions which receive both bound parameters as well as result row
values, converting them on the way to and back from the database.
The new feature allows similar functionality, except on the
database side::
719
720    from sqlalchemy.types import String
721    from sqlalchemy import func, Table, Column, MetaData
722
723    class LowerString(String):
724        def bind_expression(self, bindvalue):
725            return func.lower(bindvalue)
726
727        def column_expression(self, col):
728            return func.lower(col)
729
730    metadata = MetaData()
731    test_table = Table(
732            'test_table',
733            metadata,
734            Column('data', LowerString)
735    )
736
737Above, the ``LowerString`` type defines a SQL expression that will be emitted
738whenever the ``test_table.c.data`` column is rendered in the columns
739clause of a SELECT statement::
740
741    >>> print(select([test_table]).where(test_table.c.data == 'HI'))
742    SELECT lower(test_table.data) AS data
743    FROM test_table
744    WHERE test_table.data = lower(:data_1)
745
746This feature is also used heavily by the new release of GeoAlchemy,
747to embed PostGIS expressions inline in SQL based on type rules.
748
749.. seealso::
750
751    :ref:`types_sql_value_processing`
752
753:ticket:`1534`
754
755Core Inspection System
756-----------------------
757
758The :func:`.inspect` function introduced in :ref:`feature_orminspection_08`
759also applies to the core.  Applied to an :class:`.Engine` it produces
760an :class:`.Inspector` object::
761
762    from sqlalchemy import inspect
763    from sqlalchemy import create_engine
764
765    engine = create_engine("postgresql://scott:tiger@localhost/test")
766    insp = inspect(engine)
767    print(insp.get_table_names())
768
769It can also be applied to any :class:`.ClauseElement`, which returns
770the :class:`.ClauseElement` itself, such as :class:`.Table`, :class:`.Column`,
771:class:`.Select`, etc.   This allows it to work fluently between Core
772and ORM constructs.
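
For example, a short sketch of the pass-through behavior for SQL
constructs::

    from sqlalchemy import Table, Column, Integer, MetaData, inspect

    user_table = Table('user', MetaData(),
                    Column('id', Integer, primary_key=True))

    # inspect() on a ClauseElement such as a Table returns the object itself
    assert inspect(user_table) is user_table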
773
774
775New Method :meth:`.Select.correlate_except`
776-------------------------------------------
777
778:func:`.select` now has a method :meth:`.Select.correlate_except`
779which specifies "correlate on all FROM clauses except those
780specified".  It can be used for mapping scenarios where
781a related subquery should correlate normally, except
782against a particular target selectable::
783
784    class SnortEvent(Base):
785        __tablename__ = "event"
786
787        id = Column(Integer, primary_key=True)
788        signature = Column(Integer, ForeignKey("signature.id"))
789
790        signatures = relationship("Signature", lazy=False)
791
792    class Signature(Base):
793        __tablename__ = "signature"
794
795        id = Column(Integer, primary_key=True)
796
797        sig_count = column_property(
798                        select([func.count('*')]).\
799                            where(SnortEvent.signature == id).
800                            correlate_except(SnortEvent)
801                    )
802
803.. seealso::
804
805    :meth:`.Select.correlate_except`
806
807Postgresql HSTORE type
808----------------------
809
810Support for Postgresql's ``HSTORE`` type is now available as
:class:`.postgresql.HSTORE`.   This type makes great use
812of the new operator system to provide a full range of operators
813for HSTORE types, including index access, concatenation,
814and containment methods such as
815:meth:`~.HSTORE.comparator_factory.has_key`,
816:meth:`~.HSTORE.comparator_factory.has_any`, and
817:meth:`~.HSTORE.comparator_factory.matrix`::
818
819    from sqlalchemy.dialects.postgresql import HSTORE
820
821    data = Table('data_table', metadata,
822            Column('id', Integer, primary_key=True),
823            Column('hstore_data', HSTORE)
824        )
825
826    engine.execute(
827        select([data.c.hstore_data['some_key']])
828    ).scalar()
829
830    engine.execute(
831        select([data.c.hstore_data.matrix()])
832    ).scalar()
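
The containment methods render as boolean expressions; a short sketch
using the same ``data`` table as above::

    engine.execute(
        select([data.c.id]).where(data.c.hstore_data.has_key('some_key'))
    ).fetchall()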
833
834
835.. seealso::
836
837    :class:`.postgresql.HSTORE`
838
839    :class:`.postgresql.hstore`
840
841:ticket:`2606`
842
843Enhanced Postgresql ARRAY type
844------------------------------
845
846The :class:`.postgresql.ARRAY` type will accept an optional
847"dimension" argument, pinning it to a fixed number of
848dimensions and greatly improving efficiency when retrieving
849results:
850
851::
852
853    # old way, still works since PG supports N-dimensions per row:
854    Column("my_array", postgresql.ARRAY(Integer))
855
856    # new way, will render ARRAY with correct number of [] in DDL,
857    # will process binds and results more efficiently as we don't need
858    # to guess how many levels deep to go
859    Column("my_array", postgresql.ARRAY(Integer, dimensions=2))
860
861The type also introduces new operators, using the new type-specific
862operator framework.  New operations include indexed access::
863
864    result = conn.execute(
865        select([mytable.c.arraycol[2]])
866    )
867
868slice access in SELECT::
869
870    result = conn.execute(
871        select([mytable.c.arraycol[2:4]])
872    )
873
874slice updates in UPDATE::
875
876    conn.execute(
877        mytable.update().values({mytable.c.arraycol[2:3]: [7, 8]})
878    )
879
880freestanding array literals::
881
882    >>> from sqlalchemy.dialects import postgresql
883    >>> conn.scalar(
884    ...    select([
885    ...        postgresql.array([1, 2]) + postgresql.array([3, 4, 5])
886    ...    ])
887    ...  )
888    [1, 2, 3, 4, 5]
889
890array concatenation, where below, the right side ``[4, 5, 6]`` is coerced into an array literal::
891
892    select([mytable.c.arraycol + [4, 5, 6]])
893
894.. seealso::
895
896    :class:`.postgresql.ARRAY`
897
898    :class:`.postgresql.array`
899
900:ticket:`2441`
901
902New, configurable DATE, TIME types for SQLite
903---------------------------------------------
904
905SQLite has no built-in DATE, TIME, or DATETIME types, and
906instead provides some support for storage of date and time
907values either as strings or integers.   The date and time
types for SQLite are enhanced in 0.8 to be much more
configurable as to the specific format, including making the
"microseconds" portion optional, along with pretty much
everything else.
912
913::
914
915    Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))
916    Column('sometimestamp', sqlite.DATETIME(
917                        storage_format=(
918                                    "%(year)04d%(month)02d%(day)02d"
919                                    "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
920                        ),
                        regexp=r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
922                        )
923                )
924    Column('somedate', sqlite.DATE(
925                        storage_format="%(month)02d/%(day)02d/%(year)04d",
                        regexp=r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
927                    )
928                )
929
Huge thanks to Nate Dub for sprinting on this at PyCon 2012.
931
932.. seealso::
933
934    :class:`.sqlite.DATETIME`
935
936    :class:`.sqlite.DATE`
937
938    :class:`.sqlite.TIME`
939
940:ticket:`2363`
941
942"COLLATE" supported across all dialects; in particular MySQL, Postgresql, SQLite
943--------------------------------------------------------------------------------
944
945The "collate" keyword, long accepted by the MySQL dialect, is now established
946on all :class:`.String` types and will render on any backend, including
when features such as :meth:`.MetaData.create_all` and :func:`.cast` are used::
948
949    >>> stmt = select([cast(sometable.c.somechar, String(20, collation='utf8'))])
950    >>> print(stmt)
951    SELECT CAST(sometable.somechar AS VARCHAR(20) COLLATE "utf8") AS anon_1
952    FROM sometable
953
954.. seealso::
955
956    :class:`.String`
957
958:ticket:`2276`
959
960"Prefixes" now supported for :func:`.update`, :func:`.delete`
961-------------------------------------------------------------
962
963Geared towards MySQL, a "prefix" can be rendered within any of
964these constructs.   E.g.::
965
966    stmt = table.delete().prefix_with("LOW_PRIORITY", dialect="mysql")
967
968
969    stmt = table.update().prefix_with("LOW_PRIORITY", dialect="mysql")
970
These methods are new, joining those which already existed
on :func:`.insert`, :func:`.select` and :class:`.Query`.
973
974.. seealso::
975
976    :meth:`.Update.prefix_with`
977
978    :meth:`.Delete.prefix_with`
979
980    :meth:`.Insert.prefix_with`
981
982    :meth:`.Select.prefix_with`
983
984    :meth:`.Query.prefix_with`
985
986:ticket:`2431`
987
988
989Behavioral Changes
990==================
991
992.. _legacy_is_orphan_addition:
993
994The consideration of a "pending" object as an "orphan" has been made more aggressive
995------------------------------------------------------------------------------------
996
This is a late addition to the 0.8 series; however, it is hoped that the new
behavior is generally more consistent and intuitive in a wider variety of
999situations.   The ORM has since at least version 0.4 included behavior
1000such that an object that's "pending", meaning that it's
1001associated with a :class:`.Session` but hasn't been inserted into the database
1002yet, is automatically expunged from the :class:`.Session` when it becomes an "orphan",
1003which means it has been de-associated with a parent object that refers to it
1004with ``delete-orphan`` cascade on the configured :func:`.relationship`.   This
1005behavior is intended to approximately mirror the behavior of a persistent
1006(that is, already inserted) object, where the ORM will emit a DELETE for such
1007objects that become orphans based on the interception of detachment events.
1008
1009The behavioral change comes into play for objects that
1010are referred to by multiple kinds of parents that each specify ``delete-orphan``; the
1011typical example is an :ref:`association object <association_pattern>` that bridges two other kinds of objects
1012in a many-to-many pattern.   Previously, the behavior was such that the
1013pending object would be expunged only when de-associated with *all* of its parents.
1014With the behavioral change, the pending object
1015is expunged as soon as it is de-associated from *any* of the parents that it was
1016previously associated with.  This behavior is intended to more closely
1017match that of persistent objects, which are deleted as soon
1018as they are de-associated from any parent.
1019
1020The rationale for the older behavior dates back
1021at least to version 0.4, and was basically a defensive decision to try to alleviate
1022confusion when an object was still being constructed for INSERT.   But the reality
1023is that the object is re-associated with the :class:`.Session` as soon as it is
1024attached to any new parent in any case.
1025
1026It's still possible to flush an object
1027that is not associated with all of its required parents, if the object was either
1028not associated with those parents in the first place, or if it was expunged, but then
1029re-associated with a :class:`.Session` via a subsequent attachment event but still
1030not fully associated.   In this situation, it is expected that the database
1031would emit an integrity error, as there are likely NOT NULL foreign key columns
1032that are unpopulated.   The ORM makes the decision to let these INSERT attempts
1033occur, based on the judgment that an object that is only partially associated with
1034its required parents but has been actively associated with some of them,
1035is more often than not a user error, rather than an intentional
1036omission which should be silently skipped - silently skipping the INSERT here would
1037make user errors of this nature very hard to debug.
1038
1039The old behavior, for applications that might have been relying upon it, can be re-enabled for
1040any :class:`.Mapper` by specifying the flag ``legacy_is_orphan`` as a mapper
1041option.
1042
1043The new behavior allows the following test case to work::
1044
1045    from sqlalchemy import Column, Integer, String, ForeignKey
1046    from sqlalchemy.orm import relationship, backref
1047    from sqlalchemy.ext.declarative import declarative_base
1048
1049    Base = declarative_base()
1050
1051    class User(Base):
1052        __tablename__ = 'user'
1053        id = Column(Integer, primary_key=True)
1054        name = Column(String(64))
1055
1056    class UserKeyword(Base):
1057        __tablename__ = 'user_keyword'
1058        user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
1059        keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
1060
1061        user = relationship(User,
1062                    backref=backref("user_keywords",
1063                                    cascade="all, delete-orphan")
1064                )
1065
1066        keyword = relationship("Keyword",
1067                    backref=backref("user_keywords",
1068                                    cascade="all, delete-orphan")
1069                )
1070
1071        # uncomment this to enable the old behavior
1072        # __mapper_args__ = {"legacy_is_orphan": True}
1073
1074    class Keyword(Base):
1075        __tablename__ = 'keyword'
1076        id = Column(Integer, primary_key=True)
1077        keyword = Column('keyword', String(64))
1078
1079    from sqlalchemy import create_engine
1080    from sqlalchemy.orm import Session
1081
1082    # note we're using Postgresql to ensure that referential integrity
1083    # is enforced, for demonstration purposes.
1084    e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
1085
1086    Base.metadata.drop_all(e)
1087    Base.metadata.create_all(e)
1088
1089    session = Session(e)
1090
1091    u1 = User(name="u1")
1092    k1 = Keyword(keyword="k1")
1093
1094    session.add_all([u1, k1])
1095
1096    uk1 = UserKeyword(keyword=k1, user=u1)
1097
1098    # previously, if session.flush() were called here,
1099    # this operation would succeed, but if session.flush()
1100    # were not called here, the operation fails with an
1101    # integrity error.
1102    # session.flush()
1103    del u1.user_keywords[0]
1104
1105    session.commit()
1106
1107
1108:ticket:`2655`
1109
1110The after_attach event fires after the item is associated with the Session instead of before; before_attach added
1111-----------------------------------------------------------------------------------------------------------------
1112
1113Event handlers which use after_attach can now assume the
1114given instance is associated with the given session:
1115
1116::
1117
1118    @event.listens_for(Session, "after_attach")
1119    def after_attach(session, instance):
1120        assert instance in session
1121
1122Some use cases require that it work this way.  However,
1123other use cases require that the item is *not* yet part of
1124the session, such as when a query, intended to load some
1125state required for an instance, emits autoflush first and
1126would otherwise prematurely flush the target object.  Those
1127use cases should use the new "before_attach" event:
1128
1129::
1130
1131    @event.listens_for(Session, "before_attach")
1132    def before_attach(session, instance):
1133        instance.some_necessary_attribute = session.query(Widget).\
                                                filter_by(name=instance.widget_name).\
1135                                                first()
1136
1137:ticket:`2464`
1138
1139
1140
1141Query now auto-correlates like a select() does
1142----------------------------------------------
1143
1144Previously it was necessary to call :meth:`.Query.correlate` in
1145order to have a column- or WHERE-subquery correlate to the
1146parent:
1147
1148::
1149
1150    subq = session.query(Entity.value).\
1151                    filter(Entity.id==Parent.entity_id).\
1152                    correlate(Parent).\
1153                    as_scalar()
1154    session.query(Parent).filter(subq=="some value")
1155
1156This was the opposite behavior of a plain ``select()``
1157construct which would assume auto-correlation by default.
1158The above statement in 0.8 will correlate automatically:
1159
1160::
1161
1162    subq = session.query(Entity.value).\
1163                    filter(Entity.id==Parent.entity_id).\
1164                    as_scalar()
1165    session.query(Parent).filter(subq=="some value")
1166
As with ``select()``, correlation can be disabled by calling
``query.correlate(None)``, or set manually by passing an
entity, ``query.correlate(someentity)``.
1170
1171:ticket:`2179`
1172
1173.. _correlation_context_specific:
1174
1175Correlation is now always context-specific
1176------------------------------------------
1177
1178To allow a wider variety of correlation scenarios, the behavior of
1179:meth:`.Select.correlate` and :meth:`.Query.correlate` has changed slightly
1180such that the SELECT statement will omit the "correlated" target from the
1181FROM clause only if the statement is actually used in that context.  Additionally,
1182it's no longer possible for a SELECT statement that's placed as a FROM
1183in an enclosing SELECT statement to "correlate" (i.e. omit) a FROM clause.
1184
1185This change only makes things better as far as rendering SQL, in that it's no
1186longer possible to render illegal SQL where there are insufficient FROM
1187objects relative to what's being selected::
1188
1189    from sqlalchemy.sql import table, column, select
1190
1191    t1 = table('t1', column('x'))
1192    t2 = table('t2', column('y'))
1193    s = select([t1, t2]).correlate(t1)
1194
1195    print(s)
1196
1197Prior to this change, the above would return::
1198
1199    SELECT t1.x, t2.y FROM t2
1200
1201which is invalid SQL as "t1" is not referred to in any FROM clause.
1202
1203Now, in the absence of an enclosing SELECT, it returns::
1204
1205    SELECT t1.x, t2.y FROM t1, t2
1206
1207Within a SELECT, the correlation takes effect as expected::
1208
1209    s2 = select([t1, t2]).where(t1.c.x == t2.c.y).where(t1.c.x == s)
1210
1211    print(s2)
1212
1213    SELECT t1.x, t2.y FROM t1, t2
1214    WHERE t1.x = t2.y AND t1.x =
1215        (SELECT t1.x, t2.y FROM t2)
1216
1217This change is not expected to impact any existing applications, as
1218the correlation behavior remains identical for properly constructed
1219expressions.  Only an application that relies, most likely within a
1220testing scenario, on the invalid string output of a correlated
1221SELECT used in a non-correlating context would see any change.
1222
1223:ticket:`2668`
1224
1225
1226.. _metadata_create_drop_tables:
1227
1228create_all() and drop_all() will now honor an empty list as such
1229----------------------------------------------------------------
1230
1231The methods :meth:`.MetaData.create_all` and :meth:`.MetaData.drop_all`
1232will now accept a list of :class:`.Table` objects that is empty,
1233and will not emit any CREATE or DROP statements.  Previously,
an empty list was interpreted the same as passing ``None``
1235for a collection, and CREATE/DROP would be emitted for all
1236items unconditionally.
1237
1238This is a bug fix but some applications may have been relying upon
1239the previous behavior.
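
A quick sketch of the new behavior, given a ``MetaData`` and an
``Engine``::

    # emits CREATE TABLE for all tables in the metadata
    metadata.create_all(engine)

    # as of 0.8, emits nothing; previously this would also
    # create all tables
    metadata.create_all(engine, tables=[])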
1240
1241:ticket:`2664`
1242
1243Repaired the Event Targeting of :class:`.InstrumentationEvents`
1244----------------------------------------------------------------
1245
The :class:`.InstrumentationEvents` series of event targets is
documented such that events will only be fired off according to
the actual class passed as a target.  Through 0.7, this wasn't the
case, and any event listener applied to :class:`.InstrumentationEvents`
would be invoked for all classes mapped.  In 0.8, additional
logic has been added so that the events will only be invoked for those
classes sent in.  The ``propagate`` flag here is set to ``True``
1253by default as class instrumentation events are typically used to
1254intercept classes that aren't yet created.
1255
1256:ticket:`2590`
1257
1258No more magic coercion of "=" to IN when comparing to subquery in MS-SQL
1259------------------------------------------------------------------------
1260
1261We found a very old behavior in the MSSQL dialect which
1262would attempt to rescue users from themselves when
1263doing something like this:
1264
1265::
1266
1267    scalar_subq = select([someothertable.c.id]).where(someothertable.c.data=='foo')
1268    select([sometable]).where(sometable.c.id==scalar_subq)
1269
1270SQL Server doesn't allow an equality comparison to a scalar
1271SELECT, that is, "x = (SELECT something)". The MSSQL dialect
would convert this to an IN.   The same thing would happen,
however, upon a comparison like "(SELECT something) = x", and
overall this level of guessing is outside of SQLAlchemy's
usual scope, so the behavior is removed.
1276
1277:ticket:`2277`
1278
1279Fixed the behavior of :meth:`.Session.is_modified`
1280--------------------------------------------------
1281
The :meth:`.Session.is_modified` method accepts an argument
``passive`` which basically should not be necessary; the
argument in all cases should be the value ``True``.  When
left at its default of ``False``, it would have the effect of
hitting the database and often triggering autoflush, which
would itself change the results.   In 0.8 the ``passive``
argument has no effect, and unloaded attributes will
never be checked for history, since by definition there can
be no pending state change on an unloaded attribute.
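
A short sketch of typical usage in 0.8, given a mapped object ``obj``
present in a ``session``::

    # True if obj has pending changes on its loaded attributes
    session.is_modified(obj)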
1291
1292.. seealso::
1293
1294    :meth:`.Session.is_modified`
1295
1296:ticket:`2320`
1297
1298:attr:`.Column.key` is honored in the :attr:`.Select.c` attribute of :func:`.select` with :meth:`.Select.apply_labels`
1299-----------------------------------------------------------------------------------------------------------------------
1300
1301Users of the expression system know that :meth:`.Select.apply_labels`
1302prepends the table name to each column name, affecting the
1303names that are available from :attr:`.Select.c`:
1304
1305::
1306
1307    s = select([table1]).apply_labels()
1308    s.c.table1_col1
1309    s.c.table1_col2
1310
1311Before 0.8, if the :class:`.Column` had a different :attr:`.Column.key`, this
1312key would be ignored, inconsistently versus when
1313:meth:`.Select.apply_labels` were not used:
1314
1315::
1316
1317    # before 0.8
1318    table1 = Table('t1', metadata,
1319        Column('col1', Integer, key='column_one')
1320    )
1321    s = select([table1])
1322    s.c.column_one # would be accessible like this
1323    s.c.col1 # would raise AttributeError
1324
1325    s = select([table1]).apply_labels()
1326    s.c.table1_column_one # would raise AttributeError
1327    s.c.table1_col1 # would be accessible like this
1328
1329In 0.8, :attr:`.Column.key` is honored in both cases:
1330
1331::
1332
1333    # with 0.8
1334    table1 = Table('t1', metadata,
1335        Column('col1', Integer, key='column_one')
1336    )
1337    s = select([table1])
1338    s.c.column_one # works
1339    s.c.col1 # AttributeError
1340
1341    s = select([table1]).apply_labels()
1342    s.c.table1_column_one # works
1343    s.c.table1_col1 # AttributeError
1344
All other behavior regarding "name" and "key" is the same,
including that the rendered SQL will still use the form
``<tablename>_<colname>`` - the emphasis here was on
preventing the :attr:`.Column.key` contents from being rendered into the
``SELECT`` statement so that there are no issues with
special / non-ASCII characters used in the :attr:`.Column.key`.
1351
1352:ticket:`2397`
1353
1354single_parent warning is now an error
1355-------------------------------------
1356
1357A :func:`.relationship` that is many-to-one or many-to-many and
1358specifies "cascade='all, delete-orphan'", which is an
1359awkward but nonetheless supported use case (with
restrictions), will now raise an error if the relationship
1361does not specify the ``single_parent=True`` option.
1362Previously it would only emit a warning, but a failure would
1363follow almost immediately within the attribute system in any
1364case.
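
A minimal sketch of a configuration that now requires the flag, using
hypothetical ``Parent`` and ``Child`` classes::

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))

        # many-to-one with delete-orphan cascade; omitting
        # single_parent=True here now raises an error at mapper
        # configuration time, rather than emitting a warning
        parent = relationship("Parent",
                        cascade="all, delete-orphan",
                        single_parent=True)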
1365
1366:ticket:`2405`
1367
1368Adding the ``inspector`` argument to the ``column_reflect`` event
1369-----------------------------------------------------------------
1370
13710.7 added a new event called ``column_reflect``, provided so
1372that the reflection of columns could be augmented as each
one is reflected.   We got this event slightly wrong in
1374that the event gave no way to get at the current
1375``Inspector`` and ``Connection`` being used for the
1376reflection, in the case that additional information from the
1377database is needed.   As this is a new event not widely used
1378yet, we'll be adding the ``inspector`` argument into it
1379directly:
1380
1381::
1382
1383    @event.listens_for(Table, "column_reflect")
1384    def listen_for_col(inspector, table, column_info):
1385        # ...
1386
1387:ticket:`2418`
1388
1389Disabling auto-detect of collations, casing for MySQL
1390-----------------------------------------------------
1391
1392The MySQL dialect does two calls, one very expensive, to
1393load all possible collations from the database as well as
1394information on casing, the first time an ``Engine``
connects.   Neither of these collections is used for any
1396SQLAlchemy functions, so these calls will be changed to no
1397longer be emitted automatically. Applications that might
1398have relied on these collections being present on
1399``engine.dialect`` will need to call upon
1400``_detect_collations()`` and ``_detect_casing()`` directly.
1401
1402:ticket:`2404`
1403
1404"Unconsumed column names" warning becomes an exception
1405------------------------------------------------------
1406
1407Referring to a non-existent column in an ``insert()`` or
1408``update()`` construct will raise an error instead of a
1409warning:
1410
1411::
1412
1413    t1 = table('t1', column('x'))
1414    t1.insert().values(x=5, z=5) # raises "Unconsumed column names: z"
1415
1416:ticket:`2415`
1417
1418Inspector.get_primary_keys() is deprecated, use Inspector.get_pk_constraint
1419---------------------------------------------------------------------------
1420
1421These two methods on ``Inspector`` were redundant, where
1422``get_primary_keys()`` would return the same information as
1423``get_pk_constraint()`` minus the name of the constraint:
1424
1425::
1426
1427    >>> insp.get_primary_keys()
1428    ["a", "b"]
1429
1430    >>> insp.get_pk_constraint()
1431    {"name":"pk_constraint", "constrained_columns":["a", "b"]}
1432
1433:ticket:`2422`
1434
1435Case-insensitive result row names will be disabled in most cases
1436----------------------------------------------------------------
1437
Under a very old behavior, the column names in ``RowProxy`` were
always compared case-insensitively:
1440
1441::
1442
1443    >>> row = result.fetchone()
1444    >>> row['foo'] == row['FOO'] == row['Foo']
1445    True
1446
1447This was for the benefit of a few dialects which in the
1448early days needed this, like Oracle and Firebird, but in
1449modern usage we have more accurate ways of dealing with the
1450case-insensitive behavior of these two platforms.
1451
Going forward, this behavior will be available only
optionally, by passing the flag ``case_sensitive=False``
to ``create_engine()``; otherwise, column names
requested from the row must match with regards to casing.
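
A short sketch of the option, using the same database URL style as the
other examples in this document::

    engine = create_engine("postgresql://scott:tiger@localhost/test",
                            case_sensitive=False)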
1456
1457:ticket:`2423`
1458
1459``InstrumentationManager`` and alternate class instrumentation is now an extension
1460----------------------------------------------------------------------------------
1461
1462The ``sqlalchemy.orm.interfaces.InstrumentationManager``
1463class is moved to
1464``sqlalchemy.ext.instrumentation.InstrumentationManager``.
1465The "alternate instrumentation" system was built for the
1466benefit of a very small number of installations that needed
1467to work with existing or unusual class instrumentation
1468systems, and generally is very seldom used.   The complexity
1469of this system has been exported to an ``ext.`` module.  It
remains unused until imported, typically when a third
1471party library imports ``InstrumentationManager``, at which
1472point it is injected back into ``sqlalchemy.orm`` by
1473replacing the default ``InstrumentationFactory`` with
1474``ExtendedInstrumentationRegistry``.
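
For those libraries, the new import location is::

    from sqlalchemy.ext.instrumentation import InstrumentationManager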
1475
1476Removed
1477=======
1478
1479SQLSoup
1480-------
1481
1482SQLSoup is a handy package that presents an alternative
1483interface on top of the SQLAlchemy ORM.   SQLSoup is now
1484moved into its own project and documented/released
1485separately; see https://bitbucket.org/zzzeek/sqlsoup.
1486
1487SQLSoup is a very simple tool that could also benefit from
1488contributors who are interested in its style of usage.
1489
1490:ticket:`2262`
1491
1492MutableType
1493-----------
1494
1495The older "mutable" system within the SQLAlchemy ORM has
1496been removed.   This refers to the ``MutableType`` interface
1497which was applied to types such as ``PickleType`` and
1498conditionally to ``TypeDecorator``, and since very early
1499SQLAlchemy versions has provided a way for the ORM to detect
1500changes in so-called "mutable" data structures such as JSON
1501structures and pickled objects.   However, the
1502implementation was never reasonable and forced a very
1503inefficient mode of usage on the unit-of-work which caused
1504an expensive scan of all objects to take place during flush.
In 0.7, the
`sqlalchemy.ext.mutable <http://docs.sqlalchemy.org/en/latest/orm/extensions/mutable.html>`_
extension was
1507introduced so that user-defined datatypes can appropriately
1508send events to the unit of work as changes occur.
1509
1510Today, usage of ``MutableType`` is expected to be low, as
1511warnings have been in place for some years now regarding its
1512inefficiency.
1513
1514:ticket:`2442`
1515
1516sqlalchemy.exceptions (has been sqlalchemy.exc for years)
1517---------------------------------------------------------
1518
1519We had left in an alias ``sqlalchemy.exceptions`` to attempt
1520to make it slightly easier for some very old libraries that
1521hadn't yet been upgraded to use ``sqlalchemy.exc``.  Some
users are still being confused by it, however, so in 0.8 we're
1523taking it out entirely to eliminate any of that confusion.
1524
1525:ticket:`2433`
1526
1527