=============================
What's New in SQLAlchemy 0.8?
=============================

.. admonition:: About this Document

    This document describes changes between SQLAlchemy version 0.7,
    undergoing maintenance releases as of October, 2012,
    and SQLAlchemy version 0.8, which is expected for release
    in early 2013.

    Document date: October 25, 2012
    Updated: March 9, 2013

Introduction
============

This guide introduces what's new in SQLAlchemy version 0.8,
and also documents changes which affect users migrating
their applications from the 0.7 series of SQLAlchemy to 0.8.

SQLAlchemy releases are closing in on 1.0, and each new
version since 0.5 features fewer major usage changes.  Most
applications that are settled into modern 0.7 patterns
should be movable to 0.8 with no changes.  Applications that
use 0.6 and even 0.5 patterns should be directly migratable
to 0.8 as well, though larger applications may want to test
with each interim version.

Platform Support
================

Targeting Python 2.5 and Up Now
-------------------------------

SQLAlchemy 0.8 will target Python 2.5 and forward;
compatibility for Python 2.4 is being dropped.

The internals will be able to make use of Python ternary
expressions (that is, ``x if y else z``), an improvement over
the ``y and x or z`` idiom, which has naturally been a source
of bugs, as well as context managers (that is, ``with:``) and
in some cases ``try:/except:/else:`` blocks, all of which
will help with code readability.

SQLAlchemy will eventually drop 2.5 support as well - when
2.6 is reached as the baseline, SQLAlchemy will move to use
2.6/3.3 in-place compatibility, removing the usage of the
``2to3`` tool and maintaining a source base that works with
Python 2 and 3 at the same time.
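The bug-proneness of the ``y and x or z`` idiom mentioned above is easy to
demonstrate: it silently returns the wrong branch whenever the "true" value is
itself falsy, which the 2.5 conditional expression avoids.  A minimal,
self-contained illustration (the function names are for illustration only)::

    def pick_legacy(flag, a, b):
        # pre-2.5 idiom: intended as "a if flag else b", but falls
        # through to b whenever a is falsy (0, '', None, ...)
        return flag and a or b

    def pick_ternary(flag, a, b):
        # Python 2.5+ conditional expression: always correct
        return a if flag else b

    print(pick_legacy(True, 0, 99))   # 99 - wrong, since 0 is falsy
    print(pick_ternary(True, 0, 99))  # 0 - correct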

New ORM Features
================

.. _feature_relationship_08:

Rewritten :func:`_orm.relationship` mechanics
----------------------------------------------

0.8 features a much improved and more capable system for
how :func:`_orm.relationship` determines how to join between two
entities.  The new system includes these features:

* The ``primaryjoin`` argument is **no longer needed** when
  constructing a :func:`_orm.relationship` against a class that
  has multiple foreign key paths to the target.  Only the
  ``foreign_keys`` argument is needed to specify those
  columns which should be included:

  ::

        class Parent(Base):
            __tablename__ = 'parent'
            id = Column(Integer, primary_key=True)
            child_id_one = Column(Integer, ForeignKey('child.id'))
            child_id_two = Column(Integer, ForeignKey('child.id'))

            child_one = relationship("Child", foreign_keys=child_id_one)
            child_two = relationship("Child", foreign_keys=child_id_two)

        class Child(Base):
            __tablename__ = 'child'
            id = Column(Integer, primary_key=True)

* Relationships against self-referential, composite foreign
  keys where **a column points to itself** are now
  supported.  The canonical case is as follows:

  ::

        class Folder(Base):
            __tablename__ = 'folder'
            __table_args__ = (
              ForeignKeyConstraint(
                  ['account_id', 'parent_id'],
                  ['folder.account_id', 'folder.folder_id']),
            )

            account_id = Column(Integer, primary_key=True)
            folder_id = Column(Integer, primary_key=True)
            parent_id = Column(Integer)
            name = Column(String)

            parent_folder = relationship("Folder",
                                backref="child_folders",
                                remote_side=[account_id, folder_id]
                          )

  Above, a ``Folder`` refers to its parent ``Folder``,
  joining from ``account_id`` to itself, and from ``parent_id``
  to ``folder_id``.  When SQLAlchemy constructs an auto-join,
  it can no longer assume that all columns on the "remote"
  side are aliased and all columns on the "local" side are
  not - the ``account_id`` column is **on both sides**.  So
  the internal relationship mechanics were totally rewritten
  to support an entirely different system, whereby two copies
  of ``account_id`` are generated, each containing different
  *annotations* to determine their role within the
  statement.  Note the join condition within a basic eager
  load:

  ::

        SELECT
            folder.account_id AS folder_account_id,
            folder.folder_id AS folder_folder_id,
            folder.parent_id AS folder_parent_id,
            folder.name AS folder_name,
            folder_1.account_id AS folder_1_account_id,
            folder_1.folder_id AS folder_1_folder_id,
            folder_1.parent_id AS folder_1_parent_id,
            folder_1.name AS folder_1_name
        FROM folder
            LEFT OUTER JOIN folder AS folder_1
            ON
                folder_1.account_id = folder.account_id
                AND folder.folder_id = folder_1.parent_id
        WHERE folder.folder_id = ? AND folder.account_id = ?

* Previously difficult custom join conditions, like those involving
  functions and/or CASTing of types, will now function as
  expected in most cases::

    class HostEntry(Base):
        __tablename__ = 'host_entry'

        id = Column(Integer, primary_key=True)
        ip_address = Column(INET)
        content = Column(String(50))

        # relationship() using explicit foreign_keys, remote_side
        parent_host = relationship("HostEntry",
                            primaryjoin=ip_address == cast(content, INET),
                            foreign_keys=content,
                            remote_side=ip_address
                        )

  The new :func:`_orm.relationship` mechanics make use of a
  SQLAlchemy concept known as :term:`annotations`.  These annotations
  are also available to application code explicitly via
  the :func:`.foreign` and :func:`.remote` functions, either
  as a means to improve readability for advanced configurations
  or to directly inject an exact configuration, bypassing
  the usual join-inspection heuristics::

    from sqlalchemy.orm import foreign, remote

    class HostEntry(Base):
        __tablename__ = 'host_entry'

        id = Column(Integer, primary_key=True)
        ip_address = Column(INET)
        content = Column(String(50))

        # relationship() using explicit foreign() and remote() annotations
        # in lieu of separate arguments
        parent_host = relationship("HostEntry",
                            primaryjoin=remote(ip_address) == \
                                    cast(foreign(content), INET),
                        )

.. seealso::

    :ref:`relationship_configure_joins` - a newly revised section on :func:`_orm.relationship`
    detailing the latest techniques for customizing related attributes and collection
    access.

:ticket:`1401` :ticket:`610`

.. _feature_orminspection_08:

New Class/Object Inspection System
----------------------------------

Many SQLAlchemy users are writing systems that require
the ability to inspect the attributes of a mapped class,
including being able to get at the primary key columns,
object relationships, plain attributes, and so forth,
typically for the purpose of building data-marshalling
systems, like JSON/XML conversion schemes and of course form
libraries galore.

The :class:`_schema.Table` and :class:`_schema.Column` model were the
original inspection points, and these have a well-documented
system.  While SQLAlchemy ORM models are also fully
introspectable, this has never been a fully stable and
supported feature, and users tended to not have a clear idea
how to get at this information.

0.8 now provides a consistent, stable and fully
documented API for this purpose, including an inspection
system which works on mapped classes, instances, attributes,
and other Core and ORM constructs.  The entry point to this
system is the core-level :func:`_sa.inspect` function.
In most cases, the object being inspected
is one already part of SQLAlchemy's system,
such as :class:`_orm.Mapper`, :class:`.InstanceState`, or
:class:`_reflection.Inspector`.  In some cases, new objects have been
added with the job of providing the inspection API in
certain contexts, such as :class:`.AliasedInsp` and
:class:`.AttributeState`.

A walkthrough of some key capabilities follows::

    >>> class User(Base):
    ...     __tablename__ = 'user'
    ...     id = Column(Integer, primary_key=True)
    ...     name = Column(String)
    ...     name_syn = synonym(name)
    ...     addresses = relationship("Address")
    ...

    >>> # universal entry point is inspect()
    >>> b = inspect(User)

    >>> # b in this case is the Mapper
    >>> b
    <Mapper at 0x101521950; User>

    >>> # Column namespace
    >>> b.columns.id
    Column('id', Integer(), table=<user>, primary_key=True, nullable=False)

    >>> # mapper's perspective of the primary key
    >>> b.primary_key
    (Column('id', Integer(), table=<user>, primary_key=True, nullable=False),)

    >>> # MapperProperties available from .attrs
    >>> b.attrs.keys()
    ['name_syn', 'addresses', 'id', 'name']

    >>> # .column_attrs, .relationships, etc. filter this collection
    >>> b.column_attrs.keys()
    ['id', 'name']

    >>> list(b.relationships)
    [<sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>]

    >>> # they are also namespaces
    >>> b.column_attrs.id
    <sqlalchemy.orm.properties.ColumnProperty object at 0x101525090>

    >>> b.relationships.addresses
    <sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>

    >>> # pointing inspect() at a mapped, class-level attribute
    >>> # returns the attribute itself
    >>> b = inspect(User.addresses)
    >>> b
    <sqlalchemy.orm.attributes.InstrumentedAttribute object at 0x101521fd0>

    >>> # from here we can get the mapper:
    >>> b.mapper
    <Mapper at 0x101525810; Address>

    >>> # the parent inspector, in this case a mapper
    >>> b.parent
    <Mapper at 0x101521950; User>

    >>> # an expression
    >>> print(b.expression)
    "user".id = address.user_id

    >>> # inspect works on instances
    >>> u1 = User(id=3, name='x')
    >>> b = inspect(u1)

    >>> # it returns the InstanceState
    >>> b
    <sqlalchemy.orm.state.InstanceState object at 0x10152bed0>

    >>> # a similar .attrs accessor refers to the per-instance
    >>> # state of each attribute
    >>> b.attrs.keys()
    ['id', 'name_syn', 'addresses', 'name']

    >>> # attribute interface - from attrs, you get a state object
    >>> b.attrs.id
    <sqlalchemy.orm.state.AttributeState object at 0x10152bf90>

    >>> # this object can give you the current value...
    >>> b.attrs.id.value
    3

    >>> # ... and the current history
    >>> b.attrs.id.history
    History(added=[3], unchanged=(), deleted=())

    >>> # InstanceState can also provide session state information;
    >>> # let's assume the object is persistent
    >>> s = Session()
    >>> s.add(u1)
    >>> s.commit()

    >>> # now we can get the primary key identity; this always
    >>> # works with query.get()
    >>> b.identity
    (3,)

    >>> # the mapper level key
    >>> b.identity_key
    (<class '__main__.User'>, (3,))

    >>> # state within the session
    >>> b.persistent, b.transient, b.deleted, b.detached
    (True, False, False, False)

    >>> # owning session
    >>> b.session
    <sqlalchemy.orm.session.Session object at 0x101701150>

.. seealso::

    :ref:`core_inspection_toplevel`

:ticket:`2208`

New with_polymorphic() feature, can be used anywhere
----------------------------------------------------

The :meth:`_query.Query.with_polymorphic` method allows the user to
specify which tables should be present when querying against
a joined-table entity.  Unfortunately the method is awkward,
only applies to the first entity in the list, and otherwise
has surprising behavior both in usage as well as
within the internals.  A new enhancement to the
:func:`.aliased` construct has been added called
:func:`.with_polymorphic` which allows any entity to be
"aliased" into a "polymorphic" version of itself, freely
usable anywhere:

::

    from sqlalchemy.orm import with_polymorphic
    palias = with_polymorphic(Person, [Engineer, Manager])
    session.query(Company).\
                join(palias, Company.employees).\
                filter(or_(Engineer.language=='java', Manager.hair=='pointy'))

.. seealso::

    :ref:`with_polymorphic` - newly updated documentation for polymorphic
    loading control.

:ticket:`2333`

of_type() works with alias(), with_polymorphic(), any(), has(), joinedload(), subqueryload(), contains_eager()
---------------------------------------------------------------------------------------------------------------

The :meth:`.PropComparator.of_type` method is used to specify
a specific subtype to use when constructing SQL expressions along
a :func:`_orm.relationship` that has a :term:`polymorphic` mapping as its target.
This method can now be used to target *any number* of target subtypes,
by combining it with the new :func:`.with_polymorphic` function::

    # use eager loading in conjunction with with_polymorphic targets
    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
    q = s.query(DataContainer).\
                join(DataContainer.jobs.of_type(Job_P)).\
                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))

The method now works equally well in most places a regular relationship
attribute is accepted, including with loader functions like
:func:`_orm.joinedload`, :func:`.subqueryload`, :func:`.contains_eager`,
and comparison methods like :meth:`.PropComparator.any`
and :meth:`.PropComparator.has`::

    # use eager loading in conjunction with with_polymorphic targets
    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
    q = s.query(DataContainer).\
                join(DataContainer.jobs.of_type(Job_P)).\
                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))

    # pass subclasses to eager loads (implicitly applies with_polymorphic)
    q = s.query(ParentThing).\
                    options(
                        joinedload_all(
                            ParentThing.container,
                            DataContainer.jobs.of_type(SubJob)
                    ))

    # control self-referential aliasing with any()/has()
    Job_A = aliased(Job)
    q = s.query(Job).join(DataContainer.jobs).\
                    filter(
                        DataContainer.jobs.of_type(Job_A).\
                            any(and_(Job_A.id < Job.id, Job_A.type=='fred')
                        )
                    )

.. seealso::

    :ref:`inheritance_of_type`

:ticket:`2438` :ticket:`1106`

Events Can Be Applied to Unmapped Superclasses
----------------------------------------------

Mapper and instance events can now be associated with an unmapped
superclass, where those events will be propagated to subclasses
as those subclasses are mapped.  The ``propagate=True`` flag
should be used.  This feature allows events to be associated
with a declarative base class::

    from sqlalchemy import event
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    @event.listens_for(Base, "load", propagate=True)
    def on_load(target, context):
        print("New instance loaded:", target)

    # on_load() will be applied to SomeClass
    class SomeClass(Base):
        __tablename__ = 'sometable'

        # ...

:ticket:`2585`

Declarative Distinguishes Between Modules/Packages
--------------------------------------------------

A key feature of Declarative is the ability to refer
to other mapped classes using their string name.  The
registry of class names is now sensitive to the owning
module and package of a given class.  The classes
can be referred to via dotted name in expressions::

    class Snack(Base):
        # ...

        peanuts = relationship("nuts.Peanut",
                primaryjoin="nuts.Peanut.snack_id == Snack.id")

The resolution allows any full or partial
disambiguating package name to be used.  If the
path to a particular class is still ambiguous,
an error is raised.

:ticket:`2338`


New DeferredReflection Feature in Declarative
---------------------------------------------

The "deferred reflection" example has been moved to a
supported feature within Declarative.  This feature allows
the construction of declarative mapped classes with only
placeholder ``Table`` metadata, until a ``prepare()`` step
is called, given an ``Engine`` with which to fully reflect
all tables and establish actual mappings.  The system
supports overriding of columns, single and joined
inheritance, as well as distinct bases-per-engine.  A full
declarative configuration can now be created against
existing tables in one step, with the actual mappings
assembled once an engine is available:

::

    class ReflectedOne(DeferredReflection, Base):
        __abstract__ = True

    class ReflectedTwo(DeferredReflection, Base):
        __abstract__ = True

    class MyClass(ReflectedOne):
        __tablename__ = 'mytable'

    class MyOtherClass(ReflectedOne):
        __tablename__ = 'myothertable'

    class YetAnotherClass(ReflectedTwo):
        __tablename__ = 'yetanothertable'

    ReflectedOne.prepare(engine_one)
    ReflectedTwo.prepare(engine_two)

.. seealso::

    :class:`.DeferredReflection`

:ticket:`2485`

ORM Classes Now Accepted by Core Constructs
-------------------------------------------

While the SQL expressions used with :meth:`_query.Query.filter`,
such as ``User.id == 5``, have always been compatible for
use with core constructs such as :func:`_expression.select`, the mapped
class itself would not be recognized when passed to :func:`_expression.select`,
:meth:`_expression.Select.select_from`, or :meth:`_expression.Select.correlate`.
A new SQL registration system allows a mapped class to be
accepted as a FROM clause within the core::

    from sqlalchemy import select

    stmt = select([User]).where(User.id == 5)

Above, the mapped ``User`` class will expand into
the :class:`_schema.Table` to which ``User`` is mapped.

:ticket:`2245`

.. _change_orm_2365:

Query.update() supports UPDATE..FROM
------------------------------------

The new UPDATE..FROM mechanics work in ``Query.update()``.
Below, we emit an UPDATE against ``SomeEntity``, adding
a FROM clause (or equivalent, depending on backend)
against ``SomeOtherEntity``::

    query(SomeEntity).\
        filter(SomeEntity.id==SomeOtherEntity.id).\
        filter(SomeOtherEntity.foo=='bar').\
        update({"data":"x"})

In particular, updates to joined-inheritance
entities are supported, provided the target of the UPDATE is local to the
table being filtered on, or if the parent and child tables
are mixed, they are joined explicitly in the query.  Below,
given ``Engineer`` as a joined subclass of ``Person``:

::

    query(Engineer).\
            filter(Person.id==Engineer.id).\
            filter(Person.name=='dilbert').\
            update({"engineer_data":"java"})

would produce:

::

    UPDATE engineer SET engineer_data='java' FROM person
    WHERE person.id=engineer.id AND person.name='dilbert'

:ticket:`2365`

rollback() will only roll back "dirty" objects from a begin_nested()
--------------------------------------------------------------------

A behavioral change that should improve efficiency for those
users using SAVEPOINT via ``Session.begin_nested()`` - upon
``rollback()``, only those objects that were made dirty
since the last flush will be expired; the rest of the
``Session`` remains intact.  This is because a ROLLBACK to a
SAVEPOINT does not terminate the containing transaction's
isolation, so no expiry is needed except for those changes
that were not flushed in the current transaction.

:ticket:`2452`

Caching Example now uses dogpile.cache
--------------------------------------

The caching example now uses `dogpile.cache <https://dogpilecache.readthedocs.io/>`_.
Dogpile.cache is a rewrite of the caching portion
of Beaker, featuring vastly simpler and faster operation,
as well as support for distributed locking.

Note that the SQLAlchemy APIs used by the dogpile example as well
as the previous Beaker example have changed slightly; in particular,
this change is needed, as illustrated in the Beaker example::

    --- examples/beaker_caching/caching_query.py
    +++ examples/beaker_caching/caching_query.py
    @@ -222,7 +222,8 @@

             """
             if query._current_path:
    -            mapper, key = query._current_path[-2:]
    +            mapper, prop = query._current_path[-2:]
    +            key = prop.key

                 for cls in mapper.class_.__mro__:
                     if (cls, key) in self._relationship_options:

.. seealso::

    :mod:`dogpile_caching`

:ticket:`2589`

New Core Features
=================

Fully extensible, type-level operator support in Core
-----------------------------------------------------

The Core has to date never had any system of adding support
for new SQL operators to Column and other expression
constructs, other than the :meth:`.ColumnOperators.op` method
which is "just enough" to make things work.  There has also
never been any system in place for Core which allows the
behavior of existing operators to be overridden.  Up until
now, the only way operators could be flexibly redefined was
in the ORM layer, using :func:`.column_property` given a
``comparator_factory`` argument.  Third party libraries
like GeoAlchemy therefore were forced to be ORM-centric and
rely upon an array of hacks to apply new operations as well
as to get them to propagate correctly.

The new operator system in Core adds the one hook that's
been missing all along, which is to associate new and
overridden operators with *types*.  After all, it's not
really a column, CAST operator, or SQL function that
drives what kinds of operations are present; it's the
*type* of the expression.  The implementation details are
minimal - only a few extra methods are added to the core
:class:`_expression.ColumnElement` type so that it consults its
:class:`.TypeEngine` object for an optional set of operators.
New or revised operations can be associated with any type,
either via subclassing of an existing type, by using
:class:`.TypeDecorator`, or "globally across-the-board" by
attaching a new :class:`.TypeEngine.Comparator` object to an existing type
class.

For example, to add logarithm support to :class:`.Numeric` types:

::

    from sqlalchemy.types import Numeric
    from sqlalchemy.sql import func

    class CustomNumeric(Numeric):
        class comparator_factory(Numeric.Comparator):
            def log(self, other):
                return func.log(self.expr, other)

The new type is usable like any other type:

::

    data = Table('data', metadata,
              Column('id', Integer, primary_key=True),
              Column('x', CustomNumeric(10, 5)),
              Column('y', CustomNumeric(10, 5))
         )

    stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
    print(conn.execute(stmt).fetchall())

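Because the custom operator is resolved at expression-construction time, its
effect can be observed without any database connection by simply stringifying
the expression.  A minimal, self-contained sketch (the exact bind-parameter
name in the rendered SQL may vary by version)::

    from sqlalchemy import Column, Integer, MetaData, Table
    from sqlalchemy.sql import func
    from sqlalchemy.types import Numeric

    class CustomNumeric(Numeric):
        # attach a new operator to the type via its comparator
        class comparator_factory(Numeric.Comparator):
            def log(self, other):
                return func.log(self.expr, other)

    metadata = MetaData()
    data = Table('data', metadata,
                 Column('id', Integer, primary_key=True),
                 Column('x', CustomNumeric(10, 5)))

    # the custom .log() operator is now available on the column
    expr = data.c.x.log(2)
    print(expr)  # renders as a log() function call, e.g. log(data.x, :log_1)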
New features which have come from this immediately include
support for PostgreSQL's HSTORE type, as well as new
operations associated with PostgreSQL's ARRAY
type.  It also paves the way for existing types to acquire
lots more operators that are specific to those types, such
as more string, integer and date operators.

.. seealso::

    :ref:`types_operators`

    :class:`.HSTORE`

:ticket:`2547`

.. _feature_2623:

Multiple-VALUES support for Insert
----------------------------------

The :meth:`_expression.Insert.values` method now supports a list of dictionaries,
which will render a multi-VALUES statement such as
``VALUES (<row1>), (<row2>), ...``.  This is only relevant to backends which
support this syntax, including PostgreSQL, SQLite, and MySQL.  It is
not the same thing as the usual ``executemany()`` style of INSERT which
remains unchanged::

    users.insert().values([
                        {"name": "some name"},
                        {"name": "some other name"},
                        {"name": "yet another name"},
                    ])

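One way to see the multi-VALUES rendering without connecting to a database is
to compile the statement against a specific dialect.  A sketch, using an
illustrative ``users`` table and the SQLite dialect (the exact parameter
rendering may vary by version)::

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.dialects import sqlite

    metadata = MetaData()
    users = Table('users', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('name', String(50)))

    stmt = users.insert().values([
        {"name": "some name"},
        {"name": "some other name"},
    ])

    # compiling against the SQLite dialect shows the multi-VALUES form,
    # with one parenthesized group per row
    sql = str(stmt.compile(dialect=sqlite.dialect()))
    print(sql)  # e.g. INSERT INTO users (name) VALUES (?), (?)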
.. seealso::

    :meth:`_expression.Insert.values`

:ticket:`2623`

Type Expressions
----------------

SQL expressions can now be associated with types.  Historically,
:class:`.TypeEngine` has always allowed Python-side conversion functions
which receive both bound parameters as well as result row values, passing
them through on the way to and back from the database.  The new feature
allows similar functionality, except on the database side::

    from sqlalchemy.types import String
    from sqlalchemy import func, Table, Column, MetaData

    class LowerString(String):
        def bind_expression(self, bindvalue):
            return func.lower(bindvalue)

        def column_expression(self, col):
            return func.lower(col)

    metadata = MetaData()
    test_table = Table(
            'test_table',
            metadata,
            Column('data', LowerString)
    )

Above, the ``LowerString`` type defines a SQL expression that will be emitted
whenever the ``test_table.c.data`` column is rendered in the columns
clause of a SELECT statement::

    >>> print(select([test_table]).where(test_table.c.data == 'HI'))
    SELECT lower(test_table.data) AS data
    FROM test_table
    WHERE test_table.data = lower(:data_1)

This feature is also used heavily by the new release of GeoAlchemy,
to embed PostGIS expressions inline in SQL based on type rules.

.. seealso::

    :ref:`types_sql_value_processing`

:ticket:`1534`

Core Inspection System
----------------------

The :func:`_sa.inspect` function introduced in :ref:`feature_orminspection_08`
also applies to the core.  Applied to an :class:`_engine.Engine` it produces
an :class:`_reflection.Inspector` object::

    from sqlalchemy import inspect
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    insp = inspect(engine)
    print(insp.get_table_names())

It can also be applied to any :class:`_expression.ClauseElement`, which returns
the :class:`_expression.ClauseElement` itself, such as :class:`_schema.Table`, :class:`_schema.Column`,
:class:`_expression.Select`, etc.  This allows it to work fluently between Core
and ORM constructs.

New Method :meth:`_expression.Select.correlate_except`
-------------------------------------------------------

:func:`_expression.select` now has a method :meth:`_expression.Select.correlate_except`
which specifies "correlate on all FROM clauses except those
specified".  It can be used for mapping scenarios where
a related subquery should correlate normally, except
against a particular target selectable::

    class SnortEvent(Base):
        __tablename__ = "event"

        id = Column(Integer, primary_key=True)
        signature = Column(Integer, ForeignKey("signature.id"))

        signatures = relationship("Signature", lazy=False)

    class Signature(Base):
        __tablename__ = "signature"

        id = Column(Integer, primary_key=True)

        sig_count = column_property(
                        select([func.count('*')]).\
                            where(SnortEvent.signature == id).
                            correlate_except(SnortEvent)
                    )

.. seealso::

    :meth:`_expression.Select.correlate_except`

PostgreSQL HSTORE type
----------------------

Support for PostgreSQL's ``HSTORE`` type is now available as
:class:`_postgresql.HSTORE`.  This type makes great usage
of the new operator system to provide a full range of operators
for HSTORE types, including index access, concatenation,
and containment methods such as
:meth:`~.HSTORE.comparator_factory.has_key`,
:meth:`~.HSTORE.comparator_factory.has_any`, and
:meth:`~.HSTORE.comparator_factory.matrix`::

    from sqlalchemy.dialects.postgresql import HSTORE

    data = Table('data_table', metadata,
            Column('id', Integer, primary_key=True),
            Column('hstore_data', HSTORE)
        )

    engine.execute(
        select([data.c.hstore_data['some_key']])
    ).scalar()

    engine.execute(
        select([data.c.hstore_data.matrix()])
    ).scalar()

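Since these operators are resolved at expression-construction time, the SQL
they produce can be previewed without a live PostgreSQL connection by
compiling against the PostgreSQL dialect.  A sketch (the table and column
names are illustrative)::

    from sqlalchemy import Column, Integer, MetaData, Table
    from sqlalchemy.dialects import postgresql
    from sqlalchemy.dialects.postgresql import HSTORE

    metadata = MetaData()
    data = Table('data_table', metadata,
                 Column('id', Integer, primary_key=True),
                 Column('hstore_data', HSTORE))

    # index access renders as the hstore "->" operator
    expr = data.c.hstore_data['some_key']
    sql = str(expr.compile(dialect=postgresql.dialect()))
    print(sql)  # e.g. data_table.hstore_data -> %(hstore_data_1)s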
.. seealso::

    :class:`_postgresql.HSTORE`

    :class:`_postgresql.hstore`

:ticket:`2606`

843Enhanced PostgreSQL ARRAY type
844------------------------------
845
846The :class:`_postgresql.ARRAY` type will accept an optional
847"dimension" argument, pinning it to a fixed number of
848dimensions and greatly improving efficiency when retrieving
849results:
850
851::
852
853    # old way, still works since PG supports N-dimensions per row:
854    Column("my_array", postgresql.ARRAY(Integer))
855
856    # new way, will render ARRAY with correct number of [] in DDL,
857    # will process binds and results more efficiently as we don't need
858    # to guess how many levels deep to go
859    Column("my_array", postgresql.ARRAY(Integer, dimensions=2))
860
861The type also introduces new operators, using the new type-specific
862operator framework.  New operations include indexed access::
863
864    result = conn.execute(
865        select([mytable.c.arraycol[2]])
866    )
867
868slice access in SELECT::
869
870    result = conn.execute(
871        select([mytable.c.arraycol[2:4]])
872    )
873
874slice updates in UPDATE::
875
876    conn.execute(
877        mytable.update().values({mytable.c.arraycol[2:3]: [7, 8]})
878    )
879
880freestanding array literals::
881
882    >>> from sqlalchemy.dialects import postgresql
883    >>> conn.scalar(
884    ...    select([
885    ...        postgresql.array([1, 2]) + postgresql.array([3, 4, 5])
886    ...    ])
887    ...  )
888    [1, 2, 3, 4, 5]
889
890array concatenation, where below, the right side ``[4, 5, 6]`` is coerced into an array literal::
891
892    select([mytable.c.arraycol + [4, 5, 6]])
893
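Here as well, the generated SQL can be checked without a database by compiling an expression against the PostgreSQL dialect; a minimal sketch of the array-literal concatenation above:

```python
from sqlalchemy.dialects import postgresql

# concatenating two array literals renders PostgreSQL's "||" operator
expr = postgresql.array([1, 2]) + postgresql.array([3, 4, 5])
print(expr.compile(dialect=postgresql.dialect()))
# renders e.g.:
# ARRAY[%(param_1)s, %(param_2)s] || ARRAY[%(param_3)s, %(param_4)s, %(param_5)s]
```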
.. seealso::

    :class:`_postgresql.ARRAY`

    :class:`_postgresql.array`

:ticket:`2441`

New, configurable DATE, TIME types for SQLite
---------------------------------------------

SQLite has no built-in DATE, TIME, or DATETIME types, and
instead provides some support for storage of date and time
values either as strings or integers.   The date and time
types for SQLite are enhanced in 0.8 to be much more
configurable as to the specific format, including that the
"microseconds" portion is optional, as well as pretty much
everything else.

::

    Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))
    Column('sometimestamp', sqlite.DATETIME(
                        storage_format=(
                                    "%(year)04d%(month)02d%(day)02d"
                                    "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
                        ),
                        regexp=r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
                        )
                )
    Column('somedate', sqlite.DATE(
                        storage_format="%(month)02d/%(day)02d/%(year)04d",
                        regexp=r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
                    )
                )

Huge thanks to Nate Dub for sprinting on this at PyCon 2012.

.. seealso::

    :class:`_sqlite.DATETIME`

    :class:`_sqlite.DATE`

    :class:`_sqlite.TIME`

:ticket:`2363`

"COLLATE" supported across all dialects; in particular MySQL, PostgreSQL, SQLite
--------------------------------------------------------------------------------

The "collate" keyword, long accepted by the MySQL dialect, is now established
on all :class:`.String` types and will render on any backend, including
when features such as :meth:`_schema.MetaData.create_all` and :func:`.cast` are used::

    >>> stmt = select([cast(sometable.c.somechar, String(20, collation='utf8'))])
    >>> print(stmt)
    SELECT CAST(sometable.somechar AS VARCHAR(20) COLLATE "utf8") AS anon_1
    FROM sometable

.. seealso::

    :class:`.String`

:ticket:`2276`

"Prefixes" now supported for :func:`_expression.update`, :func:`_expression.delete`
------------------------------------------------------------------------------------

Geared towards MySQL, a "prefix" can be rendered within any of
these constructs.   E.g.::

    stmt = table.delete().prefix_with("LOW_PRIORITY", dialect="mysql")

    stmt = table.update().prefix_with("LOW_PRIORITY", dialect="mysql")

This method complements the same method already present
on :func:`_expression.insert`, :func:`_expression.select` and :class:`_query.Query`.
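Because the prefix above is tied to the ``mysql`` dialect name, its effect is visible only when the statement is compiled for MySQL; a minimal sketch using a throwaway table:

```python
from sqlalchemy.sql import table, column
from sqlalchemy.dialects import mysql

t = table("t", column("x"))

# the prefix is registered for the "mysql" dialect only
stmt = t.delete().prefix_with("LOW_PRIORITY", dialect="mysql")

# compiled for MySQL, the prefix renders after DELETE
print(stmt.compile(dialect=mysql.dialect()))   # DELETE LOW_PRIORITY FROM t

# compiled with the default dialect, it is omitted
print(stmt)                                    # DELETE FROM t
```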

.. seealso::

    :meth:`_expression.Update.prefix_with`

    :meth:`_expression.Delete.prefix_with`

    :meth:`_expression.Insert.prefix_with`

    :meth:`_expression.Select.prefix_with`

    :meth:`_query.Query.prefix_with`

:ticket:`2431`


Behavioral Changes
==================

.. _legacy_is_orphan_addition:

The consideration of a "pending" object as an "orphan" has been made more aggressive
------------------------------------------------------------------------------------

This is a late addition to the 0.8 series, however it is hoped that the new behavior
is generally more consistent and intuitive in a wider variety of
situations.   The ORM has since at least version 0.4 included behavior
such that an object that's "pending", meaning that it's
associated with a :class:`.Session` but hasn't been inserted into the database
yet, is automatically expunged from the :class:`.Session` when it becomes an "orphan",
which means it has been de-associated with a parent object that refers to it
with ``delete-orphan`` cascade on the configured :func:`_orm.relationship`.   This
behavior is intended to approximately mirror the behavior of a persistent
(that is, already inserted) object, where the ORM will emit a DELETE for such
objects that become orphans based on the interception of detachment events.

The behavioral change comes into play for objects that
are referred to by multiple kinds of parents that each specify ``delete-orphan``; the
typical example is an :ref:`association object <association_pattern>` that bridges two other kinds of objects
in a many-to-many pattern.   Previously, the behavior was such that the
pending object would be expunged only when de-associated with *all* of its parents.
With the behavioral change, the pending object
is expunged as soon as it is de-associated from *any* of the parents that it was
previously associated with.  This behavior is intended to more closely
match that of persistent objects, which are deleted as soon
as they are de-associated from any parent.

The rationale for the older behavior dates back
at least to version 0.4, and was basically a defensive decision to try to alleviate
confusion when an object was still being constructed for INSERT.   But the reality
is that the object is re-associated with the :class:`.Session` as soon as it is
attached to any new parent in any case.

It's still possible to flush an object
that is not associated with all of its required parents, if the object was either
not associated with those parents in the first place, or if it was expunged, but then
re-associated with a :class:`.Session` via a subsequent attachment event but still
not fully associated.   In this situation, it is expected that the database
would emit an integrity error, as there are likely NOT NULL foreign key columns
that are unpopulated.   The ORM makes the decision to let these INSERT attempts
occur, based on the judgment that an object that is only partially associated with
its required parents but has been actively associated with some of them,
is more often than not a user error, rather than an intentional
omission which should be silently skipped - silently skipping the INSERT here would
make user errors of this nature very hard to debug.

The old behavior, for applications that might have been relying upon it, can be re-enabled for
any :class:`_orm.Mapper` by specifying the flag ``legacy_is_orphan`` as a mapper
option.

The new behavior allows the following test case to work::

    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(64))

    class UserKeyword(Base):
        __tablename__ = 'user_keyword'
        user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
        keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)

        user = relationship(User,
                    backref=backref("user_keywords",
                                    cascade="all, delete-orphan")
                )

        keyword = relationship("Keyword",
                    backref=backref("user_keywords",
                                    cascade="all, delete-orphan")
                )

        # uncomment this to enable the old behavior
        # __mapper_args__ = {"legacy_is_orphan": True}

    class Keyword(Base):
        __tablename__ = 'keyword'
        id = Column(Integer, primary_key=True)
        keyword = Column('keyword', String(64))

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session

    # note we're using PostgreSQL to ensure that referential integrity
    # is enforced, for demonstration purposes.
    e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)

    Base.metadata.drop_all(e)
    Base.metadata.create_all(e)

    session = Session(e)

    u1 = User(name="u1")
    k1 = Keyword(keyword="k1")

    session.add_all([u1, k1])

    uk1 = UserKeyword(keyword=k1, user=u1)

    # previously, if session.flush() were called here,
    # this operation would succeed, but if session.flush()
    # were not called here, the operation fails with an
    # integrity error.
    # session.flush()
    del u1.user_keywords[0]

    session.commit()


:ticket:`2655`

The after_attach event fires after the item is associated with the Session instead of before; before_attach added
-----------------------------------------------------------------------------------------------------------------

Event handlers which use ``after_attach`` can now assume the
given instance is associated with the given session:

::

    @event.listens_for(Session, "after_attach")
    def after_attach(session, instance):
        assert instance in session

Some use cases require that it work this way.  However,
other use cases require that the item is *not* yet part of
the session, such as when a query, intended to load some
state required for an instance, emits autoflush first and
would otherwise prematurely flush the target object.  Those
use cases should use the new ``before_attach`` event:

::

    @event.listens_for(Session, "before_attach")
    def before_attach(session, instance):
        instance.some_necessary_attribute = session.query(Widget).\
                                                filter_by(name=instance.widget_name).\
                                                first()

:ticket:`2464`



Query now auto-correlates like a select() does
----------------------------------------------

Previously it was necessary to call :meth:`_query.Query.correlate` in
order to have a column- or WHERE-subquery correlate to the
parent:

::

    subq = session.query(Entity.value).\
                    filter(Entity.id==Parent.entity_id).\
                    correlate(Parent).\
                    as_scalar()
    session.query(Parent).filter(subq=="some value")

This was the opposite behavior of a plain ``select()``
construct which would assume auto-correlation by default.
The above statement in 0.8 will correlate automatically:

::

    subq = session.query(Entity.value).\
                    filter(Entity.id==Parent.entity_id).\
                    as_scalar()
    session.query(Parent).filter(subq=="some value")

As with ``select()``, correlation can be disabled by calling
``query.correlate(None)``, or set manually by passing an
entity, ``query.correlate(someentity)``.

:ticket:`2179`

.. _correlation_context_specific:

Correlation is now always context-specific
------------------------------------------

To allow a wider variety of correlation scenarios, the behavior of
:meth:`_expression.Select.correlate` and :meth:`_query.Query.correlate` has changed slightly
such that the SELECT statement will omit the "correlated" target from the
FROM clause only if the statement is actually used in that context.  Additionally,
it's no longer possible for a SELECT statement that's placed as a FROM
in an enclosing SELECT statement to "correlate" (i.e. omit) a FROM clause.

This change only makes things better as far as rendering SQL, in that it's no
longer possible to render illegal SQL where there are insufficient FROM
objects relative to what's being selected::

    from sqlalchemy.sql import table, column, select

    t1 = table('t1', column('x'))
    t2 = table('t2', column('y'))
    s = select([t1, t2]).correlate(t1)

    print(s)

Prior to this change, the above would return::

    SELECT t1.x, t2.y FROM t2

which is invalid SQL as "t1" is not referred to in any FROM clause.

Now, in the absence of an enclosing SELECT, it returns::

    SELECT t1.x, t2.y FROM t1, t2

Within a SELECT, the correlation takes effect as expected::

    s2 = select([t1, t2]).where(t1.c.x == t2.c.y).where(t1.c.x == s)

    print(s2)

    SELECT t1.x, t2.y FROM t1, t2
    WHERE t1.x = t2.y AND t1.x =
        (SELECT t1.x, t2.y FROM t2)

This change is not expected to impact any existing applications, as
the correlation behavior remains identical for properly constructed
expressions.  Only an application that relies, most likely within a
testing scenario, on the invalid string output of a correlated
SELECT used in a non-correlating context would see any change.

:ticket:`2668`


.. _metadata_create_drop_tables:

create_all() and drop_all() will now honor an empty list as such
----------------------------------------------------------------

The methods :meth:`_schema.MetaData.create_all` and :meth:`_schema.MetaData.drop_all`
will now accept a list of :class:`_schema.Table` objects that is empty,
and will not emit any CREATE or DROP statements.  Previously,
an empty list was interpreted the same as passing ``None``
for a collection, and CREATE/DROP would be emitted for all
items unconditionally.

This is a bug fix but some applications may have been relying upon
the previous behavior.

:ticket:`2664`
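The distinction can be demonstrated against an in-memory SQLite database, using the new inspection system to verify which tables were actually created; a minimal sketch:

```python
from sqlalchemy import MetaData, Table, Column, Integer, create_engine, inspect

metadata = MetaData()
t = Table("t", metadata, Column("id", Integer, primary_key=True))

e = create_engine("sqlite://")

# an empty list now means "create nothing", rather than being
# treated like None ("create everything")
metadata.create_all(e, tables=[])
print(inspect(e).get_table_names())   # []

metadata.create_all(e, tables=[t])
print(inspect(e).get_table_names())   # ['t']
```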

Repaired the Event Targeting of :class:`.InstrumentationEvents`
---------------------------------------------------------------

The :class:`.InstrumentationEvents` series of event targets have
documented that the events will only be fired off according to
the actual class passed as a target.  Through 0.7, this wasn't the
case, and any event listener applied to :class:`.InstrumentationEvents`
would be invoked for all classes mapped.  In 0.8, additional
logic has been added so that the events will only invoke for those
classes sent in.  The ``propagate`` flag here is set to ``True``
by default as class instrumentation events are typically used to
intercept classes that aren't yet created.

:ticket:`2590`

No more magic coercion of "=" to IN when comparing to subquery in MS-SQL
------------------------------------------------------------------------

We found a very old behavior in the MSSQL dialect which
would attempt to rescue users from themselves when
doing something like this:

::

    scalar_subq = select([someothertable.c.id]).where(someothertable.c.data=='foo')
    select([sometable]).where(sometable.c.id==scalar_subq)

SQL Server doesn't allow an equality comparison to a scalar
SELECT, that is, "x = (SELECT something)". The MSSQL dialect
would convert this to an IN.   The same thing would happen
however upon a comparison like "(SELECT something) = x", and
overall this level of guessing is outside of SQLAlchemy's
usual scope so the behavior is removed.

:ticket:`2277`

Fixed the behavior of :meth:`.Session.is_modified`
--------------------------------------------------

The :meth:`.Session.is_modified` method accepts an argument
``passive`` which basically should not be necessary; the
argument in all cases should be the value ``True``.  When
left at its default of ``False``, it would have the effect of
hitting the database, and often triggering autoflush which
would itself change the results.   In 0.8 the ``passive``
argument will have no effect, and unloaded attributes will
never be checked for history, since by definition there can
be no pending state change on an unloaded attribute.

.. seealso::

    :meth:`.Session.is_modified`

:ticket:`2320`

:attr:`_schema.Column.key` is honored in the :attr:`_expression.Select.c` attribute of :func:`_expression.select` with :meth:`_expression.Select.apply_labels`
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Users of the expression system know that :meth:`_expression.Select.apply_labels`
prepends the table name to each column name, affecting the
names that are available from :attr:`_expression.Select.c`:

::

    s = select([table1]).apply_labels()
    s.c.table1_col1
    s.c.table1_col2

Before 0.8, if the :class:`_schema.Column` had a different :attr:`_schema.Column.key`, this
key would be ignored, inconsistently with the case where
:meth:`_expression.Select.apply_labels` was not used:

::

    # before 0.8
    table1 = Table('t1', metadata,
        Column('col1', Integer, key='column_one')
    )
    s = select([table1])
    s.c.column_one # would be accessible like this
    s.c.col1 # would raise AttributeError

    s = select([table1]).apply_labels()
    s.c.table1_column_one # would raise AttributeError
    s.c.table1_col1 # would be accessible like this

In 0.8, :attr:`_schema.Column.key` is honored in both cases:

::

    # with 0.8
    table1 = Table('t1', metadata,
        Column('col1', Integer, key='column_one')
    )
    s = select([table1])
    s.c.column_one # works
    s.c.col1 # AttributeError

    s = select([table1]).apply_labels()
    s.c.table1_column_one # works
    s.c.table1_col1 # AttributeError

All other behavior regarding "name" and "key" is the same,
including that the rendered SQL will still use the form
``<tablename>_<colname>`` - the emphasis here was on
preventing the :attr:`_schema.Column.key` contents from being rendered into the
``SELECT`` statement so that there are no issues with
special or non-ASCII characters used in the :attr:`_schema.Column.key`.

:ticket:`2397`

single_parent warning is now an error
-------------------------------------

A :func:`_orm.relationship` that is many-to-one or many-to-many and
specifies ``cascade="all, delete-orphan"``, which is an
awkward but nonetheless supported use case (with
restrictions), will now raise an error if the relationship
does not specify the ``single_parent=True`` option.
Previously it would only emit a warning, but a failure would
follow almost immediately within the attribute system in any
case.

:ticket:`2405`

Adding the ``inspector`` argument to the ``column_reflect`` event
-----------------------------------------------------------------

0.7 added a new event called ``column_reflect``, provided so
that the reflection of columns could be augmented as each
one is reflected.   We got this event slightly wrong in
that the event gave no way to get at the current
``Inspector`` and ``Connection`` being used for the
reflection, in the case that additional information from the
database is needed.   As this is a new event not widely used
yet, we'll be adding the ``inspector`` argument into it
directly:

::

    @event.listens_for(Table, "column_reflect")
    def listen_for_col(inspector, table, column_info):
        # ...

:ticket:`2418`

Disabling auto-detect of collations, casing for MySQL
-----------------------------------------------------

The MySQL dialect does two calls, one very expensive, to
load all possible collations from the database as well as
information on casing, the first time an ``Engine``
connects.   Neither of these collections is used for any
SQLAlchemy functions, so these calls will be changed to no
longer be emitted automatically. Applications that might
have relied on these collections being present on
``engine.dialect`` will need to call upon
``_detect_collations()`` and ``_detect_casing()`` directly.

:ticket:`2404`

"Unconsumed column names" warning becomes an exception
------------------------------------------------------

Referring to a non-existent column in an ``insert()`` or
``update()`` construct will raise an error instead of a
warning:

::

    t1 = table('t1', column('x'))
    t1.insert().values(x=5, z=5) # raises "Unconsumed column names: z"

:ticket:`2415`
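The error surfaces as a :class:`.CompileError` when the statement is compiled, so it can be observed without any database connection; a minimal sketch:

```python
from sqlalchemy.sql import table, column
from sqlalchemy import exc

t1 = table('t1', column('x'))

# "z" does not exist on t1; .values() itself accepts it,
# and the check fires when the statement is compiled
stmt = t1.insert().values(x=5, z=5)
try:
    str(stmt)   # compiling triggers the check
except exc.CompileError as err:
    print(err)  # Unconsumed column names: z
```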

Inspector.get_primary_keys() is deprecated, use Inspector.get_pk_constraint
---------------------------------------------------------------------------

These two methods on ``Inspector`` were redundant, where
``get_primary_keys()`` would return the same information as
``get_pk_constraint()`` minus the name of the constraint:

::

    >>> insp.get_primary_keys()
    ["a", "b"]

    >>> insp.get_pk_constraint()
    {"name":"pk_constraint", "constrained_columns":["a", "b"]}

:ticket:`2422`
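A runnable illustration against an in-memory SQLite database (the table name and columns here are made up for the sketch; the reported constraint name varies by backend):

```python
from sqlalchemy import MetaData, Table, Column, Integer, create_engine, inspect

e = create_engine("sqlite://")
metadata = MetaData()
Table("thing", metadata,
    Column("a", Integer, primary_key=True),
    Column("b", Integer, primary_key=True),
)
metadata.create_all(e)

# get_pk_constraint() supersedes get_primary_keys(), returning the
# constrained column names along with the constraint's name
insp = inspect(e)
print(insp.get_pk_constraint("thing"))
# e.g. {'constrained_columns': ['a', 'b'], 'name': None}
```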

Case-insensitive result row names will be disabled in most cases
----------------------------------------------------------------

In a very old behavior, the column names in ``RowProxy`` were
always compared case-insensitively:

::

    >>> row = result.fetchone()
    >>> row['foo'] == row['FOO'] == row['Foo']
    True

This was for the benefit of a few dialects which in the
early days needed this, like Oracle and Firebird, but in
modern usage we have more accurate ways of dealing with the
case-insensitive behavior of these two platforms.

Going forward, this behavior will be available only
optionally, by passing the flag ``case_sensitive=False``
to ``create_engine()``, but otherwise column names
requested from the row must match with respect to casing.

:ticket:`2423`

``InstrumentationManager`` and alternate class instrumentation is now an extension
----------------------------------------------------------------------------------

The ``sqlalchemy.orm.interfaces.InstrumentationManager``
class is moved to
``sqlalchemy.ext.instrumentation.InstrumentationManager``.
The "alternate instrumentation" system was built for the
benefit of a very small number of installations that needed
to work with existing or unusual class instrumentation
systems, and generally is very seldom used.   The complexity
of this system has been exported to an ``ext.`` module.  It
remains unused until imported, typically when a third
party library imports ``InstrumentationManager``, at which
point it is injected back into ``sqlalchemy.orm`` by
replacing the default ``InstrumentationFactory`` with
``ExtendedInstrumentationRegistry``.

Removed
=======

SQLSoup
-------

SQLSoup is a handy package that presents an alternative
interface on top of the SQLAlchemy ORM.   SQLSoup is now
moved into its own project and documented/released
separately; see https://bitbucket.org/zzzeek/sqlsoup.

SQLSoup is a very simple tool that could also benefit from
contributors who are interested in its style of usage.

:ticket:`2262`

MutableType
-----------

The older "mutable" system within the SQLAlchemy ORM has
been removed.   This refers to the ``MutableType`` interface
which was applied to types such as ``PickleType`` and
conditionally to ``TypeDecorator``, and since very early
SQLAlchemy versions has provided a way for the ORM to detect
changes in so-called "mutable" data structures such as JSON
structures and pickled objects.   However, the
implementation was never reasonable and forced a very
inefficient mode of usage on the unit-of-work which caused
an expensive scan of all objects to take place during flush.
In 0.7, the `sqlalchemy.ext.mutable
<https://docs.sqlalchemy.org/en/latest/orm/extensions/mutable.html>`_ extension was
introduced so that user-defined datatypes can appropriately
send events to the unit of work as changes occur.

Today, usage of ``MutableType`` is expected to be low, as
warnings have been in place for some years now regarding its
inefficiency.

:ticket:`2442`

sqlalchemy.exceptions (has been sqlalchemy.exc for years)
---------------------------------------------------------

We had left in an alias ``sqlalchemy.exceptions`` to attempt
to make it slightly easier for some very old libraries that
hadn't yet been upgraded to use ``sqlalchemy.exc``.  Some
users were still being confused by it, however, so in 0.8
we're taking it out entirely to eliminate any of that
confusion.

:ticket:`2433`
