========
Cookbook
========

A collection of "How-Tos" highlighting popular ways to extend
Alembic.

.. note::

    This is a new section where we catalogue various "how-tos"
    based on user requests.  It is often the case that users
    will request a feature only to learn it can be provided with
    a simple customization.

.. _building_uptodate:

Building an Up to Date Database from Scratch
=============================================

There's a theory of database migrations that says that the revisions in existence for a database should be
able to go from an entirely blank schema to the finished product, and back again.   Alembic can roll
this way, though we think it's somewhat overkill, considering that SQLAlchemy itself can emit
the full CREATE statements for any given model using :meth:`~sqlalchemy.schema.MetaData.create_all`.   If you check out
a copy of an application, running this method will give you the entire database in one shot, without the need
to run through all those migration files, which are instead tailored towards applying incremental
changes to an existing database.

Alembic can integrate with a :meth:`~sqlalchemy.schema.MetaData.create_all` script quite easily.  After running the
create operation, tell Alembic to create a new version table, and to stamp it with the most recent
revision (i.e. ``head``)::

    # inside of a "create the database" script, first create
    # tables:
    my_metadata.create_all(engine)

    # then, load the Alembic configuration and generate the
    # version table, "stamping" it with the most recent rev:
    from alembic.config import Config
    from alembic import command
    alembic_cfg = Config("/path/to/yourapp/alembic.ini")
    command.stamp(alembic_cfg, "head")

When this approach is used, the application can generate the database using normal SQLAlchemy
techniques instead of iterating through hundreds of migration scripts.   Now, the purpose of the
migration scripts is relegated just to movement between versions on out-of-date databases, not
*new* databases.    You can now remove old migration files that are no longer represented
in any existing environments.

To prune old migration files, simply delete the files.   Then, in the earliest, still-remaining
migration file, set ``down_revision`` to ``None``::

    # replace this:
    # down_revision = '290696571ad2'

    # with this:
    down_revision = None

That file now becomes the "base" of the migration series.

Conditional Migration Elements
==============================

This example features a common need: affecting
how a migration runs based on command line switches.

The technique to use here is simple; within a migration script, inspect
the :meth:`.EnvironmentContext.get_x_argument` collection for any additional,
user-defined parameters.  Then take action based on the presence of those
arguments.

To make the logic that inspects these flags easy to use and
modify, we modify our ``script.py.mako`` template to make this feature
available in all new revision files:

.. code-block:: mako

    """${message}

    Revision ID: ${up_revision}
    Revises: ${down_revision}
    Create Date: ${create_date}

    """

    # revision identifiers, used by Alembic.
    revision = ${repr(up_revision)}
    down_revision = ${repr(down_revision)}

    from alembic import op
    import sqlalchemy as sa
    ${imports if imports else ""}

    from alembic import context


    def upgrade():
        schema_upgrades()
        if context.get_x_argument(as_dictionary=True).get('data', None):
            data_upgrades()

    def downgrade():
        if context.get_x_argument(as_dictionary=True).get('data', None):
            data_downgrades()
        schema_downgrades()

    def schema_upgrades():
        """schema upgrade migrations go here."""
        ${upgrades if upgrades else "pass"}

    def schema_downgrades():
        """schema downgrade migrations go here."""
        ${downgrades if downgrades else "pass"}

    def data_upgrades():
        """Add any optional data upgrade migrations here!"""
        pass

    def data_downgrades():
        """Add any optional data downgrade migrations here!"""
        pass

Now, when we create a new migration file, the ``data_upgrades()`` and ``data_downgrades()``
placeholders will be available, where we can add optional data migrations::

    """rev one

    Revision ID: 3ba2b522d10d
    Revises: None
    Create Date: 2014-03-04 18:05:36.992867

    """

    # revision identifiers, used by Alembic.
    revision = '3ba2b522d10d'
    down_revision = None

    from alembic import op
    import sqlalchemy as sa
    from sqlalchemy import String, Column
    from sqlalchemy.sql import table, column

    from alembic import context

    def upgrade():
        schema_upgrades()
        if context.get_x_argument(as_dictionary=True).get('data', None):
            data_upgrades()

    def downgrade():
        if context.get_x_argument(as_dictionary=True).get('data', None):
            data_downgrades()
        schema_downgrades()

    def schema_upgrades():
        """schema upgrade migrations go here."""
        op.create_table("my_table", Column('data', String))

    def schema_downgrades():
        """schema downgrade migrations go here."""
        op.drop_table("my_table")

    def data_upgrades():
        """Add any optional data upgrade migrations here!"""

        my_table = table('my_table',
            column('data', String),
        )

        op.bulk_insert(my_table,
            [
                {'data': 'data 1'},
                {'data': 'data 2'},
                {'data': 'data 3'},
            ]
        )

    def data_downgrades():
        """Add any optional data downgrade migrations here!"""

        op.execute("delete from my_table")

To invoke our migrations with data included, we use the ``-x`` flag::

    alembic -x data=true upgrade head

The :meth:`.EnvironmentContext.get_x_argument` is an easy way to support
new commandline options within environment and migration scripts.
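
Multiple ``-x`` options may be passed at once; each ``key=value`` pair
becomes an entry in the ``as_dictionary=True`` form.  As a minimal sketch
(the ``profile`` key here is hypothetical, not an Alembic convention)::

    alembic -x data=true -x profile=staging upgrade head

Then, within ``env.py`` or a migration script::

    # read an additional, hypothetical user-defined key, falling back
    # to a default when it was not passed on the command line
    profile = context.get_x_argument(as_dictionary=True).get('profile', 'default')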

.. _connection_sharing:

Sharing a Connection with a Series of Migration Commands and Environments
=========================================================================

It is often the case that an application will need to call upon a series
of commands within :ref:`alembic.command.toplevel`, where it would be advantageous
for all operations to proceed along a single transaction.   The connectivity
for a migration is typically solely determined within the ``env.py`` script
of a migration environment, which is called within the scope of a command.

The steps to take here are:

1. Produce the :class:`~sqlalchemy.engine.Connection` object to use.

2. Place it somewhere that ``env.py`` will be able to access it.  This
   can be either a. a module-level global somewhere, or b.
   an attribute which we place into the :attr:`.Config.attributes`
   dictionary (if we are on an older Alembic version, we may also attach
   an attribute directly to the :class:`.Config` object).

3. The ``env.py`` script is modified such that it looks for this
   :class:`~sqlalchemy.engine.Connection` and makes use of it, in lieu
   of building up its own :class:`~sqlalchemy.engine.Engine` instance.
We illustrate using :attr:`.Config.attributes`::

    from alembic import command, config

    cfg = config.Config("/path/to/yourapp/alembic.ini")
    with engine.begin() as connection:
        cfg.attributes['connection'] = connection
        command.upgrade(cfg, "head")

Then in ``env.py``::

    def run_migrations_online():
        connectable = config.attributes.get('connection', None)

        if connectable is None:
            # only create Engine if we don't have a Connection
            # from the outside
            connectable = engine_from_config(
                config.get_section(config.config_ini_section),
                prefix='sqlalchemy.',
                poolclass=pool.NullPool)

        context.configure(
            connection=connectable,
            target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()

.. versionchanged:: 1.4

    Prior to this version, we used a "branched connection", by calling
    :meth:`~sqlalchemy.engine.Connection.connect`.
    This is now deprecated and unnecessary,
    since we no longer have to guess whether the given "connection"
    is an ``Engine`` or a ``Connection``; it is always a ``Connection``.
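
Because all commands share one transaction, a series of commands may be
invoked atomically.  As a minimal sketch, assuming we also want to display
the current revision after upgrading, both commands below reuse the
connection placed in :attr:`.Config.attributes`::

    with engine.begin() as connection:
        cfg.attributes['connection'] = connection
        command.upgrade(cfg, "head")
        command.current(cfg)

If an error is raised anywhere in the block, the transaction is rolled
back and no migration is committed.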

.. _replaceable_objects:

Replaceable Objects
===================

This recipe proposes a hypothetical way of dealing with
what we might call a *replaceable* schema object.  A replaceable object
is a schema object that needs to be created and dropped all at once.
Examples of such objects include views, stored procedures, and triggers.

.. seealso::

    The Replaceable Object concept has been integrated by the
    `Alembic Utils <https://github.com/olirice/alembic_utils>`_ project,
    which provides autogenerate and migration
    support for PostgreSQL functions and views.

Replaceable objects present a problem in that in order to make incremental
changes to them, we have to refer to the whole definition at once.
If we need to add a new column to a view, for example, we have to drop
it entirely and recreate it fresh with the extra column added, referring to
the whole structure; but to make it even tougher, if we wish to support
downgrade operations in our migration scripts,
we need to refer to the *previous* version of that
construct fully, and we'd much rather not have to type out the whole
definition in multiple places.

This recipe proposes that we may refer to the older version of a
replaceable construct by directly naming the migration version in
which it was created, and having a migration refer to that previous
file as migrations run.   We will also demonstrate how to integrate this
logic within the :ref:`operation_plugins` feature introduced in
Alembic 0.8.  It may be very helpful to review
this section first to get an overview of this API.

The Replaceable Object Structure
--------------------------------

We first need to devise a simple format that represents the "CREATE XYZ" /
"DROP XYZ" aspect of what it is we're building.  We will work with an object
that represents a textual definition; while a SQL view is an object that we can define
using a `table-metadata-like system <https://github.com/sqlalchemy/sqlalchemy/wiki/UsageRecipes/Views>`_,
this is not so much the case for things like stored procedures, where
we pretty much need to have a full string definition written down somewhere.
We'll use a simple value object called ``ReplaceableObject`` that can
represent any named set of SQL text to send to a "CREATE" statement of
some kind::

    class ReplaceableObject:
        def __init__(self, name, sqltext):
            self.name = name
            self.sqltext = sqltext

Using this object in a migration script, assuming a PostgreSQL-style
syntax, looks like::

    customer_view = ReplaceableObject(
        "customer_view",
        "SELECT name, order_count FROM customer WHERE order_count > 0"
    )

    add_customer_sp = ReplaceableObject(
        "add_customer_sp(in_name varchar, in_order_count integer)",
        """
        RETURNS integer AS $$
        BEGIN
            insert into customer (name, order_count)
            VALUES (in_name, in_order_count);
        END;
        $$ LANGUAGE plpgsql;
        """
    )

The ``ReplaceableObject`` class is only one very simplistic way to do this.
The structure of how we represent our schema objects
is not too important for the purposes of this example; we could just
as well put strings inside of tuples or dictionaries, or define
any series of fields and class structures we want.
The only important part is that below we will illustrate how to organize the
code that can consume the structure we create here.
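
For example, an equivalent structure could just as well be built as a
``NamedTuple``; this minimal sketch is interchangeable with the class
above::

    from typing import NamedTuple

    class ReplaceableObject(NamedTuple):
        name: str
        sqltext: str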

Create Operations for the Target Objects
----------------------------------------

We'll use the :class:`.Operations` extension API to make new operations
for create, drop, and replace of views and stored procedures.  Using this
API is also optional; we can just as well make any kind of Python
function that we would invoke from our migration scripts.
However, using this API gives us operations
built directly into the Alembic ``op.*`` namespace very nicely.

The most intricate class is below.  This is the base of our "replaceable"
operation, which includes not only a base operation for emitting
CREATE and DROP instructions on a ``ReplaceableObject``, but also assumes
a certain model of "reversibility" which makes use of references to
other migration files in order to refer to the "previous" version
of an object::

    from alembic.operations import Operations, MigrateOperation

    class ReversibleOp(MigrateOperation):
        def __init__(self, target):
            self.target = target

        @classmethod
        def invoke_for_target(cls, operations, target):
            op = cls(target)
            return operations.invoke(op)

        def reverse(self):
            raise NotImplementedError()

        @classmethod
        def _get_object_from_version(cls, operations, ident):
            version, objname = ident.split(".")

            module = operations.get_context().script.get_revision(version).module
            obj = getattr(module, objname)
            return obj

        @classmethod
        def replace(cls, operations, target, replaces=None, replace_with=None):

            if replaces:
                old_obj = cls._get_object_from_version(operations, replaces)
                drop_old = cls(old_obj).reverse()
                create_new = cls(target)
            elif replace_with:
                old_obj = cls._get_object_from_version(operations, replace_with)
                drop_old = cls(target).reverse()
                create_new = cls(old_obj)
            else:
                raise TypeError("replaces or replace_with is required")

            operations.invoke(drop_old)
            operations.invoke(create_new)

The workings of this class should become clear as we walk through the
example.   To create usable operations from this base, we will build
a series of stub classes and use :meth:`.Operations.register_operation`
to make them part of the ``op.*`` namespace::

    @Operations.register_operation("create_view", "invoke_for_target")
    @Operations.register_operation("replace_view", "replace")
    class CreateViewOp(ReversibleOp):
        def reverse(self):
            return DropViewOp(self.target)


    @Operations.register_operation("drop_view", "invoke_for_target")
    class DropViewOp(ReversibleOp):
        def reverse(self):
            return CreateViewOp(self.target)


    @Operations.register_operation("create_sp", "invoke_for_target")
    @Operations.register_operation("replace_sp", "replace")
    class CreateSPOp(ReversibleOp):
        def reverse(self):
            return DropSPOp(self.target)


    @Operations.register_operation("drop_sp", "invoke_for_target")
    class DropSPOp(ReversibleOp):
        def reverse(self):
            return CreateSPOp(self.target)

To actually run the SQL like "CREATE VIEW" and "DROP VIEW", we'll provide
implementations using :meth:`.Operations.implementation_for`
that run straight into :meth:`.Operations.execute`::

    @Operations.implementation_for(CreateViewOp)
    def create_view(operations, operation):
        operations.execute("CREATE VIEW %s AS %s" % (
            operation.target.name,
            operation.target.sqltext
        ))


    @Operations.implementation_for(DropViewOp)
    def drop_view(operations, operation):
        operations.execute("DROP VIEW %s" % operation.target.name)


    @Operations.implementation_for(CreateSPOp)
    def create_sp(operations, operation):
        operations.execute(
            "CREATE FUNCTION %s %s" % (
                operation.target.name, operation.target.sqltext
            )
        )


    @Operations.implementation_for(DropSPOp)
    def drop_sp(operations, operation):
        operations.execute("DROP FUNCTION %s" % operation.target.name)

All of the above code can be present anywhere within an application's
source tree; the only requirement is that when the ``env.py`` script is
invoked, it includes imports that ultimately call upon these classes
as well as the :meth:`.Operations.register_operation` and
:meth:`.Operations.implementation_for` calls.
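
For example, if the classes above were placed in a module
``myapp/replaceable_obj.py`` (a hypothetical location), ``env.py`` would only
need to import that module for its registration side effects::

    # in env.py; the module path is hypothetical
    import myapp.replaceable_obj  # noqa: F401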

Create Initial Migrations
-------------------------

We can now illustrate how these objects look during use.  For the first step,
we'll create a new migration to create a "customer" table::

    $ alembic revision -m "create table"

We build the first revision as follows::

    """create table

    Revision ID: 3ab8b2dfb055
    Revises:
    Create Date: 2015-07-27 16:22:44.918507

    """

    # revision identifiers, used by Alembic.
    revision = '3ab8b2dfb055'
    down_revision = None
    branch_labels = None
    depends_on = None

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        op.create_table(
            "customer",
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('name', sa.String),
            sa.Column('order_count', sa.Integer),
        )


    def downgrade():
        op.drop_table('customer')

For the second migration, we will create a view and a stored procedure
which act upon this table::

    $ alembic revision -m "create views/sp"

This migration will use the new directives::

    """create views/sp

    Revision ID: 28af9800143f
    Revises: 3ab8b2dfb055
    Create Date: 2015-07-27 16:24:03.589867

    """

    # revision identifiers, used by Alembic.
    revision = '28af9800143f'
    down_revision = '3ab8b2dfb055'
    branch_labels = None
    depends_on = None

    from alembic import op
    import sqlalchemy as sa

    from foo import ReplaceableObject

    customer_view = ReplaceableObject(
        "customer_view",
        "SELECT name, order_count FROM customer WHERE order_count > 0"
    )

    add_customer_sp = ReplaceableObject(
        "add_customer_sp(in_name varchar, in_order_count integer)",
        """
        RETURNS integer AS $$
        BEGIN
            insert into customer (name, order_count)
            VALUES (in_name, in_order_count);
        END;
        $$ LANGUAGE plpgsql;
        """
    )


    def upgrade():
        op.create_view(customer_view)
        op.create_sp(add_customer_sp)


    def downgrade():
        op.drop_view(customer_view)
        op.drop_sp(add_customer_sp)


We see the use of our new ``create_view()``, ``create_sp()``,
``drop_view()``, and ``drop_sp()`` directives.  Running these to "head"
we get the following (this includes an edited view of SQL emitted)::

    $ alembic upgrade 28af9800143
    INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
    INFO  [alembic.runtime.migration] Will assume transactional DDL.
    INFO  [sqlalchemy.engine.base.Engine] BEGIN (implicit)
    INFO  [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
    INFO  [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
    INFO  [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
    FROM alembic_version
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
    INFO  [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
    INFO  [alembic.runtime.migration] Running upgrade  -> 3ab8b2dfb055, create table
    INFO  [sqlalchemy.engine.base.Engine]
    CREATE TABLE customer (
        id SERIAL NOT NULL,
        name VARCHAR,
        order_count INTEGER,
        PRIMARY KEY (id)
    )


    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] INSERT INTO alembic_version (version_num) VALUES ('3ab8b2dfb055')
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [alembic.runtime.migration] Running upgrade 3ab8b2dfb055 -> 28af9800143f, create views/sp
    INFO  [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count FROM customer WHERE order_count > 0
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
        RETURNS integer AS $$
        BEGIN
            insert into customer (name, order_count)
            VALUES (in_name, in_order_count);
        END;
        $$ LANGUAGE plpgsql;

    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='28af9800143f' WHERE alembic_version.version_num = '3ab8b2dfb055'
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] COMMIT

We see that our CREATE TABLE proceeded as well as the CREATE VIEW and CREATE
FUNCTION operations produced by our new directives.


Create Revision Migrations
--------------------------

Finally, we can illustrate how we would "revise" these objects.
Let's consider that we've added a new column ``email`` to our ``customer`` table::

    $ alembic revision -m "add email col"

The migration is::

    """add email col

    Revision ID: 191a2d20b025
    Revises: 28af9800143f
    Create Date: 2015-07-27 16:25:59.277326

    """

    # revision identifiers, used by Alembic.
    revision = '191a2d20b025'
    down_revision = '28af9800143f'
    branch_labels = None
    depends_on = None

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        op.add_column("customer", sa.Column("email", sa.String()))


    def downgrade():
        op.drop_column("customer", "email")


We now need to recreate the ``customer_view`` view and the
``add_customer_sp`` function.   To include downgrade capability, we will
need to refer to the **previous** version of the construct; the
``replace_view()`` and ``replace_sp()`` operations we've created make
this possible, by allowing us to refer to a specific, previous revision.
The ``replaces`` and ``replace_with`` arguments accept a dot-separated
string, which refers to a revision number and an object name, such
as ``"28af9800143f.customer_view"``.  The ``ReversibleOp`` class makes use
of the :meth:`.Operations.get_context` method to locate the version file
we refer to::

    $ alembic revision -m "update views/sp"

The migration::

    """update views/sp

    Revision ID: 199028bf9856
    Revises: 191a2d20b025
    Create Date: 2015-07-27 16:26:31.344504

    """

    # revision identifiers, used by Alembic.
    revision = '199028bf9856'
    down_revision = '191a2d20b025'
    branch_labels = None
    depends_on = None

    from alembic import op
    import sqlalchemy as sa

    from foo import ReplaceableObject

    customer_view = ReplaceableObject(
        "customer_view",
        "SELECT name, order_count, email "
        "FROM customer WHERE order_count > 0"
    )

    add_customer_sp = ReplaceableObject(
        "add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)",
        """
        RETURNS integer AS $$
        BEGIN
            insert into customer (name, order_count, email)
            VALUES (in_name, in_order_count, in_email);
        END;
        $$ LANGUAGE plpgsql;
        """
    )


    def upgrade():
        op.replace_view(customer_view, replaces="28af9800143f.customer_view")
        op.replace_sp(add_customer_sp, replaces="28af9800143f.add_customer_sp")


    def downgrade():
        op.replace_view(customer_view, replace_with="28af9800143f.customer_view")
        op.replace_sp(add_customer_sp, replace_with="28af9800143f.add_customer_sp")

Above, instead of using ``create_view()``, ``create_sp()``,
``drop_view()``, and ``drop_sp()`` methods, we now use ``replace_view()`` and
``replace_sp()``.  The replace operation we've built always runs a DROP *and*
a CREATE.  Running an upgrade to head, we see::

    $ alembic upgrade head
    INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
    INFO  [alembic.runtime.migration] Will assume transactional DDL.
    INFO  [sqlalchemy.engine.base.Engine] BEGIN (implicit)
    INFO  [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
    INFO  [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
    INFO  [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
    FROM alembic_version
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [alembic.runtime.migration] Running upgrade 28af9800143f -> 191a2d20b025, add email col
    INFO  [sqlalchemy.engine.base.Engine] ALTER TABLE customer ADD COLUMN email VARCHAR
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='191a2d20b025' WHERE alembic_version.version_num = '28af9800143f'
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [alembic.runtime.migration] Running upgrade 191a2d20b025 -> 199028bf9856, update views/sp
    INFO  [sqlalchemy.engine.base.Engine] DROP VIEW customer_view
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count, email FROM customer WHERE order_count > 0
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] DROP FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)
        RETURNS integer AS $$
        BEGIN
            insert into customer (name, order_count, email)
            VALUES (in_name, in_order_count, in_email);
        END;
        $$ LANGUAGE plpgsql;

    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='199028bf9856' WHERE alembic_version.version_num = '191a2d20b025'
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] COMMIT

After adding our new ``email`` column, we see that both ``customer_view``
and ``add_customer_sp()`` are dropped before the new version is created.
If we downgrade back to the old version, we see the old version of these
recreated again within the downgrade for this migration::

    $ alembic downgrade 28af9800143
    INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
    INFO  [alembic.runtime.migration] Will assume transactional DDL.
    INFO  [sqlalchemy.engine.base.Engine] BEGIN (implicit)
    INFO  [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
    INFO  [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
    INFO  [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
    FROM alembic_version
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [alembic.runtime.migration] Running downgrade 199028bf9856 -> 191a2d20b025, update views/sp
    INFO  [sqlalchemy.engine.base.Engine] DROP VIEW customer_view
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count FROM customer WHERE order_count > 0
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] DROP FUNCTION add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
        RETURNS integer AS $$
        BEGIN
            insert into customer (name, order_count)
            VALUES (in_name, in_order_count);
        END;
        $$ LANGUAGE plpgsql;

    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='191a2d20b025' WHERE alembic_version.version_num = '199028bf9856'
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [alembic.runtime.migration] Running downgrade 191a2d20b025 -> 28af9800143f, add email col
    INFO  [sqlalchemy.engine.base.Engine] ALTER TABLE customer DROP COLUMN email
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='28af9800143f' WHERE alembic_version.version_num = '191a2d20b025'
    INFO  [sqlalchemy.engine.base.Engine] {}
    INFO  [sqlalchemy.engine.base.Engine] COMMIT

.. _cookbook_postgresql_multi_tenancy:

Rudimental Schema-Level Multi Tenancy for PostgreSQL Databases
==============================================================

**Multi tenancy** refers to an application that accommodates many
clients simultaneously.   Within the scope of a database migrations tool,
multi-tenancy typically refers to the practice of maintaining multiple,
identical databases where each database is assigned to one client.

Alembic does not currently have explicit multi-tenant support; typically,
the approach must involve running Alembic multiple times against different
database URLs.

One common approach to multi-tenancy, particularly on the PostgreSQL database,
is to install tenants within **individual PostgreSQL schemas**.  When using
PostgreSQL's schemas, a special variable ``search_path`` is offered that is
intended to assist with targeting of different schemas.

.. note::  SQLAlchemy includes a system of directing a common set of
   ``Table`` metadata to many schemas called `schema_translate_map <https://docs.sqlalchemy.org/core/connections.html#translation-of-schema-names>`_.   Alembic at the time
   of this writing lacks adequate support for this feature.  The recipe below
   should be considered **interim** until Alembic has more first-class support
   for schema-level multi-tenancy.

The recipe below can be altered for flexibility.  The primary purpose of this
recipe is to illustrate how to point the Alembic process towards one PostgreSQL
schema or another.

1. The model metadata used as the target for autogenerate must not include any
   schema name for tables; the schema must be non-present or set to ``None``.
   Otherwise, Alembic autogenerate will still attempt
   to compare and render tables in terms of this schema::


        class A(Base):
            __tablename__ = 'a'

            id = Column(Integer, primary_key=True)
            data = Column(UnicodeText())
            foo = Column(Integer)

            __table_args__ = {
                "schema": None
            }

   ..

2. The :paramref:`.EnvironmentContext.configure.include_schemas` flag must
   also be False or not included.

3. The "tenant" will be a schema name passed to Alembic using the "-x" flag.
   In ``env.py`` an approach like the following allows ``-xtenant=some_schema``
   to be supported by making use of :meth:`.EnvironmentContext.get_x_argument`::

        def run_migrations_online():
            connectable = engine_from_config(
                config.get_section(config.config_ini_section),
                prefix="sqlalchemy.",
                poolclass=pool.NullPool,
            )

            current_tenant = context.get_x_argument(as_dictionary=True).get("tenant")
            with connectable.connect() as connection:

                # set search path on the connection, which ensures that
                # PostgreSQL will emit all CREATE / ALTER / DROP statements
                # in terms of this schema by default
                connection.execute("set search_path to %s" % current_tenant)

                # make use of non-supported SQLAlchemy attribute to ensure
                # the dialect reflects tables in terms of the current tenant name
                connection.dialect.default_schema_name = current_tenant

                context.configure(
                    connection=connection,
                    target_metadata=target_metadata,
                )

                with context.begin_transaction():
                    context.run_migrations()

   The current tenant is set using the PostgreSQL ``search_path`` variable on
   the connection.  Note above we must employ a **non-supported SQLAlchemy
   workaround** at the moment, which is to hardcode the SQLAlchemy dialect's
   default schema name to our target schema.

   It is also important to note that the above changes **remain on the connection
   permanently unless reversed explicitly**.  If the alembic application simply
   exits above, there is no issue.  However, if the application attempts to
   continue using the above connection for other purposes, it may be necessary
   to reset these variables back to the default, which for PostgreSQL is usually
   the name "public", though it may be different based on configuration.
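
   A minimal sketch of such a reset, assuming the installation uses the
   typical PostgreSQL default search path::

        # restore the default search path if the connection is to be reused
        connection.execute('set search_path to "$user", public')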


4. Alembic operations will now proceed in terms of whichever schema we pass
   on the command line.   All logged SQL will show no schema, except for
   reflection operations which will make use of the ``default_schema_name``
   attribute::

       $ alembic -x tenant=some_schema revision -m "rev1" --autogenerate

   ..

5. Since all schemas are to be maintained in sync, autogenerate should be run
   against only **one** schema, generating new Alembic migration files.
   The generated migration operations are then run against **all** schemas,
   as sketched below.

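   A minimal sketch of that flow, assuming three tenant schemas and a
   POSIX shell::

       alembic -x tenant=schema1 revision -m "rev2" --autogenerate

       for tenant in schema1 schema2 schema3; do
           alembic -x tenant=$tenant upgrade head
       done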

.. _cookbook_no_empty_migrations:

Don't Generate Empty Migrations with Autogenerate
=================================================

A common request is to have the ``alembic revision --autogenerate`` command not
actually generate a revision file if no changes to the schema are detected.  Using
the :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook, this is straightforward; place a ``process_revision_directives``
hook in :meth:`.EnvironmentContext.configure` which removes the
single :class:`.MigrationScript` directive if it is empty of
any operations::


    def run_migrations_online():

        # ...

        def process_revision_directives(context, revision, directives):
            if config.cmd_opts.autogenerate:
                script = directives[0]
                if script.upgrade_ops.is_empty():
                    directives[:] = []


        # connectable = ...

        with connectable.connect() as connection:
            context.configure(
                connection=connection,
                target_metadata=target_metadata,
                process_revision_directives=process_revision_directives
            )

            with context.begin_transaction():
                context.run_migrations()

.. _cookbook_dont_emit_drop_index:

Don't emit DROP INDEX when the table is to be dropped as well
=============================================================

MySQL may complain when dropping an index that is against a column
that also has a foreign key constraint on it.   If the table is to be dropped
in any case, the DROP INDEX isn't necessary.  This recipe will process the set
of autogenerate directives such that all :class:`.DropIndexOp` directives
are removed against tables that themselves are to be dropped::

    def run_migrations_online():

        # ...

        from alembic.operations import ops

        def process_revision_directives(context, revision, directives):
            script = directives[0]

            # process both "def upgrade()", "def downgrade()"
            for directive in (script.upgrade_ops, script.downgrade_ops):

                # make a set of tables that are being dropped within
                # the migration function
                tables_dropped = set()
                for op in directive.ops:
                    if isinstance(op, ops.DropTableOp):
                        tables_dropped.add((op.table_name, op.schema))

                # now rewrite the list of "ops" such that DropIndexOp
                # is removed for those tables.   Needs a recursive function.
                directive.ops = list(
                    _filter_drop_indexes(directive.ops, tables_dropped)
                )

        def _filter_drop_indexes(directives, tables_dropped):
            # given a set of (tablename, schemaname) to be dropped, filter
            # out DropIndexOp from the list of directives and yield the result.

            for directive in directives:
                # ModifyTableOps is a container of ALTER TABLE types of
                # commands.  process those in place recursively.
                if isinstance(directive, ops.ModifyTableOps) and \
                        (directive.table_name, directive.schema) in tables_dropped:
                    directive.ops = list(
                        _filter_drop_indexes(directive.ops, tables_dropped)
                    )

                    # if we emptied out the directives, then skip the
                    # container altogether.
                    if not directive.ops:
                        continue
                elif isinstance(directive, ops.DropIndexOp) and \
                        (directive.table_name, directive.schema) in tables_dropped:
                    # we found a target DropIndexOp.   keep looping
                    continue

                # otherwise if not filtered, yield out the directive
                yield directive

        # connectable = ...

        with connectable.connect() as connection:
            context.configure(
                connection=connection,
                target_metadata=target_metadata,
                process_revision_directives=process_revision_directives
            )

            with context.begin_transaction():
                context.run_migrations()


Whereas autogenerate, when dropping two tables with a foreign key and
an index, would previously generate something like::

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.drop_index(op.f('ix_b_aid'), table_name='b')
        op.drop_table('b')
        op.drop_table('a')
        # ### end Alembic commands ###

With the above rewriter, it generates as::

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.drop_table('b')
        op.drop_table('a')
        # ### end Alembic commands ###


Don't generate any DROP TABLE directives with autogenerate
==========================================================

When running autogenerate against a database that has existing tables outside
of the application's autogenerated metadata, it may be desirable to prevent
autogenerate from considering any of those existing tables to be dropped.
This will also prevent autogenerate from detecting tables removed from the
local metadata; however, this is only a small caveat.

The most direct way to achieve this is to use the
:paramref:`.EnvironmentContext.configure.include_object` hook.   There is no
need to hardcode a fixed "whitelist" of table names; the hook gives enough
information in the given arguments to determine if a particular table name is
not part of the local :class:`.MetaData` being autogenerated, by checking first
that the type of object is ``"table"``, then that ``reflected`` is ``True``,
indicating this table name is from the local database connection, not the
:class:`.MetaData`, and finally that ``compare_to`` is ``None``, indicating
autogenerate is not comparing this :class:`.Table` to any :class:`.Table` in
the local :class:`.MetaData` collection::

    # in env.py

    def include_object(object, name, type_, reflected, compare_to):
        if type_ == "table" and reflected and compare_to is None:
            return False
        else:
            return True


    context.configure(
        # ...
        include_object=include_object
    )

.. _cookbook_custom_sorting_create_table:

Apply Custom Sorting to Table Columns within CREATE TABLE
==========================================================

This example illustrates use of the :class:`.Rewriter` object introduced
at :ref:`autogen_rewriter`.   While the rewriter grants access to the
individual :class:`.ops.MigrateOperation` objects, some special techniques
are occasionally required to get around structural limitations that
are present.

One such limitation arises when trying to reorganize the order of columns in a
table within a :class:`.ops.CreateTableOp` directive.  This directive, when
generated by autogenerate, actually holds onto the original :class:`.Table`
object as the source of its information, so attempting to reorder the
:attr:`.ops.CreateTableOp.columns` collection will usually have no effect.
Instead, a new :class:`.ops.CreateTableOp` object may be constructed with the
new ordering.   However, a second issue is that the :class:`.Column` objects
inside will already be associated with the :class:`.Table` that is from the
model being autogenerated, meaning they can't be reassigned directly to a new
:class:`.Table`.  To get around this, we can copy all the columns and constraints
using methods like :meth:`.Column.copy`.

Below we use :class:`.Rewriter` to create a new :class:`.ops.CreateTableOp`
directive and to copy the :class:`.Column` objects from one into another,
copying each column or constraint object and applying a new sorting scheme::

    # in env.py

    from sqlalchemy import Column

    from alembic.operations import ops
    from alembic.autogenerate import rewriter

    writer = rewriter.Rewriter()

    @writer.rewrites(ops.CreateTableOp)
    def order_columns(context, revision, op):

        special_names = {"id": -100, "created_at": 1001, "updated_at": 1002}

        cols_by_key = [
            (
                special_names.get(col.key, index)
                if isinstance(col, Column)
                else 2000,
                col.copy(),
            )
            for index, col in enumerate(op.columns)
        ]

        columns = [
            col for idx, col in sorted(cols_by_key, key=lambda entry: entry[0])
        ]
        return ops.CreateTableOp(
            op.table_name, columns, schema=op.schema, **op.kw)


    # ...

    context.configure(
        # ...
        process_revision_directives=writer
    )


Above, when we apply the ``writer`` to a table such as::

    Table(
        "my_table",
        m,
        Column("data", String(50)),
        Column("created_at", DateTime),
        Column("id", Integer, primary_key=True),
        Column("updated_at", DateTime),
        UniqueConstraint("data", name="uq_data")
    )


This will render in the autogenerated file as::

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.create_table(
            "my_table",
            sa.Column("id", sa.Integer(), nullable=False),
            sa.Column("data", sa.String(length=50), nullable=True),
            sa.Column("created_at", sa.DateTime(), nullable=True),
            sa.Column("updated_at", sa.DateTime(), nullable=True),
            sa.PrimaryKeyConstraint("id"),
            sa.UniqueConstraint("data", name="uq_data"),
        )
        # ### end Alembic commands ###

Don't emit CREATE TABLE statements for Views
============================================

It is sometimes convenient to create :class:`~sqlalchemy.schema.Table` instances for views
so that they can be queried using normal SQLAlchemy techniques. Unfortunately this
causes Alembic to treat them as tables in need of creation and to generate spurious
``create_table()`` operations. This is easily fixable by flagging such Tables and using the
:paramref:`~.EnvironmentContext.configure.include_object` hook to exclude them::

    my_view = Table('my_view', metadata, autoload=True, info=dict(is_view=True))    # Flag this as a view

Then define ``include_object`` as::

    def include_object(object, name, type_, reflected, compare_to):
        """
        Exclude views from Alembic's consideration.
        """

        return not object.info.get('is_view', False)

Finally, in ``env.py`` pass your ``include_object`` as a keyword argument to :meth:`.EnvironmentContext.configure`.
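
A minimal sketch of that wiring::

    # in env.py
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_object=include_object
    )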

.. _multiple_environments:

Run Multiple Alembic Environments from one .ini file
====================================================

Long before Alembic had the "multiple bases" feature described in :ref:`multiple_bases`,
projects had a need to maintain more than one Alembic version history in a single
project, where these version histories are completely independent of each other
and each refer to their own alembic_version table, either across multiple databases,
schemas, or namespaces.  A simple approach was added to support this, the
``--name`` flag on the commandline.

First, one would create an alembic.ini file of this form::

    [DEFAULT]
    # all defaults shared between environments go here

    sqlalchemy.url = postgresql://scott:tiger@hostname/mydatabase


    [schema1]
    # path to env.py and migration scripts for schema1
    script_location = myproject/revisions/schema1

    [schema2]
    # path to env.py and migration scripts for schema2
    script_location = myproject/revisions/schema2

    [schema3]
    # path to env.py and migration scripts for schema3
    script_location = myproject/revisions/schema3

    # this schema uses a different database URL as well
    sqlalchemy.url = postgresql://scott:tiger@hostname/myotherdatabase


Above, in the ``[DEFAULT]`` section we set up a default database URL.
Then we create three sections corresponding to different revision lineages
in our project.   Each of these directories would have its own ``env.py``
and set of versioning files.   Then when we run the ``alembic`` command,
we simply give it the name of the configuration we want to use::

    alembic --name schema2 revision -m "new rev for schema 2" --autogenerate

Above, the ``alembic`` command makes use of the configuration in ``[schema2]``,
populated with defaults from the ``[DEFAULT]`` section.

The above approach can be automated by creating a custom front-end to the
Alembic commandline as well.
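
As a minimal sketch of such a front-end, the ``ini_section`` parameter of
:class:`.Config` selects the named section programmatically (the list of
section names here is hypothetical)::

    from alembic import command
    from alembic.config import Config

    for name in ["schema1", "schema2", "schema3"]:
        cfg = Config("alembic.ini", ini_section=name)
        command.upgrade(cfg, "head")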

Print Python Code to Generate Particular Database Tables
========================================================

Suppose you have a database already, and want to generate some
``op.create_table()`` and other directives that you'd have in a migration file.
How can we automate generating that code?
Suppose the database schema looks like (assume MySQL)::

    CREATE TABLE IF NOT EXISTS `users` (
        `id` int(11) NOT NULL,
        KEY `id` (`id`)
    );

    CREATE TABLE IF NOT EXISTS `user_properties` (
      `users_id` int(11) NOT NULL,
      `property_name` varchar(255) NOT NULL,
      `property_value` mediumtext NOT NULL,
      UNIQUE KEY `property_name_users_id` (`property_name`,`users_id`),
      KEY `users_id` (`users_id`),
      CONSTRAINT `user_properties_ibfk_1` FOREIGN KEY (`users_id`)
      REFERENCES `users` (`id`) ON DELETE CASCADE
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Using :class:`.ops.UpgradeOps`, :class:`.ops.CreateTableOp`, and
:class:`.ops.CreateIndexOp`, we create a migration file structure,
using :class:`.Table` objects that we get from SQLAlchemy reflection.
The structure is passed to :func:`.autogenerate.render_python_code` to
produce the Python code for a migration file::

    from sqlalchemy import create_engine
    from sqlalchemy import MetaData, Table
    from alembic import autogenerate
    from alembic.operations import ops

    e = create_engine("mysql://scott:tiger@localhost/test")

    with e.connect() as conn:
        m = MetaData()
        user_table = Table('users', m, autoload_with=conn)
        user_property_table = Table('user_properties', m, autoload_with=conn)

    print(autogenerate.render_python_code(
        ops.UpgradeOps(
            ops=[
                ops.CreateTableOp.from_table(table) for table in m.tables.values()
            ] + [
                ops.CreateIndexOp.from_index(idx) for table in m.tables.values()
                for idx in table.indexes
            ]
        ))
    )

Output::

    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('users',
    sa.Column('id', mysql.INTEGER(display_width=11), autoincrement=False, nullable=False),
    mysql_default_charset='latin1',
    mysql_engine='InnoDB'
    )
    op.create_table('user_properties',
    sa.Column('users_id', mysql.INTEGER(display_width=11), autoincrement=False, nullable=False),
    sa.Column('property_name', mysql.VARCHAR(length=255), nullable=False),
    sa.Column('property_value', mysql.MEDIUMTEXT(), nullable=False),
    sa.ForeignKeyConstraint(['users_id'], ['users.id'], name='user_properties_ibfk_1', ondelete='CASCADE'),
    mysql_default_charset='utf8',
    mysql_engine='InnoDB'
    )
    op.create_index('id', 'users', ['id'], unique=False)
    op.create_index('users_id', 'user_properties', ['users_id'], unique=False)
    op.create_index('property_name_users_id', 'user_properties', ['property_name', 'users_id'], unique=True)
    # ### end Alembic commands ###


Run Alembic Operation Objects Directly (as in from autogenerate)
================================================================

The :class:`.Operations` object has a method known as
:meth:`.Operations.invoke` that will generically invoke a particular operation
object.  We can therefore use the :func:`.autogenerate.produce_migrations`
function to run an autogenerate comparison, get back a
:class:`.ops.MigrationScript` structure representing the changes, and with a
little bit of insider information we can invoke them directly.

The traversal through the :class:`.ops.MigrationScript` structure is as
follows::

    use_batch = engine.name == "sqlite"

    stack = [migrations.upgrade_ops]
    while stack:
        elem = stack.pop(0)

        if use_batch and isinstance(elem, ModifyTableOps):
            with operations.batch_alter_table(
                elem.table_name, schema=elem.schema
            ) as batch_ops:
                for table_elem in elem.ops:
                    # work around Alembic issue #753 (fixed in 1.5.0)
                    if hasattr(table_elem, "column"):
                        table_elem.column = table_elem.column.copy()
                    batch_ops.invoke(table_elem)

        elif hasattr(elem, "ops"):
            stack.extend(elem.ops)
        else:
            # work around Alembic issue #753 (fixed in 1.5.0)
            if hasattr(elem, "column"):
                elem.column = elem.column.copy()
            operations.invoke(elem)

Above, we detect elements that have a collection of operations by looking
for the ``.ops`` attribute.   A check for :class:`.ModifyTableOps` allows
us to use a batch context if we are supporting that.   Finally, there's a
workaround for an Alembic issue that exists for SQLAlchemy 1.3.20 and greater
combined with Alembic older than 1.5.

A full example follows.  The overall setup here is copied from the example
at :func:`.autogenerate.compare_metadata`::

    from sqlalchemy import Column
    from sqlalchemy import create_engine
    from sqlalchemy import Integer
    from sqlalchemy import MetaData
    from sqlalchemy import String
    from sqlalchemy import Table

    from alembic.autogenerate import produce_migrations
    from alembic.migration import MigrationContext
    from alembic.operations import Operations
    from alembic.operations.ops import ModifyTableOps


    engine = create_engine("sqlite://", echo=True)

    with engine.connect() as conn:
        conn.execute(
            """
            create table foo (
                id integer not null primary key,
                old_data varchar(50),
                x integer
            )"""
        )

        conn.execute(
            """
            create table bar (
                data varchar(50)
            )"""
        )

    metadata = MetaData()
    Table(
        "foo",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("data", Integer),
        Column("x", Integer, nullable=False),
    )
    Table("bat", metadata, Column("info", String(100)))

    mc = MigrationContext.configure(engine.connect())

    migrations = produce_migrations(mc, metadata)

    operations = Operations(mc)

    use_batch = engine.name == "sqlite"

    stack = [migrations.upgrade_ops]
    while stack:
        elem = stack.pop(0)

        if use_batch and isinstance(elem, ModifyTableOps):
            with operations.batch_alter_table(
                elem.table_name, schema=elem.schema
            ) as batch_ops:
                for table_elem in elem.ops:
                    # work around Alembic issue #753 (fixed in 1.5.0)
                    if hasattr(table_elem, "column"):
                        table_elem.column = table_elem.column.copy()
                    batch_ops.invoke(table_elem)

        elif hasattr(elem, "ops"):
            stack.extend(elem.ops)
        else:
            # work around Alembic issue #753 (fixed in 1.5.0)
            if hasattr(elem, "column"):
                elem.column = elem.column.copy()
            operations.invoke(elem)


Test current database revision is at head(s)
============================================

A recipe to determine if a database schema is up to date in terms of applying
Alembic migrations.   May be useful for test or installation suites to
determine if the target database is up to date.   Makes use of the
:meth:`.MigrationContext.get_current_heads` as well as
:meth:`.ScriptDirectory.get_heads` methods so that it accommodates a
branched revision tree::


    from alembic import config, script
    from alembic.runtime import migration
    from sqlalchemy import engine


    def check_current_head(alembic_cfg, connectable):
        # type: (config.Config, engine.Engine) -> bool
        directory = script.ScriptDirectory.from_config(alembic_cfg)
        with connectable.begin() as connection:
            context = migration.MigrationContext.configure(connection)
            return set(context.get_current_heads()) == set(directory.get_heads())

    e = engine.create_engine("mysql://scott:tiger@localhost/test", echo=True)
    cfg = config.Config("alembic.ini")
    print(check_current_head(cfg, e))

.. seealso::

    :meth:`.MigrationContext.get_current_heads`

    :meth:`.ScriptDirectory.get_heads`

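A hypothetical test suite might then assert on this function directly::

    # e.g. within a pytest test module; the database URL and ini path
    # are application-specific and hypothetical here
    def test_database_is_at_head():
        cfg = config.Config("alembic.ini")
        e = engine.create_engine("mysql://scott:tiger@localhost/test")
        assert check_current_head(cfg, e)
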

Using Asyncio with Alembic
==========================

SQLAlchemy version 1.4 introduced experimental support for asyncio, allowing
use of most of its interface from async applications. Alembic currently does
not provide an async API directly, but it can use a SQLAlchemy async
engine to run the migrations and autogenerate.

New configurations can use the template "async" to bootstrap an environment which
can be used with an async DBAPI like asyncpg, running the command::

    alembic init -t async <script_directory_here>

Existing configurations can be updated to use an async DBAPI by updating the ``env.py``
file that's used by Alembic to start its operations. In particular only
``run_migrations_online`` will need to be updated to be something like the example below::

    import asyncio

    # ... no change required to the rest of the code


    def do_run_migrations(connection):
        context.configure(connection=connection, target_metadata=target_metadata)

        with context.begin_transaction():
            context.run_migrations()


    async def run_migrations_online():
        """Run migrations in 'online' mode.

        In this scenario we need to create an Engine
        and associate a connection with the context.

        """
        connectable = AsyncEngine(
            engine_from_config(
                config.get_section(config.config_ini_section),
                prefix="sqlalchemy.",
                poolclass=pool.NullPool,
                future=True,
            )
        )

        async with connectable.connect() as connection:
            await connection.run_sync(do_run_migrations)


    if context.is_offline_mode():
        run_migrations_offline()
    else:
        asyncio.run(run_migrations_online())

An async application can also interact with the Alembic API directly by using
the SQLAlchemy ``run_sync`` method to adapt the non-async API of Alembic to
an async consumer.
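
A minimal sketch of that pattern, combining an async engine with the
:ref:`connection_sharing` recipe above; ``run_sync`` passes a synchronous
:class:`~sqlalchemy.engine.Connection` into the given callable, which we
place into :attr:`.Config.attributes` for ``env.py`` to pick up (the
database URL here is hypothetical)::

    import asyncio

    from sqlalchemy.ext.asyncio import create_async_engine

    from alembic import command
    from alembic.config import Config


    def run_upgrade(connection, cfg):
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")


    async def run_async_upgrade():
        async_engine = create_async_engine(
            "postgresql+asyncpg://scott:tiger@localhost/test"
        )
        async with async_engine.begin() as conn:
            await conn.run_sync(run_upgrade, Config("alembic.ini"))


    asyncio.run(run_async_upgrade())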