Changes from 3.1.0 to 3.1.1
===========================

Bugs fixed
----------

- Fixed a critical bug that caused an exception at import time.
  The error was triggered when a long-double detection bug in the HDF5
  library was detected (see :issue:`275`) and numpy_ did not expose
  `float96` or `float128`. Closes :issue:`344`.
- The internal Blosc_ library has been updated to version 1.3.5.
  This fixes a spurious buffer overrun condition that made c-blosc fail
  even though no actual overrun had occurred.


Improvements
------------

- Do not create a temporary array when the *obj* parameter is not specified
  in :meth:`File.create_array` (thanks to Francesc).
  Closes :issue:`337` and :issue:`339`.
- Added two new utility functions
  (:func:`tables.nodes.filenode.read_from_filenode` and
  :func:`tables.nodes.filenode.save_to_filenode`) for copying directly from
  the filesystem to a filenode and vice versa (closes :issue:`342`).
  Thanks to Andreas Hilboll.  A usage sketch is shown after this list.
- Removed the :file:`examples/nested-iter.py` example, which was no longer
  considered useful.  Closes :issue:`343`.
- Better detection of the `-msse2` compiler flag.
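
The following is a minimal sketch of how the two new filenode helpers can be
used to copy a file into and back out of an HDF5 file.  The file names and
the ``where``/``name`` keyword arguments are illustrative assumptions; please
check the :mod:`tables.nodes.filenode` reference for the exact signatures::

    import tables
    from tables.nodes import filenode

    h5file = tables.open_file('data.h5', 'w')

    # Copy a file from the filesystem into a filenode inside the HDF5 file...
    filenode.save_to_filenode(h5file, 'notes.txt', where='/', name='notes')

    # ...and extract it back to the filesystem under a new name.
    filenode.read_from_filenode(h5file, 'notes_copy.txt', where='/',
                                name='notes')

    h5file.close()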


Changes from 3.0 to 3.1.0
=========================

New features
------------

- Now PyTables is able to save/restore the default value of :class:`EnumAtom`
  types (closes :issue:`234`).
- Implemented support for the H5FD_SPLIT driver (closes :issue:`288`,
  :issue:`289` and :issue:`295`). Many thanks to simleo.
- New quantization filter: the filter truncates floating point data to a
  specified precision before writing to disk. This can significantly improve
  the performance of compressors (closes :issue:`261`).
  Thanks to Andreas Hilboll.
- Added a new :meth:`VLArray.get_row_size` method to :class:`VLArray` for
  querying the number of atoms in a :class:`VLArray` row.
  Closes :issue:`24` and :issue:`315`.
- The internal Blosc_ library has been updated to version 1.3.2.
  All new features introduced in the Blosc_ 1.3.x series, and in particular
  the ability to leverage different compressors within Blosc_ (see the `Blosc
  Release Notes`_), are now available in PyTables via the blosc filter
  (closes :issue:`324`). A big thank you to Francesc.
  A short sketch of these new options follows this list.
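
As an illustration, the sketch below shows how quantization and the
additional Blosc compressors can be requested through :class:`Filters`, and
how :meth:`VLArray.get_row_size` can be called.  The specific values used
(3 significant digits, the ``blosc:lz4`` compressor, the dataset names) are
only examples::

    import numpy as np
    import tables

    # Quantize to ~3 significant decimal digits and compress with the
    # LZ4 codec shipped inside Blosc.
    filters = tables.Filters(complevel=5, complib='blosc:lz4',
                             least_significant_digit=3)

    with tables.open_file('quantized.h5', 'w') as h5file:
        # Truncated data usually compresses considerably better.
        h5file.create_carray('/', 'data', obj=np.random.rand(1000),
                             filters=filters)

        # get_row_size() queries the size of a single VLArray row,
        # as described in the release note above.
        vlarray = h5file.create_vlarray('/', 'vldata',
                                        atom=tables.Float64Atom())
        vlarray.append(np.arange(10.))
        print(vlarray.get_row_size(0))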


Improvements
------------

- The node caching mechanism has been completely redesigned to be simpler and
  less dependent on specific behaviours of the ``__del__`` method.
  Now PyTables is compatible with the forthcoming Python 3.4.
  Closes :issue:`306`.
- PyTables no longer uses shared/cached file handlers. This change somewhat
  improves support for concurrent reading, allowing the user to safely open
  the same file for reading in different threads (requires HDF5 >= 1.8.7).
  More details about this change can be found in the `Backward incompatible
  changes`_ section.
  See also :issue:`130`, :issue:`129`, :issue:`292` and :issue:`216`.
- PyTables is now able to detect and use external installations of the Blosc_
  library (closes :issue:`104`).  If Blosc_ is not found on the system and
  the user does not specify a custom installation directory, an internal copy
  of the Blosc_ source code is used.
- Automatically disable extended float support if a buggy version of HDF5
  is detected (see also `Issues with H5T_NATIVE_LDOUBLE`_).
  See also :issue:`275`, :issue:`290` and :issue:`300`.
- Documented an unexpected behaviour with string literals in query conditions
  on Python 3 (closes :issue:`265`).
- The deprecated :mod:`getopt` module has been dropped in favour of
  :mod:`argparse` in all command line utilities (closes :issue:`251`).
- Improved the installation section of the :doc:`../usersguide/index`.

  * instructions for installing PyTables via pip_ have been added.
  * added a reference to the Anaconda_, Canopy_ and `Christoph Gohlke suites`_
    (closes :issue:`291`).

- Enabled `Travis-CI`_ builds for Python_ 3.3.
- :meth:`Table.read_coordinates` now also works with boolean indices as input
  (a short sketch follows this list).
  Closes :issue:`287` and :issue:`298`.
- Improved compatibility with numpy_ >= 1.8 (see :issue:`259`).
- The code of the benchmark programs (bench directory) has been updated.
  Closes :issue:`114`.
- Fixed some warnings related to non-unicode file names (the Windows bytes
  API was deprecated in Python 3.4).
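
A minimal sketch of selecting table rows with a boolean index array; the file
name, table layout and column name are illustrative assumptions::

    import tables

    with tables.open_file('sample.h5', 'w') as h5file:
        table = h5file.create_table('/', 'readings',
                                    {'value': tables.Float64Col()})
        table.append([(float(i),) for i in range(10)])
        table.flush()

        # A boolean array with one entry per row can now be passed
        # directly to read_coordinates().
        mask = table.col('value') > 5.0
        selected = table.read_coordinates(mask)
        print(selected['value'])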


Bugs fixed
----------

- Fixed detection of platforms supporting Blosc_.
- Fixed a crash that occurred when attempting to write a numpy_ array to
  an :class:`Atom` (closes :issue:`209` and :issue:`296`).
- Prevent the creation of a table with no columns (closes :issue:`18` and
  :issue:`299`).
- Fixed a memory leak that occurred when iterating over
  :class:`CArray`/:class:`EArray` objects (closes :issue:`308`,
  see also :issue:`309`).
  Many thanks to Alistair Muldal.
- Made NaN values sort to the end. Closes :issue:`282` and :issue:`313`.
- Fixed selection on float columns when NaNs are present (closes :issue:`327`
  and :issue:`330`).
- Fixed the computation of the buffer size for iterations on rows.
  The buffer size was overestimated, resulting in a :exc:`MemoryError`
  in some cases.
  Closes :issue:`316`. Thanks to bbudescu.
- Better check of the file open mode. Closes :issue:`318`.
- The Blosc filter now works correctly together with fletcher32.
  Closes :issue:`21`.
- Close the file handle before trying to delete the corresponding file.
  Fixes a test failure on Windows.
- Use integer division for computing indices (fixes some warnings on
  Windows).


Deprecations
------------

Following the plan for the complete transition to the new (PEP8_ compliant)
API, all calls to the old API will raise a :exc:`DeprecationWarning`.

The new API was introduced in PyTables 3.0 and is backward incompatible.
In order to guarantee a smoother transition, the old API is still usable,
even though it is now deprecated.

The plan for the complete transition to the new API is outlined in
:issue:`224`.


Backward incompatible changes
-----------------------------

In PyTables <= 3.0, file handles (the objects returned by the
:func:`open_file` function) were stored in an internal registry and re-used
when possible.

Two subsequent attempts to open the same file (with a compatible open mode)
returned the same file handle in PyTables <= 3.0::

    In [1]: import tables
    In [2]: print(tables.__version__)
    3.0.0
    In [3]: a = tables.open_file('test.h5', 'a')
    In [4]: b = tables.open_file('test.h5', 'a')
    In [5]: a is b
    Out[5]: True

All this was an implementation detail: it happened under the hood and the
user had no control over the process.

This behaviour was considered a feature since it could speed up repeated
opens of the same file and it also avoided potential problems related to
multiple opens, a practice that the HDF5 developers recommend avoiding
(see also the H5Fopen_ reference page).

The trick, of course, is that files were not opened multiple times at the
HDF5 level; rather, an already open file was referenced several times.

The big drawback of this approach is that it left very little chance of
using PyTables safely in a multi-threaded program.  Several bug reports
have been filed regarding this topic.

After long discussions about the possibility of actually achieving concurrent
I/O and about the patterns that should be used for I/O in concurrent
programs, the PyTables developers decided to remove the *black magic under
the hood* and let users implement the patterns they want.

Starting from PyTables 3.1, file handles are no longer re-used (*shared*) and
each call to the :func:`open_file` function returns a new file handle::

    In [1]: import tables
    In [2]: print(tables.__version__)
    3.1.0
    In [3]: a = tables.open_file('test.h5', 'a')
    In [4]: b = tables.open_file('test.h5', 'a')
    In [5]: a is b
    Out[5]: False

It is important to stress that the new implementation still has an internal
registry (an implementation detail) and that it is still **not thread safe**.
However, a sufficiently careful developer should now be able to use PyTables
in a multi-threaded program without too many headaches.

The new implementation behaves differently from the previous one, although
the API has not been changed.  Users should now pay more attention when they
open the same file multiple times (as recommended in the `HDF5 reference`__)
and take care to use the resulting handles appropriately.

__ H5Fopen_
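
For example, a read-only pattern along the lines of the following sketch
should now be possible (assuming HDF5 >= 1.8.7, see the note below).  This is
only an illustration of the intended usage, not an officially supported
recipe, and the file and dataset names are assumptions::

    import threading
    import tables

    def reader(path):
        # Each thread opens (and closes) its own read-only handle
        # instead of sharing a cached one.
        with tables.open_file(path, 'r') as h5file:
            data = h5file.root.data[:]
            print(len(data))

    threads = [threading.Thread(target=reader, args=('test.h5',))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()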

Please note that the :attr:`File.open_count` property was originally intended
to keep track of the number of references to the same file handle.
In PyTables >= 3.1, despite its name, it maintains the same semantics; it is
just that now its value should never be higher than 1.

.. note::

    HDF5 versions lower than 1.8.7 are not fully compatible with PyTables 3.1.
    Partial support for HDF5 < 1.8.7 is still provided, but in that case
    multiple opens of the same file are not allowed at all (even in read-only
    mode).

.. _pip: http://www.pip-installer.org
.. _Anaconda: https://store.continuum.io/cshop/anaconda
.. _Canopy: https://www.enthought.com/products/canopy
.. _`Christoph Gohlke suites`: http://www.lfd.uci.edu/~gohlke/pythonlibs
.. _`Issues with H5T_NATIVE_LDOUBLE`: http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
.. _Python: http://www.python.org
.. _Blosc: http://www.blosc.org
.. _numpy: http://www.numpy.org
.. _`Travis-CI`: https://travis-ci.org
.. _PEP8: http://www.python.org/dev/peps/pep-0008
.. _`Blosc Release Notes`: https://github.com/FrancescAlted/blosc/wiki/Release-notes
.. _H5Fopen: http://www.hdfgroup.org/HDF5/doc/RM/RM_H5F.html#File-Open