!! UPDATE TODO !!

!! UPDATE BENCHMARKS !!

* custom serialization functions for lists of 'WordX's
  - benchmark the chunk size speedup for more complicated computations of
    list elements => it is to be expected that we get no speedup anymore,
    or even a slowdown => adapt Blaze.ByteString.Builder.Word accordingly.

* fast serialization for 'Text' values (currently unpacking to 'String' is
  the fastest :-/)

* implementation
  - further encodings for 'Char'
  - think about end-of-buffer wrapping when copying bytestrings
  - toByteStringIO with accumulator capability => provide 'toByteStringIO_'
  - allow build/foldr deforestation to happen for input to 'fromWrite<n>List'
    (or whatever stream fusion framework is in place for lists)
  - implement 'toByteString' with an amortized O(n) runtime using the
    exponential scaling trick. If the start size is chosen wisely, this may
    even be faster than 'S.pack', as the one copy per element is cheaper
    than one list thunk per element. It is even likely that we can amortize
    three copies per element, which lets us avoid spilling any buffer space
    by doing a last compaction copy.
  - we could provide builders that honor alignment restrictions, either as
    builder transformers or as specialized write-to-builder converters. The
    trick is for the driver to ensure that the beginning of the buffer is
    aligned to the largest alignment required (8 or 16 bytes?). This is
    probably the case by default. Then we can always align a pointer in the
    buffer by appropriately aligning the write pointer.

* extend tests to new functions

* benchmarks
  - understand why the declarative blaze-builder version is the fastest
    serializer for Word64 little-endian and big-endian
  - check the cost of using `mappend` on builders instead of writes.
  - show that using toByteStringIO has an advantage over toLazyByteString
  - check the performance of toByteStringIO
  - compare the speed of 'L.pack' to the speed of
    'toLazyByteString . fromWord8s'

* documentation
  - sort out the formulation: "serialization" vs. "encoding"

* check portability to Hugs

* performance
  - check if reordering 'pe' and 'pf' changes performance; it seems that
    'pe' is only a reader argument while 'pf' is a state argument.
  - perhaps we could improve performance by taking page size, page
    alignment, and memory access alignment into account.
  - detect machine endianness and use host-order writes for the supported
    endianness.
  - introduce a type 'BoundedWrite' that encapsulates a 'Write' generator
    together with an upper bound on the number of bytes written by the
    write. This way we can achieve data independence for the size check by
    sacrificing just a little bit of buffer space at buffer ends.
  - investigate where we would profit from static bounds on the number of
    bytes written (e.g. to make the control flow more linear)

* testing
  - port the tests from 'Data.Binary.Builder' to ensure that the word
    writes and builders are working correctly. I may have missed some
    pitfalls about word types in Haskell while porting the functions from
    'Data.Binary.Builder'.

* portability
  - port to Hugs
  - test lower versions of GHC

* deployment
  - add a source repository to the 'blaze-html' and 'blaze-builder' cabal
    files
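
The exponential scaling trick from the 'toByteString' item above can be
sketched as follows. This is only an illustration, not the real
implementation (which would poke bytes into raw buffers via writes rather
than use 'S.pack'); the name 'packScaling' is hypothetical:

```haskell
import qualified Data.ByteString as S
import Data.Word (Word8)

-- Hypothetical sketch: pack a byte list into chunks whose sizes grow
-- geometrically (firstSize, 2*firstSize, 4*firstSize, ...).  Only
-- O(log n) chunks are allocated, and each byte is copied once into its
-- chunk and once more by the final compaction ('S.concat'), so the
-- overall runtime is amortized O(n).
packScaling :: Int -> [Word8] -> S.ByteString
packScaling firstSize = S.concat . go firstSize
  where
    go _    [] = []
    go size ws = case splitAt size ws of
      (chunk, rest) -> S.pack chunk : go (2 * size) rest
```

Choosing 'firstSize' large enough to hold common small inputs avoids the
compaction copy entirely in the frequent single-chunk case.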
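
The 'BoundedWrite' idea from the performance section could look roughly
like this. The representation and names are assumptions (blaze-builder's
actual 'Write' type differs); the sketch only makes concrete how a static
bound makes the buffer-space check independent of the data being written:

```haskell
import Data.Bits (shiftR)
import Data.Word (Word8, Word16)
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Marshal.Array (peekArray)
import Foreign.Ptr (Ptr, minusPtr, plusPtr)
import Foreign.Storable (poke)

-- Hypothetical sketch: a write paired with a static upper bound on the
-- number of bytes it may emit.  The driver can then test
-- "free buffer space >= bwBound" without inspecting the actual data,
-- wasting at most (bound - actual bytes) of space at a buffer end.
data BoundedWrite = BoundedWrite
  { bwBound :: !Int                         -- static upper bound in bytes
  , bwWrite :: Ptr Word8 -> IO (Ptr Word8)  -- pokes bytes, returns the
                                            -- pointer past them
  }

-- A big-endian Word16 write: emits exactly 2 bytes, so the bound is 2.
writeWord16be :: Word16 -> BoundedWrite
writeWord16be w = BoundedWrite 2 $ \p -> do
  poke p               (fromIntegral (w `shiftR` 8) :: Word8)
  poke (p `plusPtr` 1) (fromIntegral w :: Word8)
  return (p `plusPtr` 2)

-- Run a single BoundedWrite into a temporary buffer and return the
-- bytes it produced (for testing only).
runBoundedWrite :: BoundedWrite -> IO [Word8]
runBoundedWrite (BoundedWrite bound w) =
  allocaBytes bound $ \p -> do
    end <- w p
    peekArray (end `minusPtr` p) p
```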
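
The alignment idea under "implementation" can also be made concrete with
base's 'Foreign.Ptr.alignPtr'. Assuming the driver guarantees that the
buffer start is aligned to the largest alignment in use, aligning the
write pointer like this keeps subsequent aligned writes valid; the helper
name 'alignmentPadding' is hypothetical:

```haskell
import Data.Word (Word8)
import Foreign.Ptr (Ptr, alignPtr, minusPtr)

-- Hypothetical helper: number of padding bytes to skip so that the
-- next write starts at an address that is a multiple of 'a'.
-- 'alignPtr' rounds a pointer up to the next address satisfying the
-- alignment constraint (and is the identity on already-aligned ones).
alignmentPadding :: Int -> Ptr Word8 -> Int
alignmentPadding a p = alignPtr p a `minusPtr` p

-- For example, an address 3 bytes past an 8-byte-aligned base needs
-- 5 bytes of padding to reach the next 8-byte boundary.
```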