This directory contains a few example programs.

Each program takes the filename as a command-line argument
"-fname filename".

If you are using "mpirun" to run an MPI program, you can run the
program "simple" with two processes as follows:
   mpirun -np 2 simple -fname test


simple.c: Each process creates its own file, writes to it, reads it
   back, and checks the data read.

psimple.c: Same as simple.c but uses the PMPI versions of all MPI routines

error.c: Tests if error messages are printed correctly

status.c: Tests if the status object is filled correctly by I/O functions

perf.c: A simple read and write performance test. Each process writes
   4 Mbytes to a file at a location determined by its rank and
   reads it back. For a different access size, change the value
   of SIZE in the code. The bandwidth is reported for two cases:
   (1) without including MPI_File_sync and (2) including
   MPI_File_sync.

async.c: This program is the same as simple.c, except that it uses
   asynchronous I/O.

coll_test.c: This program tests the use of collective I/O. It writes
   a 3D block-distributed array to a file corresponding to the
   global array in row-major (C) order, reads it back, and checks
   that the data read is correct. The global array size has been
   set to 32^3. If you are running it on NFS, which is very slow,
   you may want to reduce that size to 16^3.

coll_perf.c: Measures the I/O bandwidth for writing/reading a 3D
   block-distributed array to a file corresponding to the global
   array in row-major (C) order. The global array size has been
   set to 128^3. If you are running it on NFS, which is very slow,
   you may want to reduce that size to 16^3.

misc.c: Tests various miscellaneous MPI-IO functions

atomicity.c: Tests whether atomicity semantics are satisfied for
   overlapping accesses in atomic mode. The probability of detecting
   errors is higher if you run it on 8 or more processes.
large_file.c: Tests access to large files. Writes a 4-Gbyte file and
   reads it back. Run it only on one process and on a file system
   on which ROMIO supports large files.

large_array.c: Tests writing and reading a 4-Gbyte distributed array using
   the distributed array datatype constructor. Works only on file
   systems that support 64-bit file sizes and MPI implementations
   that support 64-bit MPI_Aint.

file_info.c: Tests the setting and retrieval of hints via
   MPI_File_set_info and MPI_File_get_info

excl.c: Tests MPI_File_open with MPI_MODE_EXCL

noncontig.c: Tests noncontiguous accesses in memory and file using
   independent I/O. Run it on two processes only.

noncontig_coll.c: Same as noncontig.c, but uses collective I/O

noncontig_coll2.c: Same as noncontig_coll.c, but exercises the
   cb_config_list hint and aggregation handling more.

i_noncontig.c: Same as noncontig.c, but uses nonblocking I/O

shared_fp.c: Tests the shared file pointer functions

split_coll.c: Tests the split collective I/O functions

fperf.f: Fortran version of perf.c

fcoll_test.f: Fortran version of coll_test.c

pfcoll_test.f: Same as fcoll_test.f but uses the PMPI versions of
   all MPI routines

fmisc.f: Fortran version of misc.c