.. I can still be happy about a little script I wrote to sanity-check the sort order of a list of files returned by the server.

I didn’t want to hard-code all the test data, both to reduce maintenance and to benefit from a bit of extra coverage over multiple executions. So I started with a list of file names with potentially interesting alpha-sort properties, e.g. a, aa, a0, a1, etc., then shuffled that randomly to generate two arrays. The first gives the creation order; the position of a file name in the second governs the file size. I then use the same arrays to verify the results that come back from the server when I ask for the enumerations of sorted files. Here’s a pseudocode extract of the Perl script:

use List::Util qw/ shuffle first /;

# create array of file names with interesting alpha-sort properties
my @files = qw/ 00 01 1 1.1 1.2 2 20 A Z a a0 a1 aa z /;

# randomly shuffle to govern creation and size order
my @file_creation_order = shuffle @files;
my @file_size_order     = shuffle @files;

# create files, 1 second apart for timestamp differences;
# the size of each file depends on its index in @file_size_order
foreach my $f (@file_creation_order) {
    my $size_index = first { $file_size_order[$_] eq $f } 0 .. $#file_size_order;
    my $content    = create_string_of_size($size_index);
    create_file($f, $content);   # pseudocode helper: write the file out
    sleep 1;
}

# run the tests
my @alpha_files = sort @files;
testEnumeration("order=alpha", @alpha_files);
testEnumeration("order=date",  @file_creation_order);
testEnumeration("order=size",  @file_size_order);
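For completeness, here’s one way testEnumeration might be fleshed out — a hedged sketch, not the real routine: it fetches the server’s listing and compares it element by element against the expected order. get_files_from_server() is a hypothetical stand-in for whatever actually queries the server; here it’s stubbed with a fixed alphabetical list purely so the example runs.

```perl
use strict;
use warnings;

# Stub standing in for the real server query (assumption, not real API):
# pretend the server returned these names already sorted alphabetically.
sub get_files_from_server {
    my ($query) = @_;
    return qw/ a a0 a1 aa /;
}

# Compare the server's enumeration against the expected order.
sub testEnumeration {
    my ($query, @expected) = @_;
    my @actual = get_files_from_server($query);
    # join with NUL so lists compare as a whole, element by element
    my $ok = join("\0", @actual) eq join("\0", @expected);
    print(($ok ? "PASS" : "FAIL") . ": $query\n");
    return $ok;
}

testEnumeration("order=alpha", qw/ a a0 a1 aa /);   # prints "PASS: order=alpha"
```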

Yes, I could have done the same kind of thing with three separate routines, and I could have randomly generated the file names too. And if you noticed that this approach has no scope for collisions of timestamp or file size, you’re right. There are any number of ways to do this, and better ones, but this suits my needs and was quick. As I said, I’m no developer but …