Anerty's Lair - News

BugFix & Update: jSAVF 1.80

A user was interested in opening a large multi-save magnetic tape image file with jSAVF, so I made a few changes to support that use case. This also enabled me to add limited support for optical disk image files (ISO9660 only for now, not UDF), since these may also contain multiple save files.

The tape imaging tool he was using produces a tape image file in the AWSTAPE format (see Description, Hercules-390 Emulator), which inserts a 6-byte block header before each tape block.
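
For reference, each AWSTAPE block header packs two 16-bit little-endian lengths and two flag bytes into those 6 bytes. A minimal Java sketch of how such a header can be decoded (the field names are mine):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Decodes a 6-byte AWSTAPE block header: two 16-bit little-endian lengths
    // followed by two flag bytes (record start/end and tape mark markers).
    record AwsTapeHeader(int curBlockLen, int prevBlockLen, int flags1, int flags2) {
        static AwsTapeHeader read(ByteBuffer buf) {
            buf.order(ByteOrder.LITTLE_ENDIAN);
            int cur  = Short.toUnsignedInt(buf.getShort()); // length of this block
            int prev = Short.toUnsignedInt(buf.getShort()); // length of the previous block
            int f1   = Byte.toUnsignedInt(buf.get());       // flags byte 1
            int f2   = Byte.toUnsignedInt(buf.get());       // flags byte 2, zero in practice
            return new AwsTapeHeader(cur, prev, f1, f2);
        }
    }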

This tape image format is not ideal: it describes the lengths of the previous and current blocks with 16-bit words, while modern tapes appear to have blocks of 256KB or more. This leads to overflows, so a block size described as 0xF000 in the block header can mean either 0x0F000, 0x1F000, 0x2F000 or 0x3F000 for a tape block size of 0x40000 (256KB) as described in the ANSI X3.27-1978 tape file header labels.

jSAVF works around the issue by probing each of these possible offsets for an AWSTAPE block header whose previous-block length corresponds to the block length of the current header modulo 65536, and whose header flags look correct. With bad luck, data at one of these alternative offsets may look enough like a real block header; in that case jSAVF bails out with an error. If you run into this I may improve the current strategy to distinguish the real header from data (by looking at the next headers), but since it seems to work well enough for now I'll leave it as it is unless someone has a tape which can't be read this way.
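
To illustrate the workaround, here is a rough sketch of the probing loop (not jSAVF's actual code, and the flag check here is deliberately simplistic), reusing the AwsTapeHeader decoder sketched above:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    // Enumerates the offsets the truncated 16-bit length could stand for, and
    // keeps the first one where the next header's previous-block length and
    // flags look plausible. May still hit a false positive on unlucky data.
    static long resolveNextHeader(FileChannel tape, long headerPos, AwsTapeHeader cur,
                                  int tapeBlockSize) throws IOException {
        for (long carry = 0; carry + cur.curBlockLen() <= tapeBlockSize; carry += 0x10000) {
            long candidate = headerPos + 6 + carry + cur.curBlockLen();
            ByteBuffer six = ByteBuffer.allocate(6);
            if (tape.read(six, candidate) != 6) break;  // ran past the end of the image
            AwsTapeHeader next = AwsTapeHeader.read(six.flip());
            boolean prevLenMatches = next.prevBlockLen() == cur.curBlockLen(); // equal modulo 65536
            boolean flagsLookSane  = next.flags2() == 0;                       // simplistic sanity check
            if (prevLenMatches && flagsLookSane) return candidate;
        }
        throw new IOException("no plausible AWSTAPE block header found");
    }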

Scanning all the alternative block headers of a multi-gigabyte tape image file to find all the save files written on it can be a lengthy process, so when the scan takes more than 3 seconds jSAVF generates a compressed tape file index next to the image, named "xxx.jsavf_awstape_index.bin" where "xxx" is the original tape image file name. The index contains the tape labels for all files on the tape, plus offset tables used to quickly convert a save file offset to a position in the tape image. This lets jSAVF open the file nearly instantly the next time, and doesn't take much space (~1MB index for a ~30GB image).
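
The offset tables boil down to a sorted mapping from logical save file offsets to physical image positions, so the conversion is a simple binary search. Schematically (a simplified sketch, not the actual index format):

    import java.util.Arrays;

    // Maps a logical save file offset to a physical tape image position, given
    // two parallel sorted arrays (saveOffsets[0] is assumed to be 0).
    static long toImagePosition(long[] saveOffsets, long[] imagePositions, long saveOffset) {
        int i = Arrays.binarySearch(saveOffsets, saveOffset);
        if (i < 0) i = -i - 2; // index of the entry whose range contains saveOffset
        return imagePositions[i] + (saveOffset - saveOffsets[i]);
    }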

When there are multiple saves in the file, jSAVF now prompts the user to select the one to open. For batch scripts this is handled by a new jsavf:openMultiSaveFile(path, selector) API, which calls a user-provided selector function with the list of save files found in the tape or optical image and expects the selected one to be returned. This allows a script to select a save based on its size, name, or order. The JEXL dependency was updated to v3.3 to make this possible, so if you find incompatibilities in your batch scripts please tell me about them so I can see whether it's something I can fix. I've also exposed a few more classes to the batch API which help convert text to int or long, and updated the batch API documentation accordingly.
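
To give an idea of the contract, here is roughly what a selector looks like expressed in Java terms (the descriptor and method names below are hypothetical, see the batch API documentation for the real ones; in an actual batch script the selector is a JEXL function):

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical shapes, for illustration only.
    record SaveFileInfo(String name, long size) {}

    interface SaveFileSelector {
        SaveFileInfo select(List<SaveFileInfo> candidates);
    }

    // Example: pick the largest save whose name starts with "PAYROLL".
    SaveFileSelector byNameThenSize = candidates -> candidates.stream()
            .filter(s -> s.name().startsWith("PAYROLL"))
            .max(Comparator.comparingLong(SaveFileInfo::size))
            .orElseThrow();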

This version also fixes two bugs:

  • Some extractions failed to determine the saved item structure because of an error in the way jSAVF computed the position of the first section when the item header length was exactly 4096 bytes (see the sketch after this list).
  • Some large CSV extractions with CHAR VARYING, CLOB or BLOB fields couldn't be performed when the total amount of data in these fields exceeded some limit, which happened on tables with many records or with enough data in them. jSAVF now correctly handles tables with many records, but may still have issues with BLOB / CLOB values that are very large. Don't hesitate to reach out to me if you're confronted with such an issue.
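
The first bug was a classic block-boundary rounding mistake; schematically it amounts to something like this (an illustration of the bug class, not jSAVF's actual code):

    // Rounding a header length up to the next 4096-byte block boundary to
    // locate the first section. The buggy version adds a spurious block
    // exactly when headerLen is a multiple of 4096.
    static long firstSectionBuggy(long headerLen) {
        return (headerLen / 4096 + 1) * 4096;      // 4096 wrongly becomes 8192
    }

    static long firstSectionFixed(long headerLen) {
        return ((headerLen + 4095) / 4096) * 4096; // 4096 stays 4096, 4097 rounds to 8192
    }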

The CSV extraction of BLOB fields was also modified to extract their values as a hexadecimal string, because putting raw binary inside a UTF-8 CSV file is not very useful.
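
For what it's worth, Java's built-in java.util.HexFormat makes this kind of conversion trivial:

    import java.util.HexFormat;

    // Renders a BLOB value as a hexadecimal string so it can live safely
    // inside a UTF-8 CSV cell.
    static String blobToCsv(byte[] blob) {
        return HexFormat.of().formatHex(blob); // e.g. {0x01, (byte) 0xAF} -> "01af"
    }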

I've also updated the jSAVF dependencies and the embedded JRE bundled in the installable version for Windows, so jSAVF now requires Java 21 or later.

If you encounter any issue with this version don't hesitate to reach out to me.

BugFix & Update: jSAVF 1.72

This version fixes a bug which prevented jSAVF from opening SAVFs which contained files with special attributes.

The bug happened while analyzing the BASED ON attribute of files, which is a bit sad because that attribute is not displayed anywhere at the moment.

I've also updated the jSAVF dependencies and the embedded JRE bundled in the installable version for Windows, so jSAVF now requires Java 19 or later.

If you encounter any issue with this version don't hesitate to reach out to me.

BugFix & Update: jSAVF 1.71

This version fixes a bug which prevented jSAVF from opening large SAVFs (more than a terabyte).

A component which allows jSAVF to check each block's checksum only once was limited to 2147483647 blocks, which is problematic when the SAVF contains more.

Given the memory cost of keeping this state for all blocks, I've disabled the cache for SAVFs which have more blocks than this limit. This can reduce performance a bit if you read the same block more than once, but that's unlikely on a SAVF this big. If this proves too detrimental to performance I'll think about using a more efficient data structure than the one currently used to hold this data; maybe an interval tree could work there.
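
To illustrate, assuming the seen-blocks state lives in an int-indexed structure like java.util.BitSet (which cannot address more than 2147483647 bits), the workaround amounts to:

    import java.util.BitSet;

    // Sketch of the workaround (an assumption about the structure, not
    // jSAVF's actual code): past Integer.MAX_VALUE blocks the cache is
    // disabled entirely, so every read re-verifies the block checksum.
    final class BlockChecksumCache {
        private final BitSet verified; // null means "cache disabled"

        BlockChecksumCache(long blockCount) {
            this.verified = blockCount <= Integer.MAX_VALUE ? new BitSet() : null;
        }

        boolean alreadyVerified(long block) {
            return verified != null && verified.get((int) block);
        }

        void markVerified(long block) {
            if (verified != null) verified.set((int) block);
        }
    }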

I've also updated the jSAVF dependencies and the embedded JRE bundled in the installable version for Windows, so jSAVF now requires Java 18 or later.

If you encounter any issue with this version don't hesitate to reach out to me.

BugFix & Update: jSAVF 1.70

This version fixes a few bugs in the way jSAVF implements the various unpacking algorithms needed to read the SAVF contents:

  • The *LOW/SNA unpacking could in some cases produce too many bytes, which corrupted the extraction of SAVFs embedded in other SAVFs.
  • The *MEDIUM/TERSE unpacking now handles many edge cases in the way this algorithm's dictionary behaves when full, which mostly happens on objects with high entropy such as programs or embedded SAVFs, on which TERSE usually gives negative compression ratios.
  • The zlib unpacking which seems to be needed to open some SAVFs since V7R5 is now supported.

jSAVF now offers a raw object export with and without unpacking to help debug this kind of issue, so if you're facing an incorrect export which seems to be related to an unpacker bug you can now send me a sample.
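
Regarding the zlib unpacking mentioned above: Java's built-in java.util.zip.Inflater handles zlib streams, so the unpacking itself boils down to a loop like this (a generic sketch, not jSAVF's decoder):

    import java.io.ByteArrayOutputStream;
    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    // Generic zlib decompression; throws DataFormatException on corrupt input.
    static byte[] inflate(byte[] packed) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(packed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0 && inflater.needsInput()) break; // truncated input
            out.write(buf, 0, n);
        }
        inflater.end();
        return out.toByteArray();
    }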

This version offers a first shot at CSV export of database tables (members of non-source physical files). Most field types are handled, except DECFLOAT which will probably come later. It's possible that some variants I have not tested are exported incorrectly.

For now, I could export the following field types: NUMERIC, DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, REAL, DOUBLE, DATE, TIME, TIMESTAMP, CHAR, BINARY, VARCHAR, VARBINARY, GRAPHIC, VARGRAPHIC, DBCS, CLOB, BLOB, DBCLOB.

Given that the size and number of samples I could find freely available on the Internet is rather limited, it is probable that exporting large tables will be buggy (especially if they contain VARYING or LOB fields, given the way these are stored when they do not fit their allocated space in the main record). More information can be obtained by enabling the debug/experimental mode in the jSAVF preferences and restarting jSAVF.

I am considering making the CSV table export configurable to allow the selection of exported fields and their formatting, and to force their encoding.

This version also brings a first experimental shot at extracting *SOURCE or *LIST debug views which are included in some programs during compilation. This is available on objects of types *PGM, *MODULE and *SRVPGM when the debug/experimental mode is enabled at jSAVF startup.

For now the feature is quite basic: all debug views are exported together, without further post-processing besides some unpacking when needed, so such exports may include the content of multiple source files or compilation spools.

If you encounter any issue with this version don't hesitate to reach out to me.