Anerty's Lair - News

BugFix & Update: jSAVF 1.83

This version fixes a limitation which prevented the extraction of compressed objects larger than 2GB and manifested itself as a "Size exceeds Integer.MAX_VALUE" message. The cause was the way jSAVF mapped the parts of the temporary files it uncompresses such objects into: the mapped region depended not on the amount of data to extract for a task, but on the size of the uncompressed object it was located in, and for large objects this crossed the limit of what the Java FileChannel::map method allows. I've changed the decompression routine so it no longer tries to map the whole temporary file, which should solve the problem. Thanks to the user who reported the bug!
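The idea behind the fix can be sketched as follows (the class and method names here are mine, not jSAVF's actual code): instead of mapping a region sized to the whole uncompressed object, map only the window needed by each read, which always stays below the Integer.MAX_VALUE size limit that FileChannel::map enforces.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedMap {
    // FileChannel.map refuses regions larger than Integer.MAX_VALUE,
    // so map only the window actually needed for each read instead of
    // a region covering the whole (possibly >2GB) temporary file.
    static byte[] read(Path file, long position, int length) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer window = ch.map(FileChannel.MapMode.READ_ONLY, position, length);
            byte[] out = new byte[length];
            window.get(out);
            return out;
        }
    }
}
```

The position argument is a long, so the window can sit anywhere in a multi-gigabyte file; only the window's length is bounded.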

This version also updates the bundled Java environment in the Windows installer to the current 25.0.1+8 JDK from Adoptium, which thankfully still provides jmods for its JDKs after JEP 493; this enables people like me to build and distribute Java software for Windows without having to package it under that operating system.

If you encounter any issues with this version, don't hesitate to reach out to me.


BugFix & Update: jSAVF 1.82

This version adds an extractor for bytestream objects stored in IFS saves (*STMF), which only extracts the data of such objects. The extractor performs no CCSID conversion because at the moment I'm not sure whether these objects retain such information, and I've seen binary contents for Zip files, ASCII, and EBCDIC, so it's preferable to preserve the original bytes until I find a reliable way to tell their encoding. Once extracted, if the data proves to be EBCDIC it can always be converted later using programs such as iconv with the appropriate source code page:

iconv -f cp037 -t utf8 your_stmf.txt > your_stmf.utf8.txt
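The same conversion can also be done in Java itself, assuming the runtime ships the JDK's extended charsets (a full JDK bundles an IBM037 charset, aliased cp037, in its jdk.charsets module); the class and method names below are mine, for illustration only:

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class StmfRecode {
    // Recode an extracted *STMF from EBCDIC (CCSID 37) to UTF-8,
    // equivalent to: iconv -f cp037 -t utf8 in > out
    static void cp037ToUtf8(Path in, Path out) throws IOException {
        byte[] raw = Files.readAllBytes(in);
        String text = new String(raw, Charset.forName("IBM037"));
        Files.write(out, text.getBytes(StandardCharsets.UTF_8));
    }
}
```

This reads the whole file into memory, so it's only suitable for reasonably sized source members, not huge bytestreams.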

This version also fixes a minor bug:

The size of IFS bytestream objects was displayed without rounding it to the nearest KB, so small files usually ended up being displayed with a size of 0. The hover still displayed the correct byte size, but it could have misled people into thinking there was no data in there. The table now rounds the size to the nearest KB, so bytestreams of roughly half a KB or more display 1 instead of 0.
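The rounding itself amounts to something like this minimal sketch (the helper name is mine, not jSAVF's):

```java
public class SizeDisplay {
    // Round a byte count to the nearest whole KB for table display,
    // so a small-but-non-empty bytestream shows 1 rather than 0.
    static long toNearestKb(long bytes) {
        return Math.round(bytes / 1024.0);
    }
}
```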

If you encounter any issues with this version, don't hesitate to reach out to me.


BugFix & Update: jSAVF 1.81

This version improves upon the experimental source extraction from programs by adding support for CLP programs, which seem to include in their associated space the same kind of source information found in the debug views of CLLE programs (probably for the RTVCLSRC command), but also include the source CCSID, so this is probably more reliable for internationalized source text.

This version also fixes a bug:

The about dialog was showing two red question marks instead of the link to this site: I'm using Swing's ability to instantiate objects in HTML (with a syntax such as <object classid="...), and that's now disabled by default in recent Java versions. I've re-enabled it for now using the swing.html.object Java system property, but I'll review that for potential security issues.

I've also updated the jSAVF dependencies and the embedded JRE bundled in the installable version for Windows. jSAVF needs Java 21 or better.

If you encounter any issues with this version, don't hesitate to reach out to me.


BugFix & Update: jSAVF 1.80

A user was interested in opening a large multi-save magnetic tape image file with jSAVF, so I made a few changes to support that use case. This also enabled me to add limited support for optical disk image files (ISO9660 only for now, not UDF), since these may also contain multiple save files.

The tape imaging tool he was using produces a tape image file in the AWSTAPE format (see Description, Hercules-390 Emulator), which inserts 6-byte block headers before each tape block.

This tape image format is not ideal, given that it describes the lengths of the previous and current blocks with 16-bit words, while modern tapes appear to have blocks of 256KB or more. This leads to overflows: for example, a block size recorded as 0xF000 in the block header can mean 0x0F000, 0x1F000, 0x2F000 or 0x3F000 for a tape block size of 0x40000 (256KB) described in the ANSI X3.27-1978 tape file header labels.
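The ambiguity can be made concrete with a small sketch (the names are mine, for illustration): enumerate every length congruent to the stored 16-bit value modulo 0x10000 that still fits within the tape's declared block size.

```java
import java.util.ArrayList;
import java.util.List;

public class AwsTapeCandidates {
    // An AWSTAPE header stores block lengths in 16 bits, so when the real
    // block size may reach maxBlock, a stored value is ambiguous: the true
    // length is stored16 + k * 0x10000 for some k >= 0.
    static long[] candidates(int stored16, long maxBlock) {
        List<Long> out = new ArrayList<>();
        for (long len = stored16; len <= maxBlock; len += 0x10000) {
            out.add(len);
        }
        return out.stream().mapToLong(Long::longValue).toArray();
    }
}
```

With the example from the text, candidates(0xF000, 0x40000) yields the four possibilities 0x0F000, 0x1F000, 0x2F000 and 0x3F000.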

jSAVF works around the issue by looking at these possible offsets for an AWSTAPE block header whose previous-block length matches the block length of the current header modulo 65536, and checking that the header flags look correct. If unlucky, there may be data at one of these alternative offsets which looks enough like a real block header, and in that case jSAVF will bail out with an error. If that happens to you, I may improve upon the current strategy to distinguish the real header from data (by looking at the next headers), but since it seems to work well enough for now I'll leave it as it is unless someone has a tape which can't be read this way.

As scanning all the alternative block headers of a multi-gigabyte tape image file to find all the save files written on it can be a lengthy process, when that takes more than 3s jSAVF will generate a compressed tape file index next to it named "xxx.jsavf_awstape_index.bin", where "xxx" is the original tape image file name. The index contains the tape labels for all files on the tape, plus offset tables used to quickly convert a save file offset to a tape image file position. This lets jSAVF open the file nearly instantly the next time, and doesn't take much space (~1MB index for a ~30GB file).
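Such an offset table can work along these lines (a hypothetical sketch, not jSAVF's actual index format): store checkpoints pairing a logical save-file offset with the tape-image position it starts at, then convert any offset with a binary search plus the distance from the nearest preceding checkpoint.

```java
import java.util.Arrays;

public class TapeIndex {
    // Checkpoints: sorted logical save-file offsets, and the tape-image
    // position each one starts at (image positions differ from logical
    // offsets because of the interleaved AWSTAPE block headers).
    final long[] saveOffsets;
    final long[] imagePositions;

    TapeIndex(long[] saveOffsets, long[] imagePositions) {
        this.saveOffsets = saveOffsets;
        this.imagePositions = imagePositions;
    }

    // Map a logical save-file offset to its position in the tape image.
    long toImagePosition(long saveOffset) {
        int i = Arrays.binarySearch(saveOffsets, saveOffset);
        if (i < 0) i = -i - 2; // checkpoint at or before saveOffset
        return imagePositions[i] + (saveOffset - saveOffsets[i]);
    }
}
```

With one checkpoint per tape block, the lookup stays exact even though each block contributes a few header bytes to the image position.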

When there are multiple saves in the file, jSAVF now prompts the user to select the one to open. For batch scripts this is handled by a new jsavf:openMultiSaveFile(path, selector) API, which calls a user-provided selector function with the list of save files found in the tape or optical image file and expects the selected one to be returned. This allows a script to select a save based on its size, name, or order. The JEXL dependency was updated to v3.3 to make this possible, so if you find incompatibilities in your batch scripts please tell me about them so I can see whether it's something I can fix. I've also exposed a few more classes to the batch API which help convert text to int or long, and updated the batch API documentation accordingly.

This version also fixes two bugs:

  • Some extractions failed to determine the saved item structure because of an error in the way jSAVF computed the position of the first section when the item header length was exactly 4096 bytes.
  • Some large CSV extractions with CHAR VARYING, CLOB, or BLOB fields couldn't be performed when the total amount of data in these fields exceeded some limit, which happened on tables with many records or with enough data in them. jSAVF now correctly handles tables with many records, but may still have issues with individual BLOB / CLOB values which are very large. Don't hesitate to reach out to me if you're confronted with such an issue.

The CSV extraction of BLOB fields was also modified to extract their values as a hexadecimal string, because putting raw binary inside a UTF-8 CSV file is not very useful.
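A minimal sketch of such a hex rendering (the helper name is mine), using the standard HexFormat class available since Java 17:

```java
import java.util.HexFormat;

public class BlobCsv {
    // Render a BLOB value as a hexadecimal string so it stays
    // valid text inside a UTF-8 CSV file.
    static String toHex(byte[] blob) {
        return HexFormat.of().formatHex(blob);
    }
}
```

The resulting string can later be decoded back to the original bytes with HexFormat.of().parseHex(...).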

I've also updated the jSAVF dependencies and the embedded JRE bundled in the installable version for Windows, so jSAVF now needs Java 21 or better.

If you encounter any issues with this version, don't hesitate to reach out to me.