

Compressing a record and Getting its Size

Questions around writing code and queries

Tue Jun 20, 2017 5:02 pm

I am hoping for some debugging tips.

Currently we have an occasional problem that causes the ECL BUILDINDEX to fail. In QC last night the following error occurred:

Graph graph83[2691], indexwrite[2695]: SLAVE #6 []: Key row too large to fit within a key node (uncompressed size=12140, variable=true, pos=0), - caused by (0, Key row too large to fit within a key node (uncompressed size=12140, variable=true, pos=0)) (in item 286)

The dataset has a certain field, and occasionally the extract of that field from the database does not work properly. So when that row of the dataset is built into an index and compressed, the key row is too large.

The only way I know to debug this is to take the dataset and attempt to build an index on it. Then I keep splitting the dataset and building indexes on the splits until I find the culprit. This morning it was record 610 of 808, and I am grateful that 808 was not a large number.
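The bisection could be at least partly automated, something like this hedged sketch. Here ds, keyfield, and the index layout are placeholders for your own definitions, and the debug filename prefix is invented:

```
// Number the rows once so ranges can be selected without
// physically splitting the file.
NumberedRec := RECORD
  RECORDOF(ds);
  UNSIGNED4 seq;
END;

numbered := PROJECT(ds,
                    TRANSFORM(NumberedRec,
                              SELF.seq := COUNTER,
                              SELF := LEFT));

// Try building the index on the half-open range [lo, hi);
// on failure, rerun with each half until the bad row is isolated.
TrySlice(UNSIGNED4 lo, UNSIGNED4 hi) := FUNCTION
  slice := numbered(seq >= lo AND seq < hi);
  idx := INDEX(slice, {keyfield}, {seq},
               '~debug::slice::' + (STRING)lo + '_' + (STRING)hi);
  RETURN BUILD(idx, OVERWRITE);
END;

TrySlice(1, 809);   // e.g. then TrySlice(1, 405), TrySlice(405, 809), ...
```

Each run still needs to be submitted manually, but halving the seq range beats materializing physical splits of the dataset.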

There must be an easier way. As I understand it, the problem is not the size of each row as such, but the size of the row once it has been compressed into the key. If I could somehow see each row's compressed size, I could quickly find the row that is causing the problem. The SIZEOF function only returns the total number of bytes defined for storage of the specified data structure or field, so I do not think that would even help.

I looked through all the documentation I have and could find no way to compress individual rows, nor any way to get the size of a row once it is compressed.

I am open to suggestions on how to discover the aberrant record(s) more easily.
georgeb2d
 
Posts: 93
Joined: Wed Dec 24, 2014 3:36 pm

Mon Jun 26, 2017 12:50 pm

Hi Don,

Have you tried using LENGTH on that field?
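For example, something along these lines would rank the rows by the trimmed length of the suspect field (a hedged sketch; ds and bigfield are placeholder names for your dataset and the field in question):

```
withLen := PROJECT(ds,
                   TRANSFORM({RECORDOF(ds), UNSIGNED4 len},
                             SELF.len := LENGTH(TRIM(LEFT.bigfield)),
                             SELF := LEFT));

// Show the ten rows with the longest values first.
OUTPUT(TOPN(withLen, 10, -len));
```

If the failing row is simply the longest, it will float to the top.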

Bob
bforeman
Community Advisory Board Member
 
Posts: 975
Joined: Wed Jun 29, 2011 7:13 pm

Mon Jun 26, 2017 12:55 pm

Don,

AFAIK, there is no way to determine the post-compression size of a field. I suggest you submit a JIRA ticket asking for that feature.

HTH,

Richard
rtaylor
Community Advisory Board Member
 
Posts: 1368
Joined: Wed Oct 26, 2011 7:40 pm

Wed Jun 28, 2017 7:48 pm

Two replies:

I had tried LENGTH on that field and it was not helpful. We split the original record into STRING10000 segments and then compress those records. One of those segments does not compress properly. Of course we could use STRING9000 (or 8000, or smaller) segments until it finally worked, but that would just hide the issue and the bad record, which is not what is desired.
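Since the field is split into fixed-size segments, one way to narrow down candidates would be to count how many segments each source row produces; rows with unusually many segments are the first to inspect. A hedged sketch, assuming a segment dataset segs with a parentId field linking segments back to their source row (both names are hypothetical):

```
// Crosstab: one result row per source record, with its segment count.
perRow := TABLE(segs,
                {parentId, UNSIGNED4 segCount := COUNT(GROUP)},
                parentId);

// Largest producers of segments first.
OUTPUT(SORT(perRow, -segCount));
```

This will not measure compressibility, but it cheaply shortlists the rows worth bisecting.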

I submitted the JIRA ticket. AFAIK appears far more sinister than it is... LOL.
georgeb2d
 
Posts: 93
Joined: Wed Dec 24, 2014 3:36 pm

