OUTPUT

[attr := ] OUTPUT(recordset [, [ format ] [,file [thorfileoptions ] ] ] [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT(recordset, [ format ] ,file , CSV [ (csvoptions) ] [csvfileoptions ] [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT(recordset, [ format ] , file , XML [ (xmloptions) ] [xmlfileoptions ] [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT(recordset, [ format ] , file , JSON [ (jsonoptions) ] [jsonfileoptions ] [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT(recordset, [ format ] ,PIPE( pipeoptions [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT(recordset [, format ] , NAMED( name ) [,EXTEND] [,ALL] [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT( expression [, NAMED( name ) ] [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

[attr := ] OUTPUT( recordset , THOR [, NOXPATH ] [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( name ) ] );

attr: Optional. The action name, which turns the action into a definition, therefore not executed until the attr is used as an action.
recordset: The set of records to process. This may be the name of a dataset or a record set derived from some filter condition, or any expression that results in a derived record set.
format: Optional. The format of the output records. If omitted, all fields in the recordset are output. If specified, this must be either the name of a previously defined RECORD structure definition or an "on-the-fly" record layout enclosed within curly braces ({}), and must meet the same requirements as a RECORD structure for the TABLE function (the "vertical slice" form) by defining the type, name, and source of the data for each field.
file: Optional. The logical name of the file to write the records to. See the Scope & Logical Filenames section of the Language Reference for more on logical filenames. If omitted, the formatted data stream only returns to the command issuer (command line or IDE) and is not written to a disk file.
thorfileoptions: Optional. A comma-delimited list of options valid for a THOR/FLAT file (see the section below for details).
NOXPATH: Specifies that any XPATHs defined in the format or the RECORD structure of the recordset are ignored and field names are used instead. This allows control of whether XPATHs are used for output, so that XPATHs that were meant only for XML or JSON input can be ignored for output (a sketch appears after the general description below).
UNORDERED: Optional. Specifies the output record order is not significant.
ORDERED: Specifies the significance of the output record order.
bool: When False, specifies the output record order is not significant. When True, specifies the default output record order.
STABLE: Optional. Specifies the input record order is significant.
UNSTABLE: Optional. Specifies the input record order is not significant.
PARALLEL: Optional. Try to evaluate this activity in parallel.
numthreads: Optional. Try to evaluate this activity using numthreads threads.
ALGORITHM: Optional. Override the algorithm used for this activity.
name: The algorithm to use for this activity. Must be from the list of supported algorithms for the SORT function's STABLE and UNSTABLE options.
CSV: Specifies the file is a field-delimited (usually comma-separated values) ASCII file.
csvoptions: Optional. A comma-delimited list of options defining how the file is delimited.
csvfileoptions: Optional. A comma-delimited list of options valid for a CSV file (see the section below for details).
XML: Specifies the file is output as XML data with the name of each field in the format becoming the XML tag for that field's data.
xmloptions: Optional. A comma-delimited list of options that define how the output XML file is delimited.
xmlfileoptions: Optional. A comma-delimited list of options valid for an XML file (see the section below for details).
JSON: Specifies the file is output as JSON data with the name of each field in the format becoming the JSON tag for that field's data.
jsonoptions: Optional. A comma-delimited list of options that define how the output JSON file is delimited.
jsonfileoptions: Optional. A comma-delimited list of options valid for a JSON file (see the section below for details).
PIPE: Indicates the specified command executes with the recordset provided as standard input to the command. This is a "write" pipe.
pipeoptions: The name of a program to execute, which takes the file as its input stream, along with the options valid for an output PIPE.
NAMED: Specifies the result name that appears in the workunit. Not valid if the file parameter is present.
name: A string constant containing the result label. This must be a compile-time constant that meets the attribute naming requirements and is a valid label (see Definition Name Rules).
EXTEND: Optional. Specifies appending to the existing NAMED result name in the workunit. Using this feature requires that all NAMED OUTPUTs to the same name have the EXTEND option present, including the first instance.
ALL: Optional. Specifies all records in the recordset are output to the ECL IDE.
expression: Any valid ECL expression that results in a single scalar value.
THOR: Specifies the resulting recordset is stored as a file on disk, "owned" by the workunit, instead of storing it directly within the workunit. The name of the file in the DFU is scope::RESULT::workunitid.

The OUTPUT action produces a recordset result from the supercomputer, based on which form and options you choose. If no file to write to is specified, the result is stored in the workunit and returned to the calling program as a data stream.
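For example, when a RECORD structure carries XPATHs that were defined for parsing XML or JSON input, the NOXPATH option restores the plain field names on output. The following is a minimal sketch, assuming a hypothetical two-field layout and an illustrative logical filename:

xpathRec := RECORD
  STRING10 fname {XPATH('FirstName')};
  STRING12 lname {XPATH('LastName')};
END;
xpathDS := DATASET([{'Fred','Bell'}],xpathRec);

OUTPUT(xpathDS,,'~example::people.xml',XML,NOXPATH);
  // with NOXPATH the tags are <fname> and <lname>;
  // without it they would be <FirstName> and <LastName>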

OUTPUT Field Names

Field names in an "on the fly" record format {...} must be unique or a syntax error results. For example:

          OUTPUT(person(), {module1.attr1, module2.attr1});

will result in a syntax error. Output Field Names are assumed from the definition names.

To get around this situation, you can specify a unique name for the output field in the on-the-fly record format, like this:

          OUTPUT(person(), {module1.attr1, name := module2.attr1});

OUTPUT Thor/Flat Files

[attr := ] OUTPUT(recordset [, [ format ] [,file [, CLUSTER( target ) ] [,ENCRYPT( key ) ] [,COMPRESSED] [,OVERWRITE][, UPDATE] [,EXPIRE( [ days ] ) ] ] ] )

CLUSTER: Optional. Specifies writing the file to the specified list of target clusters. If omitted, the file is written to the cluster on which the workunit executes. The number of physical file parts written to disk is always determined by the number of nodes in the cluster on which the workunit executes, regardless of the number of nodes on the target cluster(s).
target: A comma-delimited list of string constants containing the names of the clusters to write the file to. The names must be listed as they appear on the ECL Watch Activity page or returned by the Std.System.Thorlib.Group() function, optionally with square brackets containing a comma-delimited list of node-numbers (1-based) and/or ranges (specified with a dash, as in n-m) to indicate the specific set of nodes to write to.
ENCRYPT: Optional. Specifies writing the file to disk using both 256-bit AES encryption and LZW compression.
key: A string constant containing the encryption key to use to encrypt the data.
COMPRESSED: Optional. Specifies writing the file using LZW compression.
OVERWRITE: Optional. Specifies overwriting the file if it already exists.
UPDATE: Specifies that the file should be rewritten only if the code or input data has changed.
EXPIRE: Optional. Specifies the file is a temporary file that may be automatically deleted after the specified number of days since the file was read.
days: Optional. The number of days from last file read after which the file may be automatically deleted. If EXPIRE is specified without number of days, it defaults to use the ExpiryDefault setting in Sasha.

This form writes the recordset to the specified file in the specified format. If the format is omitted, all fields in the recordset are output. If the file is omitted, then the result is sent back to the requesting program (usually the ECL IDE or the program that sent the SOAP query to a Roxie).

Example:

OutputFormat1 := RECORD
  People.firstname;
  People.lastname;
END;
  
A_People := People(lastname[1]='A');
Score1 := HASHCRC(People.firstname);
Attr1 := People.firstname[1] = 'A';

OUTPUT(SORT(A_People,Score1),OutputFormat1,'hold01::fred.out');
  // writes the sorted A_People set to the fred.out file in
  // the format declared in the OutputFormat1 definition

OUTPUT(People,{firstname,lastname});
  // writes just First and Last Names to the command issuer
  // full qualification of the fields is unnecessary, since
  // the "on-the-fly" records structure is within the
  // scope of the OUTPUT -- People is assumed

OUTPUT(People(Attr1=FALSE));
  // writes all People fields from records where Attr1 is
  // false to the command issuer
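The file-level options can be combined in a single write. The following is a minimal sketch that builds on the definitions above, assuming a target cluster named 'mythor' exists in your environment and using an illustrative filename:

OUTPUT(SORT(A_People,Score1),OutputFormat1,'hold02::fred_copy.out',
       CLUSTER('mythor'),COMPRESSED,OVERWRITE,EXPIRE(30));
  // writes a compressed copy of the sorted A_People set to the
  // 'mythor' cluster, replacing any existing file, and allows it
  // to be deleted automatically 30 days after it was last read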

OUTPUT CSV Files

[attr := ] OUTPUT(recordset, [ format ] ,file , CSV[ (csvoptions) ] [, CLUSTER( target )] [,ENCRYPT(key) ] [,COMPRESSED] [, OVERWRITE ][, UPDATE] [, EXPIRE( [ days ] ) ] )

CLUSTER: Optional. Specifies writing the file to the specified list of target clusters. If omitted, the file is written to the cluster on which the workunit executes. The number of physical file parts written to disk is always determined by the number of nodes in the cluster on which the workunit executes, regardless of the number of nodes on the target cluster(s).
target: A comma-delimited list of string constants containing the names of the clusters to write the file to. The names must be listed as they appear on the ECL Watch Activity page or returned by the Std.System.Thorlib.Group() function, optionally with square brackets containing a comma-delimited list of node-numbers (1-based) and/or ranges (specified with a dash, as in n-m) to indicate the specific set of nodes to write to.
ENCRYPT: Optional. Specifies writing the file to disk using both 256-bit AES encryption and LZW compression.
key: A string constant containing the encryption key to use to encrypt the data.
COMPRESSED: Optional. Specifies writing the file using LZW compression.
OVERWRITE: Optional. Specifies overwriting the file if it already exists.
UPDATE: Specifies that the file should be rewritten only if the code or input data has changed.
EXPIRE: Optional. Specifies the file is a temporary file that may be automatically deleted after the specified number of days.
days: Optional. The number of days after which the file may be automatically deleted. If omitted, the default is seven (7).

This form writes the recordset to the specified file in the specified format as a comma separated values ASCII file. The valid set of csvoptions are:

HEADING( [ headertext [ , footertext ] ] [, SINGLE ][, FORMAT(stringfunction) ] )

SEPARATOR( delimiters )

TERMINATOR( delimiters )

QUOTE( [ delimiters ] )

ASCII | EBCDIC | UNICODE

HEADING: Specifies file headers and footers.
headertext: Optional. The text of the header record to place in the file. If omitted, the field names are used.
footertext: Optional. The text of the footer record to place in the file. If omitted, no footertext is output.
SINGLE: Optional. Specifies the headertext is written only to the beginning of part 1 and the footertext is written only at the end of part n (producing a "standard" CSV file). If omitted, the headertext and footertext are placed at the beginning and end of each file part (useful for producing complex XML output).
FORMAT: Optional. Specifies the headertext should be formatted using the stringfunction.
stringfunction: Optional. The function to use to format the column headers. This can be any function that takes a single string parameter and returns a string result.
SEPARATOR: Specifies the field delimiters.
delimiters: A single string constant (or comma-delimited list of string constants) that defines the character(s) used to delimit the data in the CSV file.
TERMINATOR: Specifies the record delimiters.
QUOTE: Specifies the quotation delimiters for string values that may contain SEPARATOR or TERMINATOR delimiters as part of their data.
ASCII: Specifies all output is in ASCII format, including any EBCDIC or UNICODE fields.
EBCDIC: Specifies all output is in EBCDIC format except the SEPARATOR and TERMINATOR (which are expressed as ASCII values).
UNICODE: Specifies all output is in Unicode UTF8 format.

If none of the ASCII, EBCDIC, or UNICODE options are specified, the default output is in ASCII format with any UNICODE fields in UTF8 format. The other default csvoptions are:

           CSV(HEADING('',''), SEPARATOR(','), TERMINATOR('\n'), QUOTE())

Example:

//SINGLE option writes the header only to the first file part:
OUTPUT(ds,,'~thor::outdata.csv',CSV(HEADING(SINGLE)));

//This example writes the header and footer to every file part:
OUTPUT(XMLds,,'~thor::outdata.xml',CSV(HEADING('<XML>','</XML>')));

//FORMAT option writes the header using the specified formatting function:
IMPORT STD;
OUTPUT(ds,,'~thor::outdata.csv',CSV(HEADING(FORMAT(STD.Str.ToUpperCase))));
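The SEPARATOR, TERMINATOR, and QUOTE options control the delimiters themselves. The following is a minimal sketch, assuming the same ds dataset as above and an illustrative filename, that writes a tab-delimited file instead of a comma-delimited one:

//SEPARATOR, TERMINATOR, and QUOTE override the default delimiters:
OUTPUT(ds,,'~thor::outdata.tsv',
       CSV(HEADING(SINGLE),SEPARATOR('\t'),TERMINATOR('\n'),QUOTE('"')),
       OVERWRITE);
  // writes a tab-delimited file with a single header record and
  // double-quotes around any value containing a tab or newline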

OUTPUT XML Files

[attr := ] OUTPUT(recordset, [ format ] ,file ,XML [ (xmloptions) ] [,ENCRYPT( key ) ] [, CLUSTER( target ) ] [, OVERWRITE ][, UPDATE] [, EXPIRE( [ days ] ) ] )

CLUSTER: Optional. Specifies writing the file to the specified list of target clusters. If omitted, the file is written to the cluster on which the workunit executes. The number of physical file parts written to disk is always determined by the number of nodes in the cluster on which the workunit executes, regardless of the number of nodes on the target cluster(s).
target: A comma-delimited list of string constants containing the names of the clusters to write the file to. The names must be listed as they appear on the ECL Watch Activity page or returned by the Std.System.Thorlib.Group() function, optionally with square brackets containing a comma-delimited list of node-numbers (1-based) and/or ranges (specified with a dash, as in n-m) to indicate the specific set of nodes to write to.
ENCRYPT: Optional. Specifies writing the file to disk using both 256-bit AES encryption and LZW compression.
key: A string constant containing the encryption key to use to encrypt the data.
OVERWRITE: Optional. Specifies overwriting the file if it already exists.
UPDATE: Specifies that the file should be rewritten only if the code or input data has changed.
EXPIRE: Optional. Specifies the file is a temporary file that may be automatically deleted after the specified number of days.
days: Optional. The number of days after which the file may be automatically deleted. If omitted, the default is seven (7).

This form writes the recordset to the specified file as XML data with the name of each field in the specified format becoming the XML tag for that field's data. The valid set of xmloptions are:

'rowtag'

HEADING( headertext [, footertext ] )

TRIM

OPT

rowtag: The text to place in the record delimiting tag.
HEADING: Specifies placing header and footer records in the file.
headertext: The text of the header record to place in the file.
footertext: The text of the footer record to place in the file.
TRIM: Specifies removing trailing blanks from string fields before output.
OPT: Specifies omitting tags for any empty string field from the output.

If no xmloptions are specified, the defaults are:

         XML('Row',HEADING('<Dataset>\n','</Dataset>\n'))

Example:

R := {STRING10 fname,STRING12 lname};
B := DATASET([{'Fred','Bell'},{'George','Blanda'},{'Sam',''}],R);

OUTPUT(B,,'fred1.xml', XML); // writes B to the fred1.xml file
/* the Fred1.XML file looks like this:
<Dataset>
  <Row><fname>Fred </fname><lname>Bell</lname></Row>
  <Row><fname>George</fname><lname>Blanda </lname></Row>
  <Row><fname>Sam </fname><lname></lname></Row>
</Dataset> */

OUTPUT(B,,'fred2.xml',XML('MyRow', HEADING('<?xml version=1.0 ...?>\n<filetag>\n','</filetag>\n')));
/* the Fred2.XML file looks like this:
<?xml version=1.0 ...?>
<filetag>
  <MyRow><fname>Fred </fname><lname>Bell</lname></MyRow>
  <MyRow><fname>George</fname><lname>Blanda</lname></MyRow>
  <MyRow><fname>Sam </fname><lname></lname></MyRow>
</filetag> */

OUTPUT(B,,'fred3.xml',XML('MyRow',TRIM,OPT));
/* the Fred3.XML file looks like this:
<Dataset>
  <MyRow><fname>Fred</fname><lname>Bell</lname></MyRow>
  <MyRow><fname>George</fname><lname>Blanda</lname></MyRow>
  <MyRow><fname>Sam</fname></MyRow>
</Dataset> */

OUTPUT JSON Files

[attr := ] OUTPUT(recordset, [ format ] ,file ,JSON [ (jsonoptions) ] [,ENCRYPT( key ) ] [, CLUSTER( target ) ] [, OVERWRITE ][, UPDATE] [, EXPIRE( [ days ] ) ] )

CLUSTER: Optional. Specifies writing the file to the specified list of target clusters. If omitted, the file is written to the cluster on which the workunit executes. The number of physical file parts written to disk is always determined by the number of nodes in the cluster on which the workunit executes, regardless of the number of nodes on the target cluster(s).
target: A comma-delimited list of string constants containing the names of the clusters to write the file to. The names must be listed as they appear on the ECL Watch Activity page or returned by the Std.System.Thorlib.Group() function, optionally with square brackets containing a comma-delimited list of node-numbers (1-based) and/or ranges (specified with a dash, as in n-m) to indicate the specific set of nodes to write to.
ENCRYPT: Optional. Specifies writing the file to disk using both 256-bit AES encryption and LZW compression.
key: A string constant containing the encryption key to use to encrypt the data.
OVERWRITE: Optional. Specifies overwriting the file if it already exists.
UPDATE: Specifies that the file should be rewritten only if the code or input data has changed.
EXPIRE: Optional. Specifies the file is a temporary file that may be automatically deleted after the specified number of days.
days: Optional. The number of days after which the file may be automatically deleted. If omitted, the default is seven (7).

This form writes the recordset to the specified file as JSON data with the name of each field in the specified format becoming the JSON tag for that field's data. The valid set of jsonoptions are:

'rowtag'

HEADING( headertext [, footertext ] )

TRIM

OPT

rowtag: The text to place in the record delimiting tag.
HEADING: Specifies placing header and footer records in the file.
headertext: The text of the header record to place in the file.
footertext: The text of the footer record to place in the file.
TRIM: Specifies removing trailing blanks from string fields before output.
OPT: Specifies omitting tags for any empty string field from the output.

If no jsonoptions are specified, the defaults are:

         JSON('Row',HEADING('[',']'))

Example:

R := {STRING10 fname,STRING12 lname};
B := DATASET([{'Fred','Bell'},{'George','Blanda'},{'Sam',''}],R);

OUTPUT(B,,'fred1.json', JSON); // writes B to the fred1.json file
/* the Fred1.json file looks like this:
{"Row": [
{"fname": "Fred      ", "lname": "Bell        "},
{"fname": "George    ", "lname": "Blanda      "},
{"fname": "Sam       ", "lname": "            "}
]}
*/
OUTPUT(B,,'fred2.json',JSON('MyResult', HEADING('[', ']')));
/* the Fred2.json file looks like this:
["MyResult": [
{"fname": "Fred      ", "lname": "Bell        "},
{"fname": "George    ", "lname": "Blanda      "},
{"fname": "Sam       ", "lname": "            "}
]]
*/
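As with XML, the TRIM and OPT options can also be used with JSON output. The following is a sketch using the same B dataset; the result shown is approximate:

OUTPUT(B,,'fred3.json',JSON('MyRow',TRIM,OPT));
/* the Fred3.json file should look approximately like this:
{"MyRow": [
{"fname": "Fred", "lname": "Bell"},
{"fname": "George", "lname": "Blanda"},
{"fname": "Sam"}
]}
*/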

OUTPUT PIPE Files

[attr := ] OUTPUT(recordset, [ format ] ,PIPE( command [, CSV | XML]) [, REPEAT] )

PIPE: Indicates the specified command executes with the recordset provided as standard input to the command. This is a "write" pipe.
command: The name of a program to execute, which takes the file as its input stream.
CSV: Optional. Specifies the output data format is CSV. If omitted, the format is raw.
XML: Optional. Specifies the output data format is XML. If omitted, the format is raw.
REPEAT: Optional. Indicates a new instance of the specified command executes for each row in the recordset.

This form sends the recordset in the specified format as standard input to the command. This is commonly known as an "output pipe."

Example:

OUTPUT(A_People,,PIPE('MyCommandLineProgram'),OVERWRITE);
   // sends the A_People to MyCommandLineProgram as
   // standard in
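The CSV and XML options determine the format in which the records are written to the command's standard input. The following is a minimal sketch, assuming a hypothetical program named myFilter is installed on every node:

OUTPUT(A_People,,PIPE('myFilter',CSV));
   // sends the A_People set to myFilter as
   // CSV-formatted standard input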

Named OUTPUT

[attr := ] OUTPUT(recordset [, format ] ,NAMED( name ) [,EXTEND] [,ALL])

This form writes the recordset to the workunit with the specified name, which must be a valid label (see Definition Name Rules).

The EXTEND option allows multiple OUTPUT actions to the same named result. The ALL option is used to override the implicit CHOOSEN applied to interactive queries in the Query Builder program. This specifies returning all records.

Example:

OUTPUT(CHOOSEN(people(firstname[1]='A'),10));
  // writes the A People to the query builder
OUTPUT(CHOOSEN(people(firstname[1]='A'),10),ALL);
  // writes all the A People to the query builder
OUTPUT(CHOOSEN(people(firstname[1]='A'),10),NAMED('fred'));
  // writes the A People to the fred named output
  
//a NAMED, EXTEND example:
errMsgRec := RECORD
  UNSIGNED4 code;
  STRING text;
END;
makeErrMsg(UNSIGNED4 _code,STRING _text) := DATASET([{_code, _text}], errMsgRec);
rptErrMsg(UNSIGNED4 _code,STRING _text) := OUTPUT(makeErrMsg(_code,_text),
                                                  NAMED('ErrorResult'),EXTEND);

OUTPUT(DATASET([{100, 'Failed'}],errMsgRec),NAMED('ErrorResult'),EXTEND);
  //Explicit syntax.

//Something else creates the dataset
OUTPUT(makeErrMsg(101, 'Failed again'),NAMED('ErrorResult'),EXTEND);
  
//output and dataset handled elsewhere.
rptErrMsg(102, 'And again');

OUTPUT Scalar Values

[attr := ] OUTPUT( expression [, NAMED( name ) ] )

This form is used to allow scalar expression output, particularly within SEQUENTIAL and PARALLEL actions.

Example:

OUTPUT(10) // scalar value output
OUTPUT('Fred') // scalar value output
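Because each OUTPUT is an action, scalar outputs are easily sequenced with other work. The following is a minimal sketch, assuming the Person dataset referenced in the examples on this page:

SEQUENTIAL(
  OUTPUT('Starting count'),                      // scalar string result
  OUTPUT(COUNT(Person),NAMED('PersonCount')),    // scalar numeric result
  OUTPUT('Count complete')                       // scalar string result
);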

OUTPUT Workunit Files

[attr := ] OUTPUT( recordset , THOR )

This form is used to store the resulting recordset as a file on disk "owned" by the workunit. The name of the file in the DFU is scope::RESULT::workunitid. This is useful when you want to view a large result recordset in the Query Builder program but do not want that much data to take up memory in the system data store.

Example:

OUTPUT(Person(per_st='FL'), THOR)
  // output records to screen, but store the 
  // result on disk instead of in the workunit

See Also: TABLE, DATASET, PIPE, CHOOSEN