Generations and Flavours of Invantive SQL

One of the most frequently asked questions at our support desk is what functions are available in Invantive SQL to query data in Exact Online.

The second-generation SQL parser is an extensive implementation of many commonly found SQL constructs from the ANSI SQL standard.

In addition to the features of the first-generation SQL parser, it also includes:

  • joins,
  • outer joins,
  • cross joins,
  • group functions such as stddev, avg and listagg,
  • value functions such as xmlescape and round.

There are two flavours shipped:

  • Free version: the second-generation SQL parser, without joins and without some upcoming non-ANSI advanced mapping functions for large-volume financial analysis and reporting.
  • Paid version: identical to the free version but with joins and advanced mapping functions.

The EBNF grammar below describes the possibilities.


Invantive SQL v2.0 Grammar

Compatibility

The Invantive implementation of SQL is based upon ANSI SQL, extended by aspects from popular SQL implementations such as PostgreSQL, MySQL, Oracle, Teradata and Microsoft SQL Server. It is topped off with Invantive-specific extensions, especially for procedural SQL, distributed SQL and distributed transactions. The guiding principle is to implement functions such that as few changes as possible are necessary to run a SQL statement originating from another SQL implementation on Invantive SQL. For instance, to retrieve the current time you can use 'sysdate', 'now', 'getdate()' and 'sysdatetime', to name a few. The same holds for the procedural extension Invantive Procedural SQL, which reflects SQL/PSM and makes it easy to port Oracle PL/SQL or PostgreSQL PL/pgSQL statements.
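For illustration, each of the following retrieves the current time; this is a sketch, assuming the 'dual@datadictionary' table used in later examples in this grammar:

select sysdate from dual@datadictionary

select now from dual@datadictionary

select getdate() from dual@datadictionary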

Distributed SQL, Databases and Data Containers

It is easy to exchange and combine data across the supported platforms. To each platform (such as Salesforce or Exact Online Belgium) multiple connections can be active with the same or different platform-specific connection settings. Each open connection to a platform is named a 'data container'.

All opened connections together are named a 'database'.

When multiple data containers have been opened, each one has an alias to refer to it by in Invantive SQL statements. For instance, a connection can be opened for two different customer accounts on Exact Online Netherlands, aliased as 'eolnl_comp1' and 'eolnl_comp55', and one for an Exact Online Belgium customer, aliased as 'eolbe_my_new_company'. The aliases can be freely chosen as long as they are valid identifiers and defined in the database configuration file 'settings.xml'.

Service Providers

A number of special connections are always made, each of which can occur at most once. These are the 'service providers' such as:

  • 'datadictionary': metadata of the current database, such as the list of tables and the performance of executed SQL statements.
  • 'os': information on the operating system running the SQL engine, such as reading file contents.
  • 'smtp': synchronously send mails through SMTP.

Partitioning

Online platforms in particular have a multi-tenant structure, in which the data is partitioned per customer, company or person. When the data model is identical across tenants, Invantive SQL considers them 'partitions'. SQL statements can run across one or multiple partitions, often in parallel. This enables consolidation scenarios across partitions (such as Exact Online or Nmbrs companies) as well as high performance in MPP environments.

The partitions to be used can be specified with the 'use' statement, either through an explicit list of partitions to be selected across data containers, or through a SQL select statement returning the list of partitions to use. Please note that although the 'use' statement resembles the 'use DATABASE' statement on Microsoft SQL Server or PostgreSQL, on Invantive SQL you can have multiple partitions active at the same time in one user session.
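For example, to select partition '35' on the data container aliased 'eolnl' and partition '57345' on 'nmbrsnl', either explicitly or through a query (both forms are also shown at the useStatement rule below):

use 35@eolnl, 57345@nmbrsnl

use select '35', 'eolnl' from dual@datadictionary union all select '57345', 'nmbrsnl' from dual@datadictionary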

Identifiers

For identifiers, the regular conventions hold for the set of allowed characters. Depending on the platform, identifiers are case-sensitive or not. In general, it is best to assume that identifiers are case-insensitive. There is no length limit on an identifier imposed by Invantive SQL.

Procedural SQL

Invantive Procedural SQL (or "PSQL" for short) is a procedural extension on top of Invantive SQL. It is based on the ISO standard 9075-4:2016 (SQL/PSM) and extends Invantive SQL with procedural options like blocks, variables, conditional execution and loops. The procedural code is - together with the Invantive SQL contained in it - compiled as a whole into pseudo-code and then executed.

The procedural code does not rely on the procedural options of the platforms being used, so it is easy to retrieve and change data in all supported cloud, file and database platforms. The pre-compiled procedural code does not perform context switches between procedural and SQL logic.

Licensing

Features

The available functionality of Invantive SQL is based upon the license features. For instance, the free implementation of Invantive SQL is limited to 1.000 rows and has no access to group functions. Please consult the data dictionary contents for your license features.

Usage Fees

For paid products, the fee depends on a number of factors: users, devices and billable partitions. All fees depend on actual use. Additionally, the number and volume in KB of reads and writes are registered to enable communication with the platform partners on the performance of their platform.

A rough estimate is that the billable partitions reflect the product of the number of legal entities times the number of source systems. The number of billable partitions is determined as follows, where using the same billable partition twice or more often does not influence the number:

  • Accessing a provider which does not have a concept of "partition" counts as 1 partition per different connection. A so-called data container ID is determined which reflects the backend connected to. For instance, connecting to my-ftp-host.com and another-ftp-host.com counts as two partitions. Examples of such providers are FTP, SQL Server and cbs.nl.
  • Accessing a provider which has a concept of "partition" counts as the number of partitions used during the connection. For instance, using the Exact Online companies 234 and 456 counts as two partitions. Examples of such providers are Exact Online, NMBRS, Loket and XML Audit File Financieel.
  • Accessing data through xmltable, csvtable, jsontable and exceltable counts as the number of different parameter combinations used. For instance, accessing an Excel range 'Sheet1' and a named range 'mydata' counts as two partitions.

settings.xml

The file settings.xml defines for a user or program the list of defined databases. Databases are grouped in 'database groups' for visual display. Database groups have no further functionality. Each database consists of one or multiple data containers.

The file 'settings.xml' is most often found on Microsoft Windows in your '%USERPROFILE%\invantive' folder, such as 'c:\users\john.doe\invantive\settings.xml'. It is shared across all Invantive SQL product installations for the user.

There are many scenarios to share database specifications across a user community, such as WAN scenarios with Invantive Web Service, large corporate scenarios using DNS entries or file shares, included files, and single-user solutions. Please involve a consultant when you want to deploy across thousands of users or more.

For user communities of up to 10 users, we recommend that company-specific settings are grouped per role in a separate file named 'settings-ROLE.xml' and placed in the default folder. Invantive SQL will automatically merge these files into the main settings.xml file.

Group Functions

Popular group functions such as 'stddev' are available. However, currently you cannot combine group functions and expressions on columns in one unnested SQL statement. In that case, use an inner (nested) SQL statement to apply the expressions on the data, and execute the group functions in the outer SQL statement with the syntax 'select group() from ( select ... from ... )'.
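A minimal sketch of this pattern (table and column names are illustrative):

select avg(line_total)
from   ( select quantity * unit_price line_total
         from   mytable@myalias
       )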

Locking

An Invantive SQL statement can work with many traditional and online platforms. There are no locking features on data and objects, since few of the online and traditional platforms connected provide these, and the typical use of distributed transactions leaves even less opportunity for data and object locking.

Transactions

Invantive SQL has limited support for transactions. DML is forwarded to a platform and depending on the platform an error can cause part of the work to be registered or everything to be rolled back. Within the SQL engine, multiple changes can be collected and forwarded to the platform at once. For instance, when creating an EDIFACT message you need to combine an invoice header with invoice lines into one EDIFACT message. Collection of multiple changes is done using the 'identified by' and 'attach to' syntax, optionally preceded by 'begin transaction'.
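A schematic sketch of collecting a header and a line into one change set (table names, columns and values are hypothetical; the exact payload depends on the platform):

begin transaction

insert into invoiceheaders@myplatform ( invoice_number ) values ( 'INV-1' ) identified by 'INV-1'

insert into invoicelines@myplatform ( invoice_number, amount ) values ( 'INV-1', 100 ) attach to 'INV-1'

commit transaction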

sqlBatch:

A sequence of Invantive SQL statements, separated by the semi-colon separator character.

Each statement in the SQL batch will be executed consecutively. Execution will be stopped when an error occurs during execution of a statement.

sqlOrPSqlStatement BATCHSEPARATOR BATCHSEPARATOR

no references


sqlOrPSqlStatement:

sqlStatement pSqlStatement pSqlCreateFunction pSqlCreateProcedure pSqlAlterFunction pSqlAlterProcedure pSqlDropFunction pSqlDropProcedure

referenced by:


sqlStatement:

An Invantive SQL statement can retrieve data from many traditional and online platforms. Many platforms also support the use of DML (Data Manipulation Language) statements to change the data contained. On a few platforms you can execute DDL (Data Definition Language) statements to create new data structures or objects such as tables, procedures or sequences.

selectStatement insertStatement updateStatement deleteStatement ddlStatement setStatement useStatement transactionStatement executeFileStatement

referenced by:


selectStatement:

A SQL select statement retrieves data from one or multiple data containers. A select statement can be composed of multiple data sets retrieved from many platforms, combined by set operators such as 'union'.

Often the performance of cloud platforms is less than that of traditional database platforms. With the 'limit' clause a limited number of rows can be retrieved quickly from a table or view after applying sorting as specified by the possibly present 'order by'. An alternative for a 'limit' clause is to use the 'top' clause.
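For example, both of the following return at most 10 rows (the journals table is described at tableSpec; 'code' is an illustrative column):

select * from exactonlinerest..journals@eolbe order by code limit 10

select top 10 * from exactonlinerest..journals@eolbe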

uniqueSelectStatement setOperatorSelectStatement orderBy limitClause

referenced by:


inSelectStatement:

A SQL select statement retrieves data from one or multiple data containers. This variant makes this data available to a containing SQL select statement. This feature is also known as an 'inline view'.

selectStatement

referenced by:


setOperatorSelectStatement:

SQL is based upon a solid mathematical foundation named 'set theory', with some exceptions. The set operators of Invantive SQL enable you to combine data sets, such as merging two sets of data. Please note that SQL actually uses 'bags', which, as opposed to 'sets', allow duplicates. To change bags of data into sets, either use 'distinct' or the 'union' set operator without 'all'. In general, the extensive use of 'distinct' signals bad database design.

The 'union' set operator returns the union of the data on the left and right side of the union while removing duplicate rows. The 'union all' set operator returns the union of the data on the left and right side of the union without removing duplicate rows. The 'minus' set operator returns all rows from the left side which do not occur in the right side. The 'intersect' set operator returns all rows that occur both in the left and right side.
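For example, a sketch combining journal codes from two illustrative Exact Online data containers while removing duplicates:

select code from exactonlinerest..journals@eolnl_comp1 union select code from exactonlinerest..journals@eolnl_comp55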

UNION ALL MINUS_C INTERSECT uniqueSelectStatement

referenced by:


uniqueSelectStatement:

Retrieves a data set from one or more data containers.

select executionHints distinct topClause selectList INTO pSqlVariableList FROM dataSource joinStatements whereClause groupBy

referenced by:


dataSource:

A data source can be a table, a table with parameters or a nested select (an 'inline view'). Also, a data source can be a binary or text string being interpreted as XML, CSV, JSON or binary Excel Open XML format.

tableOrFunctionSpec embeddedSelect xmlTableSpec csvTableSpec jsonTableSpec excelTableSpec aliased

referenced by:


select:

SELECT

referenced by:


executionHints:

Execution hints allow you to individually control the execution of SQL statements. Whenever possible, the hints will be used. Contrary to other platforms, Invantive SQL requires a hint to be valid according to the grammar when specified. This reduces the engineering risk that hints become invalid by accident.

EXECUTION_HINT_START joinSet noJoinSet ods resultSetName lowCost httpDiskCache httpMemoryCache EXECUTION_HINT_END

referenced by:


httpDiskCache:

The http_disk_cache-hint specifies whether messages may be cached on disk when the provider uses HTTP to exchange data with the backing platform. This typically holds only for cloud-based platforms such as Exact Online, Teamleader or Salesforce. The default setting is false. The first parameter is a boolean whether data may be taken from the disk cache, the second parameter is a boolean whether data retrieved must also be stored in the disk cache and the third parameter is an integer that specifies the number of seconds before a disk cache hit is to be considered stale.

The use of the http_disk_cache-hint is recommended for data which is known to seldom change, such as seeded or reference data. The contents of the disk cache are persistent across Invantive SQL sessions.

The disk cache is located in the Cache folder of the Invantive configuration folder.
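For example, a sketch allowing reads from and writes to the disk cache, with results considered stale after one day (86.400 seconds, per the parameter description above):

select /*+ http_disk_cache(true, true, 86400) */ * from exactonlinerest..journals@eolbe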

HTTP_DISK_CACHE PARENTHESIS_OPEN booleanConstant COMMA booleanConstant COMMA intervalConstant PARENTHESIS_CLOSE

referenced by:


httpMemoryCache:

The http_memory_cache-hint specifies whether messages may be cached in memory when the provider uses HTTP to exchange data with the backing platform. This typically holds only for cloud-based platforms such as Exact Online, Teamleader or Salesforce. The default setting is false. The first parameter is a boolean whether data may be taken from the memory cache, the second parameter is a boolean whether data retrieved must also be stored in the memory cache and the third parameter is an integer that specifies the number of seconds before a memory cache hit is to be considered stale.

The use of the http_memory_cache-hint is recommended for data which is known to seldom change, such as seeded or reference data. The contents of the memory cache are not retained across Invantive SQL sessions.

HTTP_MEMORY_CACHE PARENTHESIS_OPEN booleanConstant COMMA booleanConstant COMMA intervalConstant PARENTHESIS_CLOSE

referenced by:


ods:

The ods-hint controls the use of the Invantive Data Cache stored in a relational database. The Invantive Data Cache is also the basis of the Operational Data Store managed by Invantive Data Replicator and the data warehouses managed by Invantive Data Vault. The ods-hint specifies the maximum age of data from the data cache eligible for use.

The boolean specifies whether the Data Cache may be used to answer a query. Set it to false to disable use of Data Cache for the duration of the query. Keep it on the default true to use Data Cache.

The interval specifies the period of time during which cached results are considered sufficiently fresh for use, such as '30 minutes'.

When no interval is present, the actual platform is consulted. The default with Invantive Data Cache enabled is to always use the data cache contents when not stale according to the metadata of the data cache. In general, that defaults to a maximum age of 7 days.
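For example, a sketch accepting cached rows of up to 30 minutes old (the interval notation follows the '30 minutes' style mentioned above and is an assumption):

select /*+ ods(true, interval '30 minutes') */ * from exactonlinerest..journals@eolbe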

ODS PARENTHESIS_OPEN booleanConstant COMMA intervalConstant PARENTHESIS_CLOSE

referenced by:


resultSetName:

RESULT_SET_NAME PARENTHESIS_OPEN stringConstant PARENTHESIS_CLOSE

referenced by:


joinSet:

Controls the join approach between two data sources. A column-indexed lookup will be used instead of a full table scan when the number of rows on the left-hand side does not exceed the maximum number of rows specified in the hint. When not specified, a hash lookup will only be used when the number of rows on the left-hand side does not exceed 5.000.

The actual implementation of a hash lookup depends on the platform on which the data container runs. For instance, with OData a number of requests will be made using an in-construct with a limited number of in-values. With a relational database platform, a native SQL 'in' will be used.

The first identifier is the alias of the table on the right-hand side of the join. The second identifier is the name of the column used to join upon in the right-hand side. The numeric constant specifies up to what number of rows on the left-hand side of the join the join set hint will be used. When the number of rows exceeds the numeric constant, a full table join is made.

The following example takes, for instance, 5.000 sales invoices from an Exact Online environment with 100.000 sales invoices. Each sales invoice has 4..10 lines. The join does not retrieve all sales invoices nor all invoice lines, but instead fetches the 5.000 sales invoices using the where-clause, and then retrieves the related invoice lines using a column-indexed lookup by invoiceid. Since Exact Online is an OData source, the approximately 30.000 invoice lines will be retrieved in 300 session I/Os, each having an in-construct for 100 lines on invoiceid.

select /*+ join_set(sil, invoiceid, 10000) */ *
from   ExactOnlineREST..SalesInvoices sik
join   ExactOnlineREST..SalesInvoiceLines sil
on     sil.invoiceid = sik.invoiceid
where  sik.status = 50
and    sik.InvoiceDate between to_date( :P_RECEIPT_DATE_FROM, 'yyyymmdd' ) and to_date( :P_RECEIPT_DATE_TO, 'yyyymmdd' )
JOIN_SET PARENTHESIS_OPEN identifier COMMA identifier COMMA numericConstant PARENTHESIS_CLOSE

referenced by:


noJoinSet:

The no_join_set hint disables the use of hash joins. Hash joins can be enabled using the join_set hint.

NO_JOIN_SET PARENTHESIS_OPEN identifier COMMA identifier PARENTHESIS_CLOSE

referenced by:


lowCost:

The low_cost-hint specifies that the select with the hint must be considered a select with low execution costs. Low execution costs trigger early evaluation during parsing. By default, select statements using solely in-memory storage, dummy and data dictionary are considered low cost and evaluated early. The evaluation of all others is delayed as long as possible.

The use of the low_cost-hint is recommended when the select is used with an 'in ( select ... )' syntax and the developer knows beforehand that it will evaluate fast to values and that the use of these values will allow the use of server-side filtering for the outer select.

LOW_COST

referenced by:


distinct:

Addition of the 'distinct' keyword to a SQL select statement de-duplicates the rows returned. Rows are considered duplicates when the values in all selected columns are identical, with two null-values considered equal.

DISTINCT

referenced by:


topClause:

With the 'top' clause a limited number of rows can be retrieved quickly from a table or view after applying sorting as specified by the possibly present 'order by'.

TOP numericConstant

referenced by:


limitClause:

With the 'limit' clause a limited number of rows can be retrieved quickly from a table or view after applying sorting as specified by the possibly present 'order by'.

LIMIT numericConstant

referenced by:


embeddedSelect:

An embedded select, also known as an 'inline view', retrieves rows using the specified select statement. These rows are consumed by the outer select as if they were the results of retrieving the rows from a table.

Invantive SQL does not allow grouping rows with expressions as columns. An embedded select is typically used to evaluate expressions to rows with solely constants. After applying the embedded select the group operators can be applied.

parenthesisOpen selectStatement parenthesisClose

referenced by:


tableSpec:

A table specification without parameters. The optional alias after the at-sign specifies a specific data source to be used, such as 'exactonlinerest..journals@eolbe' specifying the use of Exact Online Belgium when 'eolbe' is associated by the database definitions in settings.xml with Exact Online Belgium.

A number of special so-called 'service providers' are always present, such as 'datadictionary' for use by an alias.
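For example:

select * from exactonlinerest..journals@eolbe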

fullTableIdentifier distributedAliasDirective

referenced by:


tableOrFunctionSpec:

A table specification requiring a comma-separated list of parameters to determine the rows to be retrieved.

Traditional SQL syntax did not provide for parameterized queries, in line with set theory. Modern variants such as pipelined table functions allow a stored procedure or other imperative language-based approaches to generate rows based upon parameter values. Many data containers support queries that return rows based upon parameter values. This holds especially for SOAP web services. Table specifications with parameters ease queries on such data containers.

The optional alias after the at-sign specifies a specific data source to be used, such as 'exactonlinerest..journals@eolbe' specifying the use of Exact Online Belgium when 'eolbe' is associated by the database definitions in settings.xml with Exact Online Belgium.

fullTableIdentifier tableFunctionSpec distributedAliasDirective

referenced by:


tableFunctionSpec:

A comma-separated list of parameters to determine the rows to be retrieved by a tableOrFunctionSpec.

parenthesisOpen expression COMMA parenthesisClose

referenced by:


distributedAliasDirective:

The distributed alias after the at-sign specifies a specific data source to be used, such as 'exactonlinerest..journals@eolbe' specifying the use of Exact Online Belgium when 'eolbe' is associated by the database definitions in settings.xml with Exact Online Belgium.

A number of special so-called 'service providers' are always present, such as 'datadictionary' for use by an alias.

AT dataContainerAlias

referenced by:


dataContainerAlias:

When multiple data containers have been defined in settings.xml for a database, each one is assigned an alias. An alias typically takes the form of a limited number of characters. The presence of an alias allows Invantive SQL to precisely determine to which data container to forward a request for data.

identifier

referenced by:


passingSourceOrPathExpression:

The passing option specifies the source of the contents to be interpreted. The contents can be specified as the outcome of an expression, such as from a previous read_file() table function, a URL downloaded using httpget() or a string concatenation. The contents can also be specified as to be taken from a specific file identified by its file name and path as an expression.

PASSING FILE expression

referenced by:


xmlTableSpec:

Data stored in XML format can be interpreted as a data source using the xmltable keyword.

The expression specifies a master XPath expression within the context of which the rows are evaluated using the column specifications.

The passing option specifies the source of the data in XML format. The source is often the outcome of a read_file() table function, a URL download using httpget() or a previous xmlformat() SQL function.

The columns are specified using their XPath relative to the master path.

select xtable.item_code
from   ( select '<root><item><code>mycode</code><description>description</description></item></root>' xmlfragment
       ) xmlsource
join   xmltable
       ( '/root'
         passing xmlsource.xmlfragment
         columns item_code        varchar2 path 'item/code'
         ,       item_description varchar2 path 'item/description'
       ) xtable
XMLTABLE parenthesisOpen expression null xmlTablePassing xmlTableLiteral xmlTableColumns parenthesisClose

referenced by:


xmlTablePassing:

passingSourceOrPathExpression

referenced by:


xmlTableLiteral:

LITERAL expression

referenced by:


xmlTableColumns:

A list of XML table column specifications.

COLUMNS xmlTableColumnSpec COMMA

referenced by:


xmlTableColumnSpec:

The columns are specified using their XPath relative to the master path.

identifier dataType PATH stringConstant

referenced by:


jsonTableSpec:

Data stored in JSON format can be interpreted as a data source using the jsontable keyword.

The expression specifies a master JSON expression within the context of which the rows are evaluated using the column specifications.

The passing option specifies the source of the data in JSON format. The source is often the outcome of a read_file() table function or a URL download using httpget().

The columns are specified using their JSON path relative to the master path.

select json.*
from   ( select '{ "name":"John", "age":30, "cars": { "car1":"Ford", "car2":"BMW", "car3":"Fiat"} }' json
         from   dual@datadictionary d
         --
         -- Generate 25 copies.
         --
         join   range(25, 1)@datadictionary r
       ) jsondata
join   jsontable
       ( ''
         passing jsondata.json
         columns orderName varchar2 path 'name'
       ) json
JSONTABLE parenthesisOpen expression null jsonTablePassing jsonTableLiteral jsonTableColumns parenthesisClose

referenced by:


jsonTablePassing:

passingSourceOrPathExpression

referenced by:


jsonTableLiteral:

LITERAL expression

referenced by:


jsonTableColumns:

A list of JSON table column specifications.

COLUMNS jsonTableColumnSpec COMMA

referenced by:


jsonTableColumnSpec:

The columns are specified using their JSON path relative to the master path.

identifier dataType PATH stringConstant

referenced by:


csvTableSpec:

Data stored in CSV format can be interpreted as a data source using the csvtable keyword.

The passing option specifies the source of the data in CSV format. The source is often the outcome of a read_file() table function or a URL download using httpget().

The interpretation process can be controlled on table level using the table options and the specification ends with the CSV columns being mapped to data source columns using their relative position within a row.

select t.*
from   ( select 'a;1;2' || chr(13) || chr(10) || 'b;3;4' csvfragment
       ) x
join   csvtable
       ( passing x.csvfragment
         columns text_column varchar2 position 1
         ,       next_column varchar2 position 2
         ,       a_number    number   position 3
       ) t
CSVTABLE parenthesisOpen csvTablePassing csvTableLiteral csvTableOptions csvTableColumns parenthesisClose

referenced by:


csvTableOptions:

The interpretation process can be controlled on table level using the table options.

The row delimiter is a text that separates two CSV rows, such as "chr(13) || chr(10)" or simply a pipe character "'|'". The default is the operating system-specific variant of new line.

The column delimiter is a text that separates two CSV columns such as "';'". The default is comma.

Data in CSV will typically have one or more header CSV rows labeling the individual columns. The 'skip lines' statement excludes the first number of CSV rows from processing.
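For example, a sketch skipping one header row and using explicit delimiters (the CSV fragment is illustrative):

select t.*
from   ( select 'h1;h2' || chr(13) || chr(10) || 'a;1' csvfragment
       ) x
join   csvtable
       ( passing x.csvfragment
         row delimiter chr(13) || chr(10)
         column delimiter ';'
         skip lines 1
         columns text_column varchar2 position 1
         ,       a_number    number   position 2
       ) t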

ROW DELIMITER expression COLUMN DELIMITER expression SKIP_ LINES expression

referenced by:


csvTableLiteral:

LITERAL expression

referenced by:


csvTablePassing:

passingSourceOrPathExpression

referenced by:


csvTableColumns:

A list of CSV table column specifications.

COLUMNS csvTableColumnSpec COMMA

referenced by:


csvTableColumnSpec:

Each CSV column is mapped to a data source column using its relative position within the CSV row. Position 1 represents the first CSV column.

identifier dataType POSITION numericConstant

referenced by:


excelTableSpec:

Excel file contents in Open XML, such as used with the file extensions xlsx and xlsm, can be interpreted as a data source using the exceltable keyword.

The rectangle specifies the rectangular area from which data should be taken. Rows in Excel are interpreted as rows from the data source, whereas columns in Excel are interpreted as columns.

The passing option specifies the source of the (binary) contents in zipped Open XML format such as used with the file extensions xlsx and xlsm. The source is often the outcome of a read_file() table function or URL download using httpget().

The interpretation process can be controlled on table level using the table options and the specification ends with the Excel columns being mapped to data source columns using their relative position within the rectangular area.

--
-- Create an in-memory table using data taken from an
-- Excel sheet.
--
-- The Excel sheet has a named range called 'salesdata' with
-- region, period, revenue (EUR), returns (EUR) and quantity
-- (pieces).
--
create or replace table salesdatacopy@inmemorystorage
as
select *
from   exceltable
       ( name 'salesdata'
         passing file 'FOLDER\sales.xlsx'
         columns region  varchar2 position 1
         ,       period  varchar2 position 2
         ,       revenue number   position 3
         ,       returns number   position 4
         ,       qty     number   position 5
       )
EXCELTABLE parenthesisOpen excelDataRectangle excelTablePassing excelTableOptions excelTableColumns parenthesisClose

referenced by:


excelDataRectangle:

The rectangle specifies the rectangular area from which data should be taken. Rows in Excel are interpreted as rows from the data source, whereas columns in Excel are interpreted as columns. All cells within the rectangle are considered data; headings should be excluded from the rectangle specification.

A rectangle can be either:

  • the contents of a complete worksheet, specified by an expression returning the worksheet name;
  • a named range within a specific worksheet, specified by expressions for worksheet name and named range name;
  • a cell range identified by an expression with its dimensions;
  • an Excel table identified by an expression with the table name;
  • a named range identified by an expression with the named range name.
WORKSHEET expression NAME AREA TABLE NAME expression

referenced by:


excelTablePassing:

passingSourceOrPathExpression

referenced by:


excelTableOptions:

The interpretation process can be controlled on table level using the table options.

Empty rows will typically be found when consuming a rectangular area larger than the actual data, such as a complete worksheet. The 'skip empty rows' statement eliminates all completely empty rows from the output.

SKIP_ EMPTY ROWS

referenced by:


excelTableColumns:

A list of Excel table column specifications.

COLUMNS excelTableColumnSpec COMMA

referenced by:


excelTableColumnSpec:

Each Excel column is mapped to a data source column using its relative position within the rectangular area. Position 1 represents the first Excel column falling within the Excel rectangular area.

identifier dataType POSITION numericConstant

referenced by:


dataType:

BFILE BIGINT BIGSERIAL BIT BLOB BOOL BOOLEAN BPCHAR BYTE BYTEA CHAR CHARACTER CLOB DATE DATETIME DATETIMEOFFSET DEC DECIMAL DOUBLE FLOAT FLOAT4 FLOAT8 GUID IMAGE INT INT16 INT2 INT32 INT4 INT64 INT8 INTEGER INTERVAL LONGBLOB LONGTEXT MEDIUMBLOB MEDIUMINT MEDIUMTEXT MONEY NAME NCHAR NUMBER NUMERIC NVARCHAR OID PLS_INTEGER RAW REAL SERIAL SMALLDATETIME SMALLINT SMALLMONEY SMALLSERIAL TEXT TIME TIMESTAMP TIMESTAMPTZ TIMETZ TINYBLOB TINYINT TINYTEXT UINT16 UINT32 UINT64 UNIQUEIDENTIFIER UUID VARBINARY VARCHAR VARCHAR2 XML XMLTYPE YEAR

referenced by:


groupBy:

Grouping of multiple rows into groups is specified by the groupBy. A group will be introduced for each distinct combination of column values for the columns listed. The values of grouped columns can be used in the select clause. Columns not being grouped upon can only be used within the context of a group function listed as 'aggregateFunction'.
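For example, counting sales invoices per status (table and columns taken from the join_set example elsewhere in this grammar):

select sik.status
,      count(sik.invoiceid)
from   ExactOnlineREST..SalesInvoices sik
group by sik.status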

GROUP BY columnList

referenced by:


orderBy:

Sort the rows returned as specified by the list of columns. Values are either sorted ascending (the default) or descending.

ORDER BY column sortDirection COMMA

referenced by:


sortDirection:

A sort direction can be either 'asc' for 'ascending' (the default) or 'desc' for 'descending'.

asc desc

referenced by:


columnList:

A comma-separated list of columns.

column COMMA

referenced by:


column:

A column is identified by an identifier, possibly prefixed by the name of the table or the alias of the table from which the column is to be taken.

identifier DOT identifier

referenced by:


whereClause:

The where-clause restricts the number of rows in a result set by applying one or more boolean conditions which rows must satisfy.

WHERE booleanExpression

referenced by:


joinStatements:

A list of join statements.

joinStatement

referenced by:


joinStatement:

A join statement combines two result sets. Only combinations of rows taken from both result sets are returned when they meet the join conditions.

joinCategory join dataSource joinConditions

referenced by:


joinCategory:

The join category specifies what combinations of rows are considered. The following variants can be used:

  • inner join, as indicated by 'join' or 'inner join': an inner join returns all combinations of rows from both result sets that meet the join conditions.
  • left outer, as indicated by 'left outer join': a left outer join returns the same rows as an inner join, extended by one row for each row in the left result set having no matching rows in the right result set. Each column that originates from the right result set is assigned a null value.
  • right outer, as indicated by 'right outer join': a right outer join returns the same rows as an inner join, extended by one row for each row in the right result set having no matching rows in the left result set. Each column that originates from the left result set is assigned a null value.
  • full outer, as indicated by 'full outer join': a full outer join returns the same rows as an inner join, extended by one row for each row in the right result set having no matching rows in the left result set. Each column that originates from the left result set is assigned a null value. The results are also extended by one row for each row in the left result set having no matching rows in the right result set. Each column that originates from the right result set is assigned a null value.
  • cross join, as indicated by 'cross join': a cross join returns a Cartesian product of the rows from both result sets. A 'Cartesian product' is a term from set theory, which indicates that all combinations are returned.
inner joinSubCategory outer cross

referenced by:


joinSubCategory:

The join sub-category refines the join category. Please see 'joinCategory' for an explanation.

left right full

referenced by:


join:

JOIN

referenced by:


inner:

INNER

referenced by:


outer:

OUTER

referenced by:


left:

LEFT

referenced by:


right:

RIGHT

referenced by:


full:

FULL

referenced by:


cross:

CROSS

referenced by:


sum:

Group function to sum together individual numerical values. Occurrences of null are considered 0, unless there are only null values. In that case the outcome is null.

SUM

referenced by:


product:

Group function to multiply together individual numerical values. Multiplying large values can quickly exceed the range of the resulting Decimal data type. The product group function is typically used in financial and probability calculations with values near 1.

PRODUCT

referenced by:


min:

Group function to find the minimum value from a group of numerical values.

MIN

referenced by:


max:

Group function to find the maximum value from a group of numerical values.

MAX

referenced by:


avg:

Group function to find the average value from a group of numerical values.

AVG

referenced by:


first:

Group function to return the first non-null value in an ordered result set.

FIRST

referenced by:


last:

Group function to return the last non-null value in an ordered result set.

LAST

referenced by:


stddev:

Group function to find the standard deviation from a group of numerical values.

STDDEV

referenced by:


count:

Group function to find the number of values from a group of values.

COUNT

referenced by:


listagg:

Group function which concatenates all individual values, separated by the separator when provided and comma plus space otherwise.
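For example, a sketch concatenating journal codes into one comma-separated value ('code' is an illustrative column; the syntax follows the aggregateFunction rule below):

select listagg(code, ', ') within group ( order by code ) from exactonlinerest..journals@eolbe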

LISTAGG

referenced by:


asc:

ASC

referenced by:


desc:

DESC

referenced by:


joinConditions:

ON booleanExpression

referenced by:


selectList:

selectPart COMMA

referenced by:


selectPart:

part aliased labeled

referenced by:


aliased:

AS alias

referenced by:


labeled:

LABEL stringConstant

referenced by:


part:

expression aggregateFunction allColumnsSpec

referenced by:


aggregateFunction:

sum product avg stddev parenthesisOpen distinct min max first last parenthesisOpen arithmeticExpression count parenthesisOpen distinct part listagg parenthesisOpen distinct arithmeticExpressionList parenthesisClose WITHIN GROUP parenthesisOpen orderBy parenthesisClose

referenced by:


allColumnsSpec:

allColumnsSpecId allColumnsSpecColumnNamePrefix allColumnsSpecColumnNamePostfix allColumnsSpecLabelPrefix allColumnsSpecLabelPostfix

referenced by:


allColumnsSpecId:

alias DOT ASTERIX

referenced by:


allColumnsSpecColumnNamePrefix:

PREFIX WITH stringConstant

referenced by:


allColumnsSpecColumnNamePostfix:

POSTFIX WITH stringConstant

referenced by:


allColumnsSpecLabelPrefix:

LABEL PREFIX WITH stringConstant

referenced by:


allColumnsSpecLabelPostfix:

LABEL POSTFIX WITH stringConstant

referenced by:


ddlStatement:

All available Data Definition Language statements.

createTableStatement dropTableStatement alterPersistentCacheStatement alterDataDictionaryStatement

referenced by:


alterPersistentCacheStatement:

Besides an in-memory cache valid during the duration of a session, Invantive SQL offers an integrated cache storing data persistently using an on-premise or cloud relational database such as SQL Server or PostgreSQL. When configured, Invantive SQL first tries to find sufficiently fresh data in the cache. This reduces the number of data loads from slow data containers such as some cloud platforms. In general, the performance increase when the rows can be fully retrieved from a cache is between a factor 25 and 2.500.

Invantive SQL itself manages the table structure and table contents in the relational database used as a data cache. On initial use, just provide an empty database. Invantive SQL installs a repository consisting of a few tables. The repository tables have names starting with 'dc_'.

For each table partition version, a so-called facts table is created. A facts table contains a full copy of the rows retrieved from the data container. Facts tables have names starting with 'dcd_', followed by a unique hash signaling the table partition version. When necessary, additional database objects are maintained such as indexes to improve performance. As with facts table names, all column names are also hashed based upon an algorithm including the original column name. These facts tables are not intended for direct use using native SQL.

Each facts table has a unique state from the following list, with the Ready state signaling the now current version:

  • Initializing ('I'): the facts table will be created.
  • View creation ('V'): logical views will be created.
  • Prepared ('P'): the facts table has been created, but contains no rows yet.
  • Seeding ('S'): the facts table is being seeded with the contents of the previously current version.
  • Loading ('L'): loading new facts from the data container using water shed or another algorithm.
  • Ready ('R'): the facts table is available and the current one to be used.
  • Obsoleted ('O'): the facts table still exists, but the data has passed its conservation period. Often a newer version is now current.
  • Dropped ('D'): the facts table no longer exists, but the metadata is still present in the repository tables.

The persistent cache in the database can be used with native SQL when extended by Invantive Data Replicator. Invantive Data Replicator can create and maintain a database view (a so-called 'partition view') for the now current version of table partition. Similarly, it can create an 'overall view', showing the rows across all partitions of the now current versions per partition.

The overall views are typically used for consolidation purposes, bringing together data across multiple companies or persons.

alterPersistentCacheSetStatement alterPersistentCacheDownloadStatement alterPersistentCachePurgeStatement alterPersistentCacheRefreshStatement alterPersistentCacheLoadStatement alterPersistentCacheTableRefreshStatement alterPersistentCachePartitionRefreshStatement alterPersistentCacheDropStatement alterPersistentCacheConfigureWebhooksStatement

referenced by:


alterPersistentCachePurgeStatement:

ALTER PERSISTENT CACHE PURGE UNKNOWN OBSOLETE READY DROPPABLE ALL TABLE PARTITION VERSIONS DATA_CONTAINER stringConstant

referenced by:


alterPersistentCacheDownloadStatement:

ALTER PERSISTENT CACHE DOWNLOAD FEED LICENSE CONTRACT CODE stringConstant TOKEN stringConstant DATA_CONTAINER stringConstant PARTITION partitionSimpleIdentifier LIMIT numericConstant

referenced by:


alterPersistentCacheConfigureWebhooksStatement:

Trickle loading is an advanced strategy to efficiently and quickly update the replicated data. It requires all changes (insert, update and delete) on the replicated data to be registered as event messages, possibly with a delay.

The registration of changes can be activated on selected platforms using webhooks. Configuration of the webhooks can be done using this statement. When not specified, the license contract code of the current subscription will be used.

ALTER PERSISTENT CACHE ENABLE DISABLE WEBHOOKS LICENSE CONTRACT CODE stringConstant TABLE tableSpec PARTITION partitionSimpleIdentifier DATA_CONTAINER stringConstant

referenced by:


alterPersistentCacheRefreshStatement:

This statement triggers the refresh of replicated data. A refresh of the replicated data can also be triggered by executing a SQL statement specifying the use of replicated data of a specific age.

In default mode (without 'force'), a refresh is only executed on the specified data when the age of the replicated data exceeds the configured maximum age. With a forced refresh, the replicated data is always replaced by a recent version of the source data.

The maximum number of parallel processes to replicate the data can be configured to increase throughput and decrease runtime of the replication process.

By default the approach as configured on each table partition is used to update the replica. However, a deviating approach can be specified. This is typically done after switching a table partition to trickle loading to run a one-time replication process with the copy approach, effectively ensuring that no changes have gone unnoticed.
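For example, a sketch forcing a one-time refresh with the copy approach using eight parallel processes:

alter persistent cache force refresh parallel 8 approach copy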

ALTER PERSISTENT CACHE FORCE REFRESH DATA_CONTAINER dataContainerAlias PARALLEL numericConstant APPROACH COPY TRICKLE SAMPLE DEFAULT

referenced by:


alterPersistentCacheLoadStatement:

ALTER PERSISTENT CACHE LOAD

referenced by:


alterPersistentCacheTableRefreshStatement:

Refresh all data of a specified table. The options are explained at alterPersistentCacheRefreshStatement.

ALTER PERSISTENT CACHE TABLE tableSpec FORCE REFRESH PARTITION partitionIdentifier PARALLEL numericConstant APPROACH COPY TRICKLE SAMPLE DEFAULT

referenced by:


alterPersistentCachePartitionRefreshStatement:

Refresh all data of a specified partition. The options are explained at alterPersistentCacheRefreshStatement.

ALTER PERSISTENT CACHE PARTITION partitionIdentifier FORCE REFRESH PARALLEL numericConstant APPROACH COPY TRICKLE SAMPLE DEFAULT

referenced by:


alterPersistentCacheDropStatement:

ALTER PERSISTENT CACHE DROP TABLE tableSpec PARTITION partitionIdentifier PARTITION partitionIdentifier DATA_CONTAINER stringConstant

referenced by:


alterPersistentCacheSetStatement:

ALTER PERSISTENT CACHE SET FRESH RETENTION FORWARDED INCOMING MESSAGES METADATA RECYCLEBIN DATA MODEL VERSION numericConstant TOKEN stringConstant LOGICAL OVERALL PARTITION VIEW NAME PREFIX POSTFIX stringConstant MAINTAIN booleanConstant LOAD MY MESSAGES booleanConstant AUTO UPGRADE ONCE alterPersistentCacheSetBackingConnectionString alterPersistentCacheSetTableOptions

referenced by:


alterPersistentCacheSetBackingConnectionString:

BACKING ENCRYPTED CONNECTION STRING stringConstant

referenced by:


alterPersistentCacheSetTableOptions:

TABLE tableSpec LOGICAL OVERALL VIEW MAINTAIN booleanConstant NAME stringConstant PARTITION VIEW MAINTAIN booleanConstant NAME PREFIX POSTFIX stringConstant STATE OBSOLETE DROPPED PARTITION partitionIdentifier APPROACH COPY TRICKLE SAMPLE

referenced by:


alterDataDictionaryStatement:

alterDataDictionarySetStatement

referenced by:


alterDataDictionarySetStatement:

ALTER DATA DICTIONARY SET alterDataDictionarySetBackingConnectionString

referenced by:


alterDataDictionarySetBackingConnectionString:

BACKING ENCRYPTED CONNECTION STRING stringConstant

referenced by:


createTableStatement:

Create a table on the specified platform, filled with data from the select statement. An error is raised when a table with the same name already exists. When 'or replace' is specified, a previously existing table with the same name is dropped before creating the new table.

A new table is assigned a technical primary key column. Also indexes are created by default where deemed useful.
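For example, a sketch copying journals into an in-memory table ('journalscopy' is an illustrative name; 'inmemorystorage' is the in-memory platform used in the exceltable example):

create or replace table journalscopy@inmemorystorage
as
select *
from   exactonlinerest..journals@eolbe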

CREATE orReplace TABLE tableSpec parenthesisOpen createTableArgument COMMA parenthesisClose AS selectStatement

referenced by:


createTableArgument:

identifier dataType

referenced by:


dropTableStatement:

Drop the specified table on the specified platform. An error is raised when no table exists by that name.

DROP TABLE tableSpec

referenced by:


orReplace:

OR REPLACE

referenced by:


setStatement:

Replaces the value of a provider attribute by a new value.

SET setIdentifier expression

referenced by:


setIdentifier:

attributeIdentifier distributedAliasDirective

referenced by:


transactionStatement:

beginTransactionStatement rollbackTransactionStatement commitTransactionStatement

referenced by:


executeFileStatement:

Execute a file loaded from the operating system with Invantive SQL statements. The file may contain Invantive SQL and Procedural SQL. The use of Invantive Script is not allowed.

FILE_PATH

referenced by:


beginTransactionStatement:

A begin transaction statement initiates a transaction. Invantive SQL typically provides no transaction logic given the distributed nature and the limitations of the possible platforms. Some platforms enable collection of transaction data, which are to be handed over to the backing platform all together.

BEGIN TRANSACTION

referenced by:


rollbackTransactionStatement:

Forgets all collected transaction data not yet handed over to the backing platform.

ROLLBACK TRANSACTION

referenced by:


commitTransactionStatement:

Hands over all collected transaction data to the backing platform for registration.

COMMIT TRANSACTION

referenced by:


useStatement:

The use statement enables you to specify which partitions should be accessed by subsequent select, insert, update and delete statements. You can specify one or multiple partitions as a comma-separated list, possibly for a specific data container by appending an at-sign plus data container alias to the partition code. The value 'default' has a special meaning; it specifies to use the partition(s) originally selected when you logged on. The value 'all' also has a special meaning: it selects all partitions available.

For instance, to select partition '35' in the data container with alias 'eolnl' and partition '57345' in the data container with alias 'nmbrsnl', you can execute: 'use 35@eolnl, 57345@nmbrsnl'.

For complex scenarios, you can specify any valid Invantive SQL select statement which returns one or two columns. Each row from the query specifies one partition to select. The first column specifies the partition code, whereas the optional second column specifies a specific data container alias.

For instance, to select partition '35' in the data container with alias 'eolnl' and partition '57345' in the data container with alias 'nmbrsnl', you can execute: 'use select '35', 'eolnl' from dual@datadictionary union all select '57345', 'nmbrsnl' from dual@datadictionary'.

USE partitionIdentifiersList selectStatement

referenced by:


partitionIdentifiersList:

partitionIdentifierWithAlias COMMA

referenced by:


partitionIdentifier:

A partition identifier uniquely identifies one or more partitions from the currently available data containers.

The special partition identifier 'default' stands for all partitions that were chosen as default partitions during setup of the database connection. The special partition identifier 'all' stands for all available partitions across all available database connections.

parameterExpression numericConstant identifier ALL DEFAULT

referenced by:


partitionIdentifierWithAlias:

partitionIdentifier distributedAliasDirective

referenced by:


partitionSimpleIdentifier:

numericConstant identifier

referenced by:


insertStatement:

bulk insert into tableSpec insertFieldList valuesExpression insertFieldList selectStatement identifiedByClause attachToClause

referenced by:


valuesExpression:

values_ insertValues

referenced by:


bulk:

BULK

referenced by:


into:

INTO

referenced by:


insert:

INSERT

referenced by:


values_:

VALUES

referenced by:


insertFieldList:

parenthesisOpen columnList parenthesisClose

referenced by:


insertValues:

parenthesisOpen insertValuesList parenthesisClose

referenced by:


insertValuesList:

arithmeticExpression COMMA

referenced by:


identifiedByClause:

IDENTIFIED BY arithmeticExpression

referenced by:


attachToClause:

ATTACH TO arithmeticExpression

referenced by:


updateStatement:

UPDATE FROM tableSpec SET updateValuesList whereClause

referenced by:


updateValuesList:

updateValue COMMA

referenced by:


updateValue:

column EQ arithmeticExpression

referenced by:


deleteStatement:

delete FROM tableSpec whereClause

referenced by:


delete:

DELETE

referenced by:


expression:

booleanExpression arithmeticExpression

referenced by:


booleanExpression:

not booleanExpression and or booleanExpression parenthesisOpen booleanExpression parenthesisClose predicateExpression true false

referenced by:


caseExpression:

case caseWhenThenExpression caseElseExpression end

referenced by:


caseWhenThenExpression:

when expression then arithmeticExpression

referenced by:


caseElseExpression:

else expression

referenced by:


parenthesisOpen:

PARENTHESIS_OPEN

referenced by:


parenthesisClose:

PARENTHESIS_CLOSE

referenced by:


case:

CASE

referenced by:


when:

WHEN

referenced by:


then:

THEN

referenced by:


else:

ELSE

referenced by:


end:

END

referenced by:


not:

NOT

referenced by:


is:

IS

referenced by:


are:

ARE

referenced by:


and:

AND

referenced by:


or:

OR

referenced by:


true:

TRUE

referenced by:


false:

FALSE

referenced by:


predicateExpression:

arithmeticExpression not in_ parenthesisOpen arithmeticExpression COMMA inSelectStatement parenthesisClose between arithmeticExpression and arithmeticExpression gt ge lt le eq neq arithmeticExpression isNullComparingExpression isLikeComparingExpression isEqualComparingExpression

referenced by:


parameterExpression:

COLON identifier

referenced by:


gt:

Greater than is a binary operator which returns true when the left value is greater than the right value. When either value is null, the outcome is null. Otherwise it is false.

GT

referenced by:


ge:

Greater or equal is a binary operator which returns true when the left value is greater than or equal to the right value. When either value is null, the outcome is null. Otherwise it is false.

GE

referenced by:


lt:

Less than is a binary operator which returns true when the left value is less than the right value. When either value is null, the outcome is null. Otherwise it is false.

LT

referenced by:


le:

Less or equal is a binary operator which returns true when the left value is less than or equal to the right value. When either value is null, the outcome is null. Otherwise it is false.

LE

referenced by:


eq:

EQ

referenced by:


neq:

NEQ

referenced by:


like:

LIKE

referenced by:


between:

BETWEEN

referenced by:


in_:

IN

referenced by:


isNullComparingExpression:

is not NULL

referenced by:


isEqualComparingExpression:

are EQUAL

referenced by:


isLikeComparingExpression:

not like arithmeticExpression

referenced by:


arithmeticExpression:

minus plus arithmeticExpression times divide plus minus concat arithmeticExpression parenthesisOpen arithmeticExpression selectStatement parenthesisClose functionExpression parameterExpression caseExpression fieldIdentifier constant

referenced by:


arithmeticExpressionList:

arithmeticExpression COMMA

referenced by:


functionExpression:

abs acos anonymize ascii asin atan atan2 base64_decode base64_encode basename bit_length octet_length camel ceil chr coalesce concat_func cos covfefify compress uncompress dateadd datepart date_ceil date_floor date_round date_trunc day dayofweek dayofyear dense_rank double_metaphone double_metaphone_alt exp_func floor from_unixtime hour httpget httpget_text httppost initcap instr jsondecode jsonencode left length levenshtein ln log lower lpad ltrim md5 metaphone metaphone3 metaphone3_alt microsecond millisecond minute mod month newid number_to_speech normalize nvl power quarter quote_ident quote_literal quote_nullable raise_error random random_blob rand rank regexp_instr regexp_replace regexp_substr remainder replace repeat reverse right round row_number rpad rtrim second sign sin soundex sql_variant sqrt substr sys_context tan to_binary to_char to_date to_number to_guid to_hex translate translate_resources trim trunc unistr unix_timestamp upper urldecode urlencode user unzip zip xmlcomment xmldecode xmlencode xmlelement xmlformat xmltransform year add_months zero_blob IDENTIFIER parenthesisOpen arithmeticExpressionList parenthesisClose random rand row_number now utc user

referenced by:


abs:

Returns the absolute value of a number.
Parameters:

  • Input: A number that is greater than or equal to System.Double.MinValue, but less than or equal to System.Double.MaxValue.
Returns: decimal.

ABS

referenced by:


acos:

Returns the angle of the provided cosine.
Parameters:

  • Input: the cosine to get the angle of.
Returns: A number which represents the angle of the provided cosine.

ACOS

referenced by:


anonymize:

Anonymize a text or number. Anonymization is executed such that when the same original value is anonymized within the same session, the anonymized value will be identical. The anonymized value also uniquely matches the original value. Without access to the anonymization map, however, the original value cannot be calculated from the anonymized value.
In mathematics, the anonymization function is a bijection: each element of the original set is paired with exactly one element of the anonymized set, and each element of the anonymized set is paired with exactly one element of the original set.
Parameters:

  • Value: A text or number to be obfuscated.
  • Maximum length (optional): Maximum length in digits for numbers or characters for text of anonymized value. Null means no restriction on maximum length.
  • Mapping (optional): algorithm to use. The default algorithm is 'DEFAULT' which maps text values to a range of hexadecimal characters and numbers to a range of numbers. Alternative mappings are described below.
The following anonymization maps are available on installation:
  • DEFAULT: the default algorithm.
  • IVE-GL-JOURNAL-DESCRIPTION: general ledger journal descriptions: no preferred anonymizations, leave familiar and non-confidential descriptions in original state.
  • IVE-GL-ACCOUNT-DESCRIPTION: general ledger account descriptions: no preferred anonymizations, leave familiar and non-confidential descriptions in original state.
  • IVE-PSN-FIRST-NAME: person first names: prefer readable alternative first names, anonymize all.
  • IVE-PSN-LAST-NAME: person last names: prefer readable alternative last names, anonymize all.
  • IVE-ADS-CITY-NAME: address city names: prefer readable alternative city names, anonymize all.
  • IVE-ADS-STREET-NAME: address street names: prefer readable alternative street names, anonymize all.
The data dictionary contains the anonymization maps used so far in the session and their corresponding values:
select * from SystemAnonymizationMaps@DataDictionary
select * from SystemAnonymizationMapValues@DataDictionary
select * from SystemAnonymizationPredefinedMaps@DataDictionary
Returns: Anonymized value.
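A minimal sketch, assuming the 'dual' dummy table; it anonymizes a last name with the predefined last-name map, capped at 10 characters:
select anonymize('Jansen', 10, 'IVE-PSN-LAST-NAME') from dual -- the same input yields the same output within one session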

ANONYMIZE

referenced by:


ascii:

Get the position of a character in the database character set.
Parameters:

  • Input: character to get the position of.
Returns: the position of the character in the database character set.
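For example, in an ASCII-compatible character set (and assuming the 'dual' dummy table):
select ascii('A') from dual -- returns 65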

ASCII

referenced by:


asin:

Returns the angle whose sine is the provided value.
Parameters:

  • Input: the sine to get the angle of.
Returns: A number representing the angle whose sine is the provided value.

ASIN

referenced by:


atan:

Returns the angle whose tangent is the provided value.
Parameters:

  • Input: the tangent to get the angle of.
Returns: A number representing the angle whose tangent is the provided value.

ATAN

referenced by:


atan2:

Returns the angle whose tangent is the quotient of the two provided numbers.
Parameters:

  • First number: the dividend of the quotient.
  • Second number: the divisor of the quotient.
Returns: A number representing the angle whose tangent is the quotient of the two provided numbers.

ATAN2

referenced by:


add_months:

Add a number of months to a datetime.
Parameters:

  • Date: datetime to add the months to.
  • Months: the number of months to add.
Returns: A new datetime with the number of months added.
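For example, assuming the 'dual' dummy table and an Oracle-style format mask for to_date:
select add_months(to_date('2024-01-15', 'YYYY-MM-DD'), 2) from dual -- returns 15 March 2024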

ADD_MONTHS

referenced by:


base64_decode:

Converts a Base64-encoded value back to the original binary value (Base64 as defined on Wikipedia).
Parameters:

  • Input: value to convert back to the original.
Returns: the input decoded back to the binary value.

BASE64_DECODE

referenced by:


base64_encode:

Converts a binary value to Base64-encoded characters (Base64 as defined on Wikipedia).
Parameters:

  • Input: value to convert to Base64 characters.
Returns: the input encoded as Base64 characters.
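A round-trip sketch, assuming to_binary converts a text to its binary representation and the 'dual' dummy table is available:
select base64_decode(base64_encode(to_binary('Invantive'))) from dual -- returns the original bytes of 'Invantive'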

BASE64_ENCODE

referenced by:


basename:

Extract file name from a file path.
Parameters:

  • File path: Full path of a file, consisting of drive, folders, file name and possibly an extension.
  • Extension: Optional extension to remove, including leading '.' when necessary.
Returns: the file name from the file path.
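For example, assuming Windows-style paths and the 'dual' dummy table:
select basename('C:\Reports\2024\summary.pdf', '.pdf') from dual -- returns 'summary'
select basename('C:\Reports\2024\summary.pdf') from dual -- returns 'summary.pdf'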

BASENAME

referenced by:


camel:

Converts the provided string to camel case.
Parameters:

  • Input: the string that will be converted to camel case.
Returns: A string converted to camel case.

CAMEL

referenced by:


ceil:

Rounds the input up to the nearest integer, unless a number of decimals is given, in which case it rounds up to that number of decimals for a number, or to the given number of time positions for a datetime.
Parameters:

  • Input: A number or datetime to ceil.
  • Decimals [optional]: The number of decimals to ceil to in case of a number. In case of a datetime, it reflects the number of time positions, ranging from -2 for years to 2 for minutes.
Returns: the ceiling of the input.
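For example, assuming the 'dual' dummy table:
select ceil(12.301) from dual -- returns 13
select ceil(12.301, 2) from dual -- returns 12.31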

CEIL

referenced by:


chr:

Get a character from the database character set.
Parameters:

  • Input: the numeric value of a character.
Returns: A character from the database character set.
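For example, in an ASCII-compatible character set (and assuming the 'dual' dummy table):
select chr(65) from dual -- returns 'A'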

CHR CHAR

chr      ::= CHR
           | CHAR

referenced by:


bit_length:

Get the number of bits needed to represent a value. For a blob, this is the number of bits of the bytes of the blob. For all other data types, the value is first converted to a string and then the number of bits of the UTF-8 representation is determined.
Parameters:

  • Value: value to determine length in bits for.
Returns: number of bits needed to represent the value.

BIT_LENGTH

referenced by:


octet_length:

Get the number of bytes needed to represent a value. For a blob, this is the number of bytes of the blob. For all other data types, the value is first converted to a string and then the number of bytes of the UTF-8 representation is determined.
Parameters:

  • Value: value to determine length in bytes for.
Returns: number of bytes needed to represent the value.
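For example, for a text whose UTF-8 representation is three bytes (and assuming the 'dual' dummy table):
select octet_length('abc') from dual -- returns 3
select bit_length('abc') from dual -- returns 24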

OCTET_LENGTH

referenced by:


repeat:

Get a concatenation of the text repeated a number of times.
Parameters:

  • Text: text to repeat.
  • Times: number of times to repeat the text.
Returns: the text repeated the given number of times.
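For example, assuming the 'dual' dummy table:
select repeat('ab', 3) from dual -- returns 'ababab'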

REPEAT

referenced by:


raise_error:

RAISE_ERROR

referenced by:


coalesce:

Performs a coalescing operation.
Parameters:

  • Left: an object.
  • Right: an object.
Returns: the left value when it is not null; otherwise the right value.
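For example, assuming the 'dual' dummy table:
select coalesce(null, 'fallback') from dual -- returns 'fallback'
select coalesce('value', 'fallback') from dual -- returns 'value'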

COALESCE

referenced by:


concat:

Concatenate the left and right values together as a text.
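A sketch, assuming the ANSI '||' notation for the concatenation operator and the 'dual' dummy table:
select 'In' || 'vantive' from dual -- returns 'Invantive'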

CONCAT_OP

referenced by:


concat_func:

Concatenate a list of values together as a text.
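For example, assuming the 'dual' dummy table:
select concat('In', 'vantive') from dual -- returns 'Invantive'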

CONCAT

referenced by:


cos:

Returns the cosine of the provided angle.
Parameters:

  • Input: the angle to get the cosine of.
Returns: A number which represents the cosine of the provided angle.

COS

cos      ::= COS

referenced by:


covfefify:

COVFEFIFY

referenced by:


compress:

COMPRESS

referenced by:


uncompress:

UNCOMPRESS

referenced by:


dateadd:

Adds an amount of time to a date.
Parameters:

  • Interval: the date interval to be added.
  • Number: the number of intervals to add.
  • Date: the date to which the interval should be added.
Returns: the original date with the number of intervals added.
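A sketch, assuming SQL Server-style interval keywords such as 'day' and the 'dual' dummy table:
select dateadd(day, 7, sysdate) from dual -- returns the datetime seven days from now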

DATEADD

referenced by:


datepart:

Get the specified date part from a datetime.
Parameters:

  • Datepart: the part of the date to extract.
  • Date: the datetime to get the date part from.
Returns: a part of a datetime.
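A sketch, assuming SQL Server-style date part keywords such as 'year' and the 'dual' dummy table:
select datepart(year, sysdate) from dual -- returns the current year as a number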

DATEPART

referenced by:


date_ceil:

DATE_CEIL

referenced by:


date_floor:

DATE_FLOOR

referenced by:


date_round:

DATE_ROUND

referenced by:


date_trunc:

DATE_TRUNC

referenced by:


day:

Get the day from a date.
Parameters:

  • Input: A dateTime.
Returns: the day as an integer.
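For example, assuming the 'dual' dummy table and an Oracle-style format mask for to_date; dayofweek, dayofyear and hour follow the same pattern:
select day(to_date('2024-03-15', 'YYYY-MM-DD')) from dual -- returns 15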

DAY

day      ::= DAY

referenced by:


dayofweek:

Get the day of the week from a date.
Parameters:

  • Input: A dateTime.
Returns: the day of the week as an integer.

DAYOFWEEK

referenced by:


dayofyear:

Get the day of the year from a date.
Parameters:

  • Input: A dateTime.
Returns: the day of the year as an integer.

DAYOFYEAR

referenced by:


dense_rank:

DENSE_RANK

referenced by:


double_metaphone:

DOUBLE_METAPHONE

referenced by:


double_metaphone_alt:

DOUBLE_METAPHONE_ALT

referenced by:


divide:

Divide one number by a second number.
Parameters:

  • First: the number to divide (dividend).
  • Second: the number to divide by (divisor).
Returns: the quotient of the two numbers.
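For example, assuming the 'dual' dummy table:
select 10 / 4 from dual -- returns 2.5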

DIVIDE

referenced by:


exp:

Returns the provided number raised to the specified power.
Parameters:

  • Input: the number to raise to the specified power.
Returns: A number which is the provided number raised to the specified power.

EXP_OP

no references


exp_func:

EXP

referenced by:


floor:

Rounds the input down to the nearest integer, unless a number of decimals is given, in which case it rounds down to that number of decimals for a number, or to the given number of time positions for a datetime.
Parameters:

  • Input: A number or datetime to floor.
  • Decimals [optional]: The number of decimals to floor to in case of a number. In case of a datetime, it reflects the number of time positions, ranging from -2 for years to 2 for minutes.
Returns: the floor of the input.
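For example, assuming the 'dual' dummy table:
select floor(12.97) from dual -- returns 12
select floor(12.97, 1) from dual -- returns 12.9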

FLOOR

referenced by:


from_unixtime:

Get the date/time from an integer representing a UNIX epoch time.
Parameters:

  • Input: An integer.
Returns: the date/time which the UNIX epoch time represents.
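For example, assuming the 'dual' dummy table; epoch second 0 corresponds to the start of 1970 in UTC:
select from_unixtime(0) from dual -- returns 1 January 1970 00:00:00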

FROM_UNIXTIME

referenced by:


hour:

Get the hour from a date.
Parameters:

  • Input: A dateTime.
Returns: the hour as an integer.

HOUR

referenced by:


initcap:

Converts the first letter of each word to uppercase and all other letters to lowercase.
Parameters:

  • Input: Text to convert.
Returns: the input with the first letter of each word in uppercase.
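For example, assuming the 'dual' dummy table:
select initcap('hello WORLD') from dual -- returns 'Hello World'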

INITCAP

referenced by:


instr:

Get the position of the first occurrence of a substring in a string.
Parameters:

  • String: String to be searched.
  • Substring: Text to search for.
  • StartPosition [optional]: Position in the string to start searching from.
  • Occurrence [optional]: Which occurrence of the substring to return the position of.
Returns: the position of the substring inside the original string.
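For example, assuming the 'dual' dummy table:
select instr('hello world', 'o') from dual -- returns 5, the first occurrence
select instr('hello world', 'o', 1, 2) from dual -- returns 8, the second occurrence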

INSTR

referenced by:


jsondecode:

JSONDECODE

referenced by:


jsonencode: