User Guide
Version 2022.2.3
Last Revision: 2/16/2023

Upland Objectif Lune Inc.
2409 46e Avenue
Lachine QC H8T 3C9
Canada
www.objectiflune.com

All trademarks displayed are the property of their respective owners. © Upland Objectif Lune Inc. 1994-2023. All rights reserved. No part of this documentation may be reproduced, transmitted or distributed outside of Upland OL by any means whatsoever without the express written permission of Upland OL.
Table of Contents (partial)

- Welcome to PlanetPress Connect 2022.2
- Connect file types
- OL Connect projects
- Automation with Workflow
- Using the REST API
- Versioning
- Versioned projects
- Creating versioned projects
- Viewing project history
- Viewing project content
- Using tags
- Versioned projects in the cloud
- Before you start
- Creating a cloud-based versioned project
- Keeping the local and online projects in sync
- Sample Projects
- Sample Project: Basic Email
- Sample Project: COTG Timesheets
- Setting and moving msg properties
- Iterating over items in an array
- Concatenating strings
- OL Connect Startup flow
- Triggering a startup flow
- Initializing global variables
- Deploying OL Connect resources
- An OL Connect email flow in Node-RED
- The structure of an OL Connect email flow
- Files used in an OL Connect email flow
- An OL Connect print flow in Node-RED
- The structure of a print flow
- Files used in a print flow
- An OL Connect preview
- About records
- Creating a Data Model
- Editing the Data Model
- Using the Data Model in templates
- Fields
- Detail tables
- Data types
- Data Model file structure
- DataMapper User Interface
- Keyboard shortcuts
- Menus
- Panes
- Toolbar
- Welcome Screen
- DataMapper Scripts API
- Using scripts in the DataMapper
- Setting boundaries using JavaScript
- Objects
- Functions
- The Designer
- Designer basics
- Features
- Temp
- Web pages
- Forms
- Using Form elements
- Using JavaScript
- Capture OnTheGo
- COTG Forms
- Creating a COTG Form
- Filling a COTG template
- Sending the template to the Workflow tool
- Receiving and extracting data from a COTG Form
- Using COTG data in a template
- Designing a COTG Template
- Capture OnTheGo template wizards
- Using Foundation
- COTG Elements
- Using COTG Elements
- Testing a Capture OnTheGo Template
- Using the COTG plugin
- Renaming a snippet
- Translating a snippet
- HTML snippets
- JSON snippets
- Handlebars templates
- Partials
- Styling and formatting
- Local formatting versus style sheets
- Layout properties
- Styling templates with CSS files
- Styling text and paragraphs
- How to position elements
- Rotating elements
- Styling a table
- Styling an image
- Background color and/or image
- Border
- Colors
- Fonts
- Locale
- Spacing
- Persona
- Preferences
- General preferences
- Clean-up Service preferences
- DataMapper preferences
- Database Connection preferences
- Editing preferences
- Email preferences
- Emmet preferences
- Engines preferences
- Hardware for Digital Signing preferences
- Language preferences
- Logging preferences
- Parallel Processing preferences
- Print preferences
- Sample Projects preferences
- Save preferences
- Scripting preferences
- Servers preferences
- Welcome Screen
- Print options
- Job Creation Presets Wizard
- Output Creation Presets Wizard
- Advanced Print Wizard navigation options
- Designer Script API
- Standard Script API
- Control Script API
- Post Pagination Script API
- Generating output
- Print output
- Fax output
- Email output
- Web output
- Generating Print output
- Generating Print output from the Designer
- Generating Print output from Workflow
- Print settin
- PlanetPress Connect Release Notes
- OL PlanetPress Connect Release Notes 2022.2.3
- License Update Required for Upgrade to OL Connect 2022.x
- Backup before Upgrading
- Overview
- OL Connect 2022.2.3 Fixes
- OL Connect 2022.2.1 Fixes
- OL Connect 2022.2 Improvements
- OL Connect 2022.2 Designer Improvements
- OL Connect 2022.2 DataMapper Improvements
- OL Connect 2022.2 Output Improvements
- Workflow 2022.
Welcome to PlanetPress Connect 2022.2

PlanetPress Connect is a series of tools designed to optimize and automate customer communications management. They work together to improve the creation, distribution, interaction and maintenance of your communications. The PlanetPress Connect DataMapper and Designer are designed to create output for print, email and the web within a single template and from any data type, including formatted print streams.
- "System requirements" on page 25
- "Database Considerations" on page 16
- "Environment considerations" on page 19
- "Known Issues" on page 102
- "Language and Encoding Considerations" on page 21
- "Antivirus Exclusions" below
- "Performance considerations" on page 23

Antivirus Exclusions

The information on this page is designed to assist IT managers and IT professionals in deciding on an antivirus strategy, taking both PlanetPress and their internal requirements and needs into consideration.
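Where Windows Defender is in use, one way to implement such an exclusion is the Add-MpPreference PowerShell cmdlet, run in an elevated session. This is a sketch only; afp2pdf.exe is the Connect executable this guide recommends excluding, but your antivirus strategy may call for different exclusions:

```powershell
# Exclude the afp2pdf.exe process from Windows Defender real-time scanning
Add-MpPreference -ExclusionProcess "afp2pdf.exe"
```

Other antivirus products have equivalent exclusion mechanisms; consult their documentation.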
AFP Input Performance issues have been reported with the AFP Input option under Windows Server versions from Windows Server 2012 onwards. The issues have been specifically associated with Windows Servers running Windows Defender, but the performance degradation might also be encountered when using other Antivirus applications. Consequently, we recommend that an exclusion be made for the afp2pdf.exe executable file in your Antivirus application. The afp2pdf.
However, the person responsible for protecting the computer has to decide whether to monitor such temporary folders, in line with company guidelines.

Database

2. Another database instance for Connect is held and used under the folder intended to hold data accessible by and for all users. The path to this folder is stored in the standardized system variable %PROGRAMDATA%. The Connect database instance is located in the subfolder "Objectif Lune\OL Connect\MariaDB".
- character-set-server = utf8, collation-server = utf8_unicode_ci, default-character-set = utf8: These indicate database support for UTF-8/Unicode.
- The database configuration must allow the use of mixed-case table names. This is particularly an issue on Linux MySQL installations.
- The SQL instance must be open to access from other computers. This means the bind-address option should not be set to 127.0.0.1 or localhost.
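These requirements might translate into a my.cnf/my.ini fragment along the following lines. This is a sketch only; option placement and appropriate values should be checked against your MySQL/MariaDB version:

```ini
[mysqld]
character-set-server   = utf8
collation-server       = utf8_unicode_ci
lower_case_table_names = 0        # permit mixed-case table names (verify the value your platform needs)
bind-address           = 0.0.0.0  # listen on all interfaces, not just localhost

[client]
default-character-set = utf8
```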
Note: Since PlanetPress Connect version 1.6, the minimum required version of MS SQL Server is SQL Server 2012.

- When MS SQL is selected, the default values for the root user are sa and 1433 for the port.
- If database settings from a previous OL Connect installation are found, the pre-existing settings will be displayed for the matching database type. For MS SQL settings, this will only work if they were created with Server Config Tool 1.5.0 or later, or the Installer for OL Connect 1.6.0 or later.
Please be aware: The keyword depend must be followed immediately by the equal sign, but between the equal sign and the forward slash there must be a space. Additional information can be found here: http://serverfault.com/questions/24821.

7. After the dependency has been removed, it is possible to stop the supplied MariaDB/MySQL service (OLConnect_MySQL).
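The spacing rule above refers to the sc config syntax for clearing a service's dependency list. A sketch, assuming the dependency is declared on the Connect Server service; substitute the actual service name from the preceding steps:

```
sc config OLConnect_Server depend= /
```

Note the lack of a space before the equal sign and the mandatory space after it, exactly as described above.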
Remote Desktop Support

Tests have demonstrated that PlanetPress Connect can be used through Remote Desktop. It is however possible that certain OS combinations could cause issues. If problems are encountered, please contact OL Support and we will investigate. PlanetPress Connect 1.3 and later have been certified under Remote Desktop.

32-bit or 64-bit Operating Systems?

PlanetPress Connect is 64-bit software and can only be installed on 64-bit operating systems.
line switches are accepted when one of the Connect components is started and run. Please therefore be advised that any non-whitelisted INI entry or command-line switch will not be accepted and will, if its use is attempted, lead to the respective application's "sudden death". If you encounter such behaviour, please double-check your Connect log file(s) for the respective entries.
Firewall/Port considerations

The following describes all of the ports that can be used by an OL Connect solution. Based on the statements outlined herein, IT staff can decide which firewall strategy to follow for their internal requirements and needs.
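As an illustration, an inbound rule for the default Connect Server port (9340, per the installer defaults described later in this guide) could be added on Windows with netsh; the rule name is arbitrary, and the port should match your own configuration:

```
netsh advfirewall firewall add rule name="OL Connect Server" dir=in action=allow protocol=TCP localport=9340
```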
OL Connect Preferences. The ports used by the HTTP Client Input task, Legacy SOAP Client and SOAP Client plugin depend on the configured URL. Performance considerations In order to get the most out of PlanetPress Connect, it is important to determine how best to maximize performance.
A DataMapper engine extracts data from a data file. A Merge engine merges the template and the data to create Email and Web output, or to create an intermediary file for Printed output. The intermediary file is in turn used by a Weaver engine to prepare the Print output. Configuring these engines to match both the hardware configuration and the typical usage situation is probably the most effective way to improve Connect's performance.
- Use a high-performance, low-latency hard drive. Connect benefits from fast I/O. This is especially true for DataMapper engines (see "DataMapper engine" on page 90). Preferably use a Solid State Drive (SSD) or similar for storage.
- Use at least 8 GB of high-quality RAM. Check memory usage while the Print command is being executed to see if you need more than the minimum of 8 GB.
Virtual Environments

- VMWare/VSphere
- Hyper-V (8.0)
- Azure
- Amazon Web Services (AWS). Note that only EC2 M4 was certified; other instances may not work as expected.

Minimum hardware requirements

As with any software application, minimum hardware requirements represent the most basic hardware on which the software will run. Note however that settling for the minimum specification is unlikely to produce the performance you expect from the system.
*1 This requirement depends upon the amount of data you process through OL Connect. For instance, a PostScript file containing several thousand documents could easily take up several GBs.

Note: As with any Java application, the more RAM that is available, the faster PlanetPress Connect will execute.

Requirements for individual Connect modules

OL Connect comprises multiple modules that can be operated separately on multiple PCs.
Note: A PDF version of this guide is available for use in offline installations. Click here to download it.

PlanetPress Connect 2022.2 comprises two different installers: one for the PlanetPress Connect software and one for PlanetPress Workflow 2022.2.

Where to obtain the installers

The installers for PlanetPress Connect 2022.2 and PlanetPress Workflow 2022.
Installation prerequisites

- Make sure your system meets the "System requirements" on page 25.
- PlanetPress Connect Version 2022.2 can be installed under a regular user account with Administrator privileges; see "User accounts and security" below.
- PlanetPress Connect must be installed on an NTFS file system.
- PlanetPress Connect requires Microsoft .NET Framework 4.5 to already be installed on the target system.
- Connect 2019.1 requires an updated Connect License and/or Update Manager.
It does not require administrative rights and only needs permission to read/write in any folder where templates or data mapping configurations are located. If generating Print output, PlanetPress Connect Designer requires permission on the printer or printer queue to send files.

Permissions for PlanetPress Connect Server

The PlanetPress Connect Server module, used by the Automation module, requires some special permissions to run.
Updating Connect

Updating to Connect 2019.1 from an earlier Connect version

In order to update PlanetPress Connect to 2019.1 it is first necessary to update the Connect License. For details on how to upgrade the Connect License offline, see the "Upgrading Connect on machines with no internet access" section in the document.
CRLs would never be retrievable without internet access anyway. The advantage of the switch will be found not only in the installation and operation of Connect, but also in speed improvements for any application that uses signed binaries. To switch off CRL retrieval on the computer, complete the following steps:

1. Open the "Internet Options" via the Control Panel.
2. Select the "Advanced" tab and scroll down to the "Security" node.
3.
Prerequisites installation

The PlanetPress installer will check for prerequisite technologies as the first step in the installation process. If this check finds that some technologies are missing, it will install those technologies before continuing with the installation.

Welcome screen

After any prerequisites are installed, the PlanetPress installer Welcome screen appears. Click Next to continue with the PlanetPress installation.
- Server: The Connect Server back-end that provides Connect production capabilities such as production content creation (print output, HTML content for emails and web pages), automation, commingling and picking. It is also referred to as the Connect Master Server in Connect clustered environments.
- MariaDB Server: A supplied MariaDB database used by PlanetPress Connect. The database is used for referencing temporary Connect files, for sorting temporarily extracted data, and for similar tasks.
The installer calculates how much disk space is required for installing the selected components, along with how much space is available.

- Total Required Space: Displays the amount of disk space required for the selected components.
- Space Remaining: Displays the amount of space available after installation on the drive currently set as the Installation Path.
PlanetPress Connect Server Connection

Set the Connect Server Connection internal username and password. The options available are as follows:

- Port: Enter the port to use to communicate with the Connect Server. By default the Connect Server controlled by the OLConnect_Server service communicates through port 9340.
- User: Enter the internal username for connection to the OL Connect Server. The default username for new installations is olc-user.
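Once the Server service is running, one quick (unofficial) way to confirm that something is listening on the configured port is a plain HTTP request; any response at all, even an error status, shows the port is open. Host and port here assume the defaults described above:

```shell
curl -i http://localhost:9340/
```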
- a lower-case character (a, b, c ...)
- an upper-case character (A, B, C ...)
- a numeric digit (1, 2, 3 ...)
- a punctuation character (@, $, ~ ...)

For example: "This1s@K"

Note: When updating from an earlier Connect version, the appropriate MariaDB password must be entered or the update will fail. If the password is subsequently forgotten, then MariaDB must be uninstalled and its database deleted from disk before attempting to reinstall.
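The four character-class rules above can be checked mechanically before running the installer. The following is a minimal sketch, not part of the product; the function name is ours, and it assumes only the four rules listed above (the full password policy may impose additional constraints, such as a minimum length):

```python
import re

# Character classes required by the MariaDB password rules listed above.
REQUIRED_PATTERNS = (
    r"[a-z]",           # a lower-case character
    r"[A-Z]",           # an upper-case character
    r"[0-9]",           # a numeric digit
    r"[^A-Za-z0-9\s]",  # a punctuation character
)

def meets_password_rules(password: str) -> bool:
    """Return True if the password contains at least one character
    from each of the four required classes."""
    return all(re.search(p, password) for p in REQUIRED_PATTERNS)

print(meets_password_rules("This1s@K"))  # the guide's example -> True
print(meets_password_rules("password"))  # missing classes -> False
```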
- Host: Enter the IP address or alias of the server where the database resides.
- Database Instance Name: Enter an existing Microsoft SQL Server instance name. This option only applies to existing Microsoft SQL Server instances, not to MariaDB or MySQL.
- Port: Enter the port on which the database server expects connections. For MariaDB and MySQL, this is 3306 by default. For Microsoft SQL Server it is 1433 by default.
- Schema: Enter the name of the database in which the tables will be created.
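Before running the installer against a remote MariaDB/MySQL database, connectivity with those parameters can be sanity-checked from the command line. The host, user and schema below are placeholders; substitute your own values:

```shell
mysql --host=dbhost.example.com --port=3306 --user=olconnect -p --execute "SELECT 1;" objectiflune
```

A successful "SELECT 1" confirms the server is reachable and the credentials are accepted, though not that the user has all the permissions the installer needs.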
Note: This test does not check whether the remote user has READ and WRITE permissions to the tables under the objectiflune schema. It is solely a test of database connectivity.

Ready to install

This page confirms and lists the installation selections made. If components have been selected which have a shortcut associated with them (Designer, Server), then you will be presented with the option to Create desktop shortcuts. Select it if you wish for desktop icons to be created.
- Note that the Product Update Manager can also be called from the "Objectif Lune Update Manager" option in the Start menu.
- It can be uninstalled via Control Panel | Programs | Programs and Features.

Product Activation

After installation, it is necessary to activate the software. See "Activating a License" on page 50 for more information. Before activating the software, please wait 5 minutes for the database to initialize.
1. If MySQL was previously installed as an OL Connect component AND the database contains some user-defined schemas:
   - The native OL Connect schema is removed from the MySQL database.
   - The MySQL database files (C:\ProgramData\Objectif Lune\OL Connect\MySQL) are kept intact, as user-defined schemas mean that the user did not have only the OL Connect native schema content in their database.
- C:\ProgramData\Objectif Lune\OL Connect\.settings
- C:\ProgramData\Objectif Lune\OL Connect\CloudLicense
- C:\ProgramData\Objectif Lune\OL Connect\ErrorLogs
- C:\ProgramData\Objectif Lune\OL Connect\LiquibaseUpdate

Files are removed from the root of the data folder C:\ProgramData\Objectif Lune\OL Connect\. If the folder is empty following this (i.e. no license or user folders were present), then the C:\ProgramData\Objectif Lune\OL Connect\ folder itself is removed.
Running the Connect installer in Silent Mode

Updating from Connect versions predating 2019.1

In order to update PlanetPress Connect to 2022.2 from Connect versions prior to 2019.1, it is first necessary to update the Connect License. For details on how to upgrade the Connect License see "Users of Connect prior to 2019.
option is more suitable for debugging purposes. If set to true, then a verbose log file is created in the logging path specified in the INI file. If no logging path is specified in the INI file, then the default one is used. If set to false, standard logging is done.

- path: String (Default: %PROGRAMDATA%\Objectif Lune\Installation Logs)
  Sets the folder to which the installation log will be written. Only the log folder should be specified here, not the log file name.
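Put together, the two logging settings described above might appear in the silent-installation INI like this (key names per the text above; the folder value is purely an example):

```ini
verbose = true
path = C:\Temp\ConnectInstallLogs
```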
- RegisterService.connectServer: Boolean (Default: true)
  Whether or not to register the Server services (such as in the case of a container).
- server.username: String (Default: the current user/domain installing the service)
  Determines the domain and username to be used when configuring the Server service. The username can use the following syntax formats:
  - username
  - domain\username (Note: the backslash between the domain and user names needs to be escaped by another backslash. For example: server.
If product.MariaDB = True, then the port should be set to 3306.

- database.rootpassword: String (Default: there is no default for this setting)
  The database root password. There is no default value, and if this is left unspecified the installation will fail.
- database.username: String (Default: olconnect)
  The username that PlanetPress Connect will use to connect to the database.
- database.
- English - en-US
- French - fr-FR
- German - de-DE
- Italian - it-IT
- Japanese - ja-JP
- Korean - ko-KR
- Portuguese (Brazil) - pt-BR
- Spanish - es-SP

Example:

; Installation settings
[Installation]
product.Designer = true
product.Server = true
product.PrintManager = true
product.ServerExtension = false
product.MariaDB = true
product.Messenger = true
RegisterService.connectServer = true
server.username = Administrator
server.password = ObjLune
server.connection.user = olc-user
server.
Note: If the Remove action is taken, then the equivalent of an uninstall is done, while a Modify action will change the components installed on the system based on the ones defined in the INI file's [Installation] section, allowing removal or addition of components in the current installation.

- keepdata: Boolean (Default: True)
  Allows the user to specify whether they wish to keep or remove user data (located under %PROGRAMDATA%\Objectif Lune\OL Connect) when performing a product uninstall.
Exit Codes

Success

- 0 = Installation completed successfully / no specific error code was returned.
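A deployment script can branch on these exit codes. For example, in a Windows batch wrapper (the silent-install command itself is a placeholder for whatever invocation you use):

```
call <your-silent-install-command>
if %ERRORLEVEL% EQU 0 (
    echo Installation completed successfully.
) else (
    echo Installation failed with exit code %ERRORLEVEL%.
)
```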
- 404: The installer brand does not match the brand of the OL Connect version currently installed. (Printshop Mail)

License file validation (500s)

- 501: PlanetPress Connect license file is in an older format
- 502: License Care Date does not allow installation of product
- 503: License brand mismatch with installer brand

Destination and selected product check (600s)

- 601: Server and Server Extension were both selected to be installed. Only one of the two may be installed on any one system.
- Open the Connect Software Activation shortcut.
- The PlanetPress Connect Software Activation application consists of the following:
  - License Information subsection:
    - Magic Number: Displays the PlanetPress Connect Magic Number.
    - Copy the magic number to the clipboard: Click to copy the Magic Number to the clipboard. It can then be pasted in the activation request email using the Windows CTRL+V keyboard shortcut.
- Customers must submit their Magic Number and serial number to Objectif Lune via the Web Activations page: http://www.objectiflune.com/activations. The OL Customer Care team will then send the PlanetPress Connect license file via email.
- Resellers can create an evaluation license via the Objectif Lune Partner Portal by following the instructions there: http://extranet.objectiflune.com/

Note that if you do not have a serial number, one will be issued to you by the OL Activations team.
- Click Install License to activate the license. The license will then be registered on the computer and you will be able to start using the software.

Caution: After installation, a message will appear warning that the Server services will need to be restarted. Just click OK to proceed.

Migrating to a new workstation

The purpose of this document is to provide a strategy for transferring an OL Connect (and/or Workflow) installation to a new workstation.
C:\ProgramData\Objectif Lune\PlanetPress Workflow 8

Here are a few important points when transferring these files:

- If you are upgrading to the latest version of Connect, it is recommended to open each template in Designer and produce a proof, making sure the output is correct. Then send the template with its data mapping configuration, Job Creation and Output Creation preset files to Workflow by clicking on File > Send to Workflow...
- On the new workstation, if the "TCP/IP Print Server" service is running in Windows, it is recommended to disable that service so that it does not interfere with the Workflow LPD/LPR services.
- Configure the Workflow services account as in the previous installation. If accessing, reading and writing to network shares, it is recommended to use a domain user account and make it a member of the local Administrators group on the new workstation.

Once the user account has been chosen:

1.
- OL Connect Print Manager Configuration files (.OL-ipdsprinter): C:\Users\[UserName]\Connect\workspace\configurations\PrinterConfig
- OL Printer Definition Files (.OL-printerdef): C:\Users\[UserName]\Connect\workspace\configurations\PrinterDefinitionConfig
- OMR Marks Configuration Files (.hcf): C:\Users\[UserName]\Connect\workspace\configurations\HCFFiles

Where [UserName] is replaced by the appropriate Windows user name.

Tip: Actually, the path may not begin with 'C:\Users', as this is language-depe
Capture 1. Download the latest version of the Anoto PenDirector. 2. Before installing the PenDirector, make sure the pen’s docking station isn’t plugged into the server. Then install the PenDirector. 3. Stop the Messenger 8 service on the old and new server from the Workflow menu bar: Tools > Service Console > Messenger > right-click and select Stop. 4. Import the following files and folders from the old server into their equivalent location on the new server: C:\ProgramData\Objectif Lune\PlanetPress Workf
2. Configure the Merge and Weaver Engines scheduling preferences as in the previous installation:
   - Open the Server Configuration from: C:\Program Files\Objectif Lune\OL Connect\Connect Server Configuration\ServerConfig.exe
   - Configure the DataMapper, Merge and Weaver engine preferences (see "Parallel Processing preferences" on page 95). As of version 2018.1 these preferences include the minimum (Xms) and maximum (Xmx) memory utilization for the Server, Merge and Weaver engines.
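For reference, Xms and Xmx are the standard Java heap options. A setting such as the following would give an engine an initial heap of 512 MB and a ceiling of 2 GB; the values here are purely illustrative, not recommendations:

```
-Xms512m    initial heap size
-Xmx2048m   maximum heap size
```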
- If you want to transfer your licenses to the new machine right away, you may ask your local Customer Care department for a 30-day Transition activation code for your old machine.
- Upgrades cannot be activated using the automated Activation Manager. Contact your local Customer Care department.

To apply the license file received from the Activation Team:

1. Ensure that all services are stopped on your old machine before activating and starting the services on the new machine.
- If you are a Customer, the installer can be downloaded from the Objectif Lune Web Activations page: http://www.objectiflune.com/activations
- If you are a Reseller, the installer can be downloaded from the Objectif Lune Partner Portal: http://extranet.objectiflune.com/

PlanetPress Workflow can be installed in parallel on the same machine as an existing PlanetPress® Suite 7.x installation. Note however:

- If both versions need to be hosted on the same machine, PlanetPress Workflow 2022.
Upgrading from previous Connect versions

Always backup before upgrading

It is recommended that you always back up your existing Connect preferences before upgrading to a new version. This will enable you to revert to the previous version in a worst-case scenario in which the new version introduces issues with your existing production processes. Whilst the probability of such a worst-case scenario is remote, it cannot hurt to take some simple precautions, just in case.
Backup existing Connect version

It is recommended that you always back up your existing Connect preferences before upgrading to a new version. This will enable you to revert to the previous version in a worst-case scenario in which the new version introduces issues with your existing production processes. Whilst the probability of such a worst-case scenario is remote, it cannot hurt to take some simple precautions, just in case.
Backup your database

If you want to be completely thorough and be able to exactly replicate your existing system, you should also back up your existing Connect database. If the default (pre Connect 2022.1) MySQL database was being used as the Connect back-end database, we recommend using the MySQLDump tool for this. See mysqldump (https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) for details on this utility program.
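For example, assuming the default root account and the objectiflune schema mentioned elsewhere in this guide (adjust credentials and schema name to your installation), a dump could be taken as follows:

```shell
mysqldump --user=root -p --single-transaction --routines objectiflune > connect_backup.sql
```

The resulting connect_backup.sql file can later be replayed with the mysql client to restore the schema.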
- The native OL Connect schema is removed from the MySQL database.
- The MySQL database files (C:\ProgramData\Objectif Lune\OL Connect\MySQL) are kept intact, as user-defined schemas mean that the user did not have only the OL Connect native schema content in their database.
- A message at the end of the upgrade will advise the user that some non-OL schemas were found in the database, so the database files were not removed.

2.
- C:\ProgramData\Objectif Lune\OL Connect\.settings
- C:\ProgramData\Objectif Lune\OL Connect\CloudLicense
- C:\ProgramData\Objectif Lune\OL Connect\ErrorLogs
- C:\ProgramData\Objectif Lune\OL Connect\LiquibaseUpdate

Files are removed from the root of the data folder C:\ProgramData\Objectif Lune\OL Connect\. If the folder is empty following this (i.e. no license or user folders were present), then the C:\ProgramData\Objectif Lune\OL Connect\ folder itself is removed.
- If MariaDB was previously installed as an OL Connect component:
  - All schemas from the MariaDB database are kept, allowing the user to use those database files if they reinstall the software.

Upgrading from PReS Classic

PReS Classic and PlanetPress Connect are very different products. Whilst PlanetPress Connect provides considerably more options for email and web output, one need not abandon existing PReS Classic print jobs.
- PlanetPress Capture is still supported in PlanetPress Workflow 2022.2, but only with documents created with PlanetPress Suite Design 7.
- PlanetPress Connect Designer. This is a design tool based on completely new technology. It is not backwards compatible and therefore cannot open PlanetPress Suite Design 7 documents. If you want to continue editing those documents you can keep doing so in PlanetPress Suite Design 7.
- PlanetPress Connect Server.
- There is insufficient memory in the computer currently running PlanetPress Workflow 2022.2 to also run PlanetPress Connect Server.
- You want to use a more powerful computer with more RAM and more cores to run the Server to achieve maximum performance (see "Performance considerations" on page 23).
Upgrade steps 1. To upgrade to PlanetPress Connect, the first step is to stop your PlanetPress Workflow services. You can do so from the PlanetPress Workflow configuration tool or from the Windows Service Management console. 2. Then, using the PlanetPress Connect setup, install the Designer and/or Server on the appropriate computers. 3. Then, using the PlanetPress Workflow 2022.2 setup, install PlanetPress Workflow and/or PlanetPress Image on the appropriate computers.
- PlanetPress Messenger configuration

5. If you installed PlanetPress Workflow 2022.2 on a different computer, please see "How to perform a Workflow migration" on page 75 for help importing all those settings, if you wish to import them.
6. To launch the Upgrade wizard, open the PlanetPress Workflow 8 configuration tool and, from the Tools menu, launch the Upgrade Wizard. IMPORTANT: Before you start this process, make sure you have a backup of your current installation/computer.
7.
8. Then select the product from which you wish to upgrade:
9.
10.
(Steps 7 through 10 are illustrated by Upgrade wizard screenshots in the original guide.)
11. After that you will need to get the activation file for your product. To obtain your activation file, download the PlanetPress Connect installer from the Web Activation Manager (http://www.objectiflune.com/webactivationmanager/), and follow the instructions for the installation using the serial number provided to you. You can activate your license through the Web Activation Manager.
12.
How to perform a Workflow migration

What do you need to consider when upgrading from PlanetPress Suite 7 to PlanetPress Connect Workflow 2022.2 on a new computer?

Installing and Activating Workflow 2022.2 on a new computer

Points to consider:

- Before installing, be sure to read "Installation and Activation" on page 27. There you will find detailed Connect Workflow installation steps as well as system requirements, notes on license activation and much more.
- Log in to our Web Activation Manager (www.objectiflune.com/activations) using your customer number and password to get your Printer Activation Codes.
- If you do not have access to the computer on which PlanetPress Suite was previously installed, print a Status Page for each printer from your Connect Workflow 8 Configuration. Do this via the Tools > Printer Utilities menu option. Select "Print Status Page" and then select your printers from the list. Email the Status Page(s) to activations@ca.
Workflow 8\PlanetPress Watch\Documents"

3. Use the File > Send To menu option in PlanetPress Suite Designer and select the PlanetPress Connect Workflow 8 instance to which you want to send the PlanetPress Suite Designer document. This should work with PlanetPress Suite versions 6 and 7. Make sure that ports 5863 and 5864 are not blocked by a firewall on either machine. Also make sure you add the PlanetPress Suite machine's IP address to the permissions list in Connect Workflow 8 via Tools > Access Manager.
Workflow Plug-ins Back up any custom PlanetPress Suite Workflow configuration Plug-ins (.dll) and copy them onto the new computer. The PlanetPress Suite Workflow plug-ins folder can be found here: "C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\Plugins". Make sure that you copy only the custom plug-ins. Alternatively, you can download custom plug-ins from http://planetpress.objectiflune.com/en/suite/resources/support onto the new computer.
- All PostScript and TrueType host-based fonts must be reinstalled. Make sure you restart the computer after this step.
- If necessary, reconfigure local ODBC connections (i.e. create local copies of databases or recreate required DSN entries).
- Manually install all external executables that will be referenced by the Connect Workflow processes in the configuration file. If possible, retain the local path structure as used on the older installation.
These steps must be executed after a proper Workflow migration has been completed. Instructions on how to do so can be found in "How to perform a Workflow migration" on page 75. Failure to do so will result in unexpected problems.
Note: It is recommended that you first update your PlanetPress Suite to version 7.6 before crossgrading to PlanetPress Connect.
Using PlanetPress Connect Workflow 2022.2 on the same computer as PlanetPress Suite 7.6
Steps to migrate:
1.
to this folder: "C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\PGC"
7. Restart the PlanetPress Connect Workflow 8 Messenger. To do this:
a. Open the Workflow Service Console. This can be done either via the Windows Start Menu, or from within the Workflow Configuration application (via the menu option Tools > Service Console).
b. Select Messenger in the tree list, right-click and select Start from the context menu.
8.
4. Do the following for both PlanetPress Suite version 7.6 and PlanetPress Connect Workflow 8:
a. Open the Workflow Service Console. This can be done either via the Windows Start Menu, or from within the Workflow Configuration application (via the menu option Tools > Service Console).
b. Select Messenger in the tree list, right-click and select Stop from the context menu.
Note: These steps must be done for both PlanetPress Suite Workflow 7 and PlanetPress Connect Workflow 8.
5. Copy the file PPCaptureDefault.
The Connect Server settings are maintained by the Connect Server Configuration utility tool which is installed alongside PlanetPress Connect. Connect Server Configuration can be launched from the Start Menu, as seen in the following screenshot: The Connect Server Configuration dialog is separated into individual pages, where each page controls certain aspects of the software.
Connection preferences
Background
The Connection preferences control precisely how connections to the PlanetPress Connect Server are made. This preference page was added in Connect 2018.2 to simplify management of HTTP communication with Connect. HTTPS communication options were then added in Connect 2021.1.
Note: If HTTPS is selected as the REST Services protocol, these internal Port settings must be entered.
Note: Please note that local security settings (including firewalls) must be taken into consideration when setting Port entries.
- IPC Port: Set the internal connection port number. This cannot be the same as the primary REST Services port number.
- REST Port: Set the internal REST connection port number. This cannot be the same as the primary REST Services port, or the IPC Port.
If these precautions are not taken, data saved in the server may be accessible from the outside!
- Enable server security: Enable to add authentication to the REST server.
  - When enabled, the username and password (which cannot be blank) of an authorized user must be entered in any remote Connect Designer that links to this Server. See the "Connect Servers preferences" on page 804 sub-section of the Designer Preferences dialog.
Engine configuration The Connect Server cooperates with different engines to handle specific tasks. A DataMapper engine extracts data from a data file. A Merge engine merges the template and the data to create Email and Web output, or to create an intermediary file for Printed output. The intermediary file is in turn used by a Weaver engine to prepare the Print output. (For more information see: "Connect: a peek under the hood" on page 114).
- Your licence, which imposes a speed quota (see "Speed quota: Pages Per Minute" below).
- The processing power of your machine. How many cores it has determines how many engines can be launched (see "Launching multiple engines" on the next page).
- The size and number of jobs of one kind that need to be handled, sequentially or simultaneously. In other words, your use case.
speed units and the maximum 'pages' per minute to all running jobs in proportion to the number of engines they are using. Note: Output speed is the speed at which the output is created by the engine in question. Data mapping and other steps in a production process are not taken into account. The throughput speed is the speed of the entire production process. This will always be lower than the output speed.
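The proportional distribution described above can be illustrated with a small sketch. This is not the Server's actual algorithm, only the arithmetic: a licensed pages-per-minute budget is split across running jobs according to the number of engines each one uses.

```javascript
// Illustrative only: split a licensed pages-per-minute budget across
// running jobs in proportion to the engines each job uses.
function allocateSpeed(licensedPpm, jobs) {
  const totalEngines = jobs.reduce((sum, job) => sum + job.engines, 0);
  return jobs.map(job => ({
    name: job.name,
    ppm: (licensedPpm * job.engines) / totalEngines,
  }));
}

allocateSpeed(1000, [
  { name: 'invoices', engines: 3 },
  { name: 'letters', engines: 1 },
]);
// invoices → 750 ppm, letters → 250 ppm
```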
Merge engine
Generally, launching a relatively high number of Merge engines results in better performance, as Merge engines are involved in the creation of output of all kinds (Print, Email and Web) and because content creation is relatively time-consuming.
DataMapper engine
Adding DataMapper engines might be useful in the following circumstances:
- When large data mapping operations have to be run simultaneously for many jobs.
- When frequently using PDF or XML-based input.
These settings only control the maximum size of the Java heap memory that an engine can use; the total amount of memory that will be used by an engine is actually a bit higher. Also keep in mind that the Connect Server and the operating system itself will need memory to keep running.
Allocating processing power to jobs
Which engine configuration is most efficient in your case depends on how Connect is used.
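When planning memory, it can help to sketch the arithmetic. The overhead figures below are assumptions, not official numbers; the point is simply that total demand is the sum of the heap limits plus some per-engine overhead, plus headroom for the Connect Server and the operating system.

```javascript
// Rough capacity-planning sketch; the overhead values are assumptions.
function estimateTotalMb(engineCounts, heapMb, perEngineOverheadMb = 256, systemHeadroomMb = 4096) {
  let total = systemHeadroomMb; // Connect Server + OS headroom
  for (const [engine, count] of Object.entries(engineCounts)) {
    // Each engine needs its heap limit plus some non-heap overhead.
    total += count * (heapMb[engine] + perEngineOverheadMb);
  }
  return total;
}

estimateTotalMb({ merge: 4, dataMapper: 2 }, { merge: 1024, dataMapper: 1024 });
// → 4096 + 4*1280 + 2*1280 = 11776 MB
```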
Running a job as fast as possible Number of parallel engines per Print job Two or more engines of a kind can be combined to work on the same Print job. Generally jobs will run faster with more than one engine, because sharing the workload saves time. However, running one job with multiple engines reduces the number of jobs that can be handled at the same time by that kind of engine, because there are only so many engines (and speed units) available.
reserved for HTML output can help performance.
- By reserving a number of parallel engines for Print jobs of a certain size (see "Number of parallel engines per Print job" on the previous page). More parallel engines will make them run faster, but they will have to wait (longer) if the required number of engines isn't available when they come in.
- By specifying target speeds for simultaneous Print jobs of a certain size.
Web requests. In online communication, response times are critical. If the Server receives a lot of Web requests, it should handle as many as possible, as quickly as possible, at the same time. It is recommended to launch as many Merge engines as possible and to reserve most of them for HTML output. The jobs will generally be small and can do with just one Merge engine. Mixed jobs that are processed in parallel.
- Data Mapper Engine (MB): Enter the memory limit for the Data Mapper Engine.
- Merge Engine (MB): Enter the memory limit for the Merge Engine.
- Weaver Engine (MB): Enter the memory limit for the Weaver (Output) Engine.
Language preferences
- Display language: Select a language from the drop-down list to be used as the language of the OL Connect Server Configuration tool, and all log files created by OL Connect Server and its components (after the software/service is restarted).
Merge engine available. How many Merge engines to use is based on the number of records in the input data. Select from the following options:
- Optimize per task: This runs each task with as many Merge engines as needed (until engines are exhausted). Using this option means that Merge engines will not be reassigned when new tasks come in. This option is better suited for batch processing.
- Maximize simultaneous tasks: Merge engines will be reassigned from a running task to new tasks when they arrive.
there are situations where these assumptions will not apply.
Note: Currently, only the print and PDF content creation tasks use multiple Merge engines.
Parallel Processing properties (Server Configuration)
Which options are available for selection on this page depends entirely upon the Number of engines selection made in the Engines preferences page.
Content Creation Tab (Server Configuration)
A Tab with data that relates solely to Content Creation. The options are:
- Total Merge engines configured (read-only): This entry shows the total number of Merge engines available. To change this value, you must update the Merge Engines in the Engines preferences page.
- Multi-tasking group: When starting a new Content Creation task, the task will immediately commence if there is a Merge engine available.
Note: These entries aren't applied instantaneously. There is often a lag. That is why you can reserve a specific number of engines for new jobs, in the options below. Those reservations operate in real time. The default of 100 records was chosen purely because it is an easily multiplied number, not because it has been proven to have any significant value. It means that on an average system (i.e., less than 10 Merge engines) any decently sized task is allowed to use all Merge engines.
Output Creation Tab (Server Configuration)
A Tab with data that relates solely to job Output Creation. If only a single Weaver Engine is configured in the Engines preferences page, then this whole tab will be disabled.
- Licensed speed limit (pages per minute): This read-only entry shows the current license speed limitations, in pages per minute. The speed limitations are determined by your Connect license.
speed for small jobs, this will automatically allow more for the large and medium jobs.
- Medium job (engines): Optionally enter the number of Weaver engines to reserve for Medium jobs.
- Total Weaver engines configured: This read-only entry shows the number of Weaver engines still available. This is the Total engine count, minus the number of engines assigned to both Small and Medium jobs. To change this value, you must update the total amount of Weaver Engines in the Engines preferences page.
- Do you need to change speeds? In many cases there will likely be no need to change the target speed.
- The target speed is not a guaranteed actual speed, but a speed limit that the engine is allowed to exceed in order to utilize the licensed speed.
- When changing the target speed, don't be overly precise; you are unlikely to get that exact value anyway. It will likely be a matter of trial and error.
To solve the issue you might need to kill the PID and restart the process. This will be fixed in a later release.
Installer issues
The new 2022.1 installer has some minor issues that will be fixed in a subsequent release. The issues are:
- After installation, the "recent files" list is cleared and the measurement units are reset from 'cm' to 'inch'.
- When updating from earlier 2022.1.x versions the bundled MariaDB connection settings are reset.
CSS inlining colour values now converted to RGB As of PlanetPress Connect 2021.2 when using the CSS inlining mode "Apply CSS properties on elements" for emails, all colour values are now converted to RGB, rather than to HEX. Issues running Connect on Hyper-V 9.0 Some customers have reported difficulties running PlanetPress Connect on Hyper-V version 9.0. In some instances PlanetPress Connect cannot install and in others the PlanetPress Connect Server service sometime stops with a signature error.
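The conversion itself is mechanical. As an illustration (this is not the actual Connect inliner code), a six-digit hex colour maps to rgb() notation like this:

```javascript
// Convert a six-digit hex colour to the rgb() notation used in inlined CSS.
function hexToRgb(hex) {
  const m = /^#?([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})$/i.exec(hex);
  if (!m) throw new Error(`Not a 6-digit hex colour: ${hex}`);
  // Parse each two-digit component as a base-16 number.
  const [r, g, b] = m.slice(1).map(part => parseInt(part, 16));
  return `rgb(${r}, ${g}, ${b})`;
}

hexToRgb('#ff8000'); // → "rgb(255, 128, 0)"
```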
To get around the problem, please close and reopen the plugin. The problem only occurs on the initial opening, and should work fine thereafter.
The license update introduced in OL Connect 2019.1 does not cater for existing AFP input licenses
AFP Input is an add-on option for OL Connect licenses. Unfortunately, the update to the 2019.1 version of the OL Connect license does not cater for existing AFP input licenses.
set @a=null,@c=null,@b=concat("show tables where",ifnull(concat(" `Tables_in_",database(),"` like '",@c,"' and"),'')," (@a:=concat_ws(',',@a,`Tables_in_",database(),"`))");
PREPARE `bd` FROM @b;
EXECUTE `bd`;
DEALLOCATE PREPARE `bd`;
set @a:=concat('optimize table ',@a);
PREPARE `sql` FROM @a;
EXECUTE `sql`;
DEALLOCATE PREPARE `sql`;
set @a=null,@b=null,@c=null;
If using Microsoft SQL Server, run the following command in a query window:
sp_updatestats
Windows 10 Search service impacting Connect
The Window
will not only no longer apply, but can cause scheduling preference conflicts for the Merge and Weaver engines. To fix this, any pre-existing Connect installation that was running a mixture of internal and external Merge and Weaver Engines must first restore their scheduling preferences to the default values. This can be done by clicking on the Restore Defaults button in the Scheduling pages of the Server Preference or the Designer Preference dialogs.
dialog (see "Selecting data for a Business Graphic" on page 628 in the Online Help: https://help.objectiflune.com/en/PlanetPress-connect-user-guide/2022.2).
Known Font issues
The following font(s) are known to have issues in PlanetPress Connect 2022.2:
- Benton Sans CFF font
Minor differences in PCL output introduced in 2018.1
The browser component (Mozilla Gecko) used in the WYSIWYG editor of the Designer was updated for Connect 2018.1. This allows use of new CSS properties, such as flexbox.
The result is that now every document in the job becomes a booklet without any empty pages between the first page and the last page, with some exceptions: booklet impositions that require a multiple of 4 pages (Saddle binding and Perfect binding) will still get empty pages added when needed.
Issues with Microsoft Edge browser
The Microsoft Edge browser fails to display web pages when the Workflow's CORS option (in the HTTP Server Input 2 section) is set to "*".
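The page-count rule for saddle and perfect binding mentioned above is easy to quantify: a document is padded up to the next multiple of 4 pages. A small sketch of that arithmetic:

```javascript
// How many blank pages a 4-page-signature imposition adds per document.
function blankPagesNeeded(pageCount) {
  // If the count is already a multiple of 4, nothing is added.
  return (4 - (pageCount % 4)) % 4;
}

blankPagesNeeded(5); // → 3
blankPagesNeeded(8); // → 0
```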
MySQL Compatibility The minimum supported MySQL version is MySQL 5.6. PostScript print presets The print presets for PostScript were changed from Version 1.1 onwards meaning that some presets created in Version 1.0 or 1.0.1 may no longer work. Any PostScript print preset from Version 1.0 that contains the following will not work in Version 2022.2: *.all[0].* Any preset containing this code will need to be recreated in Version 2022.2.
- The engine processing the job will look on the local file system for the direct file path, leading to the "resource not found" issue mentioned above.
Caution: The Designer itself and Proof Print do not use processes that run as services and they may find local files with non-UNC paths, which can lead to the false impression that the resources are correct.
until the required engines become available. The Server will log when it is waiting for an engine and when it becomes available. Note that there is no way to cancel any commands other than stopping the Server. Print Content and Email Content in PlanetPress Workflow In PlanetPress Workflow’s Print Content and Email Content tasks, the option to Update Records from Metadata will only work for fields whose data type is set to String in the data model.
Important: Stop any active anti-virus software before uninstalling the Connect backend database. Some anti-virus systems are known to block the uninstallation of MariaDB data files, as well as blocking the uninstallation of the MariaDB database application itself. If you wish to uninstall the Connect backend database, it is highly recommended that any anti-virus application be stopped prior to uninstalling PlanetPress Connect, as otherwise the Connect uninstallation might not work correctly.
You can find additional information that complements the user manuals, such as error codes and frequently asked questions about PlanetPress Connect, in the Knowledge base. Connect: a peek under the hood Connect consists of visible and invisible parts. The visible parts are the tools you use to create templates, data mapping configurations, and Print Presets (the Designer/DataMapper), and to create Workflow configurations (the Workflow configuration tool).
There are a number of services related to Workflow. The PlanetPress Messenger service, for example, receives the files sent to Workflow from the Designer and the Workflow configuration tool. The Workflow Service Console lets you start and stop the different services, except the Connect server, and see their log files (see Workflow Service Console). Note that Workflow isn't limited to Connect functionality. It was originally developed as part of the PlanetPress Suite.
The Connect server is one of the components that has to be installed with Connect (see "Installation Wizard" on page 32). In the Workflow Configuration Tool preferences you have to set the OL Connect server settings to enable Workflow to communicate with the server (see Workflow Preferences). The Connect Server Configuration tool lets you change the settings for the Connect server, the engines and the service that cleans up the database and the file store.
Store. The files can be accessed through the REST API, which means web portals could potentially access the files directly without having to go through a Workflow process (see The Connect REST API CookBook).
The engines
DataMapper engines. A DataMapper engine extracts data from a data file. The number of DataMapper engines is configurable (Engines preferences).
Merge engines. A Merge engine merges data with a template using the scripts in the template, in order to create content items.
Printing and emailing from the Designer To print or send email from within the Designer, the PlanetPress Connect service has to be running. The service is started automatically when the Designer starts, but it may not be running if the Connect Server and the Designer are installed on different computers. The PlanetPress Connect service can be found on the Services tab in the Task Manager. For a proof print the Connect server is not used. Proof printing is always done locally, by the Designer.
Note that actions of the Cleanup service are only logged in the Server's log file. (See also: "Clean-up Service preferences" on page 783.) Note: Workflow services write their logs to an entirely different location. See Accessing the Workflow logs. Tip: For more information about Connect's architecture, see: "Connect: a peek under the hood" on page 114. Name The name of a log file consists of the component's name, a time stamp and the Windows process ID.
- .OL-template: A Designer Template file (see "Templates" on page 414). It is linked to a data mapping configuration by default, but not necessarily.
- .OL-datamapper: A data mapping configuration file, which can include sample data (excluding database source files such as MariaDB, MySQL, Oracle, etc). See "Data mapping configurations" on page 199.
- .OL-datamodel: A data model file, which can be imported into or exported from either a data mapping configuration or a template.
"Connect: a peek under the hood" on page 114). Typically, an OL Connect project aims at automating (part of) a company's communication with its customers, suppliers, or other parties. Data is received, in whatever form, then processed, stored, and used in communications which are sent out through one or more output channels (print, email, web, etc), immediately or at a scheduled time.
Automation with Workflow
The process automation server included in PlanetPress Connect is called Workflow.
web page or portal, which means you could potentially access the OL Connect Server, database and File Store directly, without having to go through a Workflow process, or do so with another automation tool such as Node-RED. The OL Connect REST API gives access to a number of areas including its processes, data entity management and File Store operations.
Use the Versioning History option in the Project menu to display the history of all changes. The Versioning History panel contains a complete record of who did what, on which date, and for which reason. You can elect to revert to a version from within the history if something turns out to be flawed in your current project. Both the Designer and the DataMapper use Git integration to maintain the history of a Project.
As you work on your project files, you can commit files in the project at any time. When you commit files in the project, a snapshot of the project is saved as a version, along with your name, your commit message, and the date. The status of each file - whether it was added, modified, etc. - is recorded as well. OL Connect uses Git integration to maintain the history of a versioned project in a local repository. Projects can be created and committed from within OL Connect Designer and DataMapper.
2. Each versioned project needs to be stored in its own folder. Select the folder in which to store the project. The folder name will actually become the name of the project. The software suggests storing all versioned projects in the OL Connect folder that is located in the Documents folder, with each project having its own folder under Documents/OL Connect.
Tip: If the folder doesn't exist yet, right-click in the right part of the dialog, where the existing folders are listed, and select New.
Once created or opened, a versioned project remains open until it is closed via the menu: Project > Close, or until (a file from) another project is opened. As long as a project is open, the software will open the project folder every time you want to open or save a file. Note: Versioned projects have a hidden .git folder. Do not remove this folder. It contains the version history and, in the case of an online versioned project, information about the remote repository.
1. With the project open in Designer, select Project > Versioning history from the menu bar. 2. The history displays in the Versioning History tab. You can select a version from the list to display more information about the changes made in that version. Restoring a version To restore a previous version, select the version, then select the Restore icon. The project is reset to the state it was in when that version was committed.
The files comprising a project display in the Project files panel.
Using tags
The Tag functionality allows any commit to be identified with a user-defined label. Think of it as setting a flag on your versioning history whenever you want to mark a specific stage as more noteworthy than other commits. For instance, you can mark the latest commit with a tag named Version_1, thereby indicating that this is the first version of the entire project that goes into production.
Note: Tag names must be unique. A different capitalization does not make a name unique. Spaces are not allowed. The Versioning history panel displays any tags immediately before the message title: You can add as many tags as you like, and you can delete them (right-click the tag, then click Delete) and re-use them on a different commit (which is the equivalent of moving a tag).
like GitHub, BitBucket and Azure DevOps offer a Git repository hosting service that allows projects to be worked on collaboratively. The first thing you need is an account with a Git repository hosting service. GitHub, BitBucket and Azure DevOps have been tested with OL Connect. Others haven't been tested, but if they are Git-based, they should work the same way.
Obtain a token
Secondly, you have to log on to your Git repository hosting service and go into your profile settings to obtain a token.
2. Make sure that the remote repository contains at least one file. If it is completely empty, add one file to it. This could be an empty readme.txt or .gitignore file. With BitBucket this isn't necessary; a new Bitbucket repository always has a .gitignore file. Note: A remote repository that is completely empty will cause an error when cloning it in OL Connect. This will be fixed in a future release. 3. Once the new repository has been created, you are presented with the URL of the new project.
5. Enter the URL of the remote repository.
6. In the Destination field, enter the name of the folder where the local copy of the versioned project should be stored. The software suggests storing all versioned projects in the OL Connect folder that is located in the user's Documents folder, with each project having its own folder under Documents/OL Connect. The folder name will actually become the name of the project.
Note: The .gitignore file in the project folder tells Git which subfolders and files should not be committed. Tags that you add to versions have to be published to the remote repository via the Project > Publish tags… option. See also: "Using tags" on page 128. Downloading remote changes When you select Project > Check and apply changes OL Connect retrieves changed files from the remote repository and downloads them to your local project.
Half a circle on the left-hand side of the axis indicates the commit was made locally but has yet to be published to the online repository. If someone were to make changes to the online version that the local version doesn't have yet, then that half circle would be on the right-hand side of the axis. See also: "Viewing project history" on page 126.
Sample Projects
A Sample Project generates a small Connect solution that is ready to be tested and deployed.
  - A single PDF for the entire job (in which the invoices are grouped per customer).
  - One PDF per customer.
  - One PDF per invoice.
(See: "Sample Project: Print Transactional Jobs" on page 153.) The Workflow process implements the typical Print plugins (see "Print processes with OL Connect tasks" on page 172).
- Basic web page. This project serves a simple web page, personalized via URL parameters. (See: "Sample Project: Serving a Web Page" on page 164.)
- Submitting data with webforms.
The wizard lets you select the folder in which you want the solution to be installed. Since Sample Projects are versioned (see "Versioned projects" on page 123), the software suggests storing it in the OL Connect folder that is located in your Documents folder. By default, all versioned projects have their own folder under Documents/Connect. In the selected folder, the Sample Project will create two subfolders: Configurations and Workspace.
1. Open the Create Email Content task and select the Email Info tab; then uncheck the Send emails to sender (test mode) option.
2. Replace the test data with real data:
a. Open the Sample Data.xml file. You will find it in the Configurations\Data folder.
b. Replace the email addresses in this file by real email addresses that you have access to.
c. Copy the modified sample data file from the Configurations\Data folder to the Workspace\In folder.
- The To and Subject scripts apply to email fields. (Click Email Fields at the top of the email to expand all email fields.) For information about this kind of script, see: "Email header settings" on page 483.
- Finally, there are two custom scripts:
  - The Personalize Support link script adds the order number to the 'support team' link (which is a mailto link).
  - The Year script puts the current year in the footer.
figurations\Resources\Data folder). The sample data yields 5 records with customer data including a detail table with invoice details. Much of the information in the extracted records isn't used in the email, but was used to create the delivery notes. The email addresses are used in the template (in the To field), but ignored in the Workflow process because it sends emails in test mode (see "The Workflow configuration" on the previous page).
To capture input data from a different source:
1. Replace the Folder Capture Input task by the appropriate Input task. See: Input tasks in Workflow's Online Help.
2. Add a Send to Folder task directly after the new Input task and set its output folder to the Workspace\Debug folder (%{global.pr_prom_workspace}\Debug). This task writes the job file to a file, which can then be used as sample data file when creating a data mapping configuration and debugging the Workflow process.
3.
Finally, in Workflow, adjust the process: double-click the Create Email Content task to open it, and select the new template. This is only necessary when the file name has changed. Workflow configuration The current Workflow configuration is very simple. In reality, a process that generates email output will be part of a larger project, in which, for example, invoices are produced in a separate process, stored in a folder and attached to an email at a later time.
Tip: If you own PlanetPress Connect or PReS Connect, free COTG trial licenses may be available to you; see http://www.captureonthego.com/en/promotion/. Note: Your network setup may make it impossible for the COTG app to communicate with the OL Connect Workflow service. The app needs to be able to communicate with OL Connect Workflow in order to download forms and submit data. Network and firewall settings may block these requests.
Project details The templates The form The COTG Timesheet Form template contains a Web context with one Web section: Section 1 (see "Web pages" on page 498 and "Forms" on page 640). The form has regular Form elements as well as COTG elements (see "Form Elements" on page 644 and "COTG Elements" on page 633). The template was started with the Time Sheet Wizard (see "Capture OnTheGo template wizards" on page 525), which also provides the necessary JavaScript files and style sheets.
The data mapping configurations This project has two data mapping configurations, made with "The DataMapper" on page 198. To open one of them, select File > Open from the menu in the Designer and browse to the Configurations\Resources folder. COTG Timesheet Form The COTG Timesheet Form data mapping configuration is designed to extract data from the sample data file (Sample Data.xml).
- The Set Job Infos and Variables task reads data from the (current record in the) Metadata and puts them into variables.
- The Create Web Content task merges the record with the COTG Timesheet Form template.
- The Send to Folder task saves the form to the Forms folder in the project folder, using the value of one variable as the file name.
- Finally, the Output to Capture OnTheGo task sends information about the form to the COTG Server.
If you intend to expand the project into a solution where Workflow runs on a different machine that also has a different regional setting, indicate the desired encoding in the Designer preferences (see "Sample Project deployment settings" on page 802) before installing the project.
The form
If you want to add inputs to the form and extract the submitted data, here's how to do that.
1.
The report Using different data in the report requires changing the COTG Timesheet Report template (see "Personalizing content" on page 708). Tip: The Designer can have one data mapping configuration and one template open at the same time. Use the tabs at the top of the workspace to switch between the two. Click the synchronize button on the Data Model pane to make sure that the Data Models are the same in both.
Sample Project: Print Promotional Jobs The Print Promotional Jobs Sample Project creates a simple, yet complete OL Connect project that produces promotional print output. The project extracts data from an XML file and uses that data to personalize a promotional letter. The output is a single file containing all the letters, in the format that was selected in the wizard (PDF, PCL or PostScript Level 3).
1. Locate the Workflow configuration in the Configurations\Workflow folder and open it in OL Connect Workflow.
2. Select the pr_prom_generate_output process.
3. Open the Debug ribbon and click Run.
In Debug mode, the first Input task is skipped and the process is executed using a sample data file. This project is pre-configured to use the file: Sample Data.xml. A successful test run results in a subfolder in the Workspace\Out folder, named after the current month and year.
- The year script changes the year in the conditional paragraph to the current year. This script only has to look for @year@ in an element that has the ID 'promo', instead of in the entire letter, which makes it run faster.
- The Dynamic Signature script switches the signature, with a file name based on a data field. (See: "Dynamic images" on page 741.)
- The sender's address is adjusted depending on where the customer lives. The two different sender addresses are saved in snippets.
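The core of the year script described above can be sketched in plain JavaScript (a stand-in for illustration, not the template's actual code): find the @year@ placeholder in the targeted fragment and substitute the current year.

```javascript
// Stand-in for the template's year script: replace the @year@
// placeholder in a fragment of the letter's HTML.
function fillYear(html, now = new Date()) {
  return html.replace(/@year@/g, String(now.getFullYear()));
}

fillYear('<p id="promo">Offer valid through @year@.</p>', new Date(2023, 0, 16));
// → '<p id="promo">Offer valid through 2023.</p>'
```

Scoping the replacement to the element with ID 'promo' (as the real script does) keeps the search small, which is why it runs faster than scanning the entire letter.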
Note that the Output Type, on the Print Options page in the Output Creation dialog, is set to Prompt for file name. This setting is overruled in the Workflow configuration (see below). Workflow configuration Whenever new input data appears in the Workspace\In folder, the letter template is automatically merged with it and then printed. That is, if the Workflow server is running with the Workflow configuration installed by the Sample Project.
1. Create a new data mapping configuration to match your input data. 2. When it's finished, send the new data mapping configuration to Workflow (see "Sending files to Workflow" on page 418). 3. Open the Workflow configuration: Print Promotional Data. 4. Double-click the Folder Capture Input task and change the file mask, or replace the task with the appropriate Input task. See: Input tasks in Workflow's Online Help. 5.
Print output To save the output to another kind of file, you could use one of the other Output Creation Presets. To do that, adjust the process in Workflow: double-click the All In One task to open it, and select the Output Creation Preset of your choice on the Output Creation tab. To change the settings in an Output Creation Preset, open it in the Designer: 1. Select File > Output Creation Presets from the menu. 2. Click the Import button and browse to the Configurations\Resources\Output presets folder to
The selected folder's path is saved to a global variable in the Workflow configuration (see "Workflow configuration" on page 157). That variable is used in the settings of the Capture Folder task. The path is also copied to the Output Creation Presets which are used in the Create Output tasks. Finally, enter the username and password that will allow the software to access the Connect Server.
Styling is done via style sheets (see "Local formatting versus style sheets" on page 671). The style rules are in the context_print_styles.css file. Note how they combine the HTML tag, ID and class of elements to select them. (See also: "Selectors in OL Connect" on page 825.) Scripts Scripts personalize content. Most of the scripts in the Information folder (on the Scripts pane) are made with the Text Script Wizard (see "Using the Text Script Wizard" on page 729).
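A style rule that combines a tag, an ID and a class could look like the following. The selector and property values here are invented for illustration; the actual rules are in context_print_styles.css:

```css
/* Matches only a <p> element that has id="intro" AND class="highlight" */
p#intro.highlight {
  color: #005a9c;
  font-weight: bold;
}
```

Combining the three parts makes the selector very specific, so the rule applies only to the intended element.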
page 198). The data mapping configuration first extracts the common invoice fields, and then the transactional data, in a loop. For information about how to extract transactional data from an XML file, see: "From an XML file" on page 238. Of course, this will only work with the appropriate data files. This data mapping configuration was designed for XML files that are structured like the sample file: Sample Data.xml. It is located in the Configurations\Data folder, but you will also see it when you ope
information in the output file names. They do that by using a variable in the file output mask field. The variable refers to certain metadata attached to items at a certain level (the document or document set, respectively). For more information see "Print output variables" on page 1330. Workflow configuration Whenever new input data appears in the Workspace\In folder, the invoices are automatically merged with the data and printed: to a single file, to one file per customer, and to one file per invoice.
5. Double-click the All In One task and select the new data mapping configuration on the Data Mapper tab. Note: If the input data is JSON, you don't need a data mapping configuration: JSON data can be used in a template as is. See: "Adding JSON sample data" on page 720. However, if you want the data to be saved in the Connect database, let the XML/JSON Conversion plugin convert the JSON to XML and create an XML data mapping configuration to extract the data.
1. Select File > Output Creation Presets from the menu. 2. Click the Import button and browse to the Configurations\Resources\Output presets folder to select the preset. 3. Click Next and adjust the Printer and Output options. To separate the output differently, for example, by city in which the customers live, you need to change the Output Creation Preset as well as the Job Creation Preset. 1. Open the Job Creation Preset in the Designer: select File > Job Creation Presets from the menu. 2.
The Workspace folder is used for debugging or running the solution. It has an In folder that may be used to monitor incoming data and an Out folder to write output files to. The selected folder's path is saved to a global variable in the Workflow configuration (see "The Workflow configuration" on the next page). Finally, enter the username and password that will allow the software to access the Connect Server.
Tip: The saved file can be used to create a data mapping configuration. Project details The web templates Both web pages are designed in the WEB_FORM Web Page template. It contains a Web context with two Web pages: form and thank_you (see "Web pages" on page 498). Styling is done via style sheets (see "Local formatting versus style sheets" on page 671). The style rules are in the context_web_styles.css file. Note that they use the HTML tag (e.g. section), ID (#theID) and/or the class (.
saved in Workflow's Data Repository. Otherwise the main branch renders and returns the web form. Note how the Section setting in the Create Web Content tasks determines which web page is output (double-click the task to open the properties). The Delete task is an Output task that actually does nothing; it doesn't even remove anything. However, this step is useful when running the project step by step in Debug mode.
4. Use the saved file to add the new data to the data mapping configuration (see "Opening a data mapping configuration" on page 204). Send the data mapping configuration to Workflow. 5. Open the thank_you Web section and use the new data fields to personalize the page (see: "Personalizing content" on page 708). Then send the template to Workflow again. Tip: The Designer can have one data mapping configuration and one template open at the same time.
For general information about processes in Workflow see About Processes and Subprocesses, in the Online Help of Workflow. Sample Project: Serving a Web Page The Serving a Web Page Sample Project creates an OL Connect project that responds to a request by serving a web page. This project extracts data from a parameter in the given URL and shows the value on the web page.
1. Send the Workflow configuration to the OL Connect Workflow service; see Saving and sending a Workflow Configuration in Workflow's Online Help. 2. Access the web page by entering the following URL in a browser on the machine that runs Workflow: http://localhost:9090/hello. 3. Follow the instructions on the page to see how values in the URL change the text of the page. Saving input as sample data Testing a process in Debug mode is only possible with a sample data file.
Tip: Hover over the name of a script in the Scripts pane to highlight parts of the template that are affected by the script. l The My name is script looks for an element that has the ID: hero. Inside that element it looks for the text: @name@ and replaces that with either the default name ("John Doe") or the name given in the URL. l The Year script puts the current year in the footer. For more information about writing scripts, see: "Writing your own scripts" on page 808.
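The replacement performed by the My name is script can be sketched in plain JavaScript. This sketch illustrates the logic only; the actual script uses the Designer's selector-based API to search inside the element with the ID hero:

```javascript
// Illustration only: replace @name@ with the name given in the URL,
// or with the default "John Doe" when no name was supplied.
function insertName(html, nameFromUrl) {
  return html.replace(/@name@/g, nameFromUrl || 'John Doe');
}
```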
Customizing the project A project is a great starting point for building an OL Connect solution. This part explains how to make changes to the project. If you intend to expand the project into a solution where Workflow runs on a different machine with a different regional setting, indicate the desired encoding in the Designer preferences (see "Sample Project deployment settings" on page 802) before installing the project.
Once the template is ready, send it to Workflow (see "Sending files to Workflow" on page 418). Finally, in Workflow, adjust the process: double-click the Create Web Content task to open it, and select the new template. This is only necessary when the file name has changed. Send the Workflow configuration to the server (see Saving and sending a Workflow Configuration in Workflow's Online Help).
Note: Workflow was originally developed - and is still used - as part of PlanetPress Suite. Nevertheless, most plugins are just as useful in Connect as in PlanetPress Suite. Where plugins are restricted to one software package or the other, it is indicated in Workflow's Online Help. Common OL Connect Workflow processes In an OL Connect project there are typically a number of Workflow processes that communicate with the Connect Server and/or database through one or more of the OL Connect tasks.
This topic describes the available OL Connect tasks, which are commonly used in Workflow processes in OL Connect projects (see "Workflow processes in OL Connect projects" on page 168). Data extraction The Execute Data Mapping task is likely to appear in a lot of OL Connect Workflow processes. It generates a record set in the OL Connect database by executing a data mapping configuration on a data source. Output creation Merging the records with a template is the job of one of the following tasks.
OL Connect database The following tasks let you act directly upon the OL Connect database: l The Set Properties task adds properties as tags to items/sets in the OL Connect database. l The Retrieve Items task retrieves items (records, content items, etc.) or sets of items from the OL Connect database, by ID or by property. Note: Combined, the Set Properties and Retrieve Items tasks make it possible to batch and commingle Print content items.
If the template doesn't need any data, you can set the Data Source of this task to JSON and enter an empty JSON string: {}. However, if the template should be merged with data, you will need to add one or more tasks to provide the required data. The Create Email Content task must receive either Metadata containing information regarding a valid Record Set, or JSON data. This can be the output of tasks like: l An Execute Data Mapping task which retrieves data from the job file (such as the request XML).
Tip: An easy way to setup a print project in OL Connect, including the print process and the files that it needs, is to use a Sample Project. There are two Sample Projects that create a sample print project. See "Sample Projects" on page 918. There is also a Walkthrough sample that helps you build a Print process for Connect documents in the Workflow Configuration tool by yourself, step-by-step: Creating a Print process in Workflow.
l The necessary Print Content Items have already been created, whether in the same or in another Workflow process. Print Content Items can be retrieved from the OL Connect database using the Retrieve Items task. Subsequently, the Create Job and Create Output tasks can generate print output from them.
Tip: An easy way to start an OL Connect web project including the web process and the files that it needs, is to use a Sample Project. There are two Sample Projects that generate a sample web project. See "Sample Projects" on page 918. Note: With a trial or reseller license, Connect Web output is limited to the localhost. This means that the Connect Server and Workflow must be on the same workstation in order to create Web output.
Of course, numerous other tasks could be added to the process. If you wanted to save the output of the Create Web Content task - the web page - to a file, for example, the task would have to be followed by a Send to Folder task. The Create Web Content task can be found on the OL Connect tab of the Plug-In Bar in Workflow. For a description of all mentioned OL Connect tasks, see "OL Connect tasks" on page 169.
Tip: An easy way to create a COTG solution, including the Workflow configuration and other files that it needs, is to use the COTG Timesheets Sample Project. See "Sample Project: COTG Timesheets" on page 141. Batching and commingling A Connect Print process in its simplest form merges data with a template and creates the print job(s) in one go, as shown in "Print processes with OL Connect tasks" on page 172.
What to retrieve: content sets or content items The Retrieve Items task can retrieve only one type of entity from the Connect database at a time: records, record sets, content items and so forth. Since the Create Job task can only work with print content items or content sets, in a batching print process the choice is narrowed down to these two possibilities. Here are a few things to consider: l If you want the Create Job task to use a Job Creation Preset, you must retrieve content sets.
l Using the Set Properties task. Ideally, the Set Properties task directly follows the Create Print Content task in a Workflow process. The Create Print Content task returns the IDs of the content items as well as the ID of the content set to the process via the Metadata. Using those IDs, the Set Properties task can either set properties on all new Content Items or on the Content Set that was just created. Use two consecutive Set Properties tasks to set properties on both levels.
l Values: Print content items and sets don't contain data fields, but they do have a link to the data record with which they were created, so selecting and sorting them by value is still a possibility. l Properties are key/value pairs that can be set on entities in the Connect database. There are two ways to do that: l Using the Set Properties task. Ideally, the Set Properties task directly follows the Create Print Content task in a Workflow process.
However, any properties that you want to be used for filtering, grouping or sorting must be set on the content items. Batching/Commingling tab of Retrieve Items task The Batching/Commingling tab of the Retrieve Items task allows you to group and sort print content on two levels. You can: l Bundle content items into "documents" (mail pieces) and sort the items within each document. l Put documents in "groups", and define how documents are sorted within a group.
Tips and techniques regarding standard nodes and tasks in such solutions can be found in another topic: "Node-RED: nodes and common techniques" on page 184. For general information about Node-RED, please refer to Node-RED's website: nodered.org. Installation Follow the instructions on Downloading and installing Node.js and npm | npm Docs (npmjs.com). Start the Node-RED editor.
This node lets you set properties on data in the OL Connect database. l cotg publish, cotg delete Using these nodes, Capture OnTheGo forms can be published to or deleted from the Capture OnTheGo repository. In addition, one configuration node is used by all nodes except the cotg nodes: l Add new OL Connect Server config node This node lets you enter a URL and credentials to connect to an OL Connect server.
l For instructions on sending files to the File Store from the Designer, see "Sending files to Connect Server or to another server" on page 419. l OL Connect's file upload node sends files to the File Store from within a flow. When used in a Startup flow the uploaded files will be available to all flows in the project. See: "OL Connect Startup flow" on page 189. Flows in an OL Connect application These are some of the typical flows in a Node-RED OL Connect solution.
For the user documentation of Node-RED, please refer to Node-RED's website: nodered.org. See also: "OL Connect Startup flow" on page 189 Tip: Add a debug node after a node to verify that the contents of a property of the msg object are changed as expected. Nodes used in OL Connect flows In addition to the "OL Connect nodes" on page 182, these standard nodes will often be used in OL Connect applications: l The inject node triggers the flow.
l The fs-ops-dir node (package: node-red-contrib-fs-ops) lists files in a directory. l The watch-directory node captures incoming files. This node is preferable to the standard watch node as the watch node may trigger the flow before a file is completely written, which can become problematic when processing larger input files. Reading a JSON file In order to load a JSON file you can use a read file node. Set the Filename property to the full path.
"olsg-invoice-XML.OL-datamapper" ] } 1. Add a JSON node after the read file node. Make sure the JSON node is connected to the output port of the read file node. 2. Add a debug node and connect the JSON node to the input port of that debug node so that the result can be viewed in the debug message console, once the flow is deployed. After the JSON file is parsed, the msg object will have the following properties: msg.payload.email, msg.payload.someApi, msg.payload.workspace and msg.payload.resources.
5. Add a debug node and connect the change node to the input port of that debug node so that the result can be viewed in the debug message console. Setting and moving msg properties There are various ways to set and move values of properties in the msg object. l Via the change node. Select 'Set' to set a value; select 'Move' in order to move a value from one property to another property. l Via the function node. The value of a property can be set or replaced using JavaScript.
Example: A startup flow needs to upload an OL Connect resource to the OL Connect server, but the resource name in msg.payload lacks the path. The path is stored in msg.resourceFolder. To construct the full path and pass it via msg.fileName, the flow can use a change node. l Add a change node and a file upload node. l Double-click the change node and create a rule to 'Set' msg.fileName to msg.resourceFolder & payload. The latter is a JSONata expression.
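In a function node, the same rule could be written as follows. This is a sketch; the change node with the JSONata expression is the approach described above:

```javascript
// Build the full path by concatenating the folder and the file name,
// and expose it as msg.fileName for the file upload node.
function buildFileName(msg) {
  msg.fileName = msg.resourceFolder + msg.payload;
  return msg;
}
```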
Deploying OL Connect resources OL Connect's file upload node uploads a single file to the File Store. This node requires the path to the resource: either the full path, or a path relative to the Node-RED installation or to the current Node-RED project. If not configured in the node's properties, the node expects this information in msg.filename. Make sure to check the Mark as permanent option.
l An OL Connect data mapping task which retrieves data from the job file (such as the request XML). In addition to creating records in the Connect database, this task can output the (IDs of the) records. l An OL Connect data get node which retrieves an existing record set from the Connect database. l A standard create file node that creates a JSON file. l Etc. Which node or nodes fit best, depends on where the data come from. Tip: A number of nodes accept runtime parameters.
The structure of a print flow In its simplest form, a print flow may consist of only two nodes: one node that captures a data file, such as a watch, watch-directory or read file node, and the OL Connect all in one node. The all in one node combines the following four OL Connect nodes: l The data mapping node that extracts data from a file and stores a record set in the database, or the data get node that retrieves previously extracted data from the database.
l The record set, created by the Execute Data Mapping task, is also needed to create another kind of output in the same flow. l The input is JSON data which can be used directly and doesn't need to be stored in the database. In this case there is no need to use the data mapping node or data get node. l The Print Content Items have already been created, either in the same flow or in another flow. Print Content Items can be retrieved from the OL Connect database using the data get node.
l A data mapping configuration, if the documents should contain variable data that is extracted from some data source. (See "Creating a new data mapping configuration" on page 200.) l A Job Creation Preset. (See "Job Creation Presets Wizard" on page 1069.) A Job Creation Preset defines where the output goes and makes it possible to filter and sort records, group documents, and add metadata. l An Output Creation Preset. (See "Output Creation Presets Wizard" on page 1084.
The preview pdf node accepts runtime parameters. These can be passed via the parameters property of the msg object which is passed between nodes. For example, a runtime parameter named brandId would be passed via msg.parameters.brandId. Creating and saving multiple PDF files A PDF preview is usually needed in online solutions, but the preview PDF node can also be used in situations where a PDF file can be produced without an Output Creation Preset.
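Setting such a runtime parameter in a preceding function node could look like this. This is a sketch: brandId is the example parameter named above, and the value shown is invented:

```javascript
// Put a runtime parameter where the preview pdf node expects it:
// on the parameters property of the msg object.
function setBrandParameter(msg) {
  msg.parameters = msg.parameters || {};
  msg.parameters.brandId = 'acme'; // hypothetical value
  return msg;
}
```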
The html content node creates a set of web pages, using the Web context in a Connect template, and stores them in the File Store or serves them. If the template doesn't need any data, set msg.payload to an empty JSON string: {}. If the template should be merged with data, the data can be the output of another node. Nodes that output data that can be used by the html content node are: l An HTTP in node. It will pass any query parameters via the payload.
Server or to another server" on page 419) or in a startup flow (see "OL Connect Startup flow" on page 189). Capture OnTheGo flows in Node-RED Capture OnTheGo is an OL Connect solution that lets you create and send digital forms to the COTG App (iOS, Android or Windows 10) and process any data that is returned by the app after a form has been filled out. A Capture OnTheGo solution typically consists of three basic flows. l The flow that makes a document available to COTG App users.
Tip: Store the form ID in a text file or database along with the order ID and/or GUID. This makes it possible to find and delete the form (using the cotg delete node) when form data is submitted. Serving the form As soon as a COTG app user taps a button to download a new form, the second flow springs into action to serve the requested COTG form. l The http in node receives the request from the COTG app.
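A minimal sketch of that bookkeeping, using an in-memory map for brevity (the tip above suggests a text file or database; the key and function names here are invented):

```javascript
// Remember each published form's ID under its order ID, so the form can
// be looked up and deleted (via the cotg delete node) when its data
// is submitted.
const publishedForms = new Map();

function rememberForm(orderId, formId) {
  publishedForms.set(orderId, formId);
}

function formIdFor(orderId) {
  return publishedForms.get(orderId);
}
```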
mapping workflow, consisting of multiple steps (extractions, loops, conditions and more) (see "Data mapping workflow" on page 221 and "Extracting data" on page 229). When this process is complete, the result is a Data Model. This model contains the necessary information to add variable data to OL Connect Designer templates. (See "The Data Model" on page 260 for more information.)
Data mapping configurations are used in the Designer to help add variable data fields and personalization scripts to a template. In fact, a Data Model alone would suffice (see "Importing/exporting a Data Model" on page 261). The advantage of a data mapping configuration is that it contains the extracted records to merge with the template, which lets you preview a template with data instead of field names.
l Comma Separated Values or Excel (CSV/XLSX/XLS), l Microsoft Access l PDF, PS, PCL l Text l XML l JSON 3. Click the Browse button and open the file you want to work with (for a database, you may have to enter a password). 4. Click Finish. l From the File menu 1. Click the File menu and select New. 2.
XML naming rules and best naming practices, see: XML elements on W3Schools. l Excel files saved in "Strict Open XML" format are not supported yet. l PCL and PostScript (PS) files are automatically converted to PDF format by the Connect Server. To allow for this, the default Connect Server and (if it is secured) an authenticated user must be configured via the Preferences (see "Connect Servers preferences" on page 804). Note that when used in a production environment (e.g.
There are two ways to open a data file with a wizard: from the Welcome screen or from the File menu. l From the Welcome screen 1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right or select the Help menu and then Welcome. 2. Click New DataMapper Configuration. 3. From the Using a wizard pane, select the appropriate file type. l From the File menu 1. In the menu, click File > New. 2. Click the Data mapping Wizards drop-down and select the appropriate file type.
l Starting Value: The starting number for the counter. Defaults to 1. l Increment Value: The value by which to increment the counter for each record. For example, an increment value of 3 and starting value of 1 would give the counter values of 1, 4, 7, 10, [...] l Number of records: The total number of counter records to generate. This is not the end value but rather the total number of actual records to generate.
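The relation between these settings can be sketched as follows (plain JavaScript, illustrating the arithmetic only):

```javascript
// Generate the counter values: start at startingValue and add
// incrementValue for each subsequent record.
function counterValues(startingValue, incrementValue, numberOfRecords) {
  const values = [];
  for (let i = 0; i < numberOfRecords; i++) {
    values.push(startingValue + i * incrementValue);
  }
  return values;
}
// counterValues(1, 3, 4) returns [1, 4, 7, 10]
```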
of the software, so that you may restore a data mapping configuration that was accidentally opened and saved in a newer version, or share a data mapping configuration with users of a previous version of Connect. Note that it may not always be possible to down-save a data mapping configuration to an older version. For instance, a JSON-based data mapping configuration cannot be saved to a version earlier than 2021.1 because the JSON data type did not exist prior to that version.
l From the File menu 1. In the menu, click File > New. 2. Click the Data mapping Wizards drop-down and select From CSV/XLSX/XLS File. 3. Click Next. 4. Click the Browse button and open the file you want to work with. 5. Click Next. Note: Excel files saved in "Strict Open XML" format are not supported yet. After selecting the file, take a look at the preview to ensure that the file is the right one and the encoding correctly reads the data. Click Next.
Tip: The Sort on option, combined with the Stop data mapping option of the "Action step" on page 257, makes it possible to process only a group of items without having to examine all records. (See also: "Action step properties" on page 333.) Verify that the data are read properly, then click Finish. All data fields are automatically extracted in one extraction step. Using the wizard for databases The DataMapper wizard for database files helps you create a data mapping configuration for a database file.
Note: After creating the initial data mapping configuration you may use a custom SQL query via the Input Data Settings; see "Settings for a database" on page 224. Tip: The Sort on option, combined with the Stop data mapping option of the "Action step" on page 257, makes it possible to process only a group of items without having to examine all records. (See also: "Action step properties" on page 333.) MariaDB, MySQL, SQL Server or Oracle l Server: Enter the server address for the database.
ODBC Data Source l ODBC Source: Use the drop-down to select an ODBC System Data Source. This must be a data source that has been configured in the 64-bit ODBC Data Source Administrator, as PlanetPress Connect is a 64-bit application and thus cannot access 32-bit data sources. l This ODBC source is MSSQL: Check this option if the ODBC source is MSSQL (SQL Server).
l Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual: even if the selected column contains numbers, it will be sorted as text. Note: To instruct the SQL Server driver not to use encryption, the ";encrypt=false" parameter needs to be present in the connection string. For more information see "Known Issues" on page 102.
arrays, not key-value pairs - can be seen as individual source records. If the root is selected, there will be only one source record. Whether source records are output as individual records depends on the trigger. Either: l Select On element to create a new record in the output for each object or array in the parent element. l Select On change to create a new record each time the value in a certain key-value pair changes. Only key-value pairs that exist at the root of a child element can be evaluated.
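As a hypothetical illustration (the file structure and key names are invented): with the orders array selected as the parent element and the On element trigger, each object in the array below becomes a separate source record (three records). With On change evaluated on the city key, a new record starts each time the value of city changes (two records).

```json
{
  "orders": [
    { "city": "Lachine", "total": 10 },
    { "city": "Lachine", "total": 25 },
    { "city": "Montreal", "total": 7 }
  ]
}
```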
l From the Welcome screen 1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right or select the Help menu and then Welcome. 2. Click New DataMapper Configuration. 3. From the Using a wizard pane, select PDF/VT or AFP. 4. Click the Browse button and open the PDF/VT or AFP file you want to work with. Click Next. l From the File menu 1. In the menu, click File > New. 2. Click the Data mapping Wizards drop-down and select From PDF/VT or AFP. 3. Click Next. 4.
a page has the same orientation as the page, not when text has been rotated after the page was rotated. The page number and rotation of a page are shown in the status bar at the bottom, next to the region selection information. Using the wizard for XML files The DataMapper wizard for XML files helps you create a data mapping configuration for an XML file. The wizard lets you select the type of node and the trigger that delimit the start of a new record.
4. Click the Browse button and select the file you want to work with. For a JSON file, change the file type to JSON first. Click Next. After selecting a file, you have to set the split level and trigger type: l XML Elements: This is a list of node elements that have children nodes. Select the level in the data that will define the source record.
Input license. Calling LincPDFC via an external program To set advanced conversion parameters, the PCL input file has to pass through an additional Workflow process before entering the process in which the data mapping takes place. The extra Workflow process should call the LincPDF command line module LincPDFC via the External Program plugin with the desired advanced conversion parameters. To create such a Workflow process: 1. Add a local variable to your process and name it lincPDFOptions. 2.
Key Name, Argument, Value, Description: l EdgeToEdgePrinting (argument -z, value 0 or 1): Set the flag of "Allows edge-to-edge printing" (default: FALSE). l ConstantAlpha (argument -n:num, value 0-1): Specify the constant alpha value defined in PDF 1.4 (default: 0.5). l UsePolygons (argument *e, value 0 or 1): When enabled, Lincoln's PCL interpreter will output vector graphics in a simpler mode (default: FALSE). l OutputRgb (argument *f, value 0 or 1): When set (1), LincPDF will convert CMYK to the RGB colorspace before writing (default: FALSE).
however, when activated, detailed LincPDFC conversion information will be added to the Workflow log. 7. Save the output of LincPDFC using the -o folder specifier in the parameter (%{workingDir}\Temp, in this case). In another Workflow process, import the created PDF with the Folder Capture input plugin, specifying the output folder of the previous process (%{workingDir}\Temp in the example) as input folder, and %O.pdf as the file mask.
Once you have the PDF as job file, you may pass it to the Execute Data Mapping plugin for further processing. LincPDFC Options To view the available options that can be set in LincPDF, run the executable (LincPDFC.exe) in a command prompt window. It will display a help message with available options. For example, open a Windows Command Prompt, change to the installation directory (cd C:\Program Files\Objectif Lune\OL Connect\LPDFConv\Bin) and run: C:\Program Files\Objectif Lune\OL Connect\LPDFConv\Bin\LincPDFC.exe
Copyright (c) 2001-2007 Lincoln & Co., a division of Biscom, Inc. Usage: LincPDF -iInput.PCL [-oOutput.PDF] [options] [options] PCL/PDF Options: -a : Write PDF streams in ASCII format -b : Use form feed for bad ESC command -c : see "PCL Font Options" for details -d : see "Document Information" for details -e : Output non-editable PDF file -f : Do not embed PCL fonts -g : Ignore RG macro ID -j : Use old font substitution -k:num : Select blend mode in PDF 1.
-dKeywords:$s : PDF Keywords -dVersion:num : PDF Version (multiply by 10) Page Setup: -pWidth:num : Page Width (required only if Page Type is Custom) -pHeight:num : Page Height (required only if Page Type is Custom) -pXOff:num : Page X Offset (see also Measurement) -pYOff:num : Page Y Offset (see also Measurement) -pMeasure:num : Page Measurement (0-inch, 1-mm, 2-point) -pOrient:num : Page Orientation (0-Portrait, 1-Landscape) -pType:num : Page Type (0-Letter, 1-A4, 2-B5, 3-Legal, 4-Exec., 5~8-Env.
-yCopyContents : Enable Copying Text and Graphics from Document -yUse128Bit : Use 128-bit Encryption -yAssembleDocument : Enable Assemble Document (128-bit encryption only) -yExtractText : Enable Text and Graphics Extraction (128-bit encryption only) -yLowResolutionPrint : Enable Lower-level Resolution Printing (128-bit encryption only) Tips ---------------------------------------------------. using quotation mark for complicated string, for example, -dKeywords:"key1, key2" .
Creating a data mapping workflow A data mapping workflow always starts with the Preprocessor step and ends with the Postprocessor step. These steps allow the application to perform actions on the data file itself before it is handed over to the data mapping workflow ("Preprocessor step" on page 249) and after the Data Mapping workflow has completed ("Postprocessor step" on page 258).
Rearranging steps To rearrange steps, simply drag & drop them somewhere else on the colored line in the Steps pane. Alternatively, you may right-click on a step and select Cut Step, or use the Cut button in the Toolbar. If the step is a Repeat or Condition step, all steps inside it will also be placed on the clipboard. To place the step at its destination, right-click any step and select Paste Step, or use the Paste button in the Toolbar. The pasted steps will be positioned below the selected step.
l Data format settings define how dates, times and numbers are formatted by default in the data source. Input data settings (Delimiters) The Input Data settings (on the Settings pane at the left) specify how the input data must be interpreted. These settings are different for each data type. For a CSV file, for example, it is important to specify the delimiter that separates data fields.
that table. If the database supports stored procedures, including inner joins, grouping and sorting, you can use custom SQL to make a selection from the database, using whatever language the database supports. The query may contain variables and properties, so that the selection will be dynamically adjusted each time the data mapping configuration is actually used in a Workflow process; see "Using variables and properties in an SQL query" on page 320.
individual source records. Any elements at the same level as the parent element or at a higher level are repeated in each source record. See also: "JSON File Input Data settings" on page 313. Record boundaries Boundaries are the division between records: they define where one record ends and the next record begins. Using boundaries, you can organize the data the way you want. You could use the exact same data source with different boundaries in order to extract different information.
Data format settings defined for a data source apply to any new extraction made in the current data mapping configuration. These settings are made on the Settings pane; see "Settings pane" on page 307. Settings for a field that contains extracted data are made via the properties of the Extract step that the field belongs to (see "Setting the data type" on page 268). Any format settings specified per field are always used, regardless of the user preferences or data source settings.
Defining custom properties and runtime parameters Defining properties You can define custom properties under Properties in the "Preprocessor step" on page 249 (see "Preprocessor step properties" on page 323). To add a property: 1. Select the Preprocessor step on the Steps pane. 2. On the Step properties pane, under Properties, click the Add button . See "Properties" on page 325 for an explanation of the settings for properties.
Editing a runtime parameter To modify a runtime parameter, click its name or value in the Parameters pane and enter the new name or value. To remove a runtime parameter, select it and click the Remove button ( ). Accessing properties and runtime parameters There are different ways to access properties and runtime parameters in a data mapping workflow. l Property-based fields. A property-based field is filled with the value of a property. See "Property-based field" on page 266. l Step settings.
Before you start Data source settings Data source settings must be made beforehand, not only to make sure that the data is properly read but also to have it organized in a record structure that meets the purpose of the data mapping configuration (see "Data source settings" on page 223). It is important to set the boundaries before starting to extract data, especially transactional data (see "Extracting transactional data" on page 235).
stays the same. Drop data on empty fields or on the record itself to add new fields. Special conditions The Extract step may need to be combined with another type of step to get the desired result. l Data can be extracted conditionally with a Condition step or Multiple Conditions step; see "Condition step" on page 253 or "Multiple Conditions step" on page 256. l Normally the same extraction workflow is automatically applied to all records in the source data.
Adding fields to an existing Extract step For optimization purposes, it is better to add fields to an existing Extract step than to have a succession of extraction steps. To add fields to an existing Extract step: 1. In the Data Viewer pane, select the data that needs to be extracted. (See "Selecting data" on the next page.) 2. Select an Extract step on the Steps pane. 3. Right-click on the data and select Add Extract Field, or drag & drop the data on the Data Model.
l Set the data type, data format and default value of each field. l Modify the extracted data through a script. l Delete a field. All this can be done via the Step properties pane (see "Extract step properties" on page 326), because the fields in the Data Model are seen as properties of an Extract step. See also: "Fields" on page 264. Testing the extraction workflow The extraction workflow is always performed on the current record in the data source.
multiple lines. To resize a data selection, click and hold one of the resize handles on the borders or corners, move them to the new size and release the mouse button. To move the data selection, click and hold anywhere on the data selection, move it to its new desired location and release the mouse button. Note: In a Text or PDF file, when you move the selection rectangle directly after extracting data, you can use it to select data for the next extraction.
In this tree view you can select elements just like files in the Windows Explorer. Keep the Ctrl key pressed down while clicking on key-value pairs or brackets to select multiple elements, or keep the Shift key pressed down to select consecutive elements. You can select multiple key-value pairs, arrays and objects even if those are in different elements. To get a better overview you can collapse any JSON level.
(For more information about detail tables, multiple detail tables and nested detail tables, see "Detail tables" on page 298.) Detail tables are created when an Extract step is added within a Repeat step. The Repeat step goes through a number of lines or nodes. An Extract step within that loop extracts data from each line or node. How exactly this loop is constructed depends on the type of source data.
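The structure that results from a Repeat step with an Extract step inside can be pictured as an array of row objects inside the record. This is only a mental model — the record is built by the DataMapper itself, and the field and table names below are hypothetical:

```javascript
// Sketch of a record with one detail table ("detail"). Each row of the
// detail table is one extracted line item; all names are hypothetical.
const record = {
  fields: { InvoiceNumber: "INV-001" },
  detail: [
    { Description: "Blue widget", Quantity: 2, Price: 9.99 },
    { Description: "Red widget", Quantity: 1, Price: 4.99 },
  ],
};
```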
From a CSV file or a Database The transactional data (also called line items) appear in multiple rows. 1. Select a field in the column that contains the first line item information. 2. Right-click this data selection and select Add Repeat. This adds a Repeat step with a GoTo step inside it. The GoTo step moves the cursor down to the next line, until there are no more lines (see "Goto step" on page 252). 3. (Optional.
The extraction step is placed inside the Repeat step, just before the GoTo step. From an XML file The transactional data appears in repeated elements.
1. Right-click one of the repeating elements and select Add Repeat. This adds a Repeat step to the data mapping configuration. By default, the Repeat type of this step is set to For Each, so that each of the repeated elements is iterated over. You can see this on the Step properties pane, as long as the Repeat step is selected on the Steps pane. In the Collection field, you will find the corresponding node path.
default name that you can change later on (see "Renaming a detail table" on page 298). The new Extract step will be located in the Repeat step. From a JSON file The transactional data appears in repeated elements. 1. Move the cursor to the parent element of the repeating elements. By default the cursor is located at the top of the page, but previous steps may have moved it. Note that an Extract step does not move the cursor. a. Select the parent element of the repeating elements. b.
Tip: You may edit the JsonPath in the JsonPath Collection field to include or exclude elements from the loop. For an overview of the JsonPath syntax, see https://github.com/json-path/jsonpath. 3. (Optional.) Add an empty detail table via the Data Model pane: right-click the Data Model and select Add a table. Give the detail table a name. 4. Select the Repeat step on the Steps pane. 5. Extract the data: inside the first of the repeating elements, select the data that you want to extract.
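As an illustration of the tip above (data and paths hypothetical): the JsonPath $.items[*] would iterate over every element of an array, while a filter such as $.items[?(@.qty > 1)] excludes elements from the loop. The equivalent selection in plain JavaScript looks like this:

```javascript
// Hypothetical data. The filter mirrors the JsonPath expression
// $.items[?(@.qty > 1)]: only elements whose qty exceeds 1 are kept.
const items = [{ qty: 1 }, { qty: 3 }, { qty: 2 }];
const selected = items.filter((item) => item.qty > 1);
```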
1. Add a Goto step if necessary. Make sure that the cursor is located where the extraction loop must start. By default the cursor is located at the top of the page, but previous steps may have moved it. Note that an Extract step does not move the cursor. a. Select an element in the first line item. b. Right-click on the selection and select Add Goto. The Goto step will move the cursor to the start of the first line item. 2. Add a Repeat step where the loop must stop. a.
a. Select the start of the Repeat step on the Steps pane. b. Look for something in the data that distinguishes lines with a line item from other lines (or the other way around). Often, a "." or "," appears in prices or totals at the same place in every line item, but not on other lines. c. Select that data, right-click on it and select Add Conditional. Selecting data - especially something as small as a dot - can be difficult in a PDF file.
4. (Optional.) Add an empty detail table to the Data Model: right-click the Data Model and select Add a table. Give the detail table a name. 5. Extract the data (see "Adding an extraction" on page 230). When you drag & drop data on the name of a detail table in the Data Model pane, the data are added to that detail table.
Extract the sum or totals. If the record contains sums or totals at the end of the line items list, the end of the Repeat step is a good place to add an Extract step for these data. After the loop step, the cursor position is at the end of the line items. Alternatively, right-click on the end of the Repeat step in the Steps panel and select Add a Step > Add Extraction.
Finding a condition Where it isn't possible to use a setting to extract data of variable length, the key is to find one or more differences between lines that make clear how big the region is from where data needs to be extracted. Whilst, for example, a product description may extend over two lines, other data - such as the unit price - will never be longer than one line. Either the area above or the one below the unit price will be empty when the product description covers two lines.
Using a script A script could also provide a solution when data needs to be extracted from a variable region. This requires using a JavaScript-based field.
1. Add a field to an Extract step, preferably by extracting data from one of the possible regions; see "Extracting data" on page 229. To add a field without extracting data, see "Expression-based field" on page 265. 2. On the Step properties pane, under Field Definition, select the field and change its Mode to JavaScript. If the field was created with its Mode set to Location, you will see that the script already contains one line of code to extract data from the original location. 3. Expand the script.
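As a sketch of what the expanded script could do, assume a description that may wrap onto a second line: extract the first line, and append the line below only when it isn't empty. The data.extract() call and its signature are stand-ins here — outside the DataMapper the host data object doesn't exist, so it is stubbed for illustration:

```javascript
// Stub simulating the DataMapper's data object for a text file; the
// real object is provided by the DataMapper at runtime.
const lines = [
  "Blue widget, extra large",  // first description line
  "with reinforced corners",   // optional overflow line (may be empty)
];
const data = {
  // Hypothetical signature: extract(left, right, verticalOffset)
  extract: (left, right, verticalOffset) =>
    (lines[verticalOffset] || "").substring(left, right).trim(),
};

// Field logic: append the second line only when the description wrapped.
let description = data.extract(0, 40, 0);
const overflow = data.extract(0, 40, 1);
if (overflow !== "") {
  description += " " + overflow;
}
```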
The Preprocessor and Postprocessor steps are special in that the former can be used to modify the incoming data prior to executing the rest of the extraction workflow while the latter can be used to further process the resulting record set after the entire extraction workflow has been executed.
Note that preprocessors are not executed automatically while designing the data mapping workflow; you must therefore execute them manually. The reason for this is that preprocessors can potentially be quite lengthy operations that would hinder the automatic refresh of the display whenever anything is changed in the data mapping workflow. To add a preprocessor: 1. Select the Preprocessor step on the Steps pane. 2. On the Step properties pane, under Preprocessor, click the Add button . 3.
Fields always belong to an Extract step, but they don't necessarily all contain extracted data. To learn how to add fields without extracted data to an Extract step, see "Fields" on page 264. Adding an Extract step To add an Extract step, first select the step on the Steps pane after which to insert the Extract step. Then: l In the Data Viewer, select some data, right-click that data and choose Add Extraction, or drag & drop the data in the Data Model.
The same applies to JSON files. When you select an element in a JSON file and add a Repeat step on it, the Repeat step will automatically loop over all elements on the same level in the JSON file. Tip: To break out of a loop and immediately jump to the next step following the current loop, use an Action step and set its action to Break out of repeat loop. Adding a Repeat step To add a Repeat step: 1. On the Steps pane, select the step after which to insert the Repeat step. 2.
The Goto step isn't used in XML extraction workflows in most cases. The DataMapper moves through the file using XPath, a path-like syntax to identify and navigate nodes in an XML document. The DataMapper moves through JSON files using JsonPath, a path-like syntax to identify and navigate elements in a JSON document. For an overview of the JsonPath syntax, see https://github.com/json-path/jsonpath.
Adding a Condition step To add a Condition step: l On the Steps pane, select the step after which to insert the Condition step; then, in the Data Viewer, select some data, right-click that data and choose Add Conditional. In the Step properties pane, you will see that the newly added Condition step checks if the selected position (the left operand) contains the selected value (the right operand). Both operands and the operator can be adjusted.
extracted by the current selection. Repeat this until you are satisfied that the proper data is being extracted. Click on the Use selection button in the Left Operand section to fill out the coordinates. The point of origin of each character is at the bottom left of each of them and extends up and to the right. l Alternatively, right-click the Steps pane and select Add a Step > Add Conditional. Enter the settings for the condition on the Step properties pane.
Renaming a rule To rename a rule, double-click its name in the Rule Tree and type a new name. Multiple Conditions step The Multiple Conditions step is useful to avoid the use of nested Condition steps (Condition steps inside other Condition steps). In a Multiple Conditions step, conditions or rather Cases are positioned side by side. Each Case condition can lead to an extraction. Cases are executed from left to right.
Adding a Multiple Conditions step To add a Multiple Conditions step, right-click the Steps pane and select Add a Step > Add Multiple Conditions. To add a case, click the Add case button to the right of the Condition field in the Step properties pane. Configuring a Multiple Conditions step For information about how to configure the Multiple Conditions step, see "Multiple Conditions step properties" on page 349.
l Stop the processing of the current record and move on to the next one. Normally an extraction workflow is automatically executed on all records in the source data. By stopping the processing of the current record, you can filter out some records or skip records partially.
Configuring the Postprocessor step For an explanation of the settings for post-processors, see "Postprocessor step properties" on page 356. Testing postprocessors Post-processors are not executed automatically while designing the data mapping workflow. The reason for this is that post-processors can potentially be quite lengthy operations that would hinder the automatic refresh of the display whenever anything is changed in the data mapping workflow.
The Data Model The Data Model is the structure of records into which extracted data are stored. It contains the names and types of the fields in a record and in its detail tables. A detail table is a field that contains a record set instead of a single value. The Data Model is shown in the Data Model pane, filled with data from the current record. The Data Model is not related to the type of data source: whether it is XML, JSON, CSV, PDF, Text, or a database does not matter.
About records A record is a block of information that may be merged with a template to generate a single document (invoice, email, web page...) for a single recipient. It is part of the record set that is generated by a data mapping configuration. In each record, data from the data source can be combined with data coming from other sources. Records can be duplicated by setting the number of copies in a script (see "record" on page 393). Duplicates are not shown in the Data Model.
DataMapper immediately discards non-required fields that are not referenced by any Extract step. Editing the Data Model The Data Model is generally constructed by extracting data; see "Extracting data" on page 229. Empty fields and data tables can be added via the contextual menu; see "Adding empty fields via the Data Model pane" on page 266. Editing fields You can modify the fields in the Data Model via the contextual menu that opens when you right-click on something in the Data Model pane.
Grouping fields To group one or more fields, select the field(s), right-click and select Add group. It is also possible to create groups within groups; this is done in the same way. To move a field into an existing group you can simply drag and drop it into the group or onto the name of that group. To delete a group, right-click it and select Ungroup. The fields will be moved up one level in the structure. To move a field out of an existing group you can also simply drag and drop it out of that group.
Workflow process Data can be added to the Data Model in a PlanetPress Connect Workflow process as follows: 1. Use an Execute Data Mapping task or Retrieve Items task to create a record set. On the General tab select Outputs records in Metadata. 2. Add a value to a field in the Metadata using the Metadata Fields Management task. Data added to the _vger_fld_ExtraData field on the Document level will appear in the record's ExtraData field, once the records are updated from the Metadata (in the next step).
Alternatively, you can add fields and detail tables directly in the Data Model pane (see "Adding empty fields via the Data Model pane" on the facing page). After adding a field or detail table this way, you can drag & drop data into it to convert it into a location-based field. Without data it is not accessible via the Step properties pane. Expression-based field Expression-based fields are filled with the result of a (JavaScript) expression: the script provides a value.
Tip: The default extraction method for fields in a CSV or XLS(X) file is data.extract(columnName, rowOffset). Changing the expression to use the data.extractByIndex(index, rowOffset) method will allow the data mapping configuration to extract data from files that have the same structure, but different column names. Property-based field A property-based field is filled with the value of a property (see "Properties and runtime parameters" on page 227).
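The CSV extraction tip above can be sketched with a small stub that mimics the two method names it mentions (the stub and its 1-based column index are assumptions for illustration; the real data object is supplied by the DataMapper):

```javascript
// Stub mimicking data.extract(columnName, rowOffset) and
// data.extractByIndex(index, rowOffset) for a one-row CSV sample.
const rows = [{ Name: "Alice", Country: "CA" }];
const columns = ["Name", "Country"];
const data = {
  extract: (columnName, rowOffset) => rows[rowOffset][columnName],
  extractByIndex: (index, rowOffset) => rows[rowOffset][columns[index - 1]],
};

// Both calls return the same value; extractByIndex keeps working when a
// file has the same structure but a differently named header column.
const byName = data.extract("Country", 0);
const byIndex = data.extractByIndex(2, 0);
```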
Editing fields The list of fields that are included in the extraction, the order in which fields are extracted and the data format of each field, are all part of the Extract step's properties. These can be edited via the Step properties pane (see "Extract step properties" on page 326). Tip: To change the name of a field quickly, right-click it in the Data Model and select Rename.
Setting the data type Fields store extracted data as a String by default. The data type of a field can be changed via the properties of the Extract step that the field belongs to. 1. Select the Extract step that contains the field. You can do this by clicking on the field in the Data Model, or on the step in the Steps pane that contains the field. 2. On the Step properties pane, under Field Definition, set the Type to the desired data type. See "Data types" on page 275 for a list of available types.
JavaScript Expression Alternatively you can change a field's Mode from Location to JavaScript: 1. Select the field in the Data Model. 2. On the Step properties pane, under Field Definition, change its Mode to JavaScript. You will see that the JavaScript Expression field is not empty; it contains the code that was used to extract data from the location. This code can be used or deleted. Note: The last value attribution to a variable is the one used as the result of the expression.
1. On the Data Model pane, click one of the fields in the detail table. 2. On the Step Properties pane, under Extraction Definition, in the Data Table field, you can find the name of the detail table: record.detail by default. Change the detail part in that name into something else. Note: A detail table’s name should always begin with ‘record.’. 3. Click somewhere else on the Step Properties pane to update the Data Model. You will see the new name appear.
and give the detail table a name) and drop the data on the name of that detail table. Otherwise the extracted fields will all be added to one new detail table with a default name at first, and you will have to rename the detail table created in each Extract step to pull the detail tables apart (see "Renaming a detail table" on page 269).
Nested detail tables Nested detail tables are used to extract transactional data that are relative to other data. They are created just like multiple detail tables, with two differences: l For the tables to be actually nested, the Repeat step and its Extract step that extract the nested transactional data must be located within the Repeat step that extracts data to a detail table. l In their name, the dot notation (record.services) must contain one extra level (record.services.charges).
"details" such as movie rentals or long distance calls.
The services can be extracted to a detail table called record.services. The "charges" and "details" can be extracted to two nested detail tables.
The nested tables can be called record.services.charges and record.services.details. Now one "charges" table and one "details" table are created for each row in the "services" table. Data types By default the data type of extracted data is a String, but each field in the Data Model can be set to contain another data type. To do this: 1. In the Data Model, select a field. 2. On the Step properties pane, under Field Definition choose a data type from the Type dropdown.
Note: Data format settings tell the DataMapper how to read and parse data from the data source. They don't determine how these data are formatted in the Data Model or in a template. In the Data Model, data are converted to the native data type. Dates, for example, are converted to a DateTime object. How they are displayed in the Data Model depends on the preferences (see "Default Format" on page 786). The following data types are available in PlanetPress Connect.
Note: The value must be all in lowercase: true, false. Any variation in case (True, TRUE) will not work. Boolean expressions Boolean values can be set using an expression of which the result is true or false. This is done using operators and comparisons. Example: record.fields["isCanadian"] = (extract("Country") == "CA"); For more information on JavaScript comparison and logical operators, please see w3schools.com or developer.mozilla.org.
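The Boolean expression from the example can be run outside the DataMapper by stubbing the host objects (record and extract are normally provided by the DataMapper; the stubs below are for illustration only):

```javascript
// Stubs standing in for the DataMapper's host objects.
const record = { fields: {} };
const extract = (fieldName) => ({ Country: "CA" })[fieldName];

// Boolean expression from the text: true when Country equals "CA".
record.fields["isCanadian"] = (extract("Country") == "CA");
```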
Building Currency values Currency values can be the result of direct attribution or mathematical operations just like Integer values (see "Integer" on page 281). Date Dates are values that represent a specific point in time, precise up to the second. They can also be referred to as datetime values. While dates are displayed as UTC (Coordinated Universal Time) or using the system's regional settings (see "Default Format" on page 786), in reality they are stored unformatted.
DateTime object. How they are displayed in the Data Model depends on the preferences (see "Default Format" on page 786). Defining a date/time format A date format is a mask representing the order and meaning of each digit in the raw data, as well as the date/time separators. The mask uses several predefined markers to parse the contents of the raw data. Here is a list of markers that are available in the DataMapper: l yy: Numeric representation of the Year when it is written out with only 2 digits (i.e.
Examples of masks

Value in raw data                  Mask to use
June 25, 2013                      MM dd, YYYY
06/25/13                           mm/dd/yy
2013.06.25                         yyyy.mm.dd
2013-06-25 07:31 PM                yyyy-mm-dd hh:nn ap
2013-06-25 19:31:14.1206           yyyy-mm-dd hh:nn:ss.ms
Tuesday, June 25, 2013 @ 7h31PM    DD, MM dd, yyyy @ hh\hnnap

Entering a date using JavaScript In several places in the DataMapper, Date values can be set through JavaScript. For example: l In a field in the Data Model. To do this, go to the Steps pane and select an Extract step.
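For example, a sketch of setting a Date value from a JavaScript-mode field (the field name is hypothetical and record is stubbed; note that JavaScript Date months are 0-based, so 5 means June):

```javascript
const record = { fields: {} }; // stub for the DataMapper host object

// June 25, 2013, 7:31:14 PM — months are 0-based in JavaScript.
record.fields["InvoiceDate"] = new Date(2013, 5, 25, 19, 31, 14);
```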
Float Floats are signed, numeric, floating-point numbers whose value has 15-16 significant digits. Floats are routinely used for calculations. Note that Float values can only have up to 3 decimals. They are inherently imprecise: their accuracy varies according to the number of significant digits being requested. The Currency data type can have up to 4 decimals; see "Currency" on page 277. Defining Float values l Preprocessor: l In the Step properties pane, under Properties, add or select a field.
Defining Integer values l l l Preprocessor: l In the Step properties pane, under Properties, add or select a field. l Specify the Type as Integer and set a default value as a number, such as 42. Extraction: The field value will be extracted and treated as an integer. l In the Data Model, select a field. l On the Step properties pane, under Field Definition set the Type to Integer. JavaScript Expression: Set the desired value to any Integer value. Example: record.
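A minimal sketch of the JavaScript Expression case (field name hypothetical, record stubbed): the result of the expression — here a whole number — becomes the field's Integer value.

```javascript
const record = { fields: {} }; // stub for the DataMapper host object

// Any expression evaluating to a whole number can supply the value.
record.fields["ItemCount"] = 40 + 2;
```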
Defining Object values l Preprocessor: l In the Step properties pane, under Properties, add or select a field. l Specify the Type as Object and set a default value as a semi-colon. String Strings contain textual data. Strings do not have any specific meaning, which is to say that their contents are never interpreted in any way. Defining String values l Preprocessor: l In the Step properties pane, under Properties, add or select a field.
real life";, and myVar += " or is this just fantasy?";, the value of myVar will be "Is this the real life or is this just fantasy?". For more information on string variables, see quirksmode.org. Data Model file structure The Data Model file is an XML file that contains the structure of the Data Model, including each field's name, data type, and any number of detail tables and nested tables. Example: promotional data
Example: nested tables (one table into another) Keyboard shortcuts This topic gives an overview of keyboard shortcuts that can be used in the DataMapper. Keyboard shortcuts available in the Designer for menu items, script editors and the Data Model pane can also be used in the DataMapper; see "Keyboard shortcuts" on page 953. Although some of the keyboard shortcuts are the same, this isn't a complete list of Windows keyboard shortcuts. Please refer to the Windows documentation for a complete list of Windows keyboard shortcuts.
Key combination : Function
Ctrl + C or Ctrl + Insert : Copy
Ctrl + N : New
Ctrl + O : Open file
Ctrl + Shift + O : Open configuration file
Ctrl + S : Save file
Ctrl + V or Shift + Insert : Paste
Ctrl + X : Cut
Ctrl + W or Ctrl + F4 : Close file
Ctrl + Y or Ctrl + Shift + Y : Redo
Ctrl + Z or Ctrl + Shift + Z : Undo
Ctrl + Shift + S : Save all
Ctrl + Shift + W or Ctrl + Shift + F4 : Close all
Ctrl + F5 : Revert
Ctrl + F7 : Next view
Ctrl + Shift + F7 : Previous view
Ctrl + F8 : Next perspective
Ctrl + Shif
Key combination : Function
F4 : Ignore step/Reactivate step
F6 : Add an Extract step
F7 : Add a Goto step
F8 : Add a Condition step
F9 : Add a Repeat step
F10 : Add an Extract field
F11 : Add an Action step
F12 : Add a Multiple Conditions step
Alt + F12 : Add a Case step (under a Multiple Conditions step)
Home : Go to the first step in the workflow
End : Go to the last step in the workflow
Alt + V : Validate records
Shift + F10 or Ctrl + Shift + F10 : Open context menu
Viewer pane The following key combin
Key combination : Function
Ctrl + F6 : Next editor (when there is more than one file open in the Workspace)
Ctrl + Shift + F6 : Previous editor (when there is more than one file open in the Workspace)
Data Model pane
Key combination : Function
PageUp : Go to previous record
PageDown : Go to next record
Alt + CR : Property page
Alt + PageDown : Scroll down to the last field
Alt + PageUp : Scroll up to the first field
Steps tab
Key combination : Function
Ctrl + - : Zoom out
Ctrl + + : Zoom in
Edit Script and
Key combination : Function
Ctrl + J : Line break
Ctrl + L : Go to line; a prompt opens to enter a line number.
Ctrl + Shift + D : Delete line
Shift + Tab : Shift selected lines left
Tab : Shift selected lines right
Ctrl + / : Comment out / uncomment a line in code
Ctrl + Shift + / : Comment out / uncomment a code block
Menus The following menu items are shown in the DataMapper Module's menu: File Menu l New...
l Save All: Saves all open files. If any of the open files have never been saved, the Save As dialog opens for each new unsaved file. l Save a Copy: Save a copy of the current data mapping configuration in the selected Connect version's format. See "Down-saving a data mapping configuration" on page 204. l Revert: Appears only in the Designer module. Reverts all changes to the state in which the file was opened or created.
Data Menu l Hide/Show datamap: Click to show or hide the icons to the left of the Data Viewer that display how the steps affect the line. l Hide/Show extracted data: Click to show or hide the extraction selections that indicate which data is extracted. This simplifies making data selections in the same areas and is useful to display the original data. l Validate All Records: Runs the steps on all records and verifies that no errors are present in any of the records.
View Menu l Zoom In: Click to zoom in the "Steps pane" on page 321. l Zoom Out: Click to zoom out the "Steps pane" on page 321. Window Menu l Show View l Messages: Shows the "Messages pane" on page 306 l Data Model: Shows the "Data Model pane" on the facing page. l Steps: Shows the "Steps pane" on page 321. l Parameters: Shows the Parameters pane. See "Properties and runtime parameters" on page 227. l Settings: Shows the "Settings pane" on page 307.
l "Data Model pane" below. The Data Model pane shows one extracted record. l "Messages pane" on page 306. Data Model pane The Data Model pane displays the result of all the preparations and extractions of the extraction workflow. The pane displays the content of a single record within the record set at a time. Data is displayed as a tree view, with the root level being the record table. On the level below that are detail tables, and a detail table inside a detail table is called a nested table.
l Synchronize Fields and Structure : Click to synchronize the Data Model fields and structure in the currently loaded template and data mapping configuration. If you click this button when working on the data mapping configuration, the Data Model gets updated to the one in the template. If you click it when working on the template, the Data Model gets updated to the one in the data mapping configuration. l Show the ExtraData field : Note that this field is not meant to be filled via an extraction.
etc.) via the properties of that Extract step; see: "Editing fields" on page 267 and "Renaming a detail table" on page 298. l Rename: Click to rename the selected table, field or group. Enter the new name and click OK to rename. l Required: Click to indicate that the field should be retained, even if there is no Extract step that references it. The DataMapper immediately discards non-required fields that are not referenced by any Extract step. l Delete: Click to delete the selected table or field.
l The icon to the left of the name indicates the data type of the field (see "Data types" on page 275). l A field name with an asterisk to the right indicates that this field is required. All imported data model fields are initially marked as required to prevent them from being removed, since the DataMapper immediately discards non-required fields that are not referenced by any Extract step. l A field with a grey background indicates this Data Model field does not have any attached extracted data.
l Next Record: Go to the next record in the data sample. This button is disabled if the last record is shown. l Last Record: Go to the last record in the data sample. This button is disabled if the last record is already shown. If a record limit is set in the Settings pane ("Settings pane" on page 307) the last record will be within that limit. Detail tables A detail table is a field in the Data Model that contains a record set instead of a single value. Detail tables contain transactional data.
To create more than one detail table, simply extract transactional data in different Repeat steps (see "Extracting transactional data" on page 235). The best way to do this is to add an empty detail table (right-click the Data Model, select Add a table and give the detail table a name) and drop the data on the name of that detail table.
Nested detail tables Nested detail tables are used to extract transactional data that are relative to other data. They are created just like multiple detail tables, with two differences: l For the tables to be actually nested, the Repeat step and its Extract step that extract the nested transactional data must be located within the Repeat step that extracts data to a detail table. l In their name, the dot notation (record.services) must contain one extra level (record.services.charges).
"details" such as movie rentals or long distance calls.
The services can be extracted to a detail table called record.services. The "charges" and "details" can be extracted to two nested detail tables.
The nested tables can be called record.services.charges and record.services.details. Now one "charges" table and one "details" table are created for each row in the "services" table. The Data Viewer The Data Viewer is located in the middle on the upper half of the DataMapper screen. It displays the data source that is currently loaded in the DataMapper, specifically one record in that data.
Clicking on a Repeat step shows where the loop takes place. Clicking on a Goto step shows where the cursor is moved. Clicking on a Condition step shows which data fulfill the condition. For more information about the different steps that can be added to a data mapping workflow, see "Steps" on page 248. Data Viewer toolbar The Data Viewer has a toolbar at the top to control options in the viewer. Which toolbar features are available depends on the data source type.
Note: The Add Extract Field item is available only after an Extract step has been added to the workflow. Messages pane The Messages pane is shared between the DataMapper and Designer modules and displays any warnings and errors from the data mapping configuration or template. At the top of the Message pane are control buttons: l Export Log: Click to open a Save As dialog where the log file (.log) can be saved on disk. l Clear Log Viewer: Click to remove all entries in the log viewer.
l Warning: Uncheck to hide any warnings. l Error: Uncheck to hide any critical errors. l Limit visible events to: Enter the maximum number of events to show in the Messages Pane. Default is 50. Settings pane Settings for the data source, and a list of the Data Samples and JavaScript files used in the current data mapping configuration, can be found on the Settings tab at the left. The available options depend on the type of data sample that is loaded.
l Ignore unparseable lines: Ignores any line that does not correspond to the settings above. l Skip empty lines: Ignores any line that has no content. Note that spaces are considered content. l Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.
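Textual sorting means that, for example, "10" sorts before "2". The following plain-JavaScript snippet (illustrative sample values, not part of the DataMapper) shows the difference between the textual ordering described above and a numeric ordering:

```javascript
// Textual vs numeric ordering, illustrating the note above that
// the Sort on option always sorts textually.
var values = ["10", "2", "1"];
var textual = values.slice().sort();                                  // lexicographic order
var numeric = values.slice().sort(function (a, b) { return a - b; }); // numeric order
// textual is ["1", "10", "2"]; numeric is ["1", "2", "10"]
```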
l Paragraph spacing: Determines the spacing between paragraphs. The default value is 1.5, meaning the space between paragraphs must be equal to at least 1.5 times the average character height to start a new paragraph. l Magic number: Determines the tolerance factor for all of the above values. The tolerance is meant to avoid rounding errors. If two values are more than 70% away from each other, they are considered distinct; otherwise they are the same.
including inner joins, grouping and sorting, you can use custom SQL to make a selection from the database, using whatever language the database supports. The query may contain variables and properties, so that the selection will be dynamically adjusted each time the data mapping configuration is actually used in a Workflow process; see "Using variables and properties in an SQL query" on page 320.
l On lines: Triggers a new page in the Data Sample after a number of lines. l Cut on number of lines: Triggers a new page after the given number of lines. With this number set to 1, and the Boundaries set to On delimiter, it is possible to create a record for each and every line in the file. l Cut on FF: Triggers a new page after a Form Feed character. l On text: Triggers a new page in the Data Sample when a specific string is found in a certain location.
l Use XPath: Enter an XPath to create a delimiter based on the node name of elements. For example: ./*[starts-with(name(),'inv')] sets a delimiter after every element whose name starts with 'inv'. Note that starts-with() is an XPath function. For an overview of XPath functions, see Mozilla: XPath Functions. The XPath may also contain JavaScript code; note that since the XPath is a string, the return value of the JavaScript statement will be interpreted as a string.
JSON File Input Data settings For a JSON file you can either use the object or array at the root and get one output record, or select an object or array as the parent element. Its direct child elements (objects and arrays, not key-value pairs) can be output as individual records. l Use root element: Selects the top-level array or object. There will only be one record.
l On script: Defines the boundaries using a custom JavaScript. For more information see "Setting boundaries using JavaScript" on page 364. l On field value: Sets a boundary on a specific field value. l Field name: Displays the fields in the top line. The value of the selected field is compared with the Expression below to create a new boundary. l Expression: Enter the value or Regular Expression to compare the field value to.
l Pages before/after: Defines the boundary a certain number of pages before or after the current page. This is useful if the text triggering the boundary is not located on the first page of the record. l Operator: Selects the type of comparison (for example, "contains"). l Word to find: Compares the text value with the value in the data source. l Match case: Makes the text comparison case sensitive.
l Entire page: Compares the text value on the whole page. Only available with the contains, not contains, is empty and is not empty operators. l Times condition found: When the boundaries are based on the presence of specific text, you can specify after how many instances of this text the boundary can be effectively defined. For example, if a string is always found on the first and on the last page of a document, you could specify a number of occurrences of 2.
l Field: Displays the fields and (optionally) attributes in the XML element. The value of the selected field determines the new boundaries. l Also extract element attributes: Check this option to include attribute values in the list of content items that can be used to trigger a boundary. JSON file boundaries The delimiter for a JSON file is an object or array inside the selected parent element (see "JSON File Input Data settings" on page 313).
Tip: Data samples can be copied and pasted to and from the Settings pane using Windows File Explorer. l Add: Add a new Data Sample from an external data source. The new Data Sample will need to be of the same data type as the current one. For example, you can only add PDF files to a PDF data mapping configuration. Multiple files can be added simultaneously. l Delete l Move up l Move down l Replace: Open a Data Sample and replace it with the contents of a different data source.
time was specified with a date in the original file, the default time (12:00 AM) is used and converted; this may influence the displayed date. Note: Some Korean and Chinese date formats can't be parsed yet, and won't display correctly with any of these settings. External JS Libraries Right-clicking in the box brings up a control menu, with the same options as are available through the buttons on the right. l Add: Add a new external library. Use the standard Open dialog to browse and open the .js file.
l ISO8601: This setting allows for dates with different timestamp formats, or belonging to different time zones, to be parsed inside a single job. Dates that do not include a specific time are automatically considered to use the current locale's time zone. Select the ISO template to be used when parsing the timestamp. Other ISO8601 formats can be handled via the Custom option. l Custom: Set a custom date format. For the markers available in the DataMapper see "Date" on page 278.
l The query must start with = l Any variable or property must be enclosed in curly brackets: { ... }. This effectively inserts a JavaScript statement in the query. Note that all other curly brackets must be escaped with a backslash. Inside the brackets you may enter any of the following property fields defined in the Preprocessor step (see "Fixed automation properties" on page 324 and "Properties" on page 325): l Fixed automation properties.
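For instance, a custom query along the following lines selects rows dynamically at run time. The table and column names, and the use of JobInfo1, are illustrative assumptions only; the exact SQL dialect depends on your database.

```sql
=SELECT * FROM invoices WHERE batch_id = '{automation.jobInfo.JobInfo1}'
```

Note how the query starts with = and the property reference is enclosed in curly brackets, as required.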
Moving a step To rearrange steps, simply drag & drop them somewhere else on the colored line in the Steps pane. Alternatively you may right-click on a step and select Cut Step or use the Cut button in the Toolbar. If the step is Repeat or Condition, all steps inside it will also be placed on the clipboard. To place the step at its destination, right-click any step and select Paste Step, or use the Paste button in the Toolbar. The pasted steps will be positioned below the selected step.
l Delete Step: To remove a step, right-click on it and select Delete step from the contextual menu or use the Delete button in the Toolbar. If the step to be deleted is Repeat or Condition, all steps under it will also be deleted. l Copy/Paste Step: To copy a step, right-click on it and select Copy Step or use the button in the Toolbar. If the step is Repeat or Condition, all steps under it will also be placed in the clipboard.
Description This subsection is collapsed by default in the interface, to give more screen space to other important parts. Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane. Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane. Fixed automation properties The Fixed automation properties subsection lists all the fixed runtime parameters available from PlanetPress Workflow.
automation.properties.ProcessName. l TaskIndex: This property contains the index (position) of the task inside the process that is currently executing the data mapping configuration, but it has no equivalent in PlanetPress Workflow. To access this property inside of any JavaScript code within the data mapping configuration, use automation.properties.TaskIndex. In scripts, fixed automation properties are retrieved via the automation object (see "Objects" on page 369), for example automation.jobInfo.
Note: Since Entire data properties are evaluated before anything else, such as Preprocessors, Delimiters and Boundaries in the Settings pane (see "Data source settings" on page 223), these properties cannot read information from the data sample or from any records. Preprocessor The Preprocessor subsection defines what preprocessor tasks are performed on the data file before it is handed over to the data mapping workflow.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane. Extraction Definition l Data Table: Defines where the data will be placed in the extracted record. The root table is record, any other table inside the record is a detail table. For more information see "Extracting transactional data" on page 235.
l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 362). l Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used. l Use selection: Click to use the value of the current data selection for the extraction. Note: If the selection contains multiple lines, only the first line is selected.
A Post function script operates directly on the extracted data, and its results replace the extracted data. For example, the Post function script replace("-", ""); would replace the first dash character that occurs inside the extracted string. l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 362). l Trim: Select to trim empty characters at the beginning or the end of the field. l Concatenation string: The (HTML) string used to concatenate lines when they are joined.
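The behavior of the replace() example above can be illustrated in plain JavaScript. In the DataMapper the Post function runs against the extracted value; here it is applied to a sample string (an illustrative value, not from any real extraction) to show that a string argument replaces only the first match, while a global regular expression removes every dash:

```javascript
// Plain-JavaScript illustration of the Post function script described above.
var extracted = "2023-02-16";
var firstOnly = extracted.replace("-", "");   // string argument: first match only
var allDashes = extracted.replace(/-/g, "");  // global regex: every match
// firstOnly is "202302-16"; allDashes is "20230216"
```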
l Join lines: Join the lines in the selection with the Concatenation string defined below. l Concatenation string: The (HTML) string used to concatenate lines when they are joined. Settings for location-based fields in CSV and Database files These are the settings for location-based fields in CSV and Database files. l Column: Drop-down listing all fields in the Data Sample, of which the value will be used.
Note: A JsonPath expression can define more than one item (for example: .* returns anything in the current element). If more than one item is returned, the Extract step will keep an array of all returned items. The full JsonPath to an element is displayed at the bottom left of the window when you select it. To copy the path, right-click it and select Copy. l Use selection: Click to use the value of the current data selection for the extraction.
l Date/Time Format: Set the date format for a date value. l Automatic: Select this option to parse dates automatically, without specifying a format. This is the default setting for new Date fields. l ISO8601: This setting allows for dates with different timestamp formats, or belonging to different time zones, to be parsed inside a single job. Dates that do not include a specific time are automatically considered to use the current locale's time zone.
l Move Up button: Click to move the selected field up one position. l Move Down button: Click to move the selected field down one position. Note: The order of fields in an extraction step isn't necessarily the same as the order of those fields in the Data Model; see "Ordering and grouping fields in the Data Model" on page 262. Action step properties The Action step can run multiple specific actions one after the other in order; see "Action step" on page 257 for more information.
is saved in the database at run time. l Stop data mapping: The extraction workflow stops processing the data. If fields of the current record were already extracted prior to encountering the Action step, then those fields are stored as usual, but the rest of the data is skipped. If no fields were extracted prior to encountering the Action step, then no trace of the current record is saved in the database at run time.
l Expression: The JavaScript expression to run. l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 362 and "DataMapper Scripts API" on page 360). l Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used. l Use selection: Click to use the value of the current data selection for the extraction.
CSV and Database Files l Property: Displays a list of record properties set in the Preprocessor step (see "Preprocessor step" on page 249). l Type: Displays the type of the property. Read-only field. l Based on: Determines the origin of the data. l Location: The contents of the data selection set below will be the value of the extracted field. The data selection settings are different depending on the data sample type.
l Currency Sign: Set the currency sign for a currency value. l Treat empty as 0: A numerical empty value is treated as a 0 value. l Date/Time Format: Set the date format for a date value. l Automatic: Select this option to parse dates automatically, without specifying a format. This is the default setting for new Date fields. l ISO8601: This setting allows for dates with different timestamp formats, or belonging to different time zones, to be parsed inside a single job.
l JavaScript : The result of the JavaScript Expression written below the drop-down will be the value of the extracted field. If the expression contains multiple lines, the last value attribution (variable = "value";) will be the value. See "DataMapper Scripts API" on page 360. l Expression: The JavaScript expression to run. l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 362 and "DataMapper Scripts API" on page 360).
l Use offset from UTC: Select the default time zone to be used when extracting any timestamp that does not already include time zone information. Run JavaScript Running a JavaScript expression offers many possibilities. The script could, for example, set record properties and field values using advanced expressions and complex mathematical operations and calculations. l Expression: The JavaScript expression to run (see "DataMapper Scripts API" on page 360).
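As a sketch of the "last value attribution" rule mentioned for multi-line expressions, the value of the final assignment is what the step uses. The sample data below is illustrative only; inside the DataMapper the expression typically works with extracted values rather than a literal:

```javascript
// Multi-line expression: the last assignment determines the resulting value.
var raw = "  inv-2023-001  ";
var value = raw.trim().toUpperCase();
// The field would receive "INV-2023-001".
```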
l Until statement is true: The loop executes until the statement below is true. The statement is evaluated after the loop so the loop will always run at least once. l Until no more elements (for Text, CSV, Database and PDF files only): The loop executes as long as there are elements left as selected below. l For Each (for XML and JSON files only): The loop executes for all nodes (by default) or for selected nodes on the specified level.
Rule Tree The Rule tree subsection displays the full combination of rules (defined below under Condition) as a tree, which gives an overview of how the conditions work together as well as the result for each of these conditions for the current record or iteration. Condition First, the Condition List displays the conditions in list form, instead of the tree form above. Three buttons are available next to the list: l Add condition: Click to create a new condition in the list.
l Value: The text value to use in the comparison. l Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used. l Field: The contents of a specific field in the Extracted Record. l l Field: The Extracted Record field to use in the comparison. JavaScript : The result of a JavaScript Expression. l Expression: The JavaScript line that is evaluated.
l Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty. CSV and Database files l Based On: l Position: The data in the specified position for the comparison. l Column: Drop-down listing all fields in the Data Sample, of which the value will be used. l Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer). l l Use Selection: Click to use the value of the current data selection for the extraction.
l Counter: The value of the current counter iteration in a Repeat step. l Vertical Position: The current vertical position on the page, either in Measure (PDF) or Line (Text and CSV). l Operators: l is equal to: The two specified values are identical for the condition to be True. l contains: The first specified value contains the second one for the condition to be True. l is less than: The first specified value is smaller, numerically, than the second value for the condition to be True.
l Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used. l Data Property: The value of a data-level property set in the Preprocessor step. l Record Property: One of the local variables that you can create; these are reset for each document, as opposed to data variables, which are global because they are initialized only once, at the beginning of each job.
l Value: A specified static text value. l Value: The text value to use in the comparison. l Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used. l Field: The contents of a specific field in the Extracted Record. l l Field: The Extracted Record field to use in the comparison. JavaScript : The result of a JavaScript Expression. l Expression: The JavaScript line that is evaluated.
l is empty: The first specified value is empty. With this operator, there is no second value. l Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty. Condition step properties A Condition step is used when the data extraction must be based on specific criteria. See "Condition step" on page 253 for more information. The properties of a Condition step become visible in the Step properties pane when the Condition step is selected on the Steps pane.
l Based On: l Position: The data in the specified position for the comparison. l Left (Txt and PDF only): The start position for the data selection. Note that conditions are done on the current line, either the current cursor position, or the current line in a Repeat step. l Right (Txt and PDF only): The end position for the data selection. l Height (Txt and PDF only): The height of the selection box.
l Data Property: The value of a data-level property set in the Preprocessor (see "Preprocessor step" on page 249). l Record Property: One of the local variables that you can create; these are reset for each document, as opposed to data variables, which are global because they are initialized only once, at the beginning of each job. l Automation Property: The current value of a Document-level property set in the Preprocessor step (see "Preprocessor step" on page 249).
Description This subsection is collapsed by default in the interface, to give more screen space to other important parts. Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane. Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane. Condition Left operand, Right operand The Left and right operand can be Based on: l Position: The data in the specified position for the comparison.
l JavaScript: The result of a JavaScript Expression. l Expression: The JavaScript line that is evaluated. Note that the last value attribution to a variable is the one used as a result of the expression. See also: "DataMapper Scripts API" on page 360. l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 362). l Use selected text: Inserts the text in the current data selection in the JavaScript Expression.
l is greater than: The first specified value is larger, numerically, than the second value for the condition to be True. l is empty: The first specified value is empty. With this operator, there is no second value. l Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty. Goto step properties The Goto step moves the pointer within the source data to a position that is relative to the top of the record or to the current position.
l From: Defines where the jump begins: l Current Position: The Goto begins at the current cursor position. l Top of record: The Goto begins at line 1 of the source record. l Move by: Enter the number of lines or pages to jump. l Next line with content: Jumps to the next line that has contents, either anywhere on the line or in specific columns. l Inspect entire page width: When checked, the Next line with content and Next occurrence of options will look anywhere on the line.
PDF file l Target Type: Defines the type of jump. l Physical distance: l From: Defines where the jump begins: l Current Position: The Goto begins at the current cursor position. l Top of record: The Goto begins at line 1 of the source record. l Move by: Enter the distance to jump. l Page: Jumps between pages or to a specific page. l From: Defines where the jump begins: l Current Position: The Goto begins at the current cursor position.
l Use selection: Click while a selection is made in the Data Viewer to automatically set the left and right values to the left and right edges of the selection. l Expression: Enter the text or Regex expression to look for on the page. l Use selection: Click while a selection is made in the Data Viewer to copy the contents of the first line of the selection into the Expression box. l Use regular expression: Check so that the Expression box is treated as a regular expression instead of static text.
JSON file l Destination: Defines what type of jump to make: l Sibling element: Jumps the number of siblings (elements at the same level) defined in the Move by option. Use a negative value to jump to the previous sibling, or a positive value to go to the next sibling. If there are not enough siblings to make the requested move, the cursor will not move. l Element, from top of record: Jumps to the specified element. The JsonPath in the Absolute JsonPath option starts from the root defined by $.
l Name: The name to identify the Postprocessor. l Type: The type of Postprocessor. Currently there is a single type available. l JavaScript: Runs a JavaScript Expression to modify the Data Sample. See "DataMapper Scripts API" on page 360. l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 362). l Add Postprocessor: Click to add a new Postprocessor. Its settings can be modified once it is added.
File manipulation l New: Displays the New wizard where a new data mapping configuration or a new template can be created. l Open: Displays the Open dialog to open an existing data mapping configuration. l Save: Saves the current data mapping configuration. If the configuration has never been saved, the Save As... dialog is displayed.
l Cut Step: Removes the currently selected step and places it in the clipboard. If the step is a Repeat or a Condition, all steps under it are also placed in the clipboard. If there is already a step in the clipboard, it will be overwritten. l Copy Step: Places a copy of the currently selected step in the clipboard. The same details as for the Cut Step apply. l Paste Step: Takes the step or steps in the clipboard and places them after the currently selected step.
Contents l Resources l Documentation: Opens this documentation. l Training: Opens Learn, the Objectif Lune e-Learning Center, with its tutorials, walkthroughs, how-tos, forum, and blog. l Support: Opens the support page on the PlanetPress Connect website. l Licenses & Activations: Opens the Objectif Lune Web Activation Manager. l License details: Shows your current license's details. l Website: Opens the PlanetPress Connect website.
Objects Name Description Available in scripts of type "Objects" on page 369 A ScriptableAutomation object encapsulating the properties of the PlanetPress Workflow process that triggered the current operation. Boundaries, all steps except Goto "boundaries" on page 370 An object encapsulating properties and methods that allow you to define the boundaries of each document in the job. Boundaries "data" on page 374 A data object encapsulating properties and methods pertaining to the original data stream.
Name Description "createTmpFile()" on page 406 Creates a file with a unique name in the temporary work folder and returns a file object. "deleteFile()" on page 407 Deletes a file. "execute()" on page 407 Calls an external program and waits for it to end. isRuntime() Returns true if the data mapping process is currently running in runtime mode, or false if the configuration is running in debug mode (i.e. in the DataMapper). "newByteArray()" on page 408 Returns a new byte array.
A script can be used to set boundaries for a data source (see "Setting boundaries using JavaScript" on the facing page). The script determines where a new record starts. Scripts can also be used in different steps in the extraction workflow. You can: l Modify the incoming data prior to executing the rest of the extraction workflow, via a Preprocessor (see "Preprocessor step" on page 249).
Syntax rules In the DataMapper, all scripts must be written in JavaScript, following JavaScript syntax rules. For example, each statement should end with ; and the keywords that can be used, such as var to declare a variable, are JavaScript keywords. There are countless tutorials available on the Internet to familiarize yourself with the JavaScript syntax. For a simple script all that you need to know can be found on the following web pages: W3Schools website - JavaScript Syntax and https://www.w3schools.
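A minimal script following these syntax rules is sketched below; the variable names and values are illustrative only. Each statement ends with a semicolon, and variables are declared with the var keyword:

```javascript
// Minimal example of DataMapper-compatible JavaScript syntax:
// var declarations, semicolon-terminated statements, and a loop.
var total = 0;
for (var i = 1; i <= 3; i++) {
    total += i;
}
// total is 6
```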
If you know, for instance, that a PDF file only contains documents that are 3 pages long, your script could keep count of the number of times it's been called since the last boundary was set (that is, the count of delimiters that have been encountered). Each time the count is a multiple of 3, it could set a new record boundary. This is basically what happens when setting the trigger to On Page and specifying 3 as the Number of Pages.
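The page-counting logic described above can be sketched as follows. The stub at the top only simulates the boundaries object so the logic can be run outside the DataMapper; in a real Boundaries script the boundaries object is provided for you and only the counting part is needed. As described elsewhere in this documentation, the counter is kept in a boundaries variable because native JavaScript variables are not carried over between iterations of the Boundaries script.

```javascript
// Stub simulating the DataMapper's boundaries object (an assumption made
// only so this sketch is runnable; real scripts never define this).
var _vars = {}, _boundariesSet = 0;
var boundaries = {
    getVariable: function (name) { return (name in _vars) ? _vars[name] : null; },
    setVariable: function (name, value) { _vars[name] = value; },
    set: function () { _boundariesSet++; }
};

// The Boundaries script itself: called once per delimiter (here, per page),
// it sets a record boundary on every third page.
function boundariesScript() {
    var count = boundaries.getVariable("pageCount");
    if (count === null) count = 0;
    count++;
    if (count % 3 === 0) {
        boundaries.set();   // new record starts on this delimiter
        count = 0;
    }
    boundaries.setVariable("pageCount", count);
}

// Simulating a 9-page job: boundaries are set on pages 3, 6 and 9.
for (var page = 1; page <= 9; page++) {
    boundariesScript();
}
// _boundariesSet is 3
```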
Examples Basic example using a CSV file Imagine you are a classic rock fan and you want to extract the data from a CSV listing of all the albums in your collection. Your goal is to extract records that change whenever the artist OR the release year changes.
if (boundaries.getVariable("lastBand")!=null) { if (zeBand[0] != boundaries.getVariable("lastBand") || zeYear[0] != boundaries.getVariable("lastYear") ) { boundaries.set(); } } boundaries.setVariable("lastBand",zeBand[0]); boundaries.setVariable("lastYear",zeYear[0]); l The script first reads the two values from the input data, using the createRegion() method (see: "createRegion()" on page 395). For a CSV/database data type, the parameter it expects is simply the column name.
Beatles Let it be 1970 Rolling Stones Let it bleed 1969 Led Zeppelin Led Zeppelin 3 1970 Led Zeppelin Led Zeppelin 4 1971 Rolling Stones Sticky Fingers 1971 The purpose of the script, again, is to set the record boundary when EITHER the year OR the artist changes. The script would look like this: /* Read the values of both columns we want to check */ var zeBand = boundaries.get(region.createRegion(1,1,30,1)); var zeYear = boundaries.get(region.
to create a region for the Year, the code might look like this: region.createRegion(190,20,210,25) which would create a region located near the upper right corner of the page. That's the only similarity, though, since the script for a PDF would have to look through the entire page and probably make multiple extractions on each one since it isn't dealing with single lines like the TXT example given here. For more information on the API syntax, please refer to "DataMapper Scripts API" on page 360.
Examples To access JobInfo 1 to 9 defined in Workflow (see Job Info variables): automation.jobInfo.JobInfo1; To access ProcessName, OriginalFilename or TaskIndex from Workflow: automation.properties.OriginalFilename; To access Workflow variables (see "Properties and runtime parameters" on page 227): automation.parameters.runtimeparametername; boundaries Returns a boundaries object encapsulating properties and methods that allow you to define the boundaries of each document in the job.
Method Description Script type "getVariable()" on the facing page Retrieves a value of a variable stored in the boundaries object. Boundaries "set()" on page 373 Sets a new record boundary. (See: "Record boundaries" on page 226.) Boundaries "setVariable()" on page 374 Sets a boundaries variable to the specified value, automatically creating the variable if it doesn't exist yet. Boundaries find() Method of the boundaries object that finds a string in a region of the data source file.
Example

This script sets a boundary when the text TOTAL is found on the current page in a PDF file. The number of delimiters is set to 1, so the boundary is set on the next delimiter, which is the start of the next page.

if (boundaries.find("TOTAL", region.createRegion(10,10,215,279)).found) {
    boundaries.set(1);
}

get()

The get() method reads the contents of a region object and converts it into an array of strings (because any region may contain several lines).
set()

Sets a new DataMapper record boundary.

set(delimiters)

delimiters: An offset from the current delimiter, expressed as an integer number of delimiters. If this parameter is not specified, a value of 0 is assumed. A value of 0 indicates that the record boundary occurs on the current delimiter. A negative value of -n indicates that the record boundary occurs n delimiters before the current delimiter.
setVariable()

This method sets a variable in the boundaries to the specified value, automatically creating the variable if it doesn't exist yet. Boundary variables are carried over from one iteration of the Boundaries script to the next, while native JavaScript variables are not.

setVariable(varName, varValue)

Sets variable varName to value varValue.

varName: String name of the variable whose value is to be set.

varValue: Object; the value to which the variable is to be set.
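The distinction matters because the Boundaries script is re-run from scratch on every delimiter: a var declared inside the script starts out fresh on each run, while a boundaries variable keeps its value. The hypothetical sketch below simulates this with a plain object standing in for the boundaries store (none of the names come from the real API except getVariable/setVariable):

```javascript
// Hypothetical store mimicking how boundaries variables persist between runs.
var store = {};
function setVariable(name, value) { store[name] = value; }
function getVariable(name) { return store.hasOwnProperty(name) ? store[name] : null; }

// Each call simulates one run of the Boundaries script.
function boundariesScript() {
  var counter = 0;                                      // native variable: reset on every run
  counter++;
  setVariable("runs", (getVariable("runs") || 0) + 1);  // persists across runs
  return { native: counter, persisted: getVariable("runs") };
}

boundariesScript();              // { native: 1, persisted: 1 }
var result = boundariesScript(); // { native: 1, persisted: 2 }
console.log(result.native, result.persisted); // prints: 1 2
```

The native counter never exceeds 1, while the stored variable accumulates across runs, which is exactly why state that must survive from one delimiter to the next belongs in a boundaries variable.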
Methods

The following table lists the methods of the data object.

Method / Description / Script type / File type:

- "extract()" below: Extracts the text value from a rectangular region. (Extract, Condition, Repeat, and Action steps; all file types)
- "extractByIndex(index, rowOffset)" on page 383: Extracts the value from the specified column and row. (Extract, Condition, Repeat, and Action steps; CSV/XLSX/XLS)
- "extractMeta()" on page 384: Extracts the value of a metadata field.
right: Number that represents the distance, measured in characters, from the left edge of the page to the right edge of the rectangular region.

verticalOffset: Number that represents the current vertical position, measured in lines.

regionHeight: Number that represents the total height of the region, measured in lines. Setting the regionHeight to 0 instructs the DataMapper to extract all lines starting from the given position until the end of the record.
Example 2: The script command

data.extract(1,22,9,6,"\n");

means that the left position of the extracted information is located at 1, the right position at 22, the offset position is 9 (since the first line number is 10) and the regionHeight is 6 (6 lines are selected). Finally, the "\n" string is used to concatenate the extracted lines.
extract(xPath)

Extracts the text value of the specified node in an XML file.

xPath: String that can be relative to the current location or absolute from the start of the record.

Example

The script command data.extract('./CUSTOMER/FirstName'); means that the extraction is made on the FirstName node under CUSTOMER.
extract(columnName, rowOffset) Extracts the text value from the specified column and row in a CSV/XLS/XLSX file. The column is specified by name. To extract data from a column specified by index, use "extractByIndex(index, rowOffset)" on page 383. columnName String that represents the column name.
rowOffset: Number that represents the row index (zero-based), relative to the current position. To extract the current row, specify 0 as the rowOffset. Use moveTo() to move the pointer in the source data file (see "moveTo()" on page 398).

Example

The script command data.extract('ID',0); means that the extraction is made on the ID column in the row at the current position.
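The interplay between the current position and the row offset can be pictured with a small in-memory table. Everything in the sketch below is hypothetical (the column names, rows and the extract/position members merely imitate the behaviour described above):

```javascript
// Hypothetical in-memory stand-in for a CSV data source.
var table = {
  columns: ["ID", "Name"],
  rows: [["A-001", "Alice"], ["A-002", "Bob"], ["A-003", "Carol"]],
  position: 0, // current row pointer, as moved by moveTo()
  // Mimics data.extract(columnName, rowOffset): offset is relative to position.
  extract: function (columnName, rowOffset) {
    var col = this.columns.indexOf(columnName);
    if (col < 0) throw new Error("Unknown column: " + columnName);
    return this.rows[this.position + rowOffset][col];
  }
};

console.log(table.extract("ID", 0)); // current row → "A-001"
table.position = 1;                  // as if moveTo() advanced the pointer
console.log(table.extract("ID", 1)); // one row below the current row → "A-003"
```

The second call shows why the offset alone does not identify a row: the same rowOffset yields a different row once the pointer has moved.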
extract(left, right, verticalOffset, lineHeight, separator) Extracts the text value from a rectangular region in a PDF file. All coordinates are expressed in millimeters. left Double that represents the distance from the left edge of the page to the left edge of the rectangular region. right Double that represents the distance from the left edge of the page to the right edge of the rectangular region. verticalOffset Double that represents the distance from the current vertical position.
extract(jPath) Extracts the text value of the specified element in a JSON file. jPath JsonPath expression (String) that can be relative to the current location or absolute from the start of the record. See also: "JsonPath" on page 235. Example The script command data.extract('$[0].FirstName'); means that the extraction is made on the FirstName element found in the first element in the array at the root.
Note that in order to access an extracted object or array in script, the extracted value has to be parsed, for example: var myData = JSON.parse(data.extract('$[0]')); extractByIndex(index, rowOffset) Extracts the value from the specified column and row.
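Because extract() always returns text, an extracted object or array arrives as a JSON string, and JSON.parse() is needed to turn it back into a live object. A self-contained sketch (the sample record and the extract() stub are hypothetical; only the parse step mirrors the real workflow):

```javascript
// Hypothetical JSON record, as the DataMapper might see it.
var source = [{ FirstName: "Ada", Orders: [101, 102] }];

// Stand-in for data.extract('$[0]'): the node comes back as a string.
function extract(jPath) {
  // Only the '$[0]' case is handled in this sketch.
  if (jPath === "$[0]") return JSON.stringify(source[0]);
  throw new Error("Unsupported path in sketch: " + jPath);
}

var raw = extract("$[0]");         // a string, not an object
var myData = JSON.parse(raw);      // now a real object
console.log(typeof raw);           // prints: string
console.log(myData.FirstName);     // prints: Ada
console.log(myData.Orders.length); // prints: 2
```

Skipping the parse step and treating the raw string as an object is a common source of "undefined" field values in scripts.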
This function can be used to extract data from CSV or XLS(X) files that have an identical structure but don't have the same column names.

index: Number that represents a column in a CSV or XLS(X) file (1-based).

rowOffset: Optional. Number that represents the row index (zero-based), relative to the current position. To extract the current row, specify 0 as the rowOffset. Use moveTo() to move the pointer in the source data file (see "moveTo()" on page 398). When omitted, the current row will be extracted.
String, specifying a level in the PDF/VT or AFP file.

propertyName: String, specifying the metadata field.

fieldExists(fieldName)

This method returns true if a column with the specified name exists in the current record in a CSV, XLS or XLSX file. To verify whether a column specified by index exists in a CSV, XLS or XLSX file, use "fieldExistsByIndex(index)" below.

fieldName: String that represents a field name (column) in a CSV, XLS or XLSX file.
find() Method of the data object that finds the first occurrence of a string starting from the current position. find(stringToFind, leftConstraint, rightConstraint) Finds the first occurrence of a string starting from the current position. The search can be constrained to a series of characters (in a text file) or to a vertical strip (in a PDF file) located between the given constraints. The method returns null if the string cannot be found.
Note that the smaller the area is, the faster the search is. So if you know that the word "text" is within 3 inches from the left edge of the page, provide the following:

data.find("text", 0, 76.2); // 76.2 mm = 3 * 25.4 mm

The return value of the function is: Left=26.76, Top=149.77, Right=40.700001, Bottom=154.840302. These values represent the size of the rectangle that encloses the string in full, in millimeters relative to the upper left corner of the current page.
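In text mode the same idea applies with character columns instead of millimetres. The hypothetical sketch below implements a constrained line-based find that, like the real method, reports both whether the string was found and where (the function, its range shape and the sample lines are all invented for illustration):

```javascript
// Hypothetical text-mode find: search each line between two column constraints.
function find(lines, stringToFind, leftConstraint, rightConstraint) {
  for (var i = 0; i < lines.length; i++) {
    var strip = lines[i].substring(leftConstraint, rightConstraint);
    var col = strip.indexOf(stringToFind);
    if (col >= 0) {
      return {
        found: true,
        // Enclosing "rectangle" expressed in character coordinates.
        range: { line: i, left: leftConstraint + col,
                 right: leftConstraint + col + stringToFind.length }
      };
    }
  }
  return { found: false, range: null };
}

var lines = ["INVOICE 2023-001", "          TOTAL   99.00"];
var hit = find(lines, "TOTAL", 0, 20);
console.log(hit.found, hit.range.line, hit.range.left); // prints: true 1 10
```

Narrowing the constraints shrinks the strip that is scanned on each line, which is the text-mode analogue of the "smaller area, faster search" advice above.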
i: Enables case-insensitive matching. By default, case-insensitive matching assumes that only characters in the US-ASCII charset are being matched. Unicode-aware case-insensitive matching can be enabled by specifying the UNICODE_CASE flag (u) in conjunction with this flag.

s: Enables dotall mode. In dotall mode, the expression . matches any character, including a line terminator. By default this expression does not match line terminators.

L: Enables literal parsing of the pattern.
data.findRegExp("\\d{3}-[A-Z]{3}","gi",50,100);

Both expressions would match the following strings: 001-ABC, 678-xYz. Note how in the second version, where the regular expression is specified as a string, some characters have to be escaped with an additional backslash, which is standard in JavaScript.

db

Object that allows you to connect to a database.

Methods

The following table describes the methods of the db object.
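The escaping rule can be verified in any JavaScript engine: a regex literal and the doubly-escaped string compile to the same pattern. (findRegExp itself is DataMapper-specific; the sketch below uses plain RegExp purely to demonstrate the equivalence.)

```javascript
// The same pattern written as a literal and as a string.
var literal = /\d{3}-[A-Z]{3}/i;
var fromString = new RegExp("\\d{3}-[A-Z]{3}", "i"); // backslash doubled inside the string

var samples = ["001-ABC", "678-xYz", "12-AB"];
samples.forEach(function (s) {
  console.log(s, literal.test(s), fromString.test(s));
});
// prints:
// 001-ABC true true
// 678-xYz true true  (the i flag makes [A-Z] match "xYz")
// 12-AB false false  (only two digits before the hyphen)
```

Forgetting the doubled backslash in the string form ("\d" instead of "\\d") silently produces a different pattern, which is why the literal form is often less error-prone.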
The syntax will be specific to the database, or more precisely to the JDBC connector. Currently the DataMapper supports the following JDBC connectors:

- com.mysql.cj.jdbc.Driver
- sun.jdbc.odbc.JdbcOdbcDriver
- com.microsoft.sqlserver.jdbc.SQLServerDriver
- oracle.jdbc.OracleDriver

user: String that represents the name of the database user on whose behalf the connection is being made. This is used for authentication.

password: String that represents the user's password. This is used for authentication.
Properties

- copies: The total number of copies of the current record that must be created. By default, this is 1. This value is used when the record is saved, at the end of the data mapping process for each record.
- fields: The field values that belong to this record. You can access a specific field value using either a numeric index or the field name.
- index: The one-based index of this record, or zero if no data is available.
- tables: The detail tables that belong to this record.
the record parameter are updated in the database, while the contents of all other fields remain unchanged. The call fails if the parameter is omitted or empty, if any of the fields specified in the record doesn't exist in the Data Model, or if a value cannot be converted to the data type that is expected in a field. About data types Where possible, values are automatically converted into the data type of the respective data field.
The mandatory record parameter is a JavaScript object that contains one or more fields specified in the data model at the root level. The record parameter may contain a subset of the fields in the Data Model. Only the fields included in the record parameter are updated in the database, while the contents of all other fields remain unchanged.
Properties

- copies: The total number of copies of the current record that must be created. By default, this is 1. This value is used when the record is saved, at the end of the data mapping process for each record.
- fields: The field values that belong to this record. You can access a specific field value using either a numeric index or the field name.
- index: The one-based index of this record, or zero if no data is available.
- tables: The detail tables that belong to this record.
Property/method / Description / Return type:

- found: Field that contains a boolean value indicating whether the last call to boundaries.find() was successful. Since the find() method always returns a region, regardless of search results, it is necessary to examine the value of found to determine the actual result of the operation. (Boolean)
- "range" on the facing page: Read-only object containing the physical coordinates of the region.
Example

The following script attempts to match ((n,m)) or ((n)) against any of the strings in the specified region and, if it does, sets a document boundary.

var myRegion = region.createRegion(170,25,210,35);
var regionStrings = boundaries.get(myRegion);
if (regionStrings) {
    for (var i = 0; i < regionStrings.length; i++) {
        if (regionStrings[i].match(/\(\(\d+(,\d+)?\)\)/)) {
            boundaries.set();
        }
    }
}
These are the custom properties defined in the Preprocessor step that have their Scope set to "Each record". See: "Properties and runtime parameters" on page 227. Properties sourceRecord.properties.property; Property Description properties Returns an array of properties defined in the Preprocessor step with the Record Scope (i.e. dynamically reset with each new record). steps Returns a steps object encapsulating properties and methods pertaining to the current DataMapper process.
Method / Description / File type:

- "moveTo()" below: Moves the pointer in the source data file to another position. (All)
- "moveToNext()" on page 400: Moves the position of the pointer in the source data file to the next line, row or node. The behavior and arguments are different for each emulation type: text, PDF, tabular (CSV), or XML. (All)
- totalPages: An integer value representing the total number of pages inside the current record. (Text, PDF)

Example

if(steps.currentPage > curPage) { steps.
scope: Number that may be set to:

- 0 or steps.MOVELINES
- 1 or steps.MOVEDELIMITERS
- 2: next line with content

verticalPosition: Number. What it represents depends on the value specified for scope. With the scope set to 0 or steps.MOVELINES, verticalPosition represents the index of the line to move to, from the top of the record. With the scope set to 1 or steps.MOVEDELIMITERS, verticalPosition represents the index of the delimiter (as defined in the Input Data settings) to move to, from the top of the record.
moveTo(xPath)

Moves the current position in an XML file to the first instance of the given node, relative to the top of the record.

xPath: String that defines a node in the XML file.

Tip: The XML elements drop-down (on the Settings pane, under Input Data) lists xPaths defining nodes in the current XML file.

moveTo(row)

Moves the current position in a CSV file to the given row number.

row: Number that represents the index of the row, relative to the top of the record.
scope: Number that may be set to:

- 0 or steps.MOVELINES: the current position is set to the next line.
- 1 or steps.MOVEDELIMITERS: the current position is set to the next delimiter (as defined in the Input Data settings).
- 2 (next line with content): the current position is set to the next line that contains any text.

Example: The following line of code moves the current position to the next line that contains any text.

steps.moveToNext(2);

XML

scope: Number that may be set to:

- 0 or steps.
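The scope values can be pictured as a cursor over the lines of a record. The sketch below is a hypothetical model, not the real steps API: the constants and the cursor object are invented, and it assumes one delimiter per line purely to keep the simulation small:

```javascript
// Hypothetical cursor mimicking steps.moveToNext() scopes on a text record.
var MOVELINES = 0, MOVEDELIMITERS = 1, NEXT_LINE_WITH_CONTENT = 2;

function makeCursor(lines) {
  return {
    lines: lines,
    position: 0,
    moveToNext: function (scope) {
      if (scope === MOVELINES || scope === MOVEDELIMITERS) {
        this.position++; // in this sketch, one delimiter equals one line
      } else if (scope === NEXT_LINE_WITH_CONTENT) {
        var i = this.position + 1;
        while (i < this.lines.length && this.lines[i].trim() === "") i++;
        this.position = i; // skip blank lines
      }
      return this.position;
    }
  };
}

var cursor = makeCursor(["Line 1", "", "", "Line 4"]);
cursor.moveToNext(MOVELINES);              // lands on line 1 (the empty line)
cursor.position = 0;
cursor.moveToNext(NEXT_LINE_WITH_CONTENT); // skips the blanks
console.log(cursor.position);              // prints: 3
```

The difference between the two calls illustrates why scope 2 is handy for data with irregular blank lines: it jumps straight to the next line that actually carries text.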
right: Double that represents the right edge (in millimeters) of the text to find.

moveToNext()

Moves the current position in a CSV file to the next row, relative to the current position.

table

The table object holds a detail table that exists in a record. The detail table is retrieved by name, using record.tables followed by the table name.
For example: record.tables.myDetailTable.

Properties

- length: Returns the count of rows in the detail table.

About data types

Where possible, values are automatically converted into the data type of the respective data field.

Note: Dates must be passed as a Date object to allow them to be extracted into a Date field. See Date in the Mozilla help files. Passing an improper data type triggers an error.
Note: Dates must be passed as a Date object to allow them to be extracted into a Date field. See Date in the Mozilla help files. Passing an improper data type triggers an error. For instance, the following objects are all invalid:

- { myBoolean : "true" }: the myBoolean field is boolean and expects a boolean, not a string.
- { myDate : "2021-03-29" }: the myDate field is a date and expects a Date object (myDate: new Date(2021,2,29)), not a string.
- { myPageCount : 2.
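A record-update helper enforcing these rules might look like the hypothetical sketch below. The field names, the schema shape and the checkField function are invented for illustration; only the type rules themselves come from the text above:

```javascript
// Hypothetical schema: maps field names to their expected data types.
var schema = { myBoolean: "boolean", myDate: "date", myPageCount: "integer" };

// Throws when a value cannot satisfy the field's declared type.
function checkField(name, value) {
  switch (schema[name]) {
    case "boolean":
      if (typeof value !== "boolean") throw new Error(name + " expects a boolean");
      break;
    case "date":
      // A string like "2021-03-29" is rejected; a Date instance is required.
      if (!(value instanceof Date)) throw new Error(name + " expects a Date object");
      break;
    case "integer":
      if (typeof value !== "number" || value % 1 !== 0)
        throw new Error(name + " expects an integer");
      break;
  }
}

checkField("myBoolean", true);               // OK
checkField("myDate", new Date(2021, 2, 29)); // OK (JavaScript months are zero-based)
try {
  checkField("myDate", "2021-03-29");        // string, not a Date: rejected
} catch (e) {
  console.log(e.message); // prints: myDate expects a Date object
}
```

Note the zero-based month in the Date constructor: new Date(2021, 2, 29) is 29 March 2021, a frequent source of off-by-one-month bugs.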
createGUID()

This function returns a unique 36-character string consisting of 32 alphanumeric, lower case characters and four hyphens. Format: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (8-4-4-4-12 characters). Example: 123e4567-e89b-12d3-a456-426655440000. The function produces a unique string on each and every call, regardless of whether the calls occur within the same data mapping process or on concurrent threads.
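For reference, the 8-4-4-4-12 shape can be produced in plain JavaScript. The sketch below only imitates the format; it is not Connect's generator and makes no uniqueness guarantees across processes or threads:

```javascript
// Hypothetical GUID-shaped string: 32 lowercase hex characters in 8-4-4-4-12 groups.
function guidLike() {
  function hex(n) {
    var s = "";
    for (var i = 0; i < n; i++) s += Math.floor(Math.random() * 16).toString(16);
    return s;
  }
  return [hex(8), hex(4), hex(4), hex(4), hex(12)].join("-");
}

var id = guidLike();
console.log(id); // e.g. "3f2a9c1e-0b4d-8e12-a6f0-77c3d19b42e5" (random each call)
console.log(id.length); // prints: 36
```

Where a collision-resistant identifier really matters, rely on createGUID() itself (or a proper UUID library) rather than a random-hex sketch like this one.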
Supported methods

- create(): Creates a new instance of ScriptableHTTPRequest.
- open(String method, String url, String user, String password), open(String verb, String url, String userName, String password, String[] headers, String[] headerValues, String requestBody): Opens an HTTP request. If you don't use a user name and password, pass empty strings: request.open("GET",url,"","");
- send(), send(String requestBody): Sends the HTTP request and returns the HTTP status code. This is a blocking call.
try{
    // Open a reader
    var reader = openTextReader(data.filename);
    // Create a temporary file
    var tmpFile = createTmpFile();
    // Open a writer on the temporary file
    var writer = openTextWriter(tmpFile.getPath());
    try{
        var line = null; // Current line
        /* Read line by line; readLine() returns null at the end of the file */
        while( (line = reader.readLine()) != null ){
            // Edit the line
            line = line.toUpperCase();
            // Write the result to the temporary file
            writer.write(line);
            // Add a new line
            writer.newLine();
command: String that specifies the path and file name of the program to execute.

newByteArray()

Function that returns a new byte array.

newByteArray(size)

Returns a new byte array of the specified number of elements.

size: Integer that represents the number of elements in the new array.

newCharArray()

Function that returns a new Char array.

newCharArray(size)

Returns a new Char array of the specified number of elements.

size: Integer that represents the number of elements in the new array.
newIntArray(size) Returns a new Integer array of the specified number of elements. size Integer that represents the number of elements in the new array. newLongArray() Function that returns a new long array. newLongArray(size) Returns a new Long array of the specified number of elements. size Integer that represents the number of elements in the new array. newStringArray() Function that returns a new string array. newStringArray(size) Returns a new String array of the specified number of elements.
append: Boolean parameter that specifies whether the file pointer should initially be positioned at the end of the existing file (append mode) or at the beginning of the file (overwrite mode).

openTextReader()

Function that opens a file as a text file for reading purposes. The function returns a TextReader object (see "TextReader" below). Please note that the file must be closed at the end.

openTextReader(filename, encoding)

filename: String that represents the name of the file to open.
Method / Description:

- open(inStream, inEncoding): Creates a reader from an input stream. Parameters: inStream, the input stream to read; inEncoding, the encoding to use when reading.
- open(inFileName, inEncoding): Creates a reader on the specified file. Parameters: inFileName, the path of the file to read; inEncoding, the encoding to use when reading the file.
- parseCharset(inEncoding): Returns a character set (Charset).
var fileIn = openTextReader(data.filename);
var tmp = createTmpFile();
var fileOut = openTextWriter(tmp.getPath());
var line;
while ((line = fileIn.readLine()) != null){
    fileOut.write(line.replace(subject,""));
    fileOut.newLine();
}
fileIn.close();
fileOut.close();
deleteFile(data.filename);
tmp.move(data.filename);
tmp.close();

TextWriter

The TextWriter object, returned by the openTextWriter() function, allows you to open a text file, write to it and close it.
A template may contain designs for multiple output channels: a letter intended for print and an e-mail variant of the same message, for example. Content, like the body of the message or letter, can be shared across these contexts. Templates are personalized using scripts and variable data. More advanced users may edit the underlying HTML, CSS and JavaScript directly. The following topics will help to quickly familiarize yourself with the Designer. l "Designer basics" below.
Tip: Alternatively you could start with a Sample Project which creates an entire Connect solution: a Workflow configuration, as well as any Connect templates, data mapping configurations, Job Creation Presets and Output Creation Presets that are used in that configuration. See: "Sample Projects" on page 918. What's next? Create data mapping configurations to extract data from a variety of data sources. See "DataMapper basics" on page 199. Use Workflow to automate your customer communications.
Creating a template

In the Welcome screen that appears after startup, get off to a flying start by choosing Template Wizards. Scroll down to see all the Template Wizards. After deciding which output channel will be prevalent in your template, select a template. The Template Wizards can also be accessed from the menu: click File, click New, expand the Template folder, and then expand one of the template folders.
Opening a package file Templates can also be stored in a package file (see "Creating package files" on page 418). To open a package file, switch the file type to Package files (*.OL-package) in the Open File dialog. If the package contains Print Presets, you will be asked if you want to import them into the proper repositories. Saving a template A Designer template file has the extension .OL-template.
To change which data mapping configuration is linked to the template, open both the template and the data mapping configuration that should be linked to it; then save the template.

Auto Save

After a template has been saved for the first time, Connect Designer can auto save the template at a regular interval. To configure Auto Save:

1. Select the menu option Window > Preferences > Save.
2. Under Auto save, check the option Enable to activate the Auto Save function.
3.
Saving a copy / down-saving a template The Connect software is backwards compatible: templates that were made with an older version of Connect can always be opened with the newest version of the software. But newer templates cannot be opened with an older version of the software.
import that into Workflow. The Send to Workflow dialog sends templates, data mapping configurations and Print Presets to the Workflow server. A data mapping configuration file contains the information necessary for data mapping: the settings to read the source file (Delimiter and Boundary settings), the data mapping workflow with its extraction instructions ('Steps'), the Data Model and any imported data samples. For more information see "Data mapping configurations" on page 199.
To create a custom template report, you need two files:

- A template design with the desired layout and variable data. This .OL-template file has to be made in the Designer.
- A data mapping configuration that provides the variable data. You could use the data mapping configuration made for the standard template report, or create another one in the DataMapper module, using the standard XML template report as a data sample. The DataMapper is included only in PlanetPress Connect and PReS Connect.
- Blank
- Contact Us
- Jumbotron
- Thank You

If you don't know what template to choose, see "Web Template Wizards" on the facing page further down in this topic, where the characteristics of each kind of template are described.

3. Click Next and make adjustments to the settings. The wizard remembers the settings that were last used for a Foundation Web template.

- Section:
  - Name: Enter the name of the Section in the Web context. This has no effect on output.
- A Web context with one web page template (also called a section) in it. The web page contains a Header, a Section and a Footer element with dummy text, and depending on the type of web page, a navigation bar, button and/or Form elements.
- Resources related to the Foundation framework (see "Web Template Wizards" below): style sheets and JavaScript files. The style sheets can be found in the Stylesheets folder on the Resources pane.
across many browsers and devices, and works back as far as IE9 and Android 2. See http://foundation.zurb.com/learn/about.html. Jumbotron The name of the Jumbotron template is derived from the large screens in sports stadiums. It is most useful for informative or marketing-based websites. Its large banner at the top can display important text and its "call to action" button invites a visitor to click on to more information or an order form.
For more information about the use of Foundation in the Designer, see "Using Foundation" on page 528. After creating a COTG template, the other contexts can be added, as well as other sections (see "Adding a context" on page 432 and "Adding a Web page" on page 498). Tip: If the COTG Form replaces a paper form, it can be tempting to stick to the original layout. Although that may increase the recognizability, it is better to give priority to the user-friendliness of the form.
- Time Sheet. The Time Sheet Template is a single page application used to add time entries to a list. This template demonstrates the dynamic addition of lines within a COTG template, as the Add button creates a new time entry. There is no limit to the number of entries on a single page. Submitted data are grouped using arrays (see "Grouping data using arrays" on page 539).

3. Click Next and make adjustments to the settings. The wizard remembers the settings that were last used for a COTG template.
6. Make sure to set the action and method of the form: select the form and then enter the action and method on the Attributes pane. The action of a Capture OnTheGo form should specify the Workflow HTTP Server Input task that receives and handles the submitted data. The action will look like this: http://127.0.0.1:8080/action (8080 is Workflow's default port number; 'action' should be replaced by the HTTP action of that particular HTTP Server Input task).
Tip: Clicking the Edges button on the toolbar temporarily adds a frame to certain elements on the Design tab. These frames do not appear in printed or other output.

Tip: If you have started creating your Capture OnTheGo template using a COTG Template Wizard, you can find ready-made elements in the Snippets folder on the Resources pane.

Resources

This page clarifies the difference between Internal, External and Web resources that may be used in a template, and explains how to refer to them in HTML and in scripts.
you need to use that structure when referring to them in HTML. In scripts, you can refer to them in the same way, for example: results.loadhtml("snippets/en/navbar.html"); See also: "Loading a snippet via a script" on page 831 and "Writing your own scripts" on page 808.

Note: When referring to images or fonts from a CSS file, you need to remember that the current path is css/, meaning you can't just call images/image.jpg.
through URL Parameters: (http://www.example.com/data.json?user=username&password=password) or through HTTP Basic Auth: (http://username:password@www.example.com/data.json). Resources can also be called from a PlanetPress Workflow instance: l "Static Resources", as set in the preferences, are accessed using the resource path, by default something like http://servername:8080/_iRes/images/image.jpg.
3. Select the data type. This impacts the way the data can be handled in a script; for example, if a parameter's type is Number, its value can be used directly in calculations, without having to parse it first. In the Parameters pane, the type of a runtime parameter can be recognized by its icon. 4. Optionally, set a default value. A default value will only be used in the case that there is no actual value coming from the automation tool, e.g.
Accessing runtime parameters

Runtime parameters in a template are accessible in scripts, via merge.template.parameters. (See "Standard Script API" on page 1169.) The merge.template object has a parameters array that provides access to the template's runtime parameters. (See: "template" on page 1292.) A script can read, and could also change, the values. Note however that any runtime parameter's value will be reset with each new record that the template is merged with.
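The type-and-default behaviour described above can be sketched as a small resolver. Everything in this sketch is hypothetical (the parameter names, the definitions object and the resolve function are invented); in a real template the values come from merge.template.parameters:

```javascript
// Hypothetical runtime-parameter definitions: declared type and default value.
var definitions = {
  discount: { type: "number", defaultValue: 0 },
  campaign: { type: "string", defaultValue: "none" }
};

// Resolve a parameter: use the value coming from the automation tool when
// present, otherwise fall back to the declared default.
function resolve(name, incoming) {
  var def = definitions[name];
  if (incoming === undefined || incoming === null) return def.defaultValue;
  return def.type === "number" ? Number(incoming) : String(incoming);
}

console.log(resolve("discount", "15")); // typed as Number → prints: 15
console.log(resolve("campaign", null)); // no incoming value → prints: none
```

This is why declaring the right type matters: a Number parameter can be used directly in calculations, whereas a String value would first need parsing in every script that touches it.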
Adding a context To add a context, right-click the Contexts folder on the Resources pane and click New print context, New email context or New web context. Or use Context > Add in the main menu. Only one context of each type can be present in a template. Each context, however, can hold more than one section; see "Sections" below. Importing a context To import a context, click File > Import Resources... in the menu. See: "Import Resources dialog" on page 889.
Importing a section To import a section from another template, click File > Import Resources... in the menu. See: "Import Resources dialog" on page 889. Remember to copy the related source files, such as images, to the other template as well. Editing a section To open a section, expand the Contexts folder on the Resources pane, expand the respective context (Print, Email or Web) and double-click a section to open it.
- On the Resources pane, expand the Contexts folder, expand the folder of the respective context, right-click the name of the section, and then click Delete.

Caution: No backup files are maintained in the template. The only way to recover a deleted section is to click Undo on the Edit menu until the deleted section is restored, or to revert to the last saved state (click File > Revert on the menu).
3. Choose which CSS files should be applied to this section. The available files are listed at the left. Use the arrow buttons to move the files that should be included to the list at the right. 4. You can also change the order in which the CSS files are read: click one of the included CSS files and use the Up and Down buttons. Note that moving a style sheet up in the list gives it less weight. In case of conflicting rules, style sheets read later will override previous ones.
Print

Connect supports a number of different types of print outputs. These include:

- PCL
- PDF
- PostScript (including the PPML, VIPP and VPS variants)

With the Designer you can create one or more Print templates and merge the template with a data set to generate personal letters, invoices, policies, or any other type of letter you can think of. The Print context is the folder in the Designer that can contain one or more Print sections.
Headers, footers, tear-offs and repeated elements (Master page) In Print sections, there are often elements that need to be repeated across pages, like headers, footers and logos. In addition, some elements should appear on each first page, or only on pages in between the first and the last page, or only on the last page. Examples are a different header on the first page, and a tear-off slip that should show up on the last page. This is what Master Pages are used for.
In the Welcome screen that appears after startup:

- Choose Template Wizards and scroll down until you see the Basic Print templates or ERP templates and select one of them.
- Or choose New Template and select Print, PDF-based Print, or Microsoft Word-based Print.

Alternatively, on the menu select File > New, expand the Template folder, and then:

- Select PDF-based Print or Microsoft Word-based Print.
- Or expand the Basic Print templates or ERP templates folder, select a template type and click Next.
page in the Print section. See "Master Pages" on page 462.

- Scripts and selectors for variable data. The Scripts pane shows, for example, a script called "first_name". This script replaces the text "@first_name@" on the front of the postcard with the value of a field called "first_name" when you open a data set that has a field with that name. See "Variable data in the text" on page 708.
- A script called Dynamic Front Image Sample. This script shows how to toggle the image on the front page dynamically.
- A Print context with one section in it; see "Print context" on page 443 and "Print sections" on page 447.
- One empty Master Page. Master Pages are used for headers and footers, for images and other elements that have to appear on more than one page, and for special elements like tear-offs. See "Master Pages" on page 462.
- One Media. You can see this on the Resources pane: expand the Media folder. Media 1 is the Virtual Stationery that you have selected in the Wizard.
After clicking Next, you can change the settings for the page. The initial page size and bleed area are taken from the selected PDF. When you click Finish, the Wizard creates:

- A Print context with one section in it; see "Print context" on page 443 and "Print sections" on page 447. The selected PDF is used as the background of the Print section; see "Using a PDF file or other image as background" on page 451. For each page in the PDF, one page is created in the Print section.
- One empty Master Page.
- The brackets from the mail merge fields are converted to the @ character.
- The variable is wrapped with a span element.
- A user script is created for each data field.
- The mail merge fields are added to the Data Model of the OL Connect template.

Select File > Add data > From File Data Source to import the corresponding data. Or create a data mapping configuration to fill the Data Model with actual data.

ERP templates

The ERP template wizard creates a business document.
- A Print context with one section in it; see "Print context" below and "Print sections" on page 447.
- One Master Page. Master Pages are used for headers and footers, for images and other elements that have to appear on more than one page, and for special elements like tear-offs. See "Master Pages" on page 462.
- One Media. You can see this on the Resources pane: expand the Media folder. Media 1 is the Virtual Stationery that you have selected in the Wizard.
l The Print context is created and one Print section is added to it. You can see this on the Resources pane: expand the Contexts folder, and then expand the Print folder. The Print context can contain multiple sections: a covering letter and a policy, for example, or one section that is meant to be attached to an email as a PDF file and another one that is going to be printed out on paper.
each record. The sections are added to the output in the order in which they appear on the Resources pane. This order can be changed; see "Print sections" on page 447. It is also possible to exclude sections from the output, or to include a section only on a certain condition that depends on a value in the data; see "Conditional Print sections" on page 738. This can also be done using a Control Script; see "Control Scripts" on page 838.
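As a minimal sketch of conditionally excluding a section with a Control Script: the section name 'Policy' and the field HAS_POLICY are hypothetical, and the mock objects below only stand in for the `merge` and `record` objects that the Designer provides automatically; see "Control Scripts" and the Control Script API for the real environment.

```javascript
// Minimal mock of the Control Script environment, for illustration only;
// inside the Designer, 'merge' and 'record' are provided automatically.
var record = { fields: { HAS_POLICY: 'N' } };
var merge = { template: { contexts: { PRINT: { sections: {
  'Policy': { enabled: true }
} } } } };

// The actual Control Script logic: include the (hypothetical) 'Policy'
// section only when the (hypothetical) HAS_POLICY field equals 'Y'.
var section = merge.template.contexts.PRINT.sections['Policy'];
section.enabled = (record.fields.HAS_POLICY === 'Y');

console.log(section.enabled); // false for this mock record
```

In a real template, only the last two statements would appear in the Control Script; the section is then silently skipped in the output for records that don't meet the condition.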
Setting the bleed

The bleed is the printable space around the page proper. On some printers it can be used to ensure that no unprinted edges occur in the final trimmed document. The bleed is one of the settings for a section. See "Page settings: size, margins and bleed" on page 456.

Overprint and black overprint

Normally, when two colors overlap in Print output, the underlying color is not printed.
Print sections

Print templates (also called Print sections) are part of the Print context. They are meant to be printed directly to a printer or a printer stream/spool file, or to a PDF file (see "Generating Print output" on page 1316). The Print context can also be added to Email output as a PDF attachment; see "Generating Email output" on page 1340. When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record.
Note: When both Media and a Master Page are used on a certain page, they will both be displayed on the Preview tab of the workspace, the Master Page being 'in front' of the Media and the Print section on top. To open the Preview tab, click it at the bottom of the Workspace or select View > Preview View on the menu. See "Media" on page 465 for a further explanation about how to add Media and how to apply them to different pages.
Windows Explorer and select Enhance with Connect. Alternatively, start creating a new Print template with a Wizard, using the PDF-based Print template (see "Creating a Print template with a Wizard" on page 437). To use a PDF file as background image for an existing section, see "Using a PDF file or other image as background" on page 451. Via a Control Script, sections can be added to a Print context dynamically; see "Dynamically adding sections (cloning)" on page 846.
order in which they appear on the Resources pane, so changing the order of the sections in the Print context changes the order in which they are output to the final document. To rearrange sections in a context: l On the Resources pane, expand the Print context and drag and drop sections to change their order. l Alternatively, on the Resources pane, right-click a section in the Print context and click Arrange.
Note: Style sheets that are linked to (i.e. included in) a section show a chain icon in the Resources pane (see "Resources pane" on page 978).

Using a PDF file or other image as background

In the Print context, a PDF file can be used as a section's background. It is different from the Media in that the section considers the PDF to be content, so the number of pages in the section will be the same as the number of pages taken from the PDF file.
and then enter a web address (for example, http://www.mysite.com/images/image.jpg). Note: If a URL doesn't have a file extension, and the option Save with template is not selected, the Select Image dialog automatically adds the filetype parameter with the file extension as its value (for example: ?filetype=pdf (if it is the first parameter) or &filetype=pdf). The filetype, page and nopreview parameters are not sent to the host; they are used internally.
Tip: An alternative to using a PDF as background inside the template is to layer the template (i.e. the PDF output of a Print section) over the background PDF via a Script task in a Workflow process. This is called 'stamping'. In the unusual case where extracting text from the PDF that is the output of a Print section with a PDF background doesn't work, it is recommended to use this method. For more information, see this how-to: Stamping one PDF file on another.
Your printer must support Duplex for this option to work. To enable Duplex or Mixplex printing: 1. On the Resources pane, expand the Print context, right-click the print section and click Sheet configuration. 2. Check Duplex to enable content to be printed on the back of each sheet. 3. When Duplex printing is enabled, further options become available. l Check Omit empty back side for Last or Single sheet to reset a page to Simplex if it has an empty back side.
As of version 2020.2, a page that only has a DataMapper PDF background is no longer seen as empty. This may affect the output of templates created with previous versions. Print clicks If a page is empty, but still sent to a printer, it may be counted as a 'click' on the printer. To avoid this, you could check the Omit empty back side for Last or Single sheet option in the Duplex printing settings. This resets a page to Simplex if it has an empty back side.
Page specific content elements

The specific characteristics of pages make it possible to use these special elements: l Page numbers can only be used in a Print context. See "Page numbers" on the next page to learn how to add and change them. l Conditional content and dynamic tables, when used in a Print section, may or may not leave an empty space at the bottom of the last page.
Whitespace elements: using optional space at the end of the last page

Print sections with conditional content and dynamic tables (see "Personalizing content" on page 708) can have a variable amount of space at the bottom of the last page. It is useful to fill the empty space at the bottom with transpromotional material, but of course you don’t want extra pages created just for promotional data. 'Whitespace elements' are elements that will only appear on the page if there is enough space for them.
l Page count: The total number of pages in the document, including pages with no contents or without a page number. l Content page number: The current page number in the document, counting only pages with contents that are supplied by the Print section. A page that has a Master Page (as set in the Sheet Configuration dialog, see "Applying a Master Page to a page in a Print section" on page 464) but no contents, is not included in the Content page count.
1. On the Resources pane, right-click a section in the Print context and click Numbering. 2. Uncheck Restart Numbering if you want this section's pages to continue the page numbering from the previous section, instead of restarting the page numbering with this section. Note: Even if a section is disabled, so it doesn't produce any output, this setting is still taken into account for the other sections. This means that if Restart Numbering is checked on a disabled section, the page numbering will be restarted on the next section.
4. Click Format. 5. After Widows and Orphans, type the minimum number of lines that should be kept together. Alternatively, manually set the widows and orphans properties in a style sheet: 1. Open the style sheet for the Print context: on the Resources pane, expand the Styles folder and double-click context_print_styles.css. 2. Add a CSS rule, like the following: p { widows: 4; orphans: 3 }

Per paragraph

To change the widow or orphan setting for one paragraph only: 1. Open the Formatting dialog.
Page breaks

A page break occurs automatically when the contents of a section don't fit on one page. Note: Improved page breaking in Connect 2019.1 might affect templates made with earlier versions. See "Known Issues" on page 102.

Inserting a page break

To insert a page break before or after a certain element, set the page-break-before property or the page-break-after property of that element (a paragraph for example; see also "Styling text and paragraphs" on page 682): 1.
Alternatively you could set this property on the Source tab in the HTML (for example: <p style="page-break-before: always;">), or add a rule to the style sheet; see "Styling your templates with CSS files" on page 676.

Adding blank pages to a section

How to add a blank page to a section is described in a how-to: Create blank page on field value.

Master Pages

In Print sections, there are often elements that need to be repeated across pages, like headers, footers and logos.
Initially, the master page that has been created together with the Print context will be applied to all pages in the Print section. After adding more Master Pages, different Master Pages can be applied to different pages; see "Applying a Master Page to a page in a Print section" on the facing page.

Importing a Master Page

To import one or more Master Pages from another template, click File > Import Resources... in the menu. See: "Import Resources dialog" on page 889.
does not collide with the content of the header and footer. To set a margin for the header and/or footer: a. On the Resources pane, expand the Master pages folder, right-click the master page, and click Properties. b. Fill out the height of the header and/or the footer. The contents of a print section will not appear in the space reserved for the header and/or footer on the corresponding master page. 3. Finally, apply the master page to a specific page in a print section.
6. If output documents can be so long that they cannot fit in one envelope, you may check the Repeat sheet configuration option to have the sheet configuration repeat every n number of pages. 7. Click OK to save the settings and close the dialog. Note: Master Pages, Media and Duplex printing options can also be set in a Control Script (see "Control Scripts" on page 838 and "Control Script API" on page 1271). This is especially useful when you need identical sections with different settings.
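As a rough sketch of how such settings might look in a Control Script: the property names on `sheetConfig` below are assumptions, not verified against the Control Script API, and the mock object only stands in for the `merge` object that the Designer provides; consult "Control Script API" on page 1271 for the exact interface.

```javascript
// Mock of the Control Script environment, for illustration only; in the
// Designer, 'merge' is provided automatically. The sheetConfig property
// names used here are assumptions; check the Control Script API.
var merge = { template: { contexts: { PRINT: { sections: {
  'Section 1': { sheetConfig: { duplex: false, tumble: false } }
} } } } };

// Enable duplex (double-sided) printing for one section, keeping
// long-edge binding (no tumble).
var section = merge.template.contexts.PRINT.sections['Section 1'];
section.sheetConfig.duplex = true;
section.sheetConfig.tumble = false;

console.log(section.sheetConfig.duplex); // true
```

The point of doing this in a script rather than in the Sheet Configuration dialog is that one script can apply different settings to otherwise identical sections, as the note above describes.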
Specifying and positioning Media

Specifying a PDF for the front: the fast way

To quickly select a PDF file for the front of a Media, drag the PDF file from Windows Explorer to one of the Media. The Select Image dialog opens; select an image and check the option Save with template if you want to insert the image into the Images folder on the Resources pane. (For PDF files selected by URL this option is always checked.)
is "localhost", it can be omitted, resulting in file:///, for example: file:///c:/resources/images/image.jpg. l Url lists image files from a specific web address. Select the protocol (http or https), and then enter a web address (for example, http://www.mysite.com/images/image.jpg).
Setting the paper's characteristics

To set a Media's paper characteristics: 1. On the Resources pane, expand the Contexts folder, expand the Media folder, and right-click the Media. Click Characteristics. 2. Specify the paper's characteristics: l Media Type: The type of paper, such as Plain, Continuous, Envelope, Labels, Stationery, etc. l Weight: The intended weight of the media in grammage (g/m²).
1. On the Resources pane, expand the Print context; right-click the Print section, and click Sheet configuration. 2. Optionally, check Duplex to enable content to be printed on the back of each sheet. Your printer must support duplex for this option to work. If Duplex is enabled, you can also check Tumble to duplex pages as in a calendar, and Facing pages to have the margins of the section switch alternately, so that pages are printed as if in a magazine or book. 3.
results.attr("content","Media 1"); Media 1 will have been replaced with the name of the media selected for the chosen sheet position. The field Selector in the Script Wizard contains the name of the section and the sheet position that you have chosen. 4. Change the script so that on a certain condition, another media will be selected for the content. For instance: if(record.fields.GENDER === 'M') { results.attr("content","Media 2"); } This script changes the media to Media 2 for male customers.
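Extending the example above with an explicit fallback branch: the GENDER field and the media names follow the example in the text, but your field and media names will differ. The mock objects below only stand in for the `record` and `results` objects that the Designer's script context provides.

```javascript
// Mock of the scripting environment, for illustration only; in the
// Designer, 'record' and 'results' are provided by the script context.
var content = null;
var record = { fields: { GENDER: 'F' } };
var results = { attr: function (name, value) {
  if (name === 'content') { content = value; }
} };

// Select a media based on a data field, falling back to the default
// media when the condition is not met.
if (record.fields.GENDER === 'M') {
    results.attr('content', 'Media 2');
} else {
    results.attr('content', 'Media 1');
}

console.log(content); // 'Media 1' for this mock record
```

In the template itself, only the if/else block would appear in the script; the else branch simply restores the media that the Script Wizard selected by default.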
Printing virtual stationery

Media are not printed, unless you want them to be. Printing the virtual stationery is one of the settings in a Job Creation Preset. To have the virtual stationery printed as part of the Print output: 1. Create a job creation preset that indicates that Media has to be printed: select File > Presets and see "Output Creation Presets Wizard" on page 1084 for more details. 2. Select that job creation preset in the Print Wizard; see "Generating Print output" on page 1316.
l The contents of the Print context, in the form of a single PDF attachment. (Compression options for PDF attachments can be specified in the Email context's properties; see "Compressing PDF attachments" on page 479.) l The output of the Web context, as a self-contained HTML file. l Other files, an image or a PDF leaflet for example. Attaching the Print context and/or the Web context is one of the options in the "Send (Test) Email" on page 935 dialog.
Nesting tables (putting tables in table cells) and applying CSS styles to each table cell to make the email look good on all screen sizes is precision work that can be tedious and demanding. Connect's Designer offers the following tools to make designing HTML email easier.

Email templates: Slate and others

The most obvious solution offered in the Designer is to use one of the templates provided with the Designer; see "Creating an Email template with a Wizard" on page 475.
All standard abbreviations can be found in Emmet's documentation: Abbreviations. To learn more about Emmet, please see their website: Emmet.io and the Emmet.io documentation: http://docs.emmet.io/.

Preferences

To change the way Emmet works in the Designer, select Window > Preferences, and in the Preferences dialog, select Emmet; see "Emmet preferences" on page 794.
Use background images wisely

Most mail clients do not support background images: a very good reason to stay away from them in your mainstream email campaign. There is one situation in which they do come in handy. Both iPhone and Android default mail have solid CSS support and cover most of the mobile marketspace. You could use background images to substitute images when viewed on these devices. This is done by hiding the actual image and showing a mobile-friendly image as background image instead.
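A sketch of this technique in CSS: the class name, image path, breakpoint and dimensions are all illustrative assumptions, not part of any template wizard; adjust them to your own design.

```css
/* Illustrative only: on small screens, hide the regular image and
   show a mobile-friendly background image in its place. The class
   name, breakpoint, image path and sizes are assumptions. */
@media only screen and (max-width: 480px) {
    .hero img {
        display: none;
    }
    .hero {
        background-image: url('images/hero-mobile.jpg');
        background-size: cover;
        background-repeat: no-repeat;
        width: 100%;
        height: 200px;
    }
}
```

Because desktop clients with weak CSS support ignore the media query, they keep showing the regular image, while the mobile clients mentioned above apply the background image instead.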
l Select Email Template. This starts the Basic Action Email wizard. l Or expand the Template folder, and then expand the Basic Email templates folder, the Banded Email templates folder, or the Slate: Responsive Email Templates by Litmus folder. See "Email Template Wizards" on the next page for information about the various types of Template Wizards. 2. Select a template and click Next.
Use the Attributes pane at the right to see the current element's ID, class and some other properties. Use the Styles pane next to the Attributes pane to see which styles are applied to the currently selected element. Note that the contents of the email are arranged in tables. The many tables in an Email template ensure that the email looks good on virtually any email client, device and screen size. As the tables have no borders, they are initially invisible.
The Banded Email Invoice Template is an invoice with an optional Welcome message and Pay Now button.

Settings

For a Blank email you cannot specify any settings in the Wizard. For an Action or Invoice email, the Email Template Wizard lets you choose: l The subject. You can change and personalize the subject later, see "Email header settings" on page 483. l The text for the header. The header is the colored part at the top. The text can be edited later.
l A style sheet, named context_htmlemail_styles.css, is added to the template. Depending on which Template Wizard was used to create the template, another style sheet can be added as well. Style sheets are located in the folder Stylesheets on the Resources pane. These style sheets are meant to be used for styles that are only applied to elements in the Email context.
1. On the Resources pane, expand the Contexts folder; then right-click the Email context and select PDF Attachments. Alternatively, select Context > PDF Attachments on the main menu. This option is only available when editing an Email section in the Workspace. 2. Change the properties of the PDF file that will be attached when the Print context is attached to the email. Lossless is the maximum quality. Note that this will produce a larger PDF file. Uncheck this option to be able to set a lower quality.
For information about attachments see "Email attachments" on page 489. A plain-text version of the HTML is added to each email if the option is checked in the Email section's properties (see "Properties tab" on page 931). With new templates this is always the case.

Adding an Email template

When an Email template is created (see "Creating an Email template with a Wizard" on page 475), only one Email section is added to it.
Styling and formatting an Email template

The contents of an Email section can be formatted directly, or styled with Cascading Style Sheets (CSS). See "Styling and formatting" on page 671. Email clients do not read CSS files and some even remove a