HR connector (import)

HR connector

The HR connector is part of the Native connectors solution and uses a ready-made script for the Generic Python Script connector. With this native HR connector, customers can import personal and organizational data from their HR solution. The connector has been designed for the IGA solution, but it can be modified for use in all Pro and IGA solutions. It supports the XML, CSV and JSON formats, and it contains data validation and comparison against the previous file.

When the HR connector is taken into use, the customer's responsibilities are:

  1. Generate the file(s) according to the instructions described in the chapter "Generate file from HR solution"
  2. Schedule the file generation according to the schedule agreed in the delivery
  3. Provide an SFTP server (and access credentials) where the files are delivered and from where the HR connector can fetch them

 

How does the HR connector work?

 

General overview

The HR connector uses the Generic Python Script connector to read data from the file. The connector is designed for user lifecycle management use cases in IGA solutions. Using it for another purpose requires exporting the script from the host machine and modifying it according to customer requirements. Note that it is also possible to create a new script from scratch and use the custom script connector to validate and import it.

The HR connector supports one or several work periods, as long as the work periods are generated as described in the "Generate file from HR solution" chapter.

 

Basic principles of the HR connector:

1. The connector uses extra arguments, defined in the schedule-based provisioning task, to find the file at the given path (if an extra argument is missing, the process stops).

2. Files are transformed to JSON format (dict) and stored as JSON.

3. Each file can be run as a full or a partial run.

4. A partial run always compares the data to the archived file (if the archive file is missing, the process stops).

5. In a JSON-formatted file, all mapped attributes are mandatory fields, and the task stops the import if a mandatory attribute is missing from any imported object. The CSV format works differently: it is enough that all mapped attributes are present as columns in the header row.

6. After the import has been executed, the connector creates an IGA admin task in the IGA solution, including the errors and the JSON files containing the archived and imported rows.
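Principles 2 and 4 can be sketched as a comparison of the freshly parsed rows against the archived JSON. This is a minimal illustration of a partial-run comparison, not the actual IGAConnector.py logic; the function name and the row-ID key are assumptions:

```python
import json

def compare_to_archive(current_rows, archive_path, id_key="objectGUID"):
    """Return rows that are new or changed compared to the archived run.

    Mirrors principle 4: opening a missing archive raises an error,
    which would stop a partial run.
    """
    with open(archive_path, encoding="utf-8") as f:
        archived = {row[id_key]: row for row in json.load(f)}
    return [row for row in current_rows if archived.get(row[id_key]) != row]
```

A partial run would then push only the returned rows forward, which is why the update_limit check only applies to partial runs.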

 

Good to remember!

After the scheduled task has been saved for the HR connector, reviewing the extra arguments requires the password; store it in a secure place so that you can implement possible changes to the arguments later.

 

 

Connector files:

File (on EPE-worker container) | Description
/tmp/custom-provisioning-scripts/3_log.txt | Contains the logs. The number in the file name is the task ID.
/tmp/custom-provisioning-scripts/3_output.json | Includes the rows that passed validation and should be imported to the IGA solution. The number in the file name is the task ID.
/tmp/custom-provisioning-scripts/data/archive/(data_type)_archive.json | Stores all rows from the file that has been run, both valid and invalid. The file is overwritten on each run.
/tmp/custom-provisioning-scripts/data/archive/error_(data_type)_archive.json | Stores the rows that did not pass validation during the run. These rows are not imported to the IGA solution. The file is overwritten on each run.
/tmp/custom-provisioning-scripts/only_scheduled/IGAConnector.py | The connector script. There can be many scripts in that folder; in the Task UI you choose which script is run by which task.
/tmp/custom-provisioning-scripts/misc-files/translation.json | 1. Used for translating report cases to the given language. 2. New languages can be added by adding a language code, e.g. "de" or "en", under all cases with the key "text". Every report case needs to have the new language, otherwise the connector runs into an error. 3. To use a specific language, the extra arguments need the key "language" with a value found in translation.json.
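As an illustration of the translation.json rules above, the sketch below shows an assumed file structure and a check that every report case contains the requested language. The case names and the exact structure are assumptions for illustration, not the shipped file:

```python
# Assumed structure: each report case has a "text" object with one
# entry per language code. The case names here are illustrative only.
translation = {
    "file_missing": {"text": {"en": "File was not found",
                              "de": "Datei wurde nicht gefunden"}},
    "update_limit_exceeded": {"text": {"en": "Update limit exceeded",
                                       "de": "Update-Limit ueberschritten"}},
}

def missing_language(translation, language):
    """Return the report cases that lack the requested language.

    Per the description above, every case must contain the new
    language key, otherwise the connector runs into an error.
    """
    return [case for case, entry in translation.items()
            if language not in entry.get("text", {})]
```

A non-empty result would mean the connector errors out when that language is requested via the "language" extra argument.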
 
 

Generate file from HR solution

The HR connector expects that the file is generated from the HR solution based on the following logic and that the file is stored on the customer's SFTP server, where the connector can fetch it.

Note

The file encoding needs to be UTF-8.

It is recommended to use the Linux line separator (LF).

 


If you use the CSV file format, the default field separator used by the HR connector script is ; (semicolon).
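A semicolon-separated file could be read as sketched below, combined with the header-row rule from basic principle 5 (all mapped attributes must appear as columns). The column names and helper are illustrative assumptions, not part of the shipped script:

```python
import csv
import io

# Example mapping only; real mapped attributes come from the Task UI.
MAPPED_ATTRIBUTES = ["objectGUID", "First name", "Last name"]

def read_hr_csv(text, mapped=MAPPED_ATTRIBUTES, delimiter=";"):
    """Parse a semicolon-separated HR file; all mapped attributes
    must appear as columns in the header row (basic principle 5)."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    missing = [a for a in mapped if a not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"Missing mapped columns: {missing}")
    return list(reader)
```

Unlike JSON input, individual rows are not rejected here for empty values; only the header must carry every mapped attribute.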

Generating the file varies according to the HR solution the customer is using. Note especially that the IGA solution requires information both from the past and from the future.

For Workday HR solutions, it is recommended to create a custom report in Workday that reports the needed information, and to use the JSON format for that report.

Good to know!

Sometimes HR solutions generate a new last-update timestamp in situations where the information does not need to be delivered to the solution, for example when salary payments are run before payday. This unnecessary data causes a lot of provisioning towards the target systems, and solving these issues usually takes time from the customer's admins and agents.

It is recommended to validate the data before full automation is started, so that data quality is ensured. The Matrix42 IGA solution contains the possibility to simulate changes (read more from here).

 

 

Examples of the information used for generating the required file:

Logic | Additional information
All users with active work period(s) | Users are selected based on work period dates.
All users with an active work period in the future | One month ahead.
All users with an active work period in the past | One month back; this is needed if changes to user information are made after the employment has already ended.
Users are sorted based on unique ID | The unique ID can be either the employee number or the social security number.
Last update timestamp | Users can be selected based on the last-update timestamp, but check with the customer's HR specialist which changes update the timestamp.
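The selection logic in the table above can be sketched as a date-window filter. This is an illustrative sketch based on the table; the field names and the fixed one-month window are assumptions:

```python
from datetime import date, timedelta

def select_users(rows, today, window_days=30):
    """Select rows whose work period is active now, starts within the
    next month, or ended within the past month (see the table above)."""
    earliest = today - timedelta(days=window_days)
    latest = today + timedelta(days=window_days)
    selected = [r for r in rows
                if r["start"] <= latest
                and (r["end"] is None or r["end"] >= earliest)]
    # Sort by the chosen unique ID, e.g. the employee number
    return sorted(selected, key=lambda r: r["employee_id"])
```

Including recently ended work periods is what allows corrections made after an employment has ended to still reach the IGA solution.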

 

 
 

Recommendations related to the SFTP server folder structure

If the customer wants to use the HR connector to import many different types of data, it is recommended to have one HR connector per data type.

For example, if the customer wants to import organizations and users, they should have two connectors, such as "HR connector Organizations" and "HR connector Users".

With those two connectors' files on the same SFTP server, the folder structure could be, for example:

/sftp/iga/in/organization

/sftp/iga/in/organization/archive

/sftp/iga/in/user

/sftp/iga/in/user/archive

The data folder should contain only one file at a time.

With that folder structure, you can use these script parameters for HR connector Organizations:

"sftp_remote_path_data":"/sftp/iga/in/organization",

"sftp_remote_path_archive":"/sftp/iga/in/organization/archive",

"file_path":"./organization/*.csv"

And similarly, use different script parameters for HR connector Users.

 
 

Parameters, validation & exceptions

Parameters

Parameters are used for defining the settings of the HR connector and are validated when the script is run.
Remember to store the encryption password in a secure place, in case you need to review the parameters after you have saved the scheduled task.

The HR connector uses the following parameters:

Parameter | Description | Mandatory / optional
file_path | Path and file pattern where the file is located. | Mandatory
data_type | Values: costcenter, organization, title or workperiod. | Mandatory
update_limit | How many updates are allowed to the Pro/IGA solution. The limit is only used when the run type is partial. | Mandatory
run_type | Specifies whether comparison to the previous file will be done. Values: full or partial; default is partial. | Mandatory
override_update_limit | Overrides the update limit if the amount of updates is more than the given limit. Values: true or false; default is false. | Mandatory
minimum_row_count | The minimum amount of rows that should exist in the file. | Optional
webapi_user | Web API user (ESM user) for report creation. Requires create and update permissions to the IGA admin task. | Mandatory
webapi_password | The Web API user's password. | Mandatory
webapi_url | URL to the Pro/IGA solution's Web API. | Mandatory
report_template_code | Target template code. | Mandatory
report_folder_code | Target folder code. | Mandatory
debug | Enables debug logging. Values: true or false; default is false. If you set debug to true you get more detailed log messages, but the task runs slightly slower. Logs can be found in the epe-worker container's /custom-provisioning-scripts folder. | Optional

 

Validation

Validation means that the HR connector checks that all mandatory information related to users has been received from the HR solution; if something is missing, the HR connector by default creates an IGA admin task.

The customer can define more attributes to be read, outside of the list below, but this requires modifying the HR connector's script (read more in the expansion possibilities chapter).

 

Attribute | Description | Mandatory / optional
Row ID | Unique ID for the user's work period in the file. | Mandatory
Last name | The user's last name, required for example when generating an email address for the user. | Mandatory
First name | The user's first name, required for example when generating an email address for the user. | Mandatory
Social security number / national ID | Required if selected as the unique ID for the user (or, for example, if bank ID authentication is also delivered). | Mandatory / optional
Employee number (employee ID) | Required if selected as the unique ID for the user. | Mandatory / optional
Employment start date | When the user's work period started or is going to start. | Mandatory
Title ID | Unique title ID for the user's title (relation to the title name). | Mandatory
Manager ID | Unique ID for the user's manager, usually the manager's employee ID. | Mandatory
Organizational unit ID | Unique organizational unit ID (relation to the organizational unit name). | Mandatory
Spoken name | Can be used, for example, in the email address. Commonly used in Scandinavia, where a user can select which of their official names they are called by. | Optional
Second name | Commonly used for identifying users with the same name; if namesakes appear, the second name's initials are used in the email address. | Optional
Employment end date | When the user's work period is ending or has ended; used for starting off-boarding processes. | Optional
Employment type | Permanent, temporary, hour-based, external. | Optional
Is user a manager? | Users' titles do not always indicate whether the user is a manager, and in some HR solutions this information is separately available in its own attribute. | Optional
Title | Name for the user's title ID. | Optional
Manager name | Name for the user's manager ID. | Optional
User type | Internal, external. | Optional
Cost center | The user's cost center name. | Optional
Organizational unit name | Recommended to be read from the HR solution, especially when the customer wants to update organizational unit information in directories or applications and/or when IGA automated rules are used. | Optional

 

Exceptions

The customer can define in which template, folder and attribute the HR connector generates the following exceptions:

Exception | Description | IGA admin actions
File is not received or is empty | If the file does not exist on the customer's SFTP server or it is empty, an IGA admin task is generated for more detailed troubleshooting. | The IGA admin checks whether there is a reason why the file was not generated from the HR system or why it is empty. The file needs to be re-generated and the HR connector's scheduled task run again.
User is missing a unique ID | Users with a unique ID are imported, and an IGA admin task is created for manual review of the users who were missing the information. | The user's unique ID must be entered into the HR solution, the file re-generated and the scheduled task run manually.
Scheduled task is missing extra arguments | The mandatory extra arguments are: file_path, data_type, update_limit, webapi_user, webapi_password, webapi_url, report_template_code. | The extra arguments need to be added to the schedule-based provisioning task and the task re-run.
Archive file is missing | The archive file is mandatory for a partial run to complete. | An IGA admin task is created for solving the issue with the missing file; when the file is found, the scheduled task needs to be re-run.
Missing mandatory field (object) or value of the field | The row is blocked and the import is finalized. | The IGA admin can see from the connector report (IGA admin task) whether any rows were blocked. The information needs to be added to the source system, the file re-generated and the task run again.
Update limit (update_limit) is exceeded | The import is interrupted (not run). | An IGA admin task is created for solving the issue and re-running the scheduled task.
Minimum row count is not reached | The import is interrupted (not run). | An IGA admin task is created for solving the issue and re-running the scheduled task.
Multiple files match the naming criteria on the SFTP server | The import is interrupted (not run). | The extra files need to be removed and the HR connector's scheduled task re-run.
 
 

Configuration instructions

These instructions guide you through taking a pre-configured HR connector into use or creating one from scratch. Note that modifying the script requires Python scripting skills.

 

Pre-requirements

Please make sure that the following tasks are done before running the scheduled task for the HR connector:

  1. The IGAConnector.py script can be found in the customer's test and production environments
    1. If it is missing, please contact Matrix42 Service Desk
  2. The HR connector is configured in the customer's Pro/IGA test environment
  3. The HR connector is tested in the customer's Pro/IGA test environment
    1. Create the needed test data, test users and test cases
  4. Data quality has been confirmed by the customer's HR specialists and the future IGA admin
  5. The file is correctly generated, scheduled and stored on the customer's SFTP server
  6. For the IGA solution, implement the configuration instructions for user lifecycle management use cases
    1. IGA set work period information data card(s) are in place
    2. IGA set account information data card(s) are in place
    3. The simulation phase has been agreed, and the IGA admin validates the received data xx period of time before provisioning (once data quality has been ensured, full automation can be enabled by changing settings in the IGA set work period information data card(s))
 

 

General guidance for scheduled tasks


How to Create New Scheduled Task to import data

For configuring a schedule-based provisioning task, you need access to the Administration / Connectors tab.

1. Open the Administration area (the cogwheel symbol).

2. Open the Connectors view.

3. Choose the connector for the schedule-based task and select New Task.
   Note! If the connector has not been created yet, you first have to choose New connector and after that New Task.

 

4. Continue with the connector-specific instructions: Native Connectors

 
 

Should I use Incremental, Full or Both?

A scheduled task can be either Incremental or Full type.

Do not import permissions with AD and LDAP incremental tasks

The incremental task has an issue with importing permissions. At the moment it is recommended not to import group memberships with an incremental scheduled task.

On Microsoft Active Directory and OpenLDAP connectors, remove this mapping on the incremental task:
 

 

 

Settings on scheduled tasks:

The Incremental type is supported only for the Microsoft Active Directory, LDAP and Microsoft Graph API (formerly known as Entra ID) connectors.

Incremental type means that Native Connectors (EPE) fetches data from the source system using changed-timestamp information, so it fetches only data that has changed or been added after the previous incremental task run.

When an incremental task is run for the very first time, it does a full fetch (and marks the current timestamp in the EPE database). Thereafter, the task uses that timestamp to ask the data source for data that has changed since then (and EPE then updates the timestamp in the EPE database for the next task run). Clearing the task cache does not affect this timestamp, so an incremental task is always incremental after the first run.
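The timestamp handling described above can be sketched like this; an illustrative model only, not actual EPE code (the class, method names and in-memory timestamp are assumptions, since EPE persists the timestamp in its database):

```python
from datetime import datetime, timezone

class IncrementalTask:
    """Sketch of the incremental-fetch timestamp logic described above."""

    def __init__(self, source):
        self.source = source
        self.last_run = None  # persisted in the EPE database in reality

    def run(self):
        now = datetime.now(timezone.utc)
        if self.last_run is None:
            data = self.source.fetch_all()  # first run: full fetch
        else:
            data = self.source.fetch_changed_since(self.last_run)
        self.last_run = now  # stored for the next run
        return data
```

This also shows why clearing the task cache does not make the task full again: the timestamp lives separately from the cache.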
 

The Full type is supported for all connectors.

A Full type import always fetches all the data (that it is configured to fetch) from the source system, on every run.
 

Both Full and Incremental type tasks also use the task cache in EPE, which makes certain imports faster and lighter for the M42 system.

By default, the task cache is cleared at midnight UTC. When the cache is cleared, the next import runs without using the cache to decide whether fetched data should be pushed to ESM: all fetched data is pushed to ESM. After that, subsequent task runs, until the next time the cache is cleared, use the EPE cache to determine whether fetched data needs to be pushed to ESM or not.

You can configure at what time of day the task cache is emptied by changing a global setting in the EPE datapump configuration file:

/opt/epe/datapump-itsm/config/custom.properties

which is by default set to: clearCacheHours24HourFormat=0

You can also clear the cache several times a day, but this needs to be considered carefully, as it has an impact on overall performance: EPE will push changes to ESM that probably are already there. Example (do not add spaces to the attribute value): clearCacheHours24HourFormat=6,12

After changing this value, reboot the EPE datapump container to take the change into use.

Recommendations:

Always have a Full type scheduled task by default.

If you want to fetch changes to data already fetched by the full task more frequently than you can run the full task, also add an incremental task. Usually an incremental task is not needed.

 
 

Recommended Scheduling Sequence

The recommended scheduling sequence depends on how much data is read from the customer's system/directory into the Matrix42 Core, Pro or IGA solution, and on whether the import is Incremental or Full.

Examples for scheduling:

Total amount of users | Total amount of groups | Full load sequence | Incremental load sequence
< 500 | < 1000 | Every 30 minutes if partial load is not used; four (4) times a day if partial load is used | Every 10 minutes
< 2000 | < 2000 | Every 60 minutes if partial load is not used; four (4) times a day if partial load is used | Every 15 minutes
< 5000 | < 3000 | Every four (4) hours if partial load is not used; twice a day if partial load is used | Every 15 minutes
< 10 000 | < 5000 | Imports at most twice a day, whether or not partial load is used | Every 30 minutes
< 50 000 | < 7000 | Imports at most once a day, whether or not partial load is used | Every 60 minutes
Over 50 000 | Over 7000 | There might be a need for another EPE worker; please contact the Product Owner | Separately evaluated

Please note that if several tasks run at the same time, you may need more EPE workers. The tasks should be scheduled at different times, following the table above. However, if more than six tasks run at the same time, the number of EPE workers should be increased. It is best practice not to schedule tasks to run at the same time, if possible.

Recommendations related to performance

If the amount of data to be imported is over 10 000, consider these things:
  • Adjust the log level of ESM and DATAPUMP to ERROR level, to lower the amount of logging during the task run
  • Have as few automations as possible starting immediately for imported data cards (listeners, handlers, workflows), as those make ESM take longer to handle the new data cards

 
 

Set the status of removed accounts and entitlements to removed/disabled

With this functionality, you can set the account and entitlement status to e.g. Deleted or Disabled when the account or entitlement is removed from the source system. Starting from version 2025.3, you can also set the status of generic objects (not only accounts/identities and entitlements/groups).

For version 2025.3 and newer

In version 2025.3, these settings were moved from the properties files to the Task UI. You can now also set these settings for generic objects, which was not possible before this version.

There is a separate configuration for each scheduled task and for all mapping types. Here is an example of this configuration on a task:

For version 2025.2 and older

This functionality is available for Full type scheduled tasks.

The settings are in the datapump container's configuration file. To change the parameter values, set them in the /opt/epe/datapump-itsm/config/custom.properties file.

Configuration

To enable the disabling functionality, the datapump configuration should have these parameters set to true:

disable.unknown.esm.users=true
disable.unknown.esm.groups=true

These two parameters are false by default in versions 2024.2 and 2025.1. In 2025.2 and newer versions they are true by default.

 

Next are these parameters:

personTemplateStatusCodeAttributeKey=accountStatus
personTemplateStatusAttributeDisabledValueKey=Deleted
groupTemplateStatusCodeAttributeKey=status
groupTemplateStatusAttributeDisabledValueKey=5 - Removed

The first two parameters should point to the DatacardHiddenState attribute in the user template and tell which value should be sent there when the user is deleted.

By default, it is the accountStatus attribute with the value 5 - Removed on the IGA Account template.

All of these need to match the attribute configuration:

 

[Image: 1.PNG]

The same applies to the next two parameters, but for groups.

If you need to change those parameters in the properties file, make the changes in the datapump container in the file /opt/epe/datapump-itsm/config/custom.properties; those changes will then survive container reboots and will be copied on reboot to /opt/epe/datapump-itsm/config/application.properties.

Description

Tasks save their taskid (shown as the Task Id mapping in the UI) to the data cards; it is then used to determine whether a data card was added by this task, in case there are multiple tasks with different sets of users.

This field was previously used as the datasourceid, but since we moved to a model where a connector can have multiple tasks, the connector identifier cannot be used anymore; that is why the field was repurposed as the taskid instead.

 

Taking users as an example: when the task runs, ESM is asked for the list of users that have this task's taskid in the Task Id mapping field and do not have the personTemplateStatusAttributeDisabledValueKey value in the personTemplateStatusCodeAttributeKey attribute.

This result is then compared to what the task fetched, and the data cards of users that were not fetched have their personTemplateStatus attribute set to the value specified in the configuration, 5 - Removed by default.
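The comparison described above amounts to a set difference. The sketch below is a simplified illustration, not the actual EPE implementation; the identifiers are made up for the example:

```python
def find_removed(esm_active_ids, fetched_ids):
    """Users known to ESM for this taskid (and not already disabled)
    that the current full fetch did not return are considered removed."""
    return sorted(set(esm_active_ids) - set(fetched_ids))

removed = find_removed(["u1", "u2", "u3"], ["u1", "u3"])
# Each id in `removed` would get its status attribute set to the
# configured disabled value, e.g. "5 - Removed".
```

This is also why the feature only works with full fetches: an incremental fetch returns only changed users, so the difference would wrongly flag everyone else as removed.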

The example log below shows the described process and informs that one user was removed.

 

[Image: 2.PNG]

The same applies to groups, but the groupTemplateStatus attributes are used instead.

Notes

  • The feature works only with full-fetch scheduled tasks.
  • No support for generic templates yet, only identity and access.
  • When migrating from previous versions where the datasourceid was still used, the task needs to run at least once to set its taskid in the data cards first.
  • EPE identifies disabled users or groups as the ones that were removed from the AD; at present we do not support statuses related to the entity being active or not.
  • EPE does not enable users back on its own.
  • If more than one task fetches the same users or groups, the taskid in the data card may be overwritten depending on which task ran last. It is suggested that multiple full type tasks do not fetch the same user or group.
  • Always make configuration file changes in custom.properties; do not change only application.properties, as those changes are lost on container reboot if you have not made the same changes in custom.properties.
 
 

 

 
 

Configuring HR connector and scheduled task

For accessing connector management, the user needs permissions to Platform configuration.

1. Create a new connector:

  • Open the solution administration area (the gear symbol).
  • Open the Connectors view.
  • Choose New connector.

 

2. Select Data Source type to be Custom Backend

 

3. Fill in the information

  • Fill in a unique connector name; note that the name cannot be changed afterwards.
  • Select the IGAConnector.py script in the provisioning script field.
  • The parameters encryption password is needed for hiding/revealing the parameters after the connector has been saved.
  • The Web API user is needed when the HR connector writes data to the customer's Pro/IGA solution.
  • The Web API password is needed when the HR connector writes data to the customer's Pro/IGA solution.

4. Reveal and modify the parameters according to the customer's definitions

  • Save the encryption password in a secure place; you will need it later if the parameters need to be adjusted.

    Example of the parameters:
    {
        "objectGUID_field_name": "objectGUID",
        "sftp_server_fqdn": "transfer.xx.com",
        "sftp_server_ssh_port_number": "22",
        "sftp_username": "username_here",
        "sftp_password": "password_here",
        "sftp_remote_path_data": "/sftp/data",
        "sftp_remote_path_archive": "/sftp/archive",
        "file_path": "./data/workperiod*.csv", // Path and file pattern where the file is located. Mandatory value
        "data_type": "workperiod", // Values: costcenter, organization, title or workperiod. Mandatory value
        "run_type": "partial", // Specifies if comparison to the previous file will be done. Values: full or partial. Optional value, default is 'partial'
        "update_limit": 100, // How many updates are allowed to the IGA system. Mandatory value
        "override_update_limit": false, // Overrides the update limit if the amount of updates is more than the given limit. Values: true or false. Optional value, default is 'false'
        "minimum_row_count": 10, // The minimum amount of rows that should exist in the file. Optional value
        "webapi_user": "IGA.WebAPI", // Web API user for report creation. Requires Create and Read permissions to the IGA Administration task. Mandatory value
        "webapi_password": "aeiou", // Web API user's password. Mandatory value
        "webapi_url": "https://eisdemo.efectecloud-demo.com", // URL to the ESM environment. Mandatory value
        "report_template_code": "IGAWorkflowTaskInformation", // Target template code. Mandatory value
        "report_folder_code": "iga_tasks", // Target folder code. Mandatory value
        "debug": false, // Enables debug logging. Values: true or false. Optional value, default is 'false'
        "language": "en" // Sets the IGA Administration Task report language. Optional value, default is 'en'
    }

 

5. Create a new scheduled task for the HR connector

 

6. Fill in the Task Details

  • A unique task name for the task; note that the name cannot be changed afterwards.
  • Task usage is set to scheduled.
  • Mappings type is set to generic (one template).
  • Attribute name for objectGUID: write the name of the input data column/attribute that contains the unique identifier of the row, for example objectGUID.

 

7. Fill in the failure information

These are optional settings for failure handling: if the scheduled task fails, it can create a data card that displays the error. If the failure settings are defined, the administrator does not need to manually check the status of scheduled tasks.

  • Failure template - select the template of the data card that will be created in case of any errors during provisioning (connection to data sources, timeouts, etc.)
  • Failure folder - select the folder where the failure data card is stored.
  • Failure attribute - select the attribute of the failure template in which the error information should be stored.

 

8. Fill in the mappings for the generic template

Generic data is read into any template the user wants. It is mandatory to set the target folder, the task ID and the unique values that are used for identifying data between the customer's HR solution and the Matrix42 Pro/IGA solution.

  • Target template - select a template to define the attribute mappings.
  • Target folder - select a folder from the list of folders. The list is narrowed down to folders compatible with the selected template.
  • Data Source Type mapping - optional. Contains the constant string of the Generic Python Script connector type.
  • Task Id mapping - mandatory; map it, for example, to the datasourceid or taskid attribute. It is used by the "Set value to datacard for objects deleted from source system" feature. The same attribute must also be in the mappings table.
  • Attribute mappings:
    1. HR attribute (external attribute) - which attribute from the file is mapped to Pro/IGA
    2. Local attribute - to which attribute in Pro/IGA the data is mapped

 

9. If you need to modify the actual script, it needs to be downloaded from the customer's host machine, modified and uploaded again.

  • If you do not have access to the host machine, please contact Matrix42 Service Desk for assistance.

10. Save your changes

11. Before running the scheduled task for the HR connector:

  • Check that all configuration related to workflows and settings in data cards is in place.
  • When running for the first time, make sure that provisioning towards directories and applications is disabled.

12. Run the task manually or wait until the scheduled run starts.

13. Validate that the data has been read into the correct data cards and that the correct attributes are in place.

14. Troubleshooting

  • If a failure template is used, check the corresponding data card.
  • Check the scheduled task history in connector management.
  • Check the logs.

 

Good to remember!

Remember to save all passwords in a secure place.

 
 
 

Is the HR connector suitable for my use case?

Choose the HR connector if:

  • The HR file is in flat XML, CSV or JSON format
  • The customer has an SFTP server
  • HR information is received from the past and from the future
  • The data type is cost center, organization, title or work period
  • The data contains at least Last name, First name, Employment start date, Title ID, Manager ID and Organizational unit ID
  • The data contains at least one of these unique values: social security number / national ID / employee number
     
 
 

How to customize import

How to create and modify custom Python scripts

General info

The Generic Python Script connector comes with a couple of out-of-the-box Python scripts. You should not modify those. Instead, if you need to make modifications, copy a script under a new name and make your modifications to the newly named Python script file. As a last step, modify your connector to use that script.

Work order:

  1. Create your custom scheduled or task-based script
  2. Take it into use in the connector from the Connectors UI

Scheduled scripts

How to take your custom Python script into use for scheduled tasks; guidance for a 1-host, 1-worker environment:

Note! Do not change the "default" .py files which came with the environment installation.

Copy your custom-named Python script (in this example mypythonscript.py) to these folders on the HOST (remember to use the correct tenant_name in the path):

cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_scheduled

cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-worker-1/files/custom-provisioning-scripts

cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts

 

Change your custom Python script's file permissions to 755 in all of the above locations where you copied your script (in this example mypythonscript.py) (remember to use the correct tenant_name in the path):

chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_scheduled/mypythonscript.py

chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-worker-1/files/custom-provisioning-scripts/mypythonscript.py

chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/mypythonscript.py

 

If you have more hosts, repeat on every host.

If you have more workers, repeat for every worker (check that the worker folder name is correct).
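The copy and chmod steps above can be generated per host as a dry run, so the commands can be reviewed before executing them. A minimal shell sketch, not an official tool; the tenant name and single-worker layout follow the example above, so adjust them to your environment:

```shell
# Assumptions: one worker (epe-worker-1) and a placeholder tenant_name.
TENANT=tenant_name
SCRIPT=mypythonscript.py
BASE=/var/lib/efecteone/tenant_files/$TENANT
DIRS="$BASE/epe-master/files/custom-provisioning-scripts/only_scheduled
$BASE/epe-master/files/custom-provisioning-scripts
$BASE/epe-worker-1/files/custom-provisioning-scripts"

# Print one combined cp + chmod command per target folder; review the
# output, then pipe it to `sh` (or run the lines manually) to deploy.
for DIR in $DIRS; do
  echo "cp $SCRIPT $DIR/ && chmod 755 $DIR/$SCRIPT"
done
```

Adding further workers is then a matter of appending their `epe-worker-N` folders to the list, which keeps the per-host repetition in one place.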

 


 

Event-based scripts

How to take your custom Python script into use for event-based tasks; guidance for an environment with 1 host:

Note! Do not change the "default" .py files which came with the environment installation.

Copy your custom-named Python script (in this example mypythonscript.py) to these folders on the HOST (remember to use the correct tenant_name in the path):

cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_event

cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts

 

Change your custom Python script's file permissions to 755 in all of the above locations where you copied your script (in this example mypythonscript.py) (remember to use the correct tenant_name in the path):

chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_event/mypythonscript.py

chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/mypythonscript.py

 

If you have more hosts, repeat on every host.

 

Example/base for a custom event-based Python script

This script reads attribute values from the Connector and from the Orchestration node, and prints the values to logs. You can use it as a starting point for your own custom Python script for event-based tasks.

#!/usr/bin/python3

import json
import sys
import logging

def main():
    # Configure logging, change log file name
    logging.basicConfig(filename='python_event_task_abc.log', level=logging.INFO, format='%(asctime)s - %(message)s')
    
    # Read inputs from stdin
    lines = sys.stdin.read().splitlines()

    if len(lines) != 2:
        print("Error: Expected exactly two lines of input.")
        raise Exception("Expected exactly two lines of input.")

    # Parse the first line, this contains connector variables
    try:
        connector_parameters = json.loads(lines[0])
        host = connector_parameters.get('host', None)
        password = connector_parameters.get('password', None)
        logging.info(f"Connector parameters, Host={host}")
        # read custom connector parameters here
    except json.JSONDecodeError:
        raise Exception("Connector parameters is not a valid JSON string.") 

    # Parse the second line, this contains mappings data from template datacard
    try:
        task_mappings = json.loads(lines[1])
        logging.info(f"Task mappings JSON: {task_mappings}")
        #read task mappings data here

        # Implement your logic here

        
    except json.JSONDecodeError as e:
        raise Exception("Task mappings is not a valid JSON string.") from e
    except Exception as e:
        raise Exception("An unexpected error occurred") from e
    # Raised exceptions control the workflow orchestration node flow: on an
    # exception the node takes the Exception path, otherwise it takes the
    # Completed path. The raised exception can be seen in the Provisioning
    # exception attribute value.

if __name__ == "__main__":
    main()
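The example above can be exercised locally before deploying: the platform sends exactly two JSON lines on stdin, first the connector parameters and then the task mappings. A minimal sketch of that parsing contract, with made-up example values (the host, password, and mapping keys are illustrative only):

```python
import json

def parse_event_input(lines):
    """Mirror the event script's stdin contract: exactly two JSON lines."""
    if len(lines) != 2:
        raise ValueError("Expected exactly two lines of input.")
    connector_parameters = json.loads(lines[0])  # line 1: connector variables
    task_mappings = json.loads(lines[1])         # line 2: template data card mappings
    return connector_parameters, task_mappings

# Example values are assumptions, not real connector output.
params, mappings = parse_event_input([
    '{"host": "sftp.example.com", "password": "secret"}',
    '{"first_name": "Ada", "manager_id": "M-100"}',
])
print(params["host"])          # sftp.example.com
print(mappings["manager_id"])  # M-100
```

Feeding the same two-line payload to the deployed script (for example via `echo` and a pipe) is a quick way to confirm the parsing and logging work before wiring the script into a workflow.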

 
 

 


Take the custom-named script into use in the connector

Take your custom-named Python script into use by selecting it in the Connectors UI: select your script from the connector's Provisioning script dropdown.

 

 

Implementation and work estimations

Only Matrix42 has access to the host, which is needed to install these custom scripts.

These expansion possibilities always require a review by Matrix42 consultants before implementation and work estimates can be agreed.

 

 

 
 

 


Copyright 2026 – Matrix42 Professional.
