Generic Python Script Connector
The Generic Python Script connector (formerly known as Custom Script and Custom Backend) is part of the native connectors. It can be used to read data from any type of application and load it into the customer's Matrix42 Core, Pro and IGA solution. The functionality is implemented by a Python script, provided by the customer, that is executed on the customer's solution host machine. The script is responsible for reading information from the target application and must then generate a JSON file with the information it read.
Taking the custom script connector into use requires that
- a Python script has been created for the connector and stored on the customer's Efecte solution host machine
- the connector details are filled in and a scheduled task is configured for loading data into the customer's solution.
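Based on the example script later in this article, the generated file is a JSON array with one object per row, keyed by the attribute names defined in the task's attribute mappings. The sketch below illustrates the shape; the attribute names and values are placeholders, not fixed names required by the connector:

```python
import json

# Placeholder rows: in a real script these come from the target application,
# and the keys come from the task's attribute mappings.
rows = [
    {"objectGUID": "sample-id-1", "displayName": "Sample row 1"},
    {"objectGUID": "sample-id-2", "displayName": "Sample row 2"},
]

# The connector picks this file up and loads the rows into the solution.
with open("expected_output.json", "w") as output_file:
    output_file.write(json.dumps(rows, indent=4, sort_keys=True))
```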

General functionalities
Connectors - general functionalities
This article describes the general functionalities for managing native connectors in the solution. All Native Connectors are managed from the same connector management UI.
Note that there is a separate description for each connector, where connector-specific functionalities and configuration instructions are described in detail.
To access connector management, the user needs admin-level permissions to the customer's Platform configuration. When access is granted correctly, the connector tab is visible and the user can manage connectors.

Left menu
Connector management is divided into four tabs:

- Overview - for creating and updating Native Connectors. The Admin User can see their status, type and how many scheduled or event tasks are associated with them.
- Authentication - for creating and updating authentication tasks. A provisioning task for authentication is needed so that Secure Access can define which of the customer's end users can access the Matrix42 Core, Pro and IGA login page.
- Logs - for downloading Native Connector and Secure Access logs from UI.
- Settings - general settings for Native Connectors and Secure Access, including environment type for logging and monitoring.
Connectors overview tab
From the overview page, the user can quickly see the status of all connectors.

Top bar:
- Status for Native Connectors (EPE)
  - Green text indicates that Native Connectors is online. All the needed services are up and running.
  - Red text indicates that there is a problem with Native Connectors; not all the services are running.
- Status for Secure Access (ESA)
  - Green text indicates that Secure Access is online. All the needed services are up and running.
  - Red text indicates that there is a problem with Secure Access; not all the services are running.
- The Native Connectors version number is displayed in the top right corner.
Top bar for list view:

- New connector - opens new window for adding and configuring new connector
- Remove connector(s) - workflow references are calculated and pop-up window appears to confirm removal (notice that calculating references can take several seconds)
- Export - Admin can export one or more connectors (and tasks) from the environment. Usually used for exporting connectors (and tasks) from test to prod. Native Connectors secret information is password protected.
- Import - Admin can import one or more connectors (and tasks) to the environment. Usually used for importing connectors (and tasks) from test to prod.
  - Admin cannot import from an old EPE UI (older than 2024.1) to the new one. Source and target environments must have the same version.
  - Import will fail if the configuration (templates, attributes) is not the same, for example when some attribute is missing.
  - If you import something with the same connector details, it will be merged under the existing connector.
- Refresh - Admin can refresh the connectors view by clicking the button.
List view for overview

- Select connector(s) - Select one connector by clicking the check box in front of the connector row; clicking the check box in the header row selects all connectors.
- Id - Automatically generated unique ID of the connector. Cannot be edited or changed.
- Status - indicates the scheduled task status
  - Green check mark - Task was executed without any errors
  - Red cross - Task was executed, but there have been errors
  - Grey clock - Task has not been executed yet, waiting for scheduling
  - Orange - one of the tasks has a problem
  - No value - Scheduled task is missing
- Name - Connector name added in connector settings. Unique name of the connector holding the configuration for one data source.
- Type - indicates the target/source system
- Scheduled - shows how many scheduled tasks are configured
- Event - shows how many event tasks are configured
- Manage
  - Pen icon - opens connector settings (double-clicking the connector row also opens settings)
  - Paper icon - copies the connector
  - Stop icon - workflow references are calculated and a pop-up window appears to confirm removal
- Search - User can search connectors by entering the search term in the corresponding field. The Id, Status, Name, Type, Scheduled and Event fields can be searched.
Scheduled task information (click the arrow in front of the connector)
When clicking the arrow at the beginning of the connector row, all related scheduled and event tasks are shown.

Top bar for scheduled tasks
- New task - opens the configuration page for a new task
- Remove task(s) - removes the selected task(s) from the system; they cannot be recovered
- Refresh - refreshes the scheduled tasks view
- Search - user can search tasks by entering the search term in the corresponding field for Id, name, enabled, extract/load status
List view for scheduled tasks

- Select task(s) - Select the task(s) to be deleted from the list view by ticking the check box.
- Id - Unique ID of the task. Generated automatically and cannot be changed.
- Name - Task name added in task settings; unique name of the task.
- Enabled - Displays whether the task is scheduled or not
  - Green check mark - Task is scheduled
  - Red cross - Task is not scheduled
- Extract status - Displays the status of data extraction from the target directory/system
  - Green check mark - data was extracted successfully
  - Red cross - data was extracted with error(s) or the extraction failed
  - Clock - Task is waiting for execution
- Load status - Displays the status of the data load from the JSON file to the customer's solution
  - Green check mark - data was imported successfully to the customer's solution
  - Red cross - data was imported with error(s) or the import failed
- Manage
  - Pen icon - opens task settings in its own window (double-clicking the task row also opens task settings)
  - Paper icon - copies the task
  - Clock icon - opens the task history view
  - Stop icon - removes the task; a pop-up window opens to confirm the removal
Scheduled task history view
By clicking the clock icon in the scheduled task row, the scheduling history is shown.

Top bar for history view
- Refresh - refreshes the scheduled task status. This doesn't affect the task; it only updates the UI to show the latest information about the task run.
List view for scheduled task history
- Colour of the row indicates the status
  - Green - task executed successfully
  - Red - an error happened during execution
- Execution ID - unique ID for the scheduled task row
- Extract planned time - when the next extract from the directory/application is scheduled to happen
- Extract complete time - when the extract was completed
- Extract status - status of fetching data from the directory/application
- Load start time - when the next load to the customer's solution is scheduled to happen
- Load complete time - when the load was completed
- Load status - status of loading the information to the customer's solution
List view for scheduled task status
- Actual start time - timestamp for actual start
- Users file - JSON file containing user information read from the directory/application
- Group file - JSON file containing group information read from the directory/application
- Generic file - JSON file containing generic information read from the directory/application
- Extract info - detailed information about reading information from the directory/application
- Load info - detailed information about loading the information to the customer's Matrix42 Core, Pro and IGA solution
Edit window for scheduled task
Configuration for scheduled task can be opened by clicking pen icon or double-clicking the task row.
The left menu and attributes vary according to the selected options, so more detailed instructions for editing tasks can be found in the connector description. However, there are common functionalities for all scheduled tasks, which are described below.

Saving the task
If mandatory information is missing from the task, hovering the mouse over the save button shows which attributes are still empty.
Top bar for edit scheduled task
- Run task manually - admin can run the task manually outside of the defined scheduling
- Stop task - admin can stop a scheduled task which is currently running. The task is stopped and its status is changed to stopped. It waits in this state until the next scheduled time occurs.
- Clear data cache - The data cache for the next provisioning of Users and Groups is cleared. This means that the next run behaves like a first-time run.
  - By default, the cache is cleared every day at 00:00 UTC.
  - If you want to clear the cache at a different time, you have to specify a different value in the host file 'custom.properties'.
  - The EPE cache is also cleared when EPE is restarted, the whole environment is restarted, or EPE mappings have been changed.
Event task information
When clicking the arrow at the beginning of the connector row, all related scheduled and event tasks are shown.

Top bar for event tasks
- New task - opens configuration page for new event task
- Remove task(s) - removes selected task(s), pop-up window appears to confirm the removal
- Refresh - refreshes event tasks view
- Show workflow references - calculates task-related workflow relations and statuses. This is very useful if you don't know which workflows use the event-based tasks.
List view for event tasks
- Select task(s) - Select the task(s) to be deleted from the list view by ticking the check box.
- Id - Unique ID of the task. Generated automatically and cannot be changed.
- Name - Task name added in task settings; unique name of the task.
- Workflow relations
  - Question mark shows a pop-up window with detailed information about the reference
- Workflow status
  - Not used - No relations to a workflow
  - In use - Workflow(s) attached to the task; the task cannot be removed
- Manage
  - Pen icon - opens task settings in its own window
  - Paper icon - copies the task
  - Stop icon - removes the task; a pop-up window appears to confirm the removal
Edit window for event task
Configuration for event task can be opened by clicking pen icon or double-clicking the task row.
The left menu and attributes vary according to the selected options, so more detailed instructions for editing tasks can be found in the connector description. However, there are common functionalities for all event tasks, which are described below.

Edit event task window
- Task usage, editable? - appears when editing an existing task; changing the task usage type will break workflows
- Mappings type, editable? - appears when editing an existing task; changing the mappings type will break workflows
Saving the task
If mandatory information is missing from the task, hovering the mouse over the save button shows which attributes are still empty.
Authentication tab
Authentication for Matrix42 Core, Pro and IGA solutions is configured from the authentication tab. Note that only some of the connectors (directory connectors) support authentication, so it's not possible to create authentication tasks for all available connectors.

Top bar for authentication
- New connector - opens a new window for configuring a new connector (note that not all connectors support authentication)
- Remove connector(s) - removes the selected connector(s); a pop-up window appears to confirm the removal
- Export - user can export one or more tasks from the environment. Usually used for exporting tasks from test to prod. EPE connectors are password protected.
  - Note that the Realm for authentication tasks is not exported; you need to set it manually after importing.
- Import - user can import one or more tasks to the environment. Usually used for importing tasks from test to prod.
- Refresh - refreshes the authentication tasks view
List view for authentication overview
- Select connector(s) - Select one connector by clicking the check box in front of the connector row; clicking the check box in the header row selects all connectors.
- Id - Automatically generated unique ID of the connector. Cannot be edited or changed.
- Name - Connector name added in connector settings. Unique name of the connector holding the configuration for one data source.
- Type - indicates the target/source system
- Count - shows how many authentication tasks are configured
- Manage
  - Pen icon - opens authentication task settings in its own window
  - Paper icon - copies the task
  - Stop icon - removes the selected task
Authentication task information
When clicking the arrow at the beginning of the connector row, all related authentication tasks are shown.

Top bar for authentication overview
- Create new task - opens the configuration page for a new authentication task
- Remove task(s) - removes the selected task(s); a pop-up window appears to confirm the removal
- Refresh - refreshes the authentication tasks view
List view for authentication overview
- Select task(s) - Select the task(s) to be deleted from the list view by ticking the check box.
- Id - Unique ID of the task. Generated automatically and cannot be changed.
- Name - Task name added in task settings; unique name of the task.
- Manage
  - Pen icon - opens task settings in its own window (double-clicking the task row also opens the settings window)
  - Paper icon - copies the task
  - Stop icon - removes the task; a pop-up window appears to confirm the removal
Logs tab
The Logs tab is for downloading Native Connector and Secure Access logs from the UI for detailed troubleshooting.

- epe-master logs - contain warning, debug and error level messages about Native Connectors, and information about how long task actions have taken.
- epe-worker-ad logs - contain the extract data status of the Active Directory connector (what Native Connectors is loading into the customer's Matrix42 Core, Pro and IGA solution). If the selection is empty, the directory is not in use in this environment.
- epe-worker-azure logs - contain the extract data status of Entra ID (what Native Connectors is loading into the customer's Matrix42 Core, Pro and IGA solution). If the selection is empty, the directory is not in use in this environment.
- epe-worker-ldap logs - contain the extract data status of LDAP (what Native Connectors is loading into the customer's Matrix42 Core, Pro and IGA solution). If the selection is empty, the directory is not in use in this environment.
- epe-launcher logs - contain information about EPE launches.
- datapump-itsm logs - contain information about the data export to the customer's Matrix42 Core, Pro and IGA solution.
- esa logs - contain information about Secure Access authentication.
Settings tab
The Settings tab is used for monitoring environments with connectors.

- Environment type - a mandatory selection; the information is used, for example, for defining alert criticality.
  - Test - select this when your environment is used as a testing environment
  - Prod - select this when your environment is used as a production environment
  - Demo - select this when your environment is used as a demo or training environment
  - Dev - select this when your environment is used as a development environment
What do we monitor?
- Failures in scheduled provisioning (extracting data, exporting data to ESM, outdated certificates, incorrect passwords, incorrect search base/filter, incorrect mappings, etc.)
- Failures in event-based provisioning (failure to write to AD/Azure, etc.)
- Event-based provisioning - which connectors are used for writing data towards applications/directories.
- ESA: more than ten failed login attempts for one user in the past 3 days
Data migrations
Do not click "Migrate attribute mappings" or "Migrate workflows" unless requested to do so by Matrix42.
Create Python script
Here is an example Python script, but you can also create your own.
```python
#!/usr/bin/python3
import json
import os

import common_subroutines_efecte_library

# system_info_efecte_library.printOsName()
# system_info_efecte_library.printReleaseVersion()

arguments_dictionary = common_subroutines_efecte_library.initialize_script()

########################################
#
# put customer-specific code below this
#
########################################

data_extraction_attributes_json = arguments_dictionary['data_extraction_attributes']
script_current_working_directory = arguments_dictionary['script_current_working_directory']
error_filename_full_path = arguments_dictionary['error_filename']
expected_output_filename_full_path = arguments_dictionary['expected_output_filename']
heartbeat_filename_full_path = arguments_dictionary['heartbeat_filename']
log_filename_full_path = arguments_dictionary['log_filename']

total_attributes_to_extract = len(data_extraction_attributes_json)

common_subroutines_efecte_library.log_line(log_filename_full_path, 'script started processing')
os.chdir(script_current_working_directory)

rows = []  # renamed from "list" so the built-in name is not shadowed
dictionary_data_extraction_attributes = {}
for n in range(total_attributes_to_extract):
    attribute_name = common_subroutines_efecte_library.get_value_from_json(
        data_extraction_attributes_json[n], 'attributeName')
    multivalue = common_subroutines_efecte_library.get_value_from_json(
        data_extraction_attributes_json[n], 'multivalue')
    dictionary_data_extraction_attributes[attribute_name] = str(multivalue)

common_subroutines_efecte_library.log_line(
    log_filename_full_path, 'script will create json file with requested fields')

# Generate ten sample rows; replace this loop with the actual data extraction.
for n in range(1, 11):
    values_dictionary = {}
    for one_key in dictionary_data_extraction_attributes.keys():
        values_dictionary[one_key] = 'this is a sample value for row = ' + str(n) + ' , key = ' + one_key
    rows.append(values_dictionary)

# print(json.dumps(rows, indent=4, sort_keys=True))
with open(expected_output_filename_full_path, 'w') as output_file:
    output_file.write(json.dumps(rows, indent=4, sort_keys=True))

common_subroutines_efecte_library.log_line(log_filename_full_path, 'script completed successfully!')

# The heartbeat file must be updated at least once every 5 minutes.
with open(heartbeat_filename_full_path, 'w') as heartbeat_file:
    heartbeat_file.write('heartbeat updated: ' + common_subroutines_efecte_library.get_timestamp())
```
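The script above depends on `common_subroutines_efecte_library`, which is provided on the host machine. For developing a script outside the host, a minimal stand-in with the same call signatures can be useful. The stub below is an assumption inferred from how the example script calls the library; the real library's behavior is richer:

```python
import datetime
import os
import tempfile


def initialize_script():
    """Stand-in for the real initializer: returns the argument dictionary
    the example script expects. Paths point to a temporary directory."""
    workdir = tempfile.mkdtemp()
    return {
        'data_extraction_attributes': [
            {'attributeName': 'objectGUID', 'multivalue': False},
            {'attributeName': 'displayName', 'multivalue': False},
        ],
        'script_current_working_directory': workdir,
        'error_filename': os.path.join(workdir, 'error.txt'),
        'expected_output_filename': os.path.join(workdir, 'output.json'),
        'heartbeat_filename': os.path.join(workdir, 'heartbeat.txt'),
        'log_filename': os.path.join(workdir, 'script.log'),
    }


def get_value_from_json(json_object, key):
    # The real library may accept JSON strings; here a dict is assumed.
    return json_object[key]


def get_timestamp():
    return datetime.datetime.now(datetime.timezone.utc).isoformat()


def log_line(log_filename, message):
    # Append one timestamped line to the script log file.
    with open(log_filename, 'a') as log_file:
        log_file.write(get_timestamp() + ' ' + message + '\n')
```

Saving this as `common_subroutines_efecte_library.py` next to the example script lets it run end to end locally before uploading to the host.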
Configure Custom Backend connector
For accessing connector management, the user needs permissions to the Efecte Platform configuration.
1. Open the Efecte administration area (a gear symbol).
2. Open the connectors view.
3. Choose New connector.

4. Select the Data Source type to be Custom Backend.

5. Fill in the information:
- Fill in a unique connector name for the connector; note that the name cannot be changed afterwards.
- Select the [script_filename].py script in the provisioning script field.
- The parameters encryption password is needed for hiding/revealing parameters after the connector has been saved.
- The WebAPI user is needed when the connector writes data to the customer's Efecte solution.
- The WebAPI password is needed when the connector writes data to the customer's Efecte solution.

6. Save the connector with the save button. The connector is now configured and you can move forward to configuring the scheduled task.
Recommendations for FTP server folder structure
If the customer wants to use the Custom Backend connector to load many different types of data, it is recommended to have one Custom Backend connector per data type.
For example, if the customer wants to load Entitlements and Roles, they should have two connectors, such as "Entitlements connector" and "Roles connector".
When those two connectors' files are on the same FTP server, the folder structure could be, for example:
/sftp/iga/in/entitlements
/sftp/iga/in/entitlements/archive
/sftp/iga/in/roles
/sftp/iga/in/roles/archive
The data folder should contain only one file at a time.
With that folder structure, you can use these script parameters in your Entitlements connector:
"sftp_remote_path_data":"/sftp/iga/in/entitlements",
"sftp_remote_path_archive":"/sftp/iga/in/entitlements/archive",
"file_path":"./entitlements/*.csv"
And similarly, different script parameters for the Roles connector.
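Following the same pattern, the Roles connector parameters could look like the sketch below, built from the roles folders listed above (adjust the paths and file pattern to your environment):

```
"sftp_remote_path_data":"/sftp/iga/in/roles",
"sftp_remote_path_archive":"/sftp/iga/in/roles/archive",
"file_path":"./roles/*.csv"
```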
General guidance for scheduled tasks
How to Create New Scheduled Task to import data
For configuring a scheduled provisioning task, you will need access to the Administration / Connectors tab.
1. Open the Administration area (a cogwheel symbol).
2. Open the Connectors view.
3. Choose the connector for the scheduled task and select New Task.
Note! If the connector is not yet created, you have to choose New connector first, and after that New task.

4. Continue with connector specific instructions: Native Connectors
Should I use Incremental, Full or Both?
A scheduled task can be either Incremental or Full type.
Do not import permissions with an AD or LDAP incremental task
The incremental task has an issue with importing permissions. At the moment it is recommended not to import group memberships with an incremental scheduled task.
On Microsoft Active Directory and OpenLDAP connectors, remove this mapping on incremental task:

Setting on Scheduled tasks:

The Incremental type is supported only for the Microsoft Active Directory, LDAP and Microsoft Graph API (formerly known as Entra ID) connectors.
Incremental type means that Native Connectors (EPE) fetches data from the source system using changed-timestamp information, so it fetches only data which has changed or been added since the previous incremental task run.
When an Incremental type task runs for the very first time, it does a full fetch (and marks the current timestamp in the EPE database). Thereafter, the task uses that timestamp to ask the data source for data that has changed since that timestamp (and EPE then updates the timestamp in the EPE database for the next task run). Clearing the task cache doesn't affect this timestamp, so an Incremental task is always incremental after the first run.
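The incremental pattern described above can be sketched as follows. This is a simplified illustration of the logic, not EPE's actual code; `fetch_changed_since` stands in for the source-system query:

```python
import datetime


def run_incremental_task(fetch_changed_since, stored_timestamp=None):
    """One incremental run: a full fetch on the first run (no stored
    timestamp), a delta fetch afterwards. Returns the fetched data and
    the timestamp to persist for the next run."""
    run_started = datetime.datetime.now(datetime.timezone.utc)
    data = fetch_changed_since(stored_timestamp)  # None -> full fetch
    return data, run_started


# First run does a full fetch; the second run only asks for changes
# made since the timestamp saved by the first run.
source = lambda since: ["all records"] if since is None else ["changed records"]
data1, ts1 = run_incremental_task(source)
data2, ts2 = run_incremental_task(source, ts1)
```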
The Full type is supported for all connectors.
A Full type import always fetches all data (that it is configured to fetch) from the source system, on every run.
Both Full and Incremental type tasks also use the task cache in EPE, which makes certain imports faster and lighter for the M42 system.
By default, the task cache is cleared at midnight UTC. When the cache is cleared, the next import runs without using the cache to decide whether fetched data should be pushed to ESM; all fetched data is pushed to ESM. After that, subsequent task runs, until the next time the cache is cleared, use the EPE cache to determine whether fetched data needs to be pushed to ESM or not.
You can configure at what time of day the task cache is emptied by changing a global setting in the EPE datapump configuration:
/opt/epe/datapump-itsm/config/custom.properties
which is by default set to: clearCacheHours24HourFormat=0
You can also clear the cache several times a day, but this needs to be considered carefully, as it has an impact on overall performance: EPE will push changes to ESM that probably are already there. Example (do not add spaces to the attribute value): clearCacheHours24HourFormat=6,12
After changing this value, reboot the EPE datapump container to take the change into use.
Recommendations:
Always have a Full type scheduled task by default.
If you want to fetch changes to the data already fetched by the full task more frequently than you can run the full task, also add an incremental task. Usually an incremental task is not needed.
Recommended Scheduling Sequence
The recommended scheduling sequence depends on how much data is read from the customer's system/directory to the Matrix42 Core, Pro or IGA solution, and whether the import is Incremental or Full.
Examples for scheduling:
| Total amount of users | Total amount of groups | Full load sequence | Incremental load sequence |
| --- | --- | --- | --- |
| < 500 | < 1000 | Every 30 minutes if partial load is not used; four (4) times a day if partial load is used | Every 10 minutes |
| < 2000 | < 2000 | Every 60 minutes if partial load is not used; four (4) times a day if partial load is used | Every 15 minutes |
| < 5000 | < 3000 | Every four (4) hours if partial load is not used; twice a day if partial load is used | Every 15 minutes |
| < 10 000 | < 5000 | Maximum imports twice a day, whether or not partial load is used | Every 30 minutes |
| < 50 000 | < 7000 | Maximum import once a day, whether or not partial load is used | Every 60 minutes |
| Over 50 000 | Over 7000 | There might be a need for another EPE worker; please contact the Product Owner | Evaluated separately |
Please note that if several tasks run at the same time, you may need more EPE workers. The tasks should be scheduled at different times and spaced according to the table above. However, if more than 6 tasks run at the same time, the number of EPE workers should be increased. It's best practice not to schedule tasks to run at the same time, if possible.
Recommendations related to performance
If the amount of data to be imported is over 10 000, consider these things:
- Adjust the log level of ESM and DATAPUMP to ERROR level, to lower the amount of logging during a task run.
- Have as few as possible automations starting immediately for imported datacards (listeners, handlers, workflows), as those make ESM take longer to handle new datacards.
Set removed accounts' and entitlements' status to removed/disabled
With this functionality, you can set the account and entitlement status to e.g. Deleted or Disabled when the account or entitlement is removed from the source system. Starting from version 2025.3, you can also set the status on generic objects (not only on accounts/identities and entitlements/groups).
For version 2025.3 and newer
In version 2025.3, these settings were moved from the properties files to the task UI. You can now also set them for generic objects, which was not possible before this version.
There is a separate configuration for each scheduled task and for all mapping types. Here is an example of this configuration on a task:

For version 2025.2 and older
This functionality is available for "Full" type scheduled tasks.
The settings are in the datapump container's configuration file. To change these parameter values, you need to set them in the /opt/epe/datapump-itsm/config/custom.properties file.
Configuration
To enable the disabling functionality, the datapump config should have these parameters set to true:
disable.unknown.esm.users=true
disable.unknown.esm.groups=true
These two parameters are false by default in the 2024.2 and 2025.1 versions. In 2025.2 and newer versions they are true by default.
Next are these parameters:
personTemplateStatusCodeAttributeKey=accountStatus
personTemplateStatusAttributeDisabledValueKey=Deleted
groupTemplateStatusCodeAttributeKey=status
groupTemplateStatusAttributeDisabledValueKey=5 - Removed
The first two parameters should point to the DatacardHiddenState attribute in the User template and tell which value should be sent there when the user is deleted.
By default this is accountStatus with the value 5 - Removed on the IGA Account template.
All of these need to match the attribute configuration:

The same applies to the next two parameters, but for groups.
If you need to change these parameters in the properties file, make the changes in the datapump container in the file /opt/epe/datapump-itsm/config/custom.properties. Those changes will then survive a container reboot and are copied on reboot to /opt/epe/datapump-itsm/config/application.properties.
Description
Tasks save their task ID (shown as the Task Id mapping in the UI) to the datacards; it is then used to determine whether the datacard was added by this task, in case there are multiple tasks with different sets of users.
This field was previously used as the datasourceid, but since we moved to a model where a connector can have multiple tasks, the connector identifier cannot be used anymore. That is why the field was repurposed as the task ID instead.
Taking users as an example: when the task runs, ESM is asked for the list of users that have the task's ID in the Task Id mapping field and do not have the personTemplateStatusAttributeDisabledValueKey value in the personTemplateStatusCodeAttributeKey attribute.
This result is then compared to what the task fetched, and the datacards of users that were not fetched have their personTemplateStatus attribute set to the value specified in the config (5 - Removed by default).
The example log below shows the described process and informs that one user was removed.

The same applies to groups, but the groupTemplateStatus attributes are used instead.
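The comparison described above amounts to a set difference. A simplified illustration of the idea (not the actual EPE implementation):

```python
def find_removed(esm_ids, fetched_ids):
    """Identifiers known to ESM for this task ID (and not yet disabled)
    but missing from the latest full fetch: these datacards get the
    configured disabled status value."""
    return set(esm_ids) - set(fetched_ids)


# 'user-2' exists in ESM but was not in the latest fetch, so its status
# attribute would be set to the configured value, e.g. "5 - Removed".
removed = find_removed(["user-1", "user-2", "user-3"], ["user-1", "user-3"])
```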
Notes
- The feature works only with full fetch scheduled tasks.
- There is no support for generic templates yet, only identity and access.
- When migrating from previous versions where the datasourceid was still used, the task needs to run at least once to set its task IDs in the datacards first.
- EPE identifies disabled users or groups as ones that were removed from AD; at present we do not support statuses related to the entity being active or not.
- EPE does not re-enable users on its own.
- If more than one task fetches the same users or groups, the task ID in the datacard may be overwritten depending on which task ran last. It is recommended that multiple full type tasks do not fetch the same users or groups.
- Always make configuration file changes in custom.properties; do not change only application.properties, as those changes are lost on container reboot if you have not made the same changes in custom.properties.
Configure scheduled task for reading data
Note! If connector is not created, you have to create first “new connector” and after that you are able to create new tasks.
1. Open the Efecte administration area (a gear symbol).
2. Open connectors view.
3. Choose connector for which scheduled-based task is configured
4. Select new task under the correct connector

4. Define scheduling for the task (if and how scheduled task should be run periodically). Choose scheduling sequence, which depends how much data is read to customers Efecte solution.

5. Fill in Task Details
- Unique task name for the connector, notice that name cannot be changed afterwards.
- Task usage is set to scheduled
- Mappings type is set to generic (one template)

6. Fill in failure information
Optional settings for failure handling, if scheduled task fails it can create data card to Efecte ESM that displays the error. If failure settings are defined , the administrator does not need to manually check the status of scheduled tasks.
- Failure template - Select a Template of datacard which will be created in case of any errors during provisioning (connection to data sources,timeouts,etc.)
- Failure folder - Select folder where failure data card is stored.
- Failure attribute - Select an attribute where in the Failure Template should the error information be stored in.

- Attribute name for Objectguid. Wrote name of your input data column/attribute, which contains unique identifier for row. For example: objectGUID

7. Fill in mappings for generic template
Generic data are read to any template user want's and it is mandatory to set target folder, datasourceid and unique values which are used for identifying data between custom backend and Efecte solution.
-
Target template - Select a template to define attribute mappings
- Target folder - Select a folder from a list of folders. The list is narrowed down to match compatibility with selected Template.
-
Attribute mappings
- Efecte template attribute - to which attribute in Efecte directory attribute is mapped to.
- file attribute - which attribute from the file is mapped to Efecte

8. If you need to modify the actual script, it must be downloaded from the customer's host machine, modified, and uploaded again.
- If you don't have access to the host machine, please contact Efecte Service Desk for assistance.
9. Save your changes
10. Before running the scheduled task for the Custom Backend connector:
- Check that all configuration related to workflows and settings in data cards is in place.
- When running for the first time, make sure that provisioning towards directories & applications is disabled.
- If you are using the default IGAConnector.py and a CSV-type import file, the field separator used by the script is ; (semicolon).
11. Run the task manually or wait until the scheduled run starts.
12. Validate that the data has been read into the correct data cards and that the correct attributes are in place.
13. Troubleshooting
- If a failure template is used, check the corresponding data card.
- Check the scheduled task history from connector management.
- Check the Efecte Provisioning Engine logs.
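As a sketch of the semicolon-separated parsing mentioned in step 10, the snippet below reads CSV data with delimiter=';' and converts it to JSON. The column names are illustrative assumptions, and the exact output format expected by your own script may differ:

```python
import csv
import io
import json

# Sample semicolon-separated input, matching the default field
# separator of IGAConnector.py; column names are illustrative.
raw = "objectGUID;displayName;mail\na1b2c3d4;Jane Doe;jane@example.com\n"

# Parse with delimiter=';' and collect each row as a dict
rows = list(csv.DictReader(io.StringIO(raw), delimiter=";"))

# Serialize the rows as JSON for the connector
output = json.dumps(rows, indent=2)
print(output)
```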
Configure event task for writing data
1. Open the Efecte Platform configuration view (a gear symbol).
2. Open the connectors view.
3. Choose the connector that will use the event task and select "New task" under it.

4. Fill in Task Details
- Task name - Give a name for the task; it will be displayed in the connectors view.
- Task usage - Indicates whether the task is used for reading or writing data. It can be changed afterwards, but this is not recommended if the event task is in use, as it will break workflows.
- Mappings type - Select Generic.

5. Define Generic mappings
- Target template - Select a template to define attribute mappings.
- Target folder - Select a folder from a list of folders. The list is narrowed down to match compatibility with the selected template.
- Attribute mappings
  - Efecte template attribute - the attribute in Efecte to which the backend attribute is mapped.
  - Backend attribute - the attribute from the directory that is mapped to Efecte.
  - Add new attribute - Additional attributes read from the backend can be set by choosing the "New attribute" button.

6. Save the provisioning task with the "Save" button.
7. The next step is to configure a workflow to use this event-based task. From the workflow engine in Efecte Platform, it is possible to execute provisioning activities towards the backend, which means that any of the available orchestration node activities can be run at any point of the workflow.
Workflow references are shown in the connector overview page.

Workflow activities (orchestration nodes)
Run provisioning task
This activity is used when all information from the directory is needed back immediately.

In the illustration above, administrators can choose the correct Directory "Target". The Run provisioning task orchestration node reads attributes from the backend into Efecte.
If the attribute mappings need to be changed, those attributes must be defined in the provisioning task configuration view in order for them to be changeable in the orchestration node.
Provisioning exception is an optional property of this workflow node. Administrators can configure this property so that any exceptions raised during the provisioning actions are written to it.
Execute Custom Backend Task
This activity calls the Python script. At that point, the new information is only available in the external system, not yet in ESM.

In the illustration above, administrators can choose the correct Directory "Target". If the attribute mappings need to be changed, those attributes must be defined in the provisioning task configuration view in order for them to be changeable in the orchestration node.
Provisioning exception is an optional property of this workflow node. Administrators can configure this property so that any exceptions raised during the provisioning actions are written to it.
How to customize connector
How to create and modify custom Python scripts
General info
The Generic Python Script connector comes with a couple of out-of-the-box Python scripts. Do not modify these directly. Instead, if you need to make modifications, copy the script under a new name, make your changes to the newly named Python script file, and as a last step configure your connector to use that script.
Work order:
- Create your custom scheduled or event-based script
- Take it into use in the connector from the Connectors UI
Scheduled scripts
How to take your custom Python script into use for scheduled tasks; guidance for an environment with one host and one worker:
Note! Do not change "default" .py files which came with environment installation.
Copy your custom-named Python script (in this example mypythonscript.py) to these folders on the HOST (remember to use the correct tenant_name in the path):
cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_scheduled
cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-worker-1/files/custom-provisioning-scripts
cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts
Change your custom Python script's file permissions to 755 in all the locations above where you copied the script (in this example mypythonscript.py; remember to use the correct tenant_name in the path):
chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_scheduled/mypythonscript.py
chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-worker-1/files/custom-provisioning-scripts/mypythonscript.py
chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/mypythonscript.py
If you have more hosts, repeat on every host.
If you have more workers, repeat for every worker (check that the worker folder name is correct).
Event-based scripts
How to take your custom Python script into use for event-based tasks; guidance for a one-host environment:
Note! Do not change "default" .py files which came with environment installation.
Copy your custom-named Python script (in this example mypythonscript.py) to these folders on the HOST (remember to use the correct tenant_name in the path):
cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_event
cp mypythonscript.py /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts
Change your custom Python script's file permissions to 755 in all the locations above where you copied the script (in this example mypythonscript.py; remember to use the correct tenant_name in the path):
chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/only_event/mypythonscript.py
chmod 755 /var/lib/efecteone/tenant_files/{tenant_name}/epe-master/files/custom-provisioning-scripts/mypythonscript.py
If you have more hosts, repeat on every host.
Example/base for custom Event-based python script
This script reads attribute values from the connector and from the orchestration node, and prints the values to logs. You can use it as a starting point for your own custom Python script for event-based tasks.
#!/usr/bin/python3
import json
import sys
import logging

def main():
    # Configure logging; change the log file name
    logging.basicConfig(filename='python_event_task_abc.log', level=logging.INFO, format='%(asctime)s - %(message)s')
    # Read inputs from stdin
    lines = sys.stdin.read().splitlines()
    if len(lines) != 2:
        print("Error: Expected exactly two lines of input.")
        raise Exception("Expected exactly two lines of input.")
    # Parse the first line; this contains connector variables
    try:
        connector_parameters = json.loads(lines[0])
        host = connector_parameters.get('host', None)
        password = connector_parameters.get('password', None)
        logging.info(f"Connector parameters, Host={host}")
        # Read custom connector parameters here
    except json.JSONDecodeError:
        raise Exception("Connector parameters is not a valid JSON string.")
    # Parse the second line; this contains mappings data from the template data card
    try:
        task_mappings = json.loads(lines[1])
        logging.info(f"Task mappings JSON: {task_mappings}")
        # Read task mappings data here
        # Implement your logic here
    except json.JSONDecodeError as e:
        raise Exception("Task mappings is not a valid JSON string.") from e
    # Raised exceptions control the workflow orchestration node flow: on an
    # exception the node takes the Exception path, otherwise the Completed path.
    # The raised exception can be seen in the Provisioning exception attribute value.
    except Exception as e:
        raise Exception("An unexpected error occurred") from e

if __name__ == "__main__":
    main()
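The two-line stdin protocol used by the example script can be exercised locally without the connector. The sketch below simulates stdin with io.StringIO; the parameter names (host, password) and the mapping keys are illustrative assumptions:

```python
import io
import json

def read_task_input(stream):
    """Parse the two-line stdin protocol used by event-based scripts:
    line 1 holds the connector parameters, line 2 the task mappings."""
    lines = stream.read().splitlines()
    if len(lines) != 2:
        raise ValueError("Expected exactly two lines of input.")
    return json.loads(lines[0]), json.loads(lines[1])

# Simulated stdin; parameter and mapping names are illustrative
fake_stdin = io.StringIO(
    json.dumps({"host": "backend.example.com", "password": "secret"}) + "\n"
    + json.dumps({"username": "jdoe", "department": "IT"})
)
params, mappings = read_task_input(fake_stdin)
print(params["host"], mappings["username"])
```

When testing the real script, the same two JSON lines can simply be piped to it on the command line.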
Take a custom-named script into use in the connector
Take your custom-named Python script into use by selecting it from the Connectors UI. Select your script from the connector's Provisioning script dropdown:

Implementation and work estimations
Only Matrix42 has access to the host, which is needed for these custom scripts to be installed.
These expansion possibilities always require a review by Matrix42 consultants before implementation and work estimations can be agreed.