M42 CoreProIGA Connector
How to integrate Matrix42 Core, Pro or IGA solution to another instance of Matrix42 Core, Pro or IGA solution
M42 CoreProIGA Connector (previously Efecte Connector / Efecte ESM connector) is part of Connector Management Native Connectors and is used for reading/writing data cards to another Matrix42 Core, Pro or IGA solution. It can be used with all Matrix42 Core, Pro or IGA solutions that use the Provisioning Engine.
M42 CoreProIGA Connector requires configuration according to the customer's use case. In principle, the configuration has three (3) steps:
- Configure connector - enables the connector to establish a connection to the target Matrix42 CoreProIGA solution
- Configure scheduled task - used when data is read from the target Matrix42 CoreProIGA solution
- Configure event task - used when data is written to the target Matrix42 CoreProIGA solution
- Workflow orchestration nodes also need to be configured (create / update / delete data cards)
Important to know!
With the M42 CoreProIGA connector you need to pay extra attention that you don't end up creating an endless loop between two Matrix42 CoreProIGA solutions. This might happen when data is written to another Matrix42 CoreProIGA solution and the workflow orchestration node is not followed by one of the delaying activities:
- Timer
- Approval
- Wait condition
- Manual task (if the node is not marked instantly as done when rolled back)
2024.2 and newer version instructions
General functionalities
Connectors - general functionalities
This article describes the general functionalities for managing Native Connectors in the solution. All Native Connectors are managed from the same connector management UI.
Notice that there are separate descriptions for each connector, where connector-specific functionalities and configuration instructions are described in detail.
To be able to access connector management, the user needs admin-level permissions to the customer's Platform configuration. When access is granted correctly, the Connectors tab is visible and the user can manage connectors.

Left menu
Connector management is divided into four tabs:

- Overview - for creating and updating Native Connectors. The admin user can see their status, type and how many scheduled or event tasks are associated with them.
- Authentication - for creating and updating authentication tasks. A provisioning task for authentication is needed for Secure Access to be able to define which of the customer's end-users can access the Matrix42 Core, Pro and IGA login page.
- Logs - for downloading Native Connector and Secure Access logs from the UI.
- Settings - general settings for Native Connectors and Secure Access, including the environment type for logging and monitoring.
Connectors overview tab
From the Overview page, the user can quickly see the status of all connectors.

Top bar:
- Status for Native Connectors (EPE)
  - Green text indicates that Native Connectors is online. All the needed services are up and running.
  - Red text indicates that there is a problem with Native Connectors; not all of the services are running.
- Status for Secure Access (ESA)
  - Green text indicates that Secure Access is online. All the needed services are up and running.
  - Red text indicates that there is a problem with Secure Access; not all of the services are running.
- The Native Connectors version number is displayed in the top right corner
Top bar for list view:

- New connector - opens a new window for adding and configuring a new connector
- Remove connector(s) - workflow references are calculated and a pop-up window appears to confirm removal (notice that calculating references can take several seconds)
- Export - the admin can export one or more connectors (and tasks) from the environment. Usually used for exporting connectors (and tasks) from test to prod. Native Connectors secret information is password protected.
- Import - the admin can import one or more connectors (and tasks) to the environment. Usually used for importing connectors (and tasks) from test to prod.
  - The admin cannot import from the old EPE UI (older than 2024.1) to the new one. Source and target environments must have the same version.
  - Import will fail if the configuration (templates, attributes) is not the same - for example, when some attribute is missing
  - If you are importing something with the same connector details, it will be merged under the existing connector
- Refresh - the admin can refresh the connectors view by clicking the button.
List view for overview

- Select connector(s) - select one connector by clicking the check box in front of the connector row; clicking the check box in the header row selects all connectors
- Id - automatically generated unique ID of the connector. Cannot be edited or changed.
- Status - indicates scheduled task status
  - Green check mark - task executed without any errors
  - Red cross - task executed, but there have been errors
  - Grey clock - task not executed yet, waiting for scheduling
  - Orange - one of the tasks has a problem
  - No value - schedule-based task is missing
- Name - connector name added to connector settings. Unique name of the connector holding the configuration for one data source
- Type - indicates the target / source system
- Scheduled - informs how many scheduled tasks are configured
- Event - informs how many event tasks are configured
- Manage
  - Pen icon - opens connector settings (double-clicking the connector row also opens settings)
  - Paper icon - copies the connector
  - Stop icon - workflow references are calculated and a pop-up window appears to confirm removal
- Search - the user can search for a connector by entering the search term in the corresponding field. The Id, Status, Name, Type, Scheduled and Event fields can be searched.
Scheduled task information (click the arrow in front of the connector)
When clicking the arrow at the beginning of the connector row, all related scheduled and event tasks are shown

Top bar for scheduled tasks
- New task - opens the configuration page for a new task
- Remove task(s) - removes the selected task(s) from the system; they cannot be recovered
- Refresh - refreshes the schedule-based tasks view
- Search - the user can search for a task by entering the search term in the corresponding field for Id, name, enabled, or extract/load status.
List view for scheduled tasks

- Select task(s) - select the task to be deleted from the list view by ticking it.
- Id - unique ID of the task. Generated automatically and cannot be changed.
- Name - task name added to task settings; unique name of the task.
- Enabled - displays whether the task is scheduled or not
  - Green check mark - task is scheduled
  - Red cross - task is not scheduled
- Extract status - displays the status of data extraction from the target directory/system
  - Green check mark - data is extracted successfully
  - Red cross - data is extracted with error(s) or the extraction failed
  - Clock - task is waiting for execution
- Load status - displays the status of data export from the JSON file to the customer's solution
  - Green check mark - data is imported successfully to the customer's solution
  - Red cross - data is imported with error(s) or the import failed
- Manage
  - Pen icon - opens task settings in its own window (double-clicking the task row also opens task settings)
  - Paper icon - copies the task
  - Clock icon - opens the task history view
  - Stop icon - removes the task; a pop-up window opens to confirm the removal
Scheduled task history view
By clicking the clock icon in the scheduled task row, history for scheduling will be shown.

Top bar for view history
- Refresh - refreshes the scheduled task status. This doesn't affect the task; it only updates the UI to show the latest information about the task run.
List view for scheduled task history
- Color of the row indicates status
  - Green - task executed successfully
  - Red - an error happened during execution
- Execution ID - unique ID for the scheduled task row
- Extract planned time - when the next extract from the directory/application is scheduled to happen
- Extract complete time - when the extract was completed
- Extract status - status of fetching data from the directory/application
- Load start time - when the next load to the customer's solution is scheduled to happen
- Load complete time - when the load was completed
- Load status - status of loading the information to the customer's solution
List view for scheduled task status
- Actual start time - timestamp for the actual start
- Users file - JSON file containing user information read from the directory/application
- Group file - JSON file containing group information read from the directory/application
- Generic file - JSON file containing generic information read from the directory/application
- Extract info - detailed information about reading information from the directory/application
- Load info - detailed information about loading the information to the customer's Matrix42 Core, Pro and IGA solution
Edit window for scheduled task
The configuration for a scheduled task can be opened by clicking the pen icon or double-clicking the task row.
The left menu and attributes vary according to the selected options, so more detailed instructions for editing tasks can be found in the connector description. The common functionalities for all scheduled tasks are described below.

Saving the task
If mandatory information is missing from the task, hovering the mouse over the save button will show which attributes are still empty.
Top bar for edit scheduled task
- Run task manually - the admin can run the task manually outside of the defined scheduling
- Stop task - the admin can stop a schedule-based task that is currently running. The task will be stopped and its status changes to stopped. It waits in this state until the next scheduled time occurs.
- Clear data cache - the data cache for the next provisioning of Users and Groups will be cleared, meaning the next run behaves like a first-time run.
  - By default, the cache is cleared every day at 00:00 UTC
  - If you want to clear the cache at a different time, specify a different value in the host file 'custom.properties'.
- The EPE cache is also cleared when EPE is restarted, the whole environment is restarted, or the EPE mappings have been changed
Event task information
When clicking the arrow at the beginning of the connector row, all related scheduled and event tasks are shown

Top bar for event tasks
- New task - opens the configuration page for a new event task
- Remove task(s) - removes the selected task(s); a pop-up window appears to confirm the removal
- Refresh - refreshes the event tasks view
- Show workflow references - calculates task-related workflow relations and statuses. This is very useful if you don't know which workflows use the event-based tasks.
List view for event tasks
- Select task(s) - select the task to be deleted from the list view by ticking it.
- Id - unique ID of the task. Generated automatically and cannot be changed.
- Name - task name added to task settings; unique name of the task.
- Workflow relations
  - Question mark shows a pop-up window with detailed information about the reference
- Workflow status
  - Not used - no relations to workflows
  - In use - workflow(s) attached to the task; the task cannot be removed
- Manage
  - Pen icon - opens task settings in its own window
  - Paper icon - copies the task
  - Stop icon - removes the task; a pop-up window appears to confirm the removal
Edit window for event task
The configuration for an event task can be opened by clicking the pen icon or double-clicking the task row.
The left menu and attributes vary according to the selected options, so more detailed instructions for editing tasks can be found in the connector description. The common functionalities for all event tasks are described below.

Edit event task window
- Task usage, editable? - appears when editing an existing task; changing the task usage type will break workflows
- Mappings type, editable? - appears when editing an existing task; changing the mappings type will break workflows
Saving the task
If mandatory information is missing from the task, hovering the mouse over the save button will show which attributes are still empty.
Authentication tab
Authentication for Matrix42 Core, Pro and IGA solutions is configured from the Authentication tab. Notice that only some of the connectors (directory connectors) support authentication, so it's not possible to create authentication tasks for all available connectors.

Top bar for authentication
- New connector - opens a new window for configuring a new connector (notice that not all connectors support authentication)
- Remove connector(s) - removes the selected task(s); a pop-up window appears to confirm the removal
- Export - the user can export one or more tasks from the environment. Usually used for exporting tasks from test to prod. EPE connectors are password protected.
  - Note that the Realm for authentication tasks is not exported; you need to set it manually after importing
- Import - the user can import one or more tasks to the environment. Usually used for importing tasks from test to prod.
- Refresh - refreshes the authentication tasks view
List view for authentication overview
- Select connector(s) - select one connector by clicking the check box in front of the connector row; clicking the check box in the header row selects all connectors
- Id - automatically generated unique ID of the connector. Cannot be edited or changed.
- Name - connector name added to connector settings. Unique name of the connector holding the configuration for one data source
- Type - indicates the target / source system
- Count - informs how many authentication tasks are configured
- Manage
  - Pen icon - opens authentication task settings in its own window
  - Paper icon - copies the task
  - Stop icon - removes the selected task
Authentication task information
When clicking the arrow at the beginning of the connector row, all related authentication tasks are shown

Top bar for authentication overview
- Create new task - opens configuration page for new authentication task
- Remove task(s) - removes selected task(s), pop-up window appears to confirm the removal
- Refresh - refreshes authentication tasks view
List view for authentication overview
- Select task(s) - select the task to be deleted from the list view by ticking it.
- Id - unique ID of the task. Generated automatically and cannot be changed.
- Name - task name added to task settings; unique name of the task.
- Manage
  - Pen icon - opens task settings in its own window (double-clicking the task row also opens the settings window)
  - Paper icon - copies the task
  - Stop icon - removes the task; a pop-up window appears to confirm the removal
Logs tab
The Logs tab is for downloading Native Connector and Secure Access logs from the UI for detailed troubleshooting.

- epe-master logs - contain warning, debug and error level messages about Native Connectors and information about how long task actions have taken.
- epe-worker-ad logs - contain the extract data status of the Active Directory connector (what Native Connectors is loading to the customer's Matrix42 Core, Pro and IGA solution). If the selection is empty, the directory is not in use in this environment.
- epe-worker-azure logs - contain the extract data status of Entra ID (what Native Connectors is loading to the customer's Matrix42 Core, Pro and IGA solution). If the selection is empty, the directory is not in use in this environment.
- epe-worker-ldap logs - contain the extract data status of LDAP (what Native Connectors is loading to the customer's Matrix42 Core, Pro and IGA solution). If the selection is empty, the directory is not in use in this environment.
- epe-launcher logs - contain information about EPE launches
- datapump-itsm logs - contain information about data export to the customer's Matrix42 Core, Pro and IGA solution.
- esa logs - contain information about Secure Access authentication.
Settings tab
The Settings tab is used for monitoring environments with connectors.

- Environment type - a mandatory selection; the information is used, for example, for defining alert criticality.
  - Test - select this when your environment is used as a testing environment
  - Prod - select this when your environment is used as a production environment
  - Demo - select this when your environment is used as a demo or training environment
  - Dev - select this when your environment is used as a development environment
What do we monitor?
- Failures in schedule-based provisioning (extracting data, exporting data to ESM, outdated certificates, incorrect passwords, incorrect search base/filter, incorrect mappings, etc.)
- Failures in event-based provisioning (failure to write to AD/Azure, etc.)
- Event-based provisioning - which connectors are used for writing data towards applications/directories.
- ESA - more than ten failed login attempts for one user in the past 3 days
Data migrations
Do not click “Migrate attribute mappings” or “Migrate workflows” unless requested to do so by Matrix42.
Configure Efecte connector
This chapter describes the configuration instructions for the Efecte connector to be able to connect to another Efecte solution.
Check from the picture how the two Efecte solutions are named, so it is easier to follow the instructions.

For accessing connector management, the user needs permissions to the Efecte Platform configuration.
1. Open the Efecte administration area (a gear symbol).
2. Open connectors view.
3. Choose new connector

4. Select Data Source type to be Efecte

5. Fill in the information related to the Efecte 1 solution
- Connector name - give your connector a friendly name (the name can be changed afterwards)
- Host URL - the host address used to connect to the Efecte solution
- User name - the Efecte 1 solution WebAPI user name
- Password - the Efecte 1 solution WebAPI user's password
6. Fill in the information related to the Efecte 2 solution
- WebAPI user - the Efecte 2 solution WebAPI user name
  - List of all Efecte local users
- WebAPI password - the Efecte 2 solution's WebAPI user's password
7. Save the connector information.
8. After saving the connector details, you can press the Test connection button to test whether the connector can connect to the target Efecte solution.
9. If you edit existing connector details, remember to save your changes and test the connection after that.

General guidance for scheduled tasks
How to Create New Scheduled Task to import data
For configuring a schedule-based provisioning task, you will need access to the Administration / Connectors tab.
1. Open the Administration area (a cogwheel symbol).
2. Open Connectors view.
3. Choose the connector for the schedule-based task and select New Task
Note! If the connector is not yet created, you have to first choose New connector and after that New task.

4. Continue with connector specific instructions: Native Connectors
Should I use Incremental, Full or Both?
A scheduled task can be either Incremental or Full type.
Do not import permissions with an AD or LDAP incremental task
The incremental task has an issue with permissions importing. At the moment it is recommended not to import group memberships with an incremental scheduled task.
On the Microsoft Active Directory and OpenLDAP connectors, remove this mapping on the incremental task:

Setting on Scheduled tasks:

Incremental type is supported only for Microsoft Active Directory, LDAP and Microsoft Graph API (formerly known as Entra ID) Connectors.
Incremental type means that Native Connectors (EPE) fetches data from the source system using changed-timestamp information, so it fetches only data which has been changed or added after the previous incremental task run.
When an Incremental type task is run for the very first time, it does a full fetch (and marks the current timestamp in the EPE database). Thereafter, the task uses that timestamp to ask the data source for data that changed since that timestamp (and EPE then updates the timestamp in the EPE database for the next task run). Clearing the task cache doesn't affect this timestamp, so an Incremental task is always incremental after the first run.
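The incremental behavior described above can be sketched roughly as follows; the `fetch_all` / `fetch_changed_since` names and the timestamp store are illustrative, not the actual EPE internals:

```python
from datetime import datetime, timezone

def run_incremental_task(timestamps: dict, task_id: str, source) -> list:
    # First run: no stored timestamp -> full fetch.
    # Later runs: fetch only data changed since the stored timestamp.
    last_run = timestamps.get(task_id)
    now = datetime.now(timezone.utc)
    if last_run is None:
        data = source.fetch_all()                    # behaves like a Full run
    else:
        data = source.fetch_changed_since(last_run)  # incremental fetch
    timestamps[task_id] = now                        # persisted for the next run
    return data
```

Note how clearing the task cache would not touch the stored timestamps, which is why an incremental task stays incremental after its first run.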
Full type is supported for all Connectors.
Full type import always fetches all data (that it is configured to fetch) from the source system, on every run.
Both Full and Incremental type tasks also use the task cache in EPE, which makes certain imports faster and lighter for the M42 system.
By default, the task cache is cleared at midnight UTC. When the cache is cleared, the next import runs without using the cache to decide whether fetched data should be pushed to ESM: all fetched data is pushed to ESM. After that, subsequent task runs, until the next time the cache is cleared, use the EPE cache to determine whether fetched data needs to be pushed to ESM or not.
You can configure at what time of day the task cache is emptied by changing a global setting in the EPE datapump configuration:
/opt/epe/datapump-itsm/config/custom.properties
which is by default set to: clearCacheHours24HourFormat=0
You can also clear the cache several times a day, but that needs to be thought through carefully, as it has an impact on overall performance: EPE will push changes to ESM that probably are already there. Example (do not add spaces to the attribute value): clearCacheHours24HourFormat=6,12
After changing this value, reboot the EPE datapump container for the change to take effect.
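Since the value must be comma-separated 24-hour values without spaces, a small sketch like this (a hypothetical helper, not part of EPE) shows what a valid value looks like:

```python
def parse_clear_cache_hours(value: str) -> list[int]:
    # Accepts values such as "0" or "6,12"; rejects spaces and out-of-range
    # hours, matching the "do not add spaces to attribute value" rule above.
    if value != value.strip() or " " in value:
        raise ValueError("clearCacheHours24HourFormat must not contain spaces")
    hours = [int(part) for part in value.split(",")]
    for hour in hours:
        if not 0 <= hour <= 23:
            raise ValueError(f"hour out of range: {hour}")
    return hours
```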
Recommendations:
Always have a Full type scheduled task by default.
If you want to fetch changes to the data already fetched by the full task more frequently than you can run the full task, also add an incremental task. Usually an incremental task is not needed.
Recommended Scheduling Sequence
The recommended scheduling sequence depends on how much data is read from the customer's system/directory to the Matrix42 Core, Pro or IGA solution, and on whether the import is Incremental or Full.
Examples for scheduling:
| Total amount of users | Total amount of groups | Full load sequence | Incremental load sequence |
| --- | --- | --- | --- |
| < 500 | < 1000 | Every 30 minutes if partial load is not used; four (4) times a day if partial load is used | Every 10 minutes |
| < 2000 | < 2000 | Every 60 minutes if partial load is not used; four (4) times a day if partial load is used | Every 15 minutes |
| < 5000 | < 3000 | Every four (4) hours if partial load is not used; twice a day if partial load is used | Every 15 minutes |
| < 10 000 | < 5000 | Maximum imports twice a day, whether or not partial load is used | Every 30 minutes |
| < 50 000 | < 7000 | Maximum import once a day, whether or not partial load is used | Every 60 minutes |
| Over 50 000 | Over 7000 | There might be a need for another EPE worker; please contact the Product Owner | Separately evaluated |
Please note that if several tasks run at the same time, you may need more EPE workers. The tasks should be scheduled at different times and can be set according to the table above. However, if more than 6 tasks run at the same time, the number of EPE workers should be increased. It's best practice not to schedule tasks to run at the same time, if possible.
Recommendations related to performance
If the amount of data to be imported is over 10 000, consider these things:
- Adjust the log level of ESM and DATAPUMP to ERROR level, to lower the amount of logging during the task run
- Have as few automations as possible starting immediately for imported datacards (listeners, handlers, workflows), as those make ESM take longer to handle new datacards.
Set removed accounts' and entitlements' status to removed/disabled
With this functionality, you can set the account and entitlement status to e.g. Deleted or Disabled when the account or entitlement is removed from the source system. Starting from version 2025.3, you can also set the status of generic objects (not only accounts/identities and entitlements/groups).
For version 2025.3 and newer
In version 2025.3 these settings were moved from the properties files to the task UI. You can now also set these settings for generic objects, which was not possible before this version.
There is a separate configuration for each scheduled task and for all mapping types. Here is an example of this configuration on a task:

For version 2025.2 and older
This functionality is available for “full” type scheduled tasks.
The settings are in the datapump Docker configuration file. To change the parameter values, set them in the /opt/epe/datapump-itsm/config/custom.properties file.
Configuration
To enable the disabling functionality, the datapump config should have these parameters set to true:
disable.unknown.esm.users=true
disable.unknown.esm.groups=true
These two parameters are false by default in versions 2024.2 and 2025.1. In version 2025.2 and newer they are true by default.
Next are these parameters:
personTemplateStatusCodeAttributeKey=accountStatus
personTemplateStatusAttributeDisabledValueKey=Deleted
groupTemplateStatusCodeAttributeKey=status
groupTemplateStatusAttributeDisabledValueKey=5 - Removed
The first two parameters should point to the DatacardHiddenState attribute in the User template and tell which value should be sent there when the user is deleted.
By default it's accountStatus with the value 5 - Removed on the IGA Account template.
All of these need to match the attribute configuration:

The same applies to the next two parameters, but for groups.
If you need to change these parameters in the properties file, make the changes in the Datapump container in the file /opt/epe/datapump-itsm/config/custom.properties; the changes will then survive a container reboot and will be copied on reboot to /opt/epe/datapump-itsm/config/application.properties.
Description
Tasks save their __taskid__ (shown as the Task Id mapping in the UI) to the datacards; it is then used to determine whether a datacard was added by this task, in case there are multiple tasks with different sets of users.
This field was previously used as the datasourceid, but since we moved to a model where a connector can have multiple tasks, the connector identifier can no longer be used; that is why the field was repurposed as the taskid instead.
Taking users as an example: when the task runs, ESM is asked for the list of users that have the task's taskid in the Task Id mapping field and do not have the personTemplateStatusAttributeDisabledValueKey value in the personTemplateStatusCodeAttributeKey attribute.
This result is then compared to what the task fetched, and the datacards of users that were not fetched have their personTemplateStatus attribute set to the value specified in the config (5 - Removed by default).
Example log below shows described process and informs that one user was removed.

The same applies to groups, but the groupTemplateStatus attributes are used instead.
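The comparison described above can be sketched as follows; datacards are represented as plain dicts and the field names are illustrative, mirroring the parameters described earlier:

```python
def find_removed(fetched_ids: set, esm_datacards: list,
                 status_key: str, disabled_value: str, task_id: str) -> list:
    # Return status updates for datacards that carry this task's id and are
    # not already disabled, but were missing from the latest fetch.
    updates = []
    for card in esm_datacards:
        if card.get("taskId") != task_id:
            continue  # datacard was added by a different task
        if card.get(status_key) == disabled_value:
            continue  # already marked as removed/disabled
        if card["id"] not in fetched_ids:
            updates.append({"id": card["id"], status_key: disabled_value})
    return updates
```

With the defaults above, `status_key` would be accountStatus and `disabled_value` would be 5 - Removed.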
Notes
- The feature works only with full fetch scheduled tasks.
- There is no support for generic templates yet, only identity and access.
- When migrating from previous versions where the datasourceid was still used, the task needs to run at least once to set its taskid in the datacards first.
- EPE identifies disabled users or groups as the ones that were removed from the AD; at present we do not support statuses related to the entity being active or not.
- EPE does not enable users back on its own.
- If more than one task fetches the same users or groups, the taskid in the datacard may be overwritten depending on which task ran last. It is suggested that multiple Full type tasks do not fetch the same user or group.
- Always make configuration file changes in custom.properties; do not change only application.properties, as those changes are lost on container reboot if you have not made the same changes in custom.properties.
Configure scheduled task for reading data
This chapter gives configuration instructions for when data cards are only read from the source Efecte solution.
Notice from the picture how the two Efecte solutions are named, so it is easier to follow the instructions.

For configuring a schedule-based task, you will need access to the Efecte Platform configuration console.
Note! If the connector is not created, you have to first create a new connector; after that you are able to create new tasks.
1. Open the Efecte administration area (a gear symbol).
2. Open connectors view.
3. Choose the connector for which the schedule-based task is configured
4. Select New task under the correct connector

5. Define scheduling for the task (whether and how the scheduled task should run periodically). Choose the scheduling sequence, which depends on how much data is read to the Efecte solution.

6. Fill in Task Details
- Fill in a unique task name for the schedule-based task; notice that the name cannot be changed afterwards.
- Task usage indicates whether the task is used for reading data or writing data.
  - Note that if the event type is changed afterwards, it can break the workflows.
- Mappings type is always Generic
- Filtering query - this is optional; you can use filters to read only specific data cards from the source Efecte solution. If the value is left empty, all datacards related to the template are read from the source Efecte solution. It uses EQL syntax, for example:
  - SELECT entity FROM entity WHERE entity.template.code = 'system' AND entity.deleted = 0 AND $access_rights_group$ is not null
  - EQL documentation: https://docs.efecte.com/webapi-esm/efecte-query-language-description-esm
- Template code - the Efecte solution template code (which data cards are read from the source Efecte solution)
- Folder code - the Efecte solution folder code (where data cards are stored)
  - Tip! You can use listeners to move data cards to a different folder

7. Fill in failure information
Optional settings for failure handling: if the scheduled task fails, it can create a data card in the Efecte solution (not in the source Efecte solution) that displays the error. If failure settings are defined, the administrator does not need to manually check the status of scheduled tasks.
- Failure template - select the template of the datacard which will be created in case of any errors during provisioning (connection to data sources, timeouts etc.)
- Failure folder - select the folder where the failure data card is stored.
- Failure attribute - select the attribute in the failure template where the error information should be stored.

8. Mappings for generic template
Generic data is read to any template the user wants, and it is mandatory to set the target folder, datasourceid and unique values which are used for identifying data between the two Efecte solutions.
- Target template - select the Efecte solution template into which data is read from the source Efecte solution.
- Target folder - select the Efecte solution folder (the list is narrowed down to match compatibility with the selected template) into which data is read from the source Efecte solution.
- Attribute mappings
  - On the left side: source Efecte solution attribute
    - Data source ID (Datasourceid) is the connector task name, which is useful in some cases to be shown in data cards
    - Efecte ID is a unique attribute in data cards, usually generated using the IDgenerator handler
  - On the right side: Efecte solution attribute
  - It is possible to set additional attributes, which are read from the source Efecte solution, by choosing the New attribute button.

9. Save the provisioning task with the Save button. If some required attributes are missing, the save button is displayed as grey and it will show what is missing from the settings.
10. You have now configured a schedule-based connector task for the Efecte connector to read data cards from the source Efecte solution.
- You can now wait until the task is started based on its scheduling, or
- Run the task manually - by clicking the button, the task is scheduled to start immediately. This is usually for test runs, or if you don't want to change the schedule settings but want to run the task now.
Configure event task for writing data
All configuration related to the Efecte Provisioning Engine and event-based provisioning tasks is done in the Efecte Service Management platform.
Notice from the picture how the two Efecte solutions are named, so it is easier to follow the instructions.

1. Open the Efecte Platform configuration view (a gear symbol).
2. Open the connectors view
3. Choose the connector which will use the event task
4. Select “new task” under the correct connector

5. Fill in Task Details
- Task name - Give a name for the task; it will be displayed in the connectors view.
- It is good practice to name the task in a way that describes its purpose.
- Task usage indicates whether the task is used for reading or writing data. It can be changed afterwards, but this is not recommended while the event task is in use, as it will break workflows.
- Mappings type is always generic template.

6. Add target Efecte solution template details
- No need to fill in the Filter query; it is only used for the scheduled tasks.
- Template code - target Efecte solution template code where data is created or deleted
- Folder code - target Efecte solution folder code where data is created or deleted
- Tip! You can use listeners to move data cards to different folders

8. Define generic mappings
- Target template - Select the template in the source Efecte solution where information is read from the target Efecte solution.
- Target folder - Select the folder in the source Efecte solution (the list is narrowed down to match compatibility with the selected template) where data is read from.
- Attribute mappings
- On the left side: target Efecte solution attribute
- Data source ID (Datasourceid) is the connector task name, which is useful in some cases to be shown in data cards
- Efecte ID is the unique attribute in the data card, usually using the IDgenerator handler
- On the right side: source Efecte solution attribute
- It is possible to set additional attributes, which are read from the target Efecte solution, by choosing the new attribute button.

9. Save the provisioning task with the “save” button. If any mandatory information is missing, you cannot save the task and the save button shows what is missing.
10. The next step is to configure the workflow to use this event-based task. From the workflow engine in the Efecte Platform, it is possible to execute provisioning activities towards another Efecte solution. This means that any of the available orchestration node activities can be run at any point of the workflow.
Workflow references are shown on the connector overview page.

Configure bi-directional connector
In cases where a bi-directional connector is required, you need to pay extra attention not to create a loop between the two Efecte solutions.
A bi-directional connector requires that both scheduled and event-based tasks are configured in both Efecte solutions.
Notice from the picture how the two Efecte solutions are named, which makes the instructions easier to follow.

There are several different ways a bi-directional connector can be configured; here we list common scenarios and examples of how the configuration should be made.
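To make the endless-loop risk concrete, the guard you build with workflow conditions and delaying activities (Timer, Approval, Wait condition, Manual task) boils down to logic like the sketch below. The task name and field names are hypothetical; in the real product the check is expressed as workflow conditions, not code:

```python
# Minimal sketch of a loop guard for a bi-directional setup.
# The task name and the "datasourceid" field are illustrative only.
# The idea: a card that was just imported by this very connector task
# should not immediately trigger a write back to the other solution,
# otherwise the two solutions ping-pong updates forever.
CONNECTOR_TASK = "Efecte2-to-Efecte1"  # hypothetical task name

def should_write_back(data_card: dict) -> bool:
    """Skip the write-back when the update originated from the connector."""
    return data_card.get("datasourceid") != CONNECTOR_TASK
```

This is why the orchestration node should sit behind a delaying activity: the delay gives a human or a condition the chance to stop the echo before it is sent.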
Scenario 1
Efecte 1 creates a ticket in Efecte 2 based on ticket type, and Efecte 2 sends the ticket back based on status.
Configuration in Efecte 1.
- Create an event-based task as described in the instructions above for event-based tasks
- Configure your workflow as described in the instructions below for workflow activities
- Create a condition in the Efecte 1 workflow that checks whether the ticket should be sent to Efecte 2
For example, here is a workflow in Efecte 1 that sends a ticket to Efecte 2 when the ticket type is incident

- Create a schedule-based task as described in the instructions above for schedule-based tasks
- Add a filter to the task that checks the status of the ticket in Efecte 2. In this case the ticket must be solved in Efecte 2 before it is read back to Efecte 1.
Configuration in Efecte 2.
- Create a WebAPI user for the Efecte 1 Efecte connector and send the credentials to the Efecte 1 admin
- The WebAPI user must have permissions to read Incident data cards with status Resolved
- Provide the attribute codes of the incident that are read from Efecte 2 to Efecte 1
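The Scenario 1 read-back filter reduces to a status check. A minimal sketch, assuming a hypothetical `status` attribute and the example value `Resolved`:

```python
# Sketch of the Scenario 1 read-back rule: Efecte 1 reads a ticket
# back from Efecte 2 only once it has been resolved there.
# The attribute name and status value are examples; use the status
# codes of your own template.
def ready_for_read_back(ticket: dict) -> bool:
    return ticket.get("status") == "Resolved"
```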
Scenario 2
Efecte 1 creates a ticket in Efecte 2 and updates the existing ticket in Efecte 2 with changed information (the description field has been updated).
Configuration in Efecte 1.
- Create an event-based task as described in the instructions above for event-based tasks
- Configure your workflow as described in the instructions below for workflow activities
- There must be conditions in the workflow for when a ticket is created from Efecte 1 to Efecte 2
- There must be a provisioning node with the Create data card action. For example
Configuration in Efecte 2.
- Create a WebAPI user for the Efecte 1 Efecte connector and send the credentials to the Efecte 1 admin
- The WebAPI user must have permissions to read Incident data cards
- Provide the attribute codes of the incident that are read from Efecte 1 to Efecte 2
- Create a schedule-based task as described in the instructions above for schedule-based tasks
- Add a filter to the task that checks the status of the ticket in Efecte 1. In this case the ticket must not be solved in Efecte 1 before it is read to Efecte 2.
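Scenario 2's update rule can be sketched as a change check on the description field: only push an update when something actually changed. The field name is illustrative; in the connector this is expressed through workflow conditions:

```python
# Sketch of the Scenario 2 update rule: send an update to the other
# solution only when the description field has actually changed.
# The "description" field name is a hypothetical example.
def needs_update(previous: dict, current: dict) -> bool:
    return previous.get("description") != current.get("description")
```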
Scenario 3.
Efecte 1 reads all application data cards from Efecte 2; the data is changed in Efecte 1 and updated back to Efecte 2.
Configuration in Efecte 1.
- Create a schedule-based task as described in the instructions above for schedule-based tasks. For example
- Create an event-based task as described in the instructions above for event-based tasks
- Configure your workflow as described in the instructions below for workflow activities
Configuration in Efecte 2.
- Create a WebAPI user for the Efecte 1 Efecte connector and send the credentials to the Efecte 1 admin
- The WebAPI user must have permissions to read Application data cards
- Provide the attribute codes of the application template that are read from Efecte 2 to Efecte 1
Scenario 4.
Efecte 1 creates a manual access right request as a ticket in Efecte 2; based on a status change it is sent back to Efecte 1.
Configuration in Efecte 1.
- Create an event-based task as described in the instructions above for event-based tasks
- Configure your workflow as described in the instructions below for workflow activities
- Create a condition in the Efecte 1 workflow that checks whether the ticket should be sent to Efecte 2
For example: a workflow in Efecte 1 sends an access right request to Efecte 2 as a ticket when the provisioning type is manual.
Configuration in Efecte 2.
- Create a WebAPI user for the Efecte 1 Efecte connector and send the credentials to the Efecte 1 admin
- The WebAPI user must have permissions to edit the status attribute in access right request data cards
- Provide the status attribute code of the access right request template and its values.
- Create an event-based task as described in the instructions above for event-based tasks
- Configure your workflow as described in the instructions below for workflow activities
- Add conditions to the workflow for when the Efecte connector should send the status change to Efecte 1, for example when the status is 07 - Resolved
2024.1 and older version instructions
Configure Scheduled-based Task
For configuring a schedule-based provisioning task, you need access to the Efecte Platform configuration console.
1. Open the Efecte Administration area (a gear symbol).
2. Open IGA view
3. Choose Add a new task for Scheduled-based Provisioning

4. Choose Efecte ESM from the Add a new task list
5. Fill in unique name for the provisioning task
6. Choose WebAPI user (List of all Efecte local users) and type in password
7. Select a Failure Template for the data card which will be created in case of any errors during provisioning (connection to data sources, timeouts, etc.)

8. Choose the scheduling sequence, which depends on how much data is read into the customer's Efecte solution

9. Fill in the Properties section, where the connection information is defined

10. Fill in Efecte ESM Template Mapping section
- Remote ESM template code name - Template code of the remote ESM
- Remote ESM folder code name - Folder code of the remote ESM
- In the mappings, the left side is for the remote attributes and the right side for the local ESM instance
- datasourceid and efecte_id are required mappings
- datasourceid should be a string attribute. The connector's name is displayed in the target data card's DatasourceID when importing data to Efecte.
- efecte_id must be a unique attribute of the template, usually using the IDgenerator handler

14. Save the provisioning task from the top bar
15. You have now configured a schedule-based provisioning task and you can
- Test connection
- Run task manually
16. If the task is executed manually (Run task manually) or it runs according to its schedule, the task status can be reviewed under the Extract / Load Status tab.
Configure Event-based Task
All configuration related to the Efecte Provisioning Engine and the event-based provisioning task is done in the Efecte Service Management platform.
1. Open the Efecte Administration area (a gear symbol).
2. Open IGA view
3. Choose Add a new task for Event-based Provisioning

4. Choose Efecte ESM from the Add a new task list
5. Fill in unique name for the provisioning task

6. Fill in the Properties section, where the connection information is defined. Note that Filter Query does not need any values; it is only used for the scheduled tasks.

7. Fill in Efecte ESM Template Mapping section
- Remote ESM template code name - Template code of the remote ESM
- Remote ESM folder code name - Folder code of the remote ESM
- In the mappings, the left side is for the remote attributes and the right side for the local ESM instance
- datasourceid and efecte_id are required mappings
- datasourceid should be a string attribute. The connector's name is displayed in the target data card's DatasourceID when importing data to Efecte.
- efecte_id must be a unique attribute of the template, usually using the IDgenerator handler

8. Save the provisioning task from the top bar
9. You have now configured an event-based provisioning task and you can Test connection
10. The next step is to configure the workflow to use this event-based task. From the workflow engine in the Efecte Service Management platform, it is possible to execute provisioning activities towards Efecte ESM. This means that any of the available activities can be run at any point of the workflow.
Workflow activities (orchestration nodes)
Create Data card

In the illustration above, the Efecte ESM Template Mappings are populated from the provisioning tasks. Admins choose the correct Efecte ESM “Target” and can view the Identity Mappings configured for the selected task. In this orchestration view admins are not allowed to change any mappings; they are presented only as a visual aid. If any changes to the mappings are needed, they must be made in the Provisioning task configuration view.
"Target” displays all connector tasks that are configured for Efecte ESM and task mappings refer to the same template where the workflow itself is located.
Provisioning exception is an optional property on this workflow node. Admins can configure this property to define where exceptions are written if any occur during the provisioning actions.
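Conceptually, the Provisioning exception property behaves like the error capture sketched below: instead of the workflow failing silently, the error text is stored on the data card. The attribute and function names are hypothetical, not connector API:

```python
# Sketch of the "Provisioning exception" idea: if the provisioning
# action fails, the error text is written to a configured attribute
# of the data card so admins can inspect it later.
# All names here are illustrative examples.
def run_with_exception_capture(action, data_card: dict,
                               exception_attr: str = "provisioning_exception"):
    try:
        action(data_card)
    except Exception as exc:
        # store the error text on the card instead of losing it
        data_card[exception_attr] = str(exc)
    return data_card
```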
Delete data card

Admins choose the correct Efecte ESM “Target”. In this orchestration view admins are not allowed to change any mappings or conditions. If any changes are needed, they must be made in the Provisioning task configuration view.
"Target” displays all connector tasks that are configured for Efecte ESM and task mappings refer to the same template where the workflow itself is located.
Provisioning exception is an optional property on this workflow node. Admins can configure this property to define where exceptions are written if any occur during the provisioning actions.
Troubleshooting
This chapter describes the troubleshooting options:
- If a failure template is in use, check the corresponding failure data card
- Check the scheduled task history from connector management
- Check the connector logs
Restrictions
These restrictions apply when the source and target systems have the same attribute type. Some of these might work if you change the attribute type in the other system to String.
Event-based tasks restrictions
- Emails are not supported
- Attachments, meaning external references (FileUpload handler), are not supported
- Date attributes are not supported
- DateTime attributes are not supported
- Worklog handler is not supported
Scheduled tasks restrictions
- Date type attributes (you need to map those to a string type attribute)
- DateTime type attributes (you need to map those to a string type attribute)
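Because Date and DateTime attributes must be mapped to String attributes, the value has to be serialized to text before mapping. A minimal sketch; the format shown is an example choice, not a connector requirement:

```python
from datetime import datetime

# Sketch: Date/DateTime values are not supported directly by the
# connector tasks, so convert them to strings before mapping them to
# a String attribute. The format used here is an example choice.
def datetime_to_string(value: datetime) -> str:
    return value.strftime("%Y-%m-%d %H:%M:%S")
```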