Setup
If the plan is switched to mass update (consult an administrator for this), thousands of entries can be synced fairly quickly.
This has the downside that workflows are NOT triggered when entities are created or updated in the DataEngine.
If there are workflows which need to run:
- in the DataEngine, create a new field for this module (e.g. `synced`)
- in the plan, on create/update set this field to a specific value (e.g. the name of the other system)
- set the needed workflow to run only for entities with this value, and add a "set field" as the last action which sets `synced` to `dataengine`
- create a scheduler which fetches these entities and calls a save
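The scheduler workaround above can be sketched as follows. This is a hypothetical illustration only: `StubClient`, the module name `Contact`, the field `synced`, and the marker values are assumptions, not the real DataEngine API.

```python
# Hypothetical sketch of the scheduler workaround described above.
# The client class, module, and field names are assumptions for
# illustration, not the real DataEngine API.

SYNC_MARKER = "evalanche"  # value the plan writes on create/update


class StubClient:
    """Minimal in-memory stand-in for a DataEngine client."""

    def __init__(self, entities):
        self.entities = entities
        self.saved = []

    def fetch(self, module, where):
        # Return entities matching all filter conditions
        return [e for e in self.entities
                if all(e.get(k) == v for k, v in where.items())]

    def save(self, entity):
        # A real save would trigger the create/update workflows that the
        # mass update skipped; the workflow's last action then sets the
        # field back to "dataengine" so the entity is not re-fetched.
        entity["synced"] = "dataengine"
        self.saved.append(entity)


def resync_job(client):
    # The scheduler fetches flagged entities and re-saves each one
    for entity in client.fetch("Contact", where={"synced": SYNC_MARKER}):
        client.save(entity)


client = StubClient([
    {"id": 1, "synced": "evalanche"},
    {"id": 2, "synced": "dataengine"},  # already processed
])
resync_job(client)
```

After the job runs, only the flagged entity has been re-saved, and its marker field has been reset so it is not picked up again.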
If an entity is deleted in the DataEngine, it is first marked as deleted and can be cleaned up later.
In the default case (option `Fetch deleted: false`) only entities which are not marked as deleted will be returned. If this option is enabled, the Adapter will return both normal and marked-as-deleted entities.
To mark an entity as deleted, set the field `deleted` to `1`.
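The soft-delete behaviour can be illustrated with a small sketch. Only the `deleted` = `1` convention and the option name come from the text; the filter function itself is an assumption about how the Adapter behaves, not its actual implementation.

```python
# Sketch of the "Fetch deleted" behaviour described above.
# Only the deleted = 1 convention is from the documentation; the
# function is an illustrative assumption, not Adapter internals.

def fetch_entities(entities, fetch_deleted=False):
    if fetch_deleted:
        # Option enabled: return normal AND marked-as-deleted entities
        return list(entities)
    # Default (Fetch deleted: false): skip entities marked as deleted
    return [e for e in entities if e.get("deleted") != 1]


records = [
    {"id": "a", "deleted": 0},
    {"id": "b", "deleted": 1},  # marked as deleted, cleaned up later
]
```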
When creating/updating a multiEnum field from a record, it is possible to omit the "implode" post-mapping action in the HubEngine Plan.
If done like this, the `^` markers are no longer necessary in the mapping.
An example mapping for Evalanche to DataEngine could look like this:
- Actions:
  - Single action
    - Type: explode
    - Delimiter: |
    - Event: pre-mapping
- Mappings:
  - From: 1234, To: newsletter, Type: auto
  - From: 1235, To: webinars, Type: auto
The old logic with an "implode" post-mapping action and `^` for each mapping item is still supported; old plans do not need to be updated.
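The example plan above, an explode pre-mapping followed by per-value mappings, can be sketched like this. The function and the mapping table layout are assumptions for illustration, not HubEngine internals.

```python
# Sketch of the explode pre-mapping plus value mapping from the
# Evalanche -> DataEngine example plan above. The function is an
# illustrative assumption, not HubEngine internals.

MAPPING = {
    "1234": "newsletter",
    "1235": "webinars",
}


def map_multi_enum(raw, delimiter="|"):
    # pre-mapping "explode": split the raw string into single values
    values = raw.split(delimiter)
    # map each value individually; with the explode action in place,
    # no ^ markers are needed in the mapping items
    return [MAPPING.get(v, v) for v in values]


map_multi_enum("1234|1235")  # → ["newsletter", "webinars"]
```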
Unmapped values
Unmapped values are currently sent as-is by the HubEngine.
Because of this, it is possible to leave the mapping empty when the same id as from the incoming system is used; a mapping such as 1234 -> 1234 can be omitted.
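This pass-through behaviour can be sketched in a few lines; the lookup function is an assumption, only the fall-through rule comes from the text.

```python
# Sketch of the current unmapped-value behaviour: values without a
# mapping entry are passed through unchanged. The function is an
# illustrative assumption, not HubEngine internals.

MAPPING = {"1235": "webinars"}  # 1234 intentionally left unmapped


def map_value(value):
    # Unmapped values are sent as-is, so when both systems use the
    # same id, an explicit 1234 -> 1234 mapping is unnecessary.
    return MAPPING.get(value, value)
```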
As of now, the date column used for filtering changed records is hardcoded to `date_modified`.
Setting a different update field has no effect on fetching changes.
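The change-detection filter can be sketched as follows. Only the hardcoded `date_modified` column is from the text; the function and data shape are assumptions for illustration.

```python
# Sketch of how changed records are selected. Only the hardcoded
# date_modified column is from the documentation; the rest is an
# illustrative assumption.
from datetime import datetime


def changed_since(entities, last_sync):
    # The filter always compares date_modified; configuring a different
    # update field on the entity has no effect on this query.
    return [e for e in entities if e["date_modified"] > last_sync]


records = [
    {"id": "x", "date_modified": datetime(2024, 1, 1)},
    {"id": "y", "date_modified": datetime(2024, 6, 1)},
]
```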