This tutorial explains how to use Event Logs in Pigello to backfill event-driven integrations or to sync schedule-based integrations efficiently. By querying the `common.eventlog` entity, you can find out which entities were created, updated, or deleted in a given time window, and then fetch only that subset of data.
Integrations often need to stay in sync with Pigello in one of two ways:
- Event-driven: You react when something happens (e.g. webhooks). If you missed events or need to catch up after downtime, you need a way to backfill—to discover what changed in a period and process it.
- Schedule-based: You run a job on a schedule (e.g. every hour) and sync changes since the last run. Instead of re-reading all data every time, you want to only process entities that actually changed in that window.
In both cases, you need a list of the creations, updates, and deletions that happened for a certain type of entity in a given time range. That is what the Event Log provides.
The `common.eventlog` entity records that something happened to an object in Pigello. Each log entry includes:
- `object_id`: The UUID of the entity that was affected (e.g. a verification, tenant, or invoice).
- `content_type`: The type of that entity (e.g. `accounting.verification`, `accounts.tenant`).
- `event_identifier`: What kind of event occurred (e.g. created, updated, deleted; often expressed as identifiers like `*.instancecreated`, `*.instanceupdated`, `*.instancedeleted`).
- `created_at`: When the event was logged.
- `event_context`: Additional context, e.g. used to store the `pk` of a deleted instance.
- `triggered_by_tenant`: Relation to the tenant who triggered the event, if a tenant triggered it.
- `triggered_by_sub_tenant`: Relation to the sub-tenant who triggered the event, if a sub-tenant triggered it.
- `triggered_by_organization_user`: Relation to the organization user who triggered the event, if an organization user triggered it.
- `triggered_by_integration`: Relation to the integration activation that triggered the event, if an integration triggered it.
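To make the shape concrete, here is a hypothetical event log entry sketched as a Python dict. The values are invented for illustration; the real response schema is in the API Reference.

```python
# Hypothetical example of one event log entry (all values invented for illustration).
example_event = {
    "object_id": "8f14e45f-ceea-467f-a9d4-1c2b3a4d5e6f",  # UUID of the affected entity
    "content_type": "accounting.verification",            # type of that entity
    "event_identifier": "verification.instanceupdated",   # what happened (pattern assumed)
    "created_at": "2024-01-15T08:30:00.000Z",             # when the event was logged
    "event_context": None,                                 # e.g. pk of a deleted instance
    "triggered_by_organization_user": None,                # set when a user triggered it
}
```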
You can query the Event Log list endpoint with filters (e.g. on `created_at` for a time window, and optionally on `content_type` or `event_identifier`) to get all events in that window. See the API Reference for the exact endpoint and filter parameters: `GET /common/eventlog/list/`.
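As a minimal sketch in Python (using the `requests` library; the base URL and authentication header are placeholders, so adapt them to your setup as described in the API Reference):

```python
import requests

BASE_URL = "https://api.pigello.example"  # placeholder; use your actual API base URL
HEADERS = {"Authorization": "Bearer <your-api-key>"}  # placeholder auth scheme

# Fetch all events since a given timestamp, optionally narrowed to one entity type.
resp = requests.get(
    f"{BASE_URL}/common/eventlog/list/",
    headers=HEADERS,
    params={
        "created_at__gte": "2024-01-15T00:00:00.000Z",
        "content_type": "accounting.verification",  # optional: only this entity type
    },
)
resp.raise_for_status()
events = resp.json()  # response envelope and pagination: see the API Reference
```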
If your integration is normally driven by events (e.g. webhooks) but you missed some—or you are starting from a point in time—you can backfill:
- Choose a time window (e.g. from when you last processed events until now).
- Query the Event Log for that window: filter on `created_at` (e.g. `created_at__gte` and `created_at__lte`) and optionally on `content_type` or `event_identifier`.
- Order by `created_at` to process items in order.
- From the result, collect the set of (`content_type`, `object_id`) pairs that you care about.
- Fetch only those entities from the API (by ID or by list with `id__in`). You do not need to re-read entire datasets, only the entities that actually had events in the window.
- Process them (e.g. send to your system, update your cache) as you would for a live event.
This way you can catch up after an outage or initialize from a given date without scanning all data.
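Sketched end to end in Python (same placeholder base URL and auth as above; the verification list endpoint path and the plain-list response shape are assumptions, and `process_entity` stands in for your own handler):

```python
import requests

BASE_URL = "https://api.pigello.example"  # placeholder
HEADERS = {"Authorization": "Bearer <your-api-key>"}  # placeholder

def process_entity(entity: dict) -> None:
    ...  # stand-in: send to your system, update your cache, etc.

def backfill(window_start: str, window_end: str) -> None:
    # Query the Event Log for the window, ordered by created_at.
    resp = requests.get(
        f"{BASE_URL}/common/eventlog/list/",
        headers=HEADERS,
        params={
            "created_at__gte": window_start,
            "created_at__lte": window_end,
            "content_type": "accounting.verification",
            "_order_by": "created_at",
        },
    )
    resp.raise_for_status()
    events = resp.json()  # assumes a plain list; handle pagination per the API Reference

    # Collect the ids that actually had events in the window.
    object_ids = {e["object_id"] for e in events}
    if not object_ids:
        return  # nothing changed

    # Fetch only those entities via id__in (list endpoint path is an assumption).
    entities = requests.get(
        f"{BASE_URL}/accounting/verification/list/",
        headers=HEADERS,
        params={"id__in": ",".join(object_ids)},
    ).json()

    # Process them as you would a live event.
    for entity in entities:
        process_entity(entity)
```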
If you run a scheduled job (e.g. every hour or every night):
- Store the last run time (or the `created_at` of the last event you processed).
- On each run, query the Event Log for events after that time (e.g. `created_at__gte` = last run).
- From the events, build the set of (`content_type`, `object_id`) pairs that changed.
- Fetch only those entities from the API (by ID or by list with `id__in`). You do not need to re-read entire datasets, only the entities that actually had events in the window.
You only request data for entities that actually changed, which keeps the sync faster and avoids re-reading unchanged data.
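A minimal scheduled-run sketch, assuming the watermark (last run time) is kept in a local JSON file; in practice you would store it wherever your job state lives:

```python
import datetime
import json
import pathlib

import requests

BASE_URL = "https://api.pigello.example"  # placeholder
HEADERS = {"Authorization": "Bearer <your-api-key>"}  # placeholder
STATE_FILE = pathlib.Path("last_run.json")  # where the watermark is persisted

def run_sync() -> None:
    # Load the watermark from the previous run.
    if STATE_FILE.exists():
        last_run = json.loads(STATE_FILE.read_text())["last_run"]
    else:
        last_run = "1970-01-01T00:00:00.000Z"  # first run: start from the beginning

    # Only events after the watermark.
    resp = requests.get(
        f"{BASE_URL}/common/eventlog/list/",
        headers=HEADERS,
        params={"created_at__gte": last_run, "_order_by": "created_at"},
    )
    resp.raise_for_status()
    events = resp.json()  # assumes a plain list; see the API Reference for pagination

    # Build the set of (content_type, object_id) pairs that changed since last run.
    changed = {(e["content_type"], e["object_id"]) for e in events}
    ...  # fetch and process only those entities (see the processing sketch below)

    # Advance the watermark so the next run only sees newer events.
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    STATE_FILE.write_text(json.dumps({"last_run": now}))
```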
Use the Event Log list endpoint with time-based filters:
- After a time: `created_at__gte=2024-01-15T00:00:00.000Z` returns events on or after this timestamp.
- Before a time: `created_at__lte=2024-01-15T23:59:59.999Z` returns events on or before this timestamp.
- Between: combine both to get a closed time window.
Add ordering (`_order_by`) to sort, for example on the event creation timestamp (`created_at`). You can also filter by `content_type` or `event_identifier` if you only care about certain entity types or actions (e.g. only deletes), as in the sketch below.
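For example, to pull only deletion events for one entity type in a closed window (a sketch; the exact `event_identifier` value follows the `*.instancedeleted` pattern mentioned earlier, so verify it in the API Reference):

```python
# Query parameters for "only tenant deletions on 2024-01-15, oldest first".
params = {
    "created_at__gte": "2024-01-15T00:00:00.000Z",  # on or after
    "created_at__lte": "2024-01-15T23:59:59.999Z",  # on or before (closed window)
    "content_type": "accounts.tenant",              # only this entity type
    "event_identifier": "tenant.instancedeleted",   # only deletes (verify exact identifier)
    "_order_by": "created_at",                      # chronological order
}
```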
Once you have the event log entries for your window:
- Deduplicate by (`content_type`, `object_id`) so each entity is processed once even if it had multiple events in the window.
- Fetch the current state of those entities from the relevant list or detail endpoints (e.g. by `id__in` for that content type). For deleted entities, the event log tells you they were deleted; you may not need to fetch them, only to remove them from your side.
- Apply the changes in your system (create, update, or delete) based on the event identifiers and the current state you fetched.
This pattern keeps your integration efficient: you only pull the data that changed in the time window.
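A sketch of that processing step, assuming delete events can be recognized by an `instancedeleted` suffix on `event_identifier` and that list endpoints accept a comma-separated `id__in` filter (both assumptions to verify against the API Reference):

```python
from collections import defaultdict

import requests

BASE_URL = "https://api.pigello.example"  # placeholder
HEADERS = {"Authorization": "Bearer <your-api-key>"}  # placeholder

# Hypothetical mapping from content_type to its list endpoint; fill in from the API Reference.
LIST_ENDPOINTS = {
    "accounting.verification": "/accounting/verification/list/",
    "accounts.tenant": "/accounts/tenant/list/",
}

def process_events(events: list[dict]) -> None:
    touched: dict[str, set[str]] = defaultdict(set)

    # Deduplicate by (content_type, object_id); handle deletions without fetching.
    for e in events:
        if e["event_identifier"].endswith("instancedeleted"):  # assumed delete marker
            ...  # remove the mirrored record from your system
            touched[e["content_type"]].discard(e["object_id"])  # no point fetching it
        else:
            touched[e["content_type"]].add(e["object_id"])

    # Fetch the current state of everything else, per content type, via id__in.
    for content_type, ids in touched.items():
        if not ids:
            continue
        resp = requests.get(
            f"{BASE_URL}{LIST_ENDPOINTS[content_type]}",
            headers=HEADERS,
            params={"id__in": ",".join(ids)},
        )
        resp.raise_for_status()
        for entity in resp.json():
            ...  # create or update the record in your system
```

Note that this relies on the events being ordered by `created_at`: a delete that happens after an update then correctly removes the id from the fetch set.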
- Event Logs (`common.eventlog`) record what happened to which object in Pigello (`object_id`, `content_type`, `event_identifier`, `created_at`).
- Use them to backfill event-driven integrations (find what changed in a time window after downtime or from a start date) or to sync schedule-based integrations (only process entities that changed since the last run).
- Query the Event Log list with a time window (`created_at__gte`/`created_at__lte`) and optional filters; then fetch only the entities that appear in the result.
- You avoid re-reading full datasets and only collect data for entities that were created, updated, or deleted in the window.