Please read the Users and roles page in the User guides.
Please read the Communities and organisations page in the User guides.
All user data captured or calculated by the mediarithmics platform ends up in a special database called the datamart.
A mediarithmics datamart is more than a traditional database. This new-generation database leverages a database engine specially designed to excel in the domain of data-driven marketing applications. This engine is the outcome of more than seven years of R&D by the mediarithmics engineering teams, and it is the first database engine to provide both real-time query processing and scalability on large data volumes.
A datamart is a multi-model database. It exposes the information it manages as an object graph, where nodes can either be persisted objects or objects computed on the fly. The datamart also generates column-oriented tables specialized for analytics scenarios.
The object graph nature of a datamart is particularly well suited to capturing the 'user graph', where each piece of information associated with a user is connected with the others in a local graph structure.
The mediarithmics graph structure organizes each user around a node called a UserPoint. The user can be identified or anonymous. Think of the UserPoint as a pinpoint that a detective would use to connect various pieces of information on the board, as shown in the following diagram.
Automatic identity resolution is how the software "detective" merges two UserPoints representing the same person. All content and identifiers are regrouped under the older UserPoint, and the newer UserPoint is then archived.
The structure of a datamart is defined by a customizable schema, defined in a text file and based on the GraphQL Schema Definition Language (SDL). The GraphQL schema defines what types of data are stored in your data graph. Schemas are strongly typed, which unlocks powerful developer tooling.
SDL is simple and intuitive to use, while being extremely powerful and expressive. The specification of this standard is available here: http://spec.graphql.org/
A tutorial explaining the syntax of the type system is available here: https://graphql.org/learn/schema/
User data alone is not sufficient to implement powerful personalized marketing. As in interpersonal communication, experience can help us make educated guesses. The datamart therefore supports homogeneous management of collected data and automatically computed data, giving the necessary insight on each user. Both types of data are declared in the same GraphQL schema.
The mediarithmics datamart allows the homogeneous management of the collected data and the computed data. They are both declared in the same GraphQL schema and from the outside nothing distinguishes them.
From the inside, the computed data is associated with algorithms that are loaded into the platform in the form of plugins. These algorithms can have different roles:
Machine Learning Function to calculate predictive data
UserTrait Function to calculate aggregates on user data
Like all the other plugins, they are freely modifiable and customizable.
A datamart can be queried through three languages:
GraphQL: to allow querying, in real time, the local graph of a single user (e.g. for real-time personalization scenarios)
OTQL (Object Tree Query Language): to query the whole object graph in a couple of seconds, even with billions of data points. (e.g. for audience segmentation queries)
SQL: to run powerful analysis of your behavioral data
Welcome to our documentation website. Whether you are in Ad / Digital Ops, a Data Engineer, a Data Analyst, or a Data Scientist, you will find here all the information you need to make the most of mediarithmics.
Our vision is to provide a data-first marketing cloud that helps organisations perform as best-in-class players in the digital economy.
The mediarithmics platform can be seen either as a set of digital marketing applications that can be used directly by marketers, or as a highly customizable data marketing cloud infrastructure that data engineers and data scientists can tailor to the needs of their organisation.
Although this 'Get started' section is mainly targeted at a technical audience, it is written in plain English, without reference to technical implementation details, and should be readable by anyone interested in the way mediarithmics structures the world of modern marketing. It is a good introduction to the mediarithmics platform.
The purpose of a data marketing platform is to enable use of all available information to make better decisions in all areas of marketing.
The mediarithmics platform can collect data from any source. Whether the data comes from the users' online activity or from back-office management systems such as CMSs, ERPs, and CRMs, all the data is stored for each user in a graph model as shown below. This universal data model allows you to connect all the information about a user: collected information such as account identifiers, device identifiers, profile information, activities, and purchases, as well as data calculated on the fly, such as appetency scores, age and gender predictions, and recommendations of articles or other content.
This centralized data allows you to conduct multiple analyses (visit analytics, uplift analytics, segment insights, data discovery, etc.) and to act when the time comes:
Either by activating audience segments via a rich ecosystem of connectors to advertising networks, communication tools (notifications, emails), or personalization tools (on-site display, A/B testing on user journeys),
Or by defining orchestrations that are triggered automatically, aligned with the key moments of the user experience (e.g., first visit, account creation, drop in visit frequency).
In many companies, the recent history of data marketing has been built by the accumulation of different solutions that stack or overlap. Whether it is for the acquisition of new customers, the improvement of the conversion rate, the retention of existing customers, or the construction of predictive models, each application in isolation makes sense and provides a service to a team.
But after a while, this silo-based structure no longer allows teams to be reactive and to innovate. Data is duplicated across multiple systems, and these systems cannot talk to each other because of the sheer weight of these huge data volumes.
The mediarithmics platform has been designed from the start to overcome these structural difficulties and introduce a new model: the data-first marketing cloud.
In this model, the data gathered in a single datamart is shared by all applications and users. Both mediarithmics and partner applications access the same source of truth, for analytical queries that mobilize large amounts of data as well as for all the real-time personalisation micro-queries that concern the characteristics of a single user.
All the user data captured or calculated by the mediarithmics platform ends up in a special database called the datamart.
A mediarithmics datamart is more than a traditional database. This new-generation database leverages a database engine specially designed to excel in the domain of data-driven marketing applications. This engine is the outcome of more than seven years of R&D by the mediarithmics engineering teams. It is the first database engine to provide both real-time query processing and scalability on large data volumes (see the focus on Datamart).
The mediarithmics datamart is a key enabler of all the use cases that can be imagined in one-to-one marketing: real-time user experience customization, audience segmentation, funnel analytics, campaign analytics, and more.
The information inserted in a datamart is primarily stored following a graph structure, where nodes containing pieces of information are connected through a link to represent a relationship between those pieces of information.
Let's take an example. A user visits a website which implements a tracker connected to the mediarithmics platform. At the end of the visit, three nodes will be created in the datamart:
One node to represent the user
One node to represent the visit (pieces of information: the website, the date, the visit duration, pageviews, products in the basket, etc.)
One node to represent the device which was used.
Notice that the node representing the user looks like a pinpoint. This node is called a UserPoint and it plays the role of a central pinpoint connecting all pieces of information which are collected about the user.
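The visit example above can be sketched as a tiny in-memory graph. All node and field names here are illustrative assumptions, not the platform's actual storage format:

```python
# A minimal in-memory sketch of the user graph described above.
# Node and field names are illustrative, not the platform's storage format.

user_point = {"type": "UserPoint", "id": "up-001"}

visit = {
    "type": "UserActivity",
    "site": "www.example.com",
    "date": "2024-05-01",
    "duration_s": 182,
    "pageviews": 7,
}

device = {"type": "UserDevice", "user_agent_id": "device-abc"}

# Edges connect each piece of information back to the central UserPoint.
edges = [(user_point["id"], visit), (user_point["id"], device)]

# Every collected node is reachable from the UserPoint, like pins on a board.
connected = [node["type"] for owner, node in edges if owner == user_point["id"]]
print(connected)  # ['UserActivity', 'UserDevice']
```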
In the User 360 View, it is important to distinguish the nodes representing a user identifier from the nodes representing descriptive content about the user.
There are three different types of user identifiers:
The UserAccount Id: it represents the identifier of a registered user. The registration system can be a CRM system, a loyalty program system, or an authentication system; it is external to the mediarithmics platform. The UserAccount Id is composed of a character string and a compartment id, which represents the registration system.
The UserEmail: it represents an identifier based on the user's email. It is usually derived from the email address by using a hash function (MD5, SHA-256, etc.).
The UserDeviceTechnicalId: it represents the identifier of a UserDevice.
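As a sketch of how a UserEmail hash can be derived: the normalization step (trimming and lowercasing before hashing) is a common convention assumed here, not a stated platform requirement:

```python
import hashlib

def user_email_hash(email: str, algo: str = "sha256") -> str:
    """Derive a UserEmail-style identifier from a raw email address.

    Trimming and lowercasing before hashing is a common convention so that
    'Jane.Doe@example.com ' and 'jane.doe@example.com' map to the same hash.
    """
    normalized = email.strip().lower()
    return hashlib.new(algo, normalized.encode("utf-8")).hexdigest()

print(user_email_hash("Jane.Doe@example.com "))
print(user_email_hash("jane.doe@example.com"))  # same hash as above
```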
The descriptive content of a user 360 view is composed of different data elements corresponding to different points of view on the user:
A UserActivity represents an interaction with the user
A UserProfile represents a summary of timeless information about the user (e.g. first name, last name, birth date, address, gender, age, etc.)
A UserSegment is here to capture that the user belongs to a particular group of users, such as an audience segment.
A UserActivity represents any interaction with the user either online or offline. It can represent the summary of an online visit, the detailed content of an in-store purchase, the summary of a call to a call-center, a campaign interaction like an opened email or a click on a banner.
A UserActivity can contain a list of events. An event represents the most granular level of interaction with a user. It is defined by a name and a set of custom properties.
For each user, the UserActivities are ordered in a timeline.
For more detailed information on UserActivity and UserEvent, please follow the link below.
UserProfile
A UserProfile provides a summary of information on the user and can contain any kind of information. It is usually imported from an existing information system, like a CRM or a login database, and collects information such as contact details, user preferences, status in a loyalty program, and newsletter subscriptions.
A UserSegment node represents a user who belongs to an audience segment. An Audience Segment is a group of users who share some common characteristics in the eyes of the marketer. There are several ways to define an audience segment in the platform.
Whatever the type of Audience Segment, a User 360 view contains a UserSegment node for each audience segment the user belongs to.
For more detailed information on audience segments and their life-cycles, please follow the link below.
A UserChoice node represents the choice of a user regarding data privacy and the data processing declared by the organisation. This information is used to automatically adapt the behavior of the applications to respect the user's choices about data privacy.
A UserScore is a set of data calculated dynamically and providing predictive information on a user. A UserScore is connected to a Machine Learning Function (ML Function). A UserScore can, for example, predict the expected lifetime value, churn risk, age, or gender of a user.
All platform services are directly manageable and consumable through APIs.
Beyond these standard capabilities, the platform is also a computing infrastructure which enables the hosting of custom algorithms packaged in plugins. A plugin is a bundle of code and data which defines the behavior of a live function hosted in the mediarithmics cloud. It can be developed in any programming language, although, due to latency requirements, some languages may be better suited than others.
The mediarithmics platform can integrate these plugins:
Activity Analyzer: to filter and enrich activities data collected through a tag or the API
Audience Feed: to push audience segments to third-party systems
Machine Learning Function: to add live predictive data to a user graph
The mediarithmics platform operates at the heart of an open ecosystem by offering connections with many partners.
Our platform covers your needs, including (but not limited to): analytics tools, messaging solutions, personalization solutions, prediction tools, advertising networks, and audience monetization.
Email Renderer: to customize the rendering of emails
Attribution Processor: to customize the attribution of a marketing conversion to different campaigns
We categorise Network IDs into two groups: Device-based Network IDs and User-based Network IDs. Each category allows you to add custom identifiers or subscribe to existing ones.
To integrate a Device-based or User-based Network ID, the steps are generally:
Create a new Device Registry or Compartment, or subscribe to an existing one
Activate this Device Registry or Compartment on the desired datamarts (only required when there are multiple datamarts in your environment)
Activate the Device Registry or Compartment ingestion on your Channels
Don't hesitate to check the documentation of a particular Network ID below for more information and detailed procedures.
Device-based Network IDs refer to Device Registries because each device is assigned a "registry" where its identifiers are stored.
Here are common steps to activate a Device registry on a specific datamart:
If you have a single datamart, the Device Registry is automatically activated, and you can go to the next step.
If you have multiple datamarts, you must manually activate the Device Registry:
Go to Navigator > Settings > Organisation > Device Registries.
Select your Device Registry.
This is the current list of supported cloud data warehouse providers:
Google BigQuery
To set up your own BigQuery data warehouse as a data store for mediarithmics, please follow these steps in order:
For First ID, you need to:
Subscribe to First ID
Activate First ID data ingestion from Websites
Go to Navigator > Settings > Organisation > Device Registries.
In the Shared device registries section, click Manage subscriptions.
Subscribe to First ID.
If you have multiple datamarts, don't forget to activate First ID on all datamarts.
First ID requires a modification of the user-event-tag to ingest the identifier. Follow these steps:
Head to Navigator > Settings > Datamart > Channels.
Select the site where you want to activate automated First ID capture.
Go to JS Tag Configuration > Device Identification.
Enable First ID.
Click the More button to open the menu.
Choose Edit linked datamarts.
Select all datamarts where you want to activate the Device Registry.
Repeat these steps for all sites where First ID data collection is expected.
For ID5, you need to:
Subscribe to ID5
Activate ID5 data ingestion from websites
Go to Navigator > Settings > Organisation > Device Registries.
In the Shared device registries section, click Manage subscriptions.
Subscribe to ID5.
If you have multiple datamarts, activate the ID5 subscription on all datamarts.
ID5 requires a specific technical configuration to be properly ingested from your website.
Please contact your Account Manager to complete the ID5 configuration for your channel (an activity analyzer will be added to your channel to handle ID5 IDs).
A UserPoint can have multiple user accounts and user profiles in the same datamart, for example a profile on one of your sites and another profile on a different site.
Compartments are the notion created to represent those different places where users have accounts and profiles. There is always one default compartment on a datamart, and you can create more of them. Compartments are associated with user account IDs to create a user identifier.
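Conceptually, a user account identifier is only unique within its compartment, so the (compartment id, account id) pair forms the full key. A minimal sketch, with illustrative ids:

```python
# Sketch: the same account string in two different compartments
# identifies two distinct registrations.
accounts: dict[tuple[str, str], dict] = {}

def register(compartment_id: str, account_id: str, profile: dict) -> None:
    # The full user identifier is the (compartment, account) pair.
    accounts[(compartment_id, account_id)] = profile

register("cp-site-a", "user-42", {"newsletter": True})
register("cp-site-b", "user-42", {"newsletter": False})

# Two entries: 'user-42' is not globally unique, only unique per compartment.
print(len(accounts))  # 2
```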
UTIQ proposes two types of identifiers:
UTIQ martechpass (mobile): refers to identifiers coming from mobile network connections
UTIQ martechpass (fixed): refers to identifiers coming from internet box connections
To subscribe:
Go to Navigator > Settings > Organisation > Compartments.
In the Shared user account compartments section, click on Manage subscriptions.
Subscribe to:
UTIQ martechpass (mobile)
If you manage multiple datamarts, ensure UTIQ is activated on each one.
You must subscribe to each type individually, depending on the identifiers you want to capture. The configuration steps are the same for both mobile and fixed identifiers.
UTIQ IDs can be ingested natively via the user-event-tag when users browse your websites.
To enable automatic ingestion:
Go to Navigator > Settings > Datamart > Channels.
Select the site where you want to activate UTIQ capture.
Under JS Tag Configuration > Device Identification, enable:
UTIQ martechpass (mobile)
mediarithmics will create new audience segment feeds for you, and they will be made available to all other customers. Please discuss your needs with your Account Manager.
Here are the steps we follow to create a new audience segment feed for you:
Validation process, to ensure the connector is needed.
Audience segment feed specification: studying how we connect to the partner and which parameters should be made available to the user.
Code and validation.
Cookie matching set-up. Some partners need a cookie-matching mechanism so that we can send them identifiers that have meaning for them.
A UserPoint is a 360° view of a unique user on the platform.
It is composed of different sets of data to cover all the user's aspects, providing a 360° view (online and offline):
UserActivities represent an interaction with the user
UserProfiles represent a summary of timeless information about the user, like first name, last name, birth date...
The activity analytics endpoint has been designed as a cube to query user activities.
This API gives you programmatic access to user activities as a data cube. You get metrics with dimensions, filters, and date ranges, leveraging the column-oriented part of our multi-model database.
With the activities analytics API, you can create reports to answer questions like:
Number of active users. The metric is users and it has no dimensions or filter.
Audience segments represent a group of users and are central to most marketing actions. Users can be grouped by common profile characteristics, or by similar behavior in their online browsing or purchases. Users can be grouped either using the mediarithmics platform, or externally and then imported.
Here is a list of audience segment types that differ from each other by the method used to group users:
Audience segments calculated from a query (type USER QUERY). Several tools are provided to define this type of segment. Occasional platform users will find it easier to use the Audience Builder. More advanced users such as data admin or data analyst will find a greater wealth of expression in the Segment Builder. Finally, technical users, data engineers or integrators, can directly use a textual query language (see OTQL).
Audience segments imported from an external system (type USER LIST)
It is important to distinguish the nodes representing a user identifier and the nodes describing content on the user.
Every activity or import that has one of those identifiers set will automatically be related to the correct UserPoint.
Any activity using an identifier that doesn't already exist in the datamart will trigger the creation of a new UserPoint.
There are three different types of user identifiers:
To declare a new data warehouse, go to the Computing console; under the Data Store section you will find the Data warehouse menu.
Define the following properties for your data warehouse:
A name
A description
Then upload or drag and drop your credentials file. If the credentials file contains the correct set of permissions, the data warehouse will be created. If not, an error will be raised.
A dashboard is composed of sections and cards.
Each section has a title and cards disposed on a grid.
Each card is a white block organizing horizontally or vertically.
The size and position of each card is defined on a 12-column grid, with as many rows as needed. Card size and position are set with the {h,w,x,y} properties:
You can compute segments browser-side using our Edge technology and share them with Google Ad Manager. This allows you to react instantly to user behavior and trigger campaigns as users navigate your website.
To do so, the mediarithmics tag:
Collects and stores identifiers and navigation data in local storage. When an event is pushed by the tag, it is sent both to the activity processing pipeline and saved locally.
This document import allows you to mark device points for deletion.
Each line in the document is a user agent identifier that is linked to a device point (or is a device point id directly). For each of those identifiers, the job will find its associated device point and delete it. Note that a device point may be linked to multiple user agent identifiers, all of which will be deleted once their device point is (including user agent identifiers that may not appear in your document).
A hyper point is a preventive way to flag a UserPoint whenever too many identifiers are linked to it. Concretely, a UserPoint becomes a hyper point especially when at least one of the following conditions is met:
the frequency of identifier creation on this UserPoint reaches:
This document import allows you to mark UserPoints for deletion.
Each line in the document is a user identifier that is linked to a UserPoint (or is a UserPoint ID directly). For each of those identifiers, the job will find its associated UserPoint and delete it, along with all of its identifiers, segments, scenarios, and its profile.
You can manage charts and dashboards by API and in advanced mode; this can be useful in some advanced integrations, but will take longer.
With data visualisation, you can create dashboards:
In your datamart's home page
Each dashboard is represented by an object. It has a title, scopes, and a content composed of sections and cards.
Dashboards can be displayed on:
Datamart's home page with the home scope.
Segments page with the segments scope.
Deployment. We allow connections to the partner's API during this step.
Specify the following details:
Name: the name of the Compartment
Token: a technical identifier used to link the Compartment with the JS Tag
If you have multiple datamarts, don't forget to activate the new Compartment on all other datamarts.
Audience segments calculated from a likeness prediction algorithm (type USER LOOKALIKE)
Audience segments associated with the events of a campaign (e.g., all the people exposed to a given video campaign) (type USER ACTIVATION)
Audience segments associated with A/B tests (control group and test group)
An audience segment represents a group of users. A UserSegment is a piece of information that is associated with a user to indicate that this user belongs to the segment.
For example, if an audience segment has 10,000 users, each user has a UserSegment to signify their membership in this segment; there are therefore 10,000 UserSegment objects, one for each user in the segment.
In the data schema this information element is materialized in the form of an object of type UserSegment (see standard datamart schema). This object contains the following information:
The segment identifier
The date the user entered the segment (creation date of the UserSegment object)
The expiration date at the end of which the user will exit the segment. This date is optional and is only entered for some types of segments.
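The three fields above can be sketched as a small record. The field names and the membership helper are illustrative, not the schema's exact identifiers:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class UserSegment:
    """Sketch of the UserSegment information element described above."""
    segment_id: str
    creation_date: datetime                      # date the user entered the segment
    expiration_date: Optional[datetime] = None   # optional: when the user exits

    def is_member(self, at: datetime) -> bool:
        """The user is a member from creation until the optional expiration."""
        if at < self.creation_date:
            return False
        return self.expiration_date is None or at < self.expiration_date

s = UserSegment("seg-10", datetime(2024, 1, 1), datetime(2024, 6, 1))
print(s.is_member(datetime(2024, 3, 1)))  # True: between entry and expiration
print(s.is_member(datetime(2024, 7, 1)))  # False: past the expiration date
```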
When working with a User Query audience segment, it is important to understand the difference between the count of the segment and the persistence of this segment.
Counting consists of calculating the number of users who verify the segment's grouping rules. That is to say the users who respond positively to the conditions and to the boolean operators present in the segment query. The counting of a segment is done in real time. It usually only takes a few seconds (2-5 sec) to get a result.
However, if the count is immediate, the process of bringing all the targeted users into the segment by creating a UserSegment type record for each of them may take more time. This time depends on the size of the segment, the segment refresh policy and the calculation budget allocated to the account.
Proceeds to key/value targeting with the configured activation platforms:
Panel-based: key ct_panel_mics, value Segment ID
Semantic: key ct_semantic_mics, value Targeting list ID
The compatible platforms for activating a targeting list are:
Setup prebid.js on your website
Have an active account on Xandr Monetize solution
You'll need to provide your Xandr credentials to your mediarithmics Account Manager, in particular the following:
Username
Password
Here are general steps to follow in order to activate a Compartment on a specific datamart:
If you have a single datamart, the Compartment is automatically activated.
If you have multiple datamarts, you must activate it on every datamart:
Head to Navigator > Settings > Datamart > Compartments.
Select your Compartment.
Click the More button to open the menu.
Choose Edit linked datamarts.
Select the datamart where you want to activate the Compartment.
For community-created user IDs, activate them across organizations by selecting Enable Compartment in the Shared User Account Compartments Enabled section.
For BigQuery warehouses, if you chose to give limited access, you should enable the "Has read-only access" toggle before validating the creation.
more than 10 identifiers per Registry ID in 2 days,
more than 4 Account IDs in 7 days,
more than 4 Email hashes in 7 days,
more than 8 UserPoints merged into that UserPoint in the last 14 days,
more than 100 user identifiers attached to this UserPoint.
This system is a preventive flagging of a UserPoint, so that the UserPoint remains actionable in mediarithmics.
The quarantine job is a monthly process that flags any UserPoint which has too many objects attached to it. This happens especially when at least one of the following conditions is met:
more than 10,000 activities,
more than 5,000 segments.
A UserPoint in quarantine can still be looked up in Navigator, but no more additions will happen to it. A UserPoint can be freed by the quarantine job if it gets below the above-mentioned limits. This particularly happens by deleting activities (through the API or when cleaning rules are applied) on a quarantined UserPoint.
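The hyper point and quarantine guard-rails can be sketched as simple predicates. The thresholds come from the text above; the function shapes are assumptions (the real hyper point check also looks at per-registry, Account ID, and Email hash creation frequencies):

```python
def is_quarantined(activity_count: int, segment_count: int) -> bool:
    """Quarantine: too many objects attached to one UserPoint."""
    return activity_count > 10_000 or segment_count > 5_000

def is_hyper_point(identifier_count: int, merged_in_14_days: int) -> bool:
    """Hyper point: too many identifiers linked to one UserPoint.
    Simplified: only two of the documented conditions are modelled here."""
    return identifier_count > 100 or merged_in_14_days > 8

print(is_quarantined(12_000, 40))    # True: over the 10,000 activity limit
print(is_quarantined(9_000, 4_999))  # False: both counts under their limits
print(is_hyper_point(101, 0))        # True: over 100 attached identifiers
```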
UTIQ martechpass (fixed)
Repeat this process for each website where UTIQ ID capture is required.


UserSegments capture that the user belongs to a particular group of users
UserChoices capture that the user has given consent for a specific type of data processing
UserTraits represent a calculated attribute used to describe the user.
Imagine you visit a site with Firefox, and later go back to that site with Safari. At that moment, there is no way for the platform to know that all those actions come from the same user: you'll have two UserPoints, one with the UserDeviceTechnicalId from Firefox and the other with the UserDeviceTechnicalId from Safari. If you later log into the site with your UserAccount on both browsers, the two UserPoints become associated with a common UserAccount Id identifier. The platform will automatically merge the two UserPoints and their activities to reflect the reality that those two users were the same person.
UserPoint merges can only be triggered by:
The merge will result in a surviving UserPoint, to which we migrate all the data related to the previous two UserPoints.
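The merge behaviour described above (the older UserPoint survives and collects everything) can be sketched like this; the dict fields are illustrative, not the platform's data model:

```python
def merge_user_points(a: dict, b: dict) -> dict:
    """Sketch of automatic identity resolution: the older UserPoint survives,
    all identifiers and activities migrate onto it, the newer one is archived."""
    older, newer = sorted((a, b), key=lambda up: up["created_at"])
    older["identifiers"] = older["identifiers"] | newer["identifiers"]
    older["activities"] = older["activities"] + newer["activities"]
    newer["archived"] = True  # the newer UserPoint is archived
    return older

firefox = {"created_at": "2024-01-01", "identifiers": {"device:ff"}, "activities": ["visit-1"]}
safari = {"created_at": "2024-02-01", "identifiers": {"device:sf", "account:42"}, "activities": ["visit-2"]}

survivor = merge_user_points(firefox, safari)
print(sorted(survivor["identifiers"]))  # ['account:42', 'device:ff', 'device:sf']
print(safari["archived"])               # True
```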
Number of active users per day and per channel. The metric is users, grouped by the channel_id and date_yyyymmdd dimensions, without filters.
Number of sessions per day for users who had an activity of type AD_VIEW on campaign 666. The metric is sessions, grouped by the date_yyyymmdd dimension, with filter clauses activity_type = AD_VIEW and origin_campaign_id = 666.
Days with more than 200k transactions on a specific channel. The metric is number_of_transactions, grouped by the date_yyyymmdd dimension, with a filter clause on channel_id = 666. Then a filter is applied on the calculated metric to keep only days with more than 200k transactions.
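To make the second example concrete, here is one plausible shape for such a query body. The field names (date_ranges, metrics, dimensions, dimension_filter_clauses) are assumptions; check the activities analytics API reference for the exact contract:

```python
import json

# Hypothetical request body for "sessions per day for users with an
# AD_VIEW activity on campaign 666" -- field names are assumptions.
query = {
    "date_ranges": [{"start_date": "2024-01-01", "end_date": "2024-01-31"}],
    "metrics": [{"expression": "sessions"}],
    "dimensions": [{"name": "date_yyyymmdd"}],
    "dimension_filter_clauses": {
        "operator": "AND",
        "filters": [
            {"dimension_name": "activity_type", "operator": "EXACT", "expressions": ["AD_VIEW"]},
            {"dimension_name": "origin_campaign_id", "operator": "EXACT", "expressions": ["666"]},
        ],
    },
}

# The body is plain JSON, ready to POST to the analytics endpoint.
body = json.dumps(query)
print("origin_campaign_id" in body)  # True
```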
It can also be used to build custom dashboards. For more information, see Dashboards.
Calling the API to get your first metrics is easy with your favorite tool that you already use to query other mediarithmics endpoints. See the API Quickstart guide to get started.
user_activities_analytics returns a customized report of your activities analytics data.
See Dimensions and metrics for the complete list of supported dimensions and metrics.
This endpoint is a mediarithmics Data cube. You can find documentation on how data cubes work and which data cubes are available in the specific documentation section.
h is the number of rows that the card takes.
w is the number of columns that the card takes.
{x,y} are the coordinates of the top-left corner of the card on the grid.
Here is a sample grid with five cards and their corresponding properties:

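The same idea can be expressed directly in code: five hypothetical cards placed with {h,w,x,y} properties, plus a small check that they fit the 12-column grid without overlapping:

```python
from itertools import combinations

# Five hypothetical cards on the 12-column grid, using {h,w,x,y} as above.
cards = [
    {"x": 0, "y": 0, "w": 12, "h": 1},  # full-width header card
    {"x": 0, "y": 1, "w": 6,  "h": 2},  # left half
    {"x": 6, "y": 1, "w": 6,  "h": 2},  # right half
    {"x": 0, "y": 3, "w": 4,  "h": 1},
    {"x": 4, "y": 3, "w": 8,  "h": 1},
]

def overlaps(a: dict, b: dict) -> bool:
    """Two cards overlap if their column spans and row spans both intersect."""
    return (a["x"] < b["x"] + b["w"] and b["x"] < a["x"] + a["w"]
            and a["y"] < b["y"] + b["h"] and b["y"] < a["y"] + a["h"])

assert all(c["x"] + c["w"] <= 12 for c in cards)                  # fits 12 columns
assert not any(overlaps(a, b) for a, b in combinations(cards, 2)) # no overlaps
print("layout OK")
```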
Computes browser-side which segments the user enters or leaves.
When a user enters or exits one or more segments:
Triggers the Google Ad Manager integration
Sends the information back to the server to maintain a server-side state of the segment. Segment statistics are computed periodically by running a query that counts users flagged as belonging to the segment.
To configure this feature, please follow these steps:
Make sure your site is implementing the latest version of the mediarithmics snippet (see Website tracking for more information)
Update your schema:
Identify which properties in your schema need to be used to segment your users with Edge. These properties must exist in the schema and be stored exactly as pushed by the tag, without any transformation applied (e.g., event rule, visit analyzer, @PropertyPath)
Mark the properties you want to expose to Edge segmentation with the @EdgeAvailability flag
Don't hesitate to validate your schema with our team 👍
Enable Edge on the channel. Your Account Manager will activate this feature on your channels
Finalize the integration with the SSP / AdServer:
For GAM: the JS snippet automatically pushes the segment IDs as key/values to the ad call, but at the moment we do not create them automatically in GAM; they need to be created manually in the GAM UI.
For any other platform: there is no integration at the moment, though it can be done easily in the snippet, as the segment IDs are available in the local storage.
A computed field is an instance of a Datamart function applied within a specific context. To use it, it must be declared in your schema.
A Computed Field Function is a plugin of type COMPUTED_FIELD_FUNCTION, which can be created via the API following the standard process or through the computing console interface.
A computed field is defined by these elements:
State: The data stored to build the result. It is the history. Each new activity updates the State.
→ the state is closely related to the Lookback Window - i.e. the historical depth of events to be analyzed.
Logic: The logic can be very simple (like a formula, a sum, a count…) or more complex (like a logical operation).
Result: The data calculated and stored. It contains the score(s) to be used.
The result can be either a single variable or an object, depending on the complexity of the computation. It is computed from the State
Other interesting concepts to take into consideration:
Input Data: New userActivities, userProfile updates, or both.
Period Update: The regularity of updates (every x days - by default x=1).
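As a minimal sketch of this State / Logic / Result split, here is a hypothetical "number of page views in the lookback window" field. All names are invented for illustration; the actual SDK types differ.

```typescript
// State: the history needed to build the result, bounded by the lookback window.
interface State {
  viewTimestamps: number[];
}

// Result: the score that gets stored and queried.
interface Result {
  pageViewCount: number;
}

// Logic, part 1: each new activity updates the State.
const onNewActivity = (state: State, ts: number): State => ({
  viewTimestamps: [...state.viewTimestamps, ts],
});

// Logic, part 2: the Result is recomputed from the State (here, a simple count).
const buildResult = (state: State): Result => ({
  pageViewCount: state.viewTimestamps.length,
});
```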
INITIAL – The instance is created with this status.
INITIAL_LOADING – Once declared in the schema and successfully validated, an initial loading job is triggered.
ACTIVE – The computed field becomes active if the job is completed successfully.
To better understand how a computed field works, here are the technical steps and the relation with the methods:
Initialization: The computed field is declared in the schema.
State Updates: Triggered by new user activities or profile updates.
Result Calculation: The result is computed based on the current state.
Storage: The result is stored for querying.
Use the bulk import endpoints to create a document import with the USER_DEVICE_POINTS_DELETION document type and APPLICATION_X_NDJSON mime type. Only ndjson data is supported.
Create an execution with your commands formatted in ndjson. Each line represents a user agent.
Please note that the uploaded data is in ndjson and not json. That means the different deletions are not separated by commas, but by a line separator.
A list of possible user_agent_id values can be found at
The OTQL language is an extension of GraphQL. It is based on the same schema and adds unique features to query a graph of millions or billions of objects.
We provide an analytics data cube to query user activities.
Use the bulk import endpoints to create a document import with the USER_POINTS_DELETION document type and APPLICATION_X_NDJSON mime type. Only ndjson data is supported.
Create an execution with your commands formatted in ndjson. Each line can represent either a user agent, a user email, a user account or a UserPoint ID. Their respective syntax is detailed in User identifiers deletion.
Please note that the uploaded data is in ndjson and not json. That means the different deletions are not separated by commas, but by a line separator.
A list of possible user_agent_id values can be found at
In your segments
In the standard segment builder
Those dashboards will answer questions like:
What is ingested in the platform?
Do we have moments where we ingest less data than on other days?
Who are my users?
How many active users do I have? On which channels?
What differentiates people in this segment from the rest of my users?
Do I have enough data to activate a segment?
Create your first dashboard with our Quickstart guide.
Speed up your learning curve with useful examples in our Cookbook.
Standard segment builders with the builders scope.
A specific set of segments with the segments scope and segment IDs in the segment_ids property.
A specific set of standard segment builders with the builders scope and builder IDs in the builder_ids property.
You can have multiple dashboards at the same scope.
If multiple dashboards are to be displayed at a given scope, tabs are created to switch between them. A single dashboard is displayed without any tab.
It is better to have multiple dashboards than a single big one, to prevent too many requests from being executed simultaneously.
See REST resources for managing dashboards by API.
If you want to start from an existing dashboard, you can
Open the dashboard you want to clone on the computing console
Go to the Advanced tab and copy the whole JSON
Create a new dashboard
Go to the Advanced tab and paste the whole JSON
Start editing your new dashboard in the WYSIWYG or the Advanced tab
Online tracking gives you the ability to track unique users across digital properties. We support the following integrations in real time:
Use this feature if you want to track in real time what your users are doing on your digital touch points.
If your need is to mass import activities, users or CRM profiles, you should consider .
With real-time user tracking, you send user events to mediarithmics that are aggregated and encapsulated into user activities. Some user activities only have one user event, while others can have multiple events.
For example, when you send hits from the JS Tag on a web site, mediarithmics aggregates all the events into sessions, creating one user activity per session.
If registering a visit as IN_SESSION, the visit will go through the session aggregation step. We aggregate visits in sessions based on the provided identifiers. We close sessions after 30 minutes of inactivity, or if a new event is recorded with a referrer from a different domain than the previously recorded one. Visits registered as LIVE don't create sessions.
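The session-closing rule described above can be sketched as follows. This is an illustration of the stated behavior, not the platform's actual implementation:

```typescript
const THIRTY_MINUTES_MS = 30 * 60 * 1000;

interface TrackedEvent {
  ts: number;               // event timestamp in milliseconds
  referrerDomain?: string;  // domain of the referrer, if any
}

// A session closes after 30 minutes of inactivity, or when a new event
// arrives with a referrer from a different domain than the previous one.
function closesSession(last: TrackedEvent, next: TrackedEvent): boolean {
  if (next.ts - last.ts > THIRTY_MINUTES_MS) return true;
  return (
    last.referrerDomain !== undefined &&
    next.referrerDomain !== undefined &&
    next.referrerDomain !== last.referrerDomain
  );
}
```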
When using the real-time tracking capability of mediarithmics, you enter what is called the User Activity Processing Pipeline. It allows mediarithmics to do some processing on your behalf.
Here are the steps of processing, in order:
At session closing, or every minute if events are live, creation of a user activity.
Execution of event rules. They allow you to edit the events in the activity and add new ones based on their shapes.
Application of the activity analyzer. It is a plugin allowing you to execute code to transform each activity.
The activity is finally stored.
To measure and optimize the performance (i.e., cost per action) of your marketing campaigns, you need to track user conversions driven by your Marketing Campaign.
On the mediarithmics platform, you create a Goal to define a trigger when a user should convert.
There are two ways of configuring a Goal:
Creating a Goal and using the associated Pixel. Each user that sees the Pixel in a page will convert for this goal. This Pixel should be present on the confirmation page or dynamically loaded when the user converts (by using a Tag Manager or by including the 'pixel load' in the website logic)
Writing a rule that will be run on the Datamart data. It will generate a user conversion each time a user matches the rule.
From experience, option #1 is the simplest solution to configure and should be used for each 'non-recurrent' User Conversion tracking.
Option #2 offers more flexibility, at the cost of a longer and more complex configuration. It should be used when the:
datamart tracking JS TAGs / Pixels are already present on the website, including the page where the conversion happens
Goal to track will be used repeatedly
Goal definition requires a complex rule that can't be achieved by displaying a JS TAG / Pixel
When creating a goal in the navigator (Campaigns tab > Goals sub-tab):
Define the name of the goal.
Select Trigger via Pixel.
Copy/paste the generated HTML code to your technical team so that they can integrate it.
Don't forget to associate the correct attribution model for the attribution of the conversion to your Marketing Campaigns.
When creating a goal in the navigator (Campaigns tab > Goals sub-tab):
Define the name of the goal.
Select Trigger via Query.
Define the query that should be used to tell if a User has converted or not.
Don't forget to associate the correct attribution model for the attribution of the conversion to your Marketing Campaigns.
If you have issues when defining the proper Query to define your Goals, please try Option #1 or reach out to your Account Manager.
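A sketch of such a query-based goal, assuming a hypothetical $transaction_confirmed event name (the exact OTQL syntax and event names depend on your schema):

```
SELECT @count {} FROM UserActivity WHERE events { name = "$transaction_confirmed" }
```

Each user activity matching the rule would then generate a user conversion for the goal.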
Creating a connection to Snowflake requires setting up key-pair authentication:
Generate a private and a public key
Assign the public key to a Snowflake user
Create and format a credentials JSON file
Use the following command to generate a private key (called rsa_key.p8).
Then use the following command to derive the public key (called rsa_key.pub) from the private key rsa_key.p8 stored in the current directory.
From the Snowflake interface or the Snowflake CLI, run the following SQL query to assign the public key generated at step 1 (rsa_key.pub) to the Snowflake user:
Also make sure the user has a default warehouse; if not, run this query:
Now you need to generate a JSON file with the following format:
"user" : the user you associated the public key with
"account" : the "account" property is what is called "Account identifier" in Snowflake. It's usually the organization name and the account name hyphenated
"database" : the database containing the data you want to make available to mediarithmics
Bulk import user activities to insert user activities that happened outside of real-time tracking. Those activities usually are offline activities, like store visits or store purchases, but you can adapt it to your use cases.
Activities imported through bulk import don't go through the activity processing pipeline. You shouldn't use this feature if you intend to do conversion detection or automation activation. Your activity analyzers also won't process those activities, so you should format them exactly as you expect them to be stored by mediarithmics.
Use the bulk import endpoints to create a document import with the USER_ACTIVITY document type and APPLICATION_X_NDJSON mime type. Only ndjson data is supported for user activities.
Then, create an execution with your user activities formatted in ndjson.
You can, of course, upload multiple activities at once. Note the uploaded data is in ndjson and not json. That means the different activities are not separated by commas, but by a line separator \n
An audience feed is a plugin acting like a connector that allows mediarithmics customers to push their segments to a third-party platform.
It is generic: once a connector to a partner has been created, every customer can use it. Each audience feed has a specific set of options to adapt to each customer.
An audience external feed is a mediarithmics plugin specific to a partner, but shared across customers. In the UI, it is called a server-side plugin: go to a specific segment, click Add a feed, and you will see a feed type called server-side. It is marketed as connectors or server-to-server connectors.
It has:
Plugin definition
group_id: com.mediarithmics.audience.externalfeed
artifact_id: [[partner]]-connector
A feed is an instance of an audience feed. It is specific to a segment in an organisation. It has:
Feed ID
Instance properties specified by users in the UI when adding a feed to a segment.
A feed session is initiated whenever an external feed is activated. A new session will also be created if a feed is paused and then reactivated. This session is not visible in the UI.
A feed preset is a template allowing users to easily create feed instances with pre-configured properties. You can, for example, create a Facebook feed preset containing your key for your organisation, and you won't have to remember it every time you set up a new feed.
The UserPoint API allows you to retrieve, create, update or delete all data related to a single UserPoint.
The endpoint is formatted as follows:
/v1/datamarts/<DATAMART_ID>/user_points/<USER_POINT_SELECTOR>/<DOCUMENT_TYPE>/<DOCUMENT_SELECTOR>
where
<USER_POINT_SELECTOR> allows you to select a UserPoint based on any of its identifiers
Possible values are:
<DOCUMENT_TYPE> designates the type of data that will be addressed
Possible values are:
<DOCUMENT_SELECTOR> allows you to select a particular document on the UserPoint
As each document is identified by a different identifier, possible values depend on the type of document previously selected. It is also optional: use it if you want to get, update or delete a particular document.
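Putting these pieces together with the selectors listed below, a call retrieving every profile of a UserPoint selected by its email hash, and one targeting a single profile by compartment, would look like:

```
GET /v1/datamarts/<DATAMART_ID>/user_points/email_hash=<EMAIL_HASH>/user_profiles
GET /v1/datamarts/<DATAMART_ID>/user_points/<USER_POINT_ID>/user_profiles/compartment_id=<COMPARTMENT_ID>
```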
This client-side connection allows you to synchronize a targeting list with Google Ad Manager (GAM) and therefore run your media campaigns on this targeting list.
Please note that Google Ad Manager requires user consent for contextual targeting.
To synchronize your targeting lists with Google, you will need to:
Setup on your website
Have an active account on Google Ad Manager
You'll need to follow the following steps:
Configure your Ad Manager network (Admin > Global settings > General settings > Api access > Enabled)
Add mediarithmics contextual targeting service account user for API access (Admin > Global settings > Network settings > Add a service account user) and specify KeyValue Management role.
A UserPoint can be associated with multiple UserDevicePoints. Each user device point can include:
Multiple device technical identifiers
One device information record
Go to Navigator > Settings > Organisation > Device Registries.
In the First-party device registries section, click on New Registry.
Specify the following details:
Go to Navigator > Settings > Organisation > Device Registries.
In the Shared device registries section, click on Manage subscriptions.
Select Subscribe to the desired registry.
If you have multiple datamarts, don't forget to activate your Device Registry on all datamarts.

For the following integration to work, you must add the on the page environment.
Here is an example of the AMP tag:
Arguments
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_DEVICE_POINT_DELETION",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'
# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '
{ "type": "USER_AGENT", "user_agent_id": "vec:89998434" }
{ "type": "USER_AGENT", "user_agent_id": "udp:987654" }
'
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_POINTS_DELETION",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'
# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '
{
"type": "USER_ACCOUNT",
"compartment_id": "1000",
"user_account_id": "8541254132"
}
{
"type": "USER_EMAIL",
"hash": "982f50d88d437d13bdbd541edfv4fe5176cc8d862f8cbe7ca4f0dc8ea"
}
{ "type": "USER_AGENT", "user_agent_id": "vec:89998434" }
{ "type": "USER_AGENT", "user_agent_id": "udp:987654" }
{ "type": "USER_POINT", "user_point_id": "95772db3-e762-45da-8cf1-4893debffae7" }
'
Set up the Equativ tag on your website (the sas object must be available)
Have an active account on EMP
You'll need to provide your Equativ credentials to your mediarithmics Account manager, especially the following:
Client ID
Client secret
Name
Name of the Device Registry
Token
A technical identifier used to link the Device Registry with the JS Tag
Type
Must be one of the following enums:
MOBILE_VENDOR_ID, CUSTOM_DEVICE_ID
Description
(Optional) More information about the Device Registry
Save the goal.
Save the goal.
Plugin versions with
Deployed code
An external service referencing partner’s API
Configuration files for things like credentials, tokens, technical configurations

The SECURITYADMIN role or higher
"private key" : the private key generated in step one and associated to the user public key. Be ware of the formatting of the private key : the key must be in on a single line, this means you should use a "\n" separator at each return to line of the private key.
<USER_POINT_ID> or user_point_id=<USER_POINT_ID>
user_agent_id=<USER_AGENT_ID>
compartment_id=<COMPARTMENT_ID>,user_account_id=<USER_ACCOUNT_ID>
email_hash=<EMAIL_HASH>
user_identifiers
user_profiles
user_activities
user_choices
user_scenarios
user_segments
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
ALTER USER <user_name> SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';
ALTER USER <user_name> SET DEFAULT_WAREHOUSE=<warehouse_name>;
{
"user" : "user_name",
"account" : "organinization_name-account_name",
"database" : "database",
"private_key" : "-----BEGIN PRIVATE KEY-----\nMII..."
}
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_ACTIVITY",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/1162/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '{ "$user_agent_id": "<USER_AGENT_ID>", "$type":"TOUCH","$session_status":"NO_SESSION","$ts":<TIMESTAMP>,"$events":[{"$event_name":"$email_mapping","$ts":<TIMESTAMP>,"$properties":{}}]}'user_profiles/compartment_id=<COMPARTMENT_ID>
user_profiles/compartment_id=<COMPARTMENT_ID>/user_account_id=<USER_ACCOUNT_ID>
user_choices/processing_id=<PROCESSING_ID>
user_scenarios/scenario_id=<SCENARIO_ID>
user_segments/audience_segment_id=<SEGMENT_ID>
Detection of query-based conversions. Conversions relying on pixels skip this step.
Evaluation of "React to an event" automation triggers. If the ongoing activity matches the trigger, the UserPoint enters the scenario.
UserChoice management ensures the data you ingest is compatible with GDPR and other regulations.
event_name is the name of the event (optional, defaults to '$page_view').
any custom property
string
All custom properties should be added to the extraUrlParams object (prop1, prop2 in the example).
<amp-analytics type="mediarithmics">
<script type="application/json">
{
"vars": {
"site_token": <SITE_TOKEN>,
"event_name": "amp-test-pageview"
},
"extraUrlParams": {
"prop1": "value1",
"prop2": "value2"
}
}
</script>
</amp-analytics>
name
type
description
site_token
string
site_token is the token of the website (required).
event_name
string
The code of the contextual targeting snippet is non-blocking: it does not impact the page rendering time. The snippet can be inserted in the <head> part of the web page.
The Contextual Targeting feature should be activated on your organisation(s) by your Account manager. This will allow you to configure contextual targeting on your segments and update the JS Tag with them.
New URLs & hits must be captured into mediarithmics, either through:
User event tag (user tracking)
(no user tracking)
(for mobile app only. url field needs to be populated in that case)
Check the . Below are some details about specific Computed Field Function implementations.
These 3 functions are triggered during the initialization of the Computed Field Function and whenever a new activity, profile update, or computed field modification occurs, updating the State accordingly.
.
Although onUpdateComputedField has not been released yet, it must still be implemented in your plugin. In the meantime, simply return the State directly to avoid any errors.
This function returns the result of the computed field function for a specific State. The Result will be stored for a defined duration, and querying the field will return the stored value during that period.
.
Follow the standard procedure to create a plugin with the type COMPUTED_FIELD_FUNCTION. Once the plugin is created, you can from the build.
In the Navigator > Settings > Datamart, navigate to Computed Fields.
Add a new computed field.
Select the plugin and version of the new instance you want to use.
Complete the necessary fields:
This will set up the computed field linked to your Computed Field Function for use in the schema.
After creating the computed field, you need to connect it to your schema:
Use the directive @ComputedField(technical_name = "...") at the appropriate level of your schema.
Ensure the technical name matches the one used when creating the computed field.
This will integrate the computed field into your schema and make it available for use in queries and operations.
A schema validation will be triggered when you click Save. It will ensure that your Datamart has access to all the declared computed fields and verify their integration with the schema.
The computed field must be in the UserPoint object.
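For instance, assuming a computed field whose technical name is it_amount_12m and which returns a Float (both hypothetical), the declaration in the schema could look like:

```graphql
type UserPoint {
  # ... other fields ...
  it_amount_12m: Float @ComputedField(technical_name = "it_amount_12m")
}
```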
Once your schema is validated, our service will run a job to initialize your computed field and set the state to match the current status of the timeline. For each UserPoint in your Datamart, we will process their timeline and execute the OnUpdateXXX() function.
To monitor the progress:
Go to the Computing Console > Computed Fields.
Check if your computed field is ready or if the initial loading is still in progress.
Please note, this process may take some time to complete (up to 72h) as it computes the timelines for all UserPoints.
Avoid using your computed field until the initial loading is complete. While the query will not return an error, the result may be inaccurate during this process.
Once the initial loading is complete, you can start using your computed field just like any other standard field in mediarithmics features.
For example, in query tools or any OTQL query:
If your computed field is indexed, you can also perform queries like:
This allows you to incorporate the computed field into your data analysis and queries once it's fully initialized.
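Assuming a hypothetical computed field named it_amount_12m, such queries would be along these lines (the exact OTQL syntax depends on your schema; the second form requires the field to be indexed):

```
SELECT { it_amount_12m } FROM UserPoint
SELECT @count {} FROM UserPoint WHERE it_amount_12m > 100
```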
In order to track activities on your mobile app, you'll need to keep in mind the following:
All mobile user activities need to be sent to mediarithmics using the Tracking API
The should be used to authenticate any requests between your app and mediarithmics. Don't hesitate to contact your Account manager to have more information about this.
The activity and events in the payload need to comply with mobile app specificities described in the
The user agent identifier ($user_agent_id) needs to be formatted following specific guidelines
The first option is to integrate a small piece of code (approx. 100 lines) into the mobile application to execute calls to the mediarithmics tracking API. Sample code for iOS and Android is available in the section (see illustration below).
The second option is to re-use an existing analytics tool. It is then possible to transfer events from the analytics solution's server to the mediarithmics API.
In the context of Mobile App Tracking, predefined event names are available out-of-the-box to simplify and automate event processing:
app open event ($app_open) corresponds to the opening of the app and app resume event (when the app becomes active again)
app install event ($app_install)
app update event ($app_update)
The install and update events are automatically calculated on the server side, and you don't have to send them:
The install event ($app_install) is triggered the first time an app open event is received for a user, regardless of whether the user is new or existing.
The update event ($app_update) is triggered when the SDK version, app version or OS version changes from one open to the next.
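The server-side $app_update rule described above can be sketched as follows; the type and function names are invented for illustration:

```typescript
interface AppOpenInfo {
  sdkVersion: string;
  appVersion: string;
  osVersion: string;
}

// $app_update fires when the SDK, app, or OS version differs between
// two consecutive $app_open events for the same user.
function triggersAppUpdate(previous: AppOpenInfo, current: AppOpenInfo): boolean {
  return (
    previous.sdkVersion !== current.sdkVersion ||
    previous.appVersion !== current.appVersion ||
    previous.osVersion !== current.osVersion
  );
}
```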
In the context of Mobile App Tracking, some fields of the User Activity objects have a limited set of possible values. Details below:
A UserProfile is usually imported from an existing information system, for example a CRM or a login database. We usually see our customers using the profile to collect:
Contact details
User preferences
Status in a loyalty program
Subscription to a newsletter
CRM information
Scoring calculated outside the platform
Aggregated Values
other details
Each UserProfile is associated with a UserPoint by a . It can also be linked to a UserAccount.
When importing profiles, it is recommended to fill in the $user_account_id and the $compartment_id even if the values are the same as the ones used as the identifier, as the values are not inherited from the identifier info. You can find more information about the .
The compartment ID and UserAccount ID used as the UserPoint identifier in the profile wrapper can differ from the ones inside the UserProfile object.
Schema decorators allow you to customize how your graph appears within the mediarithmics platform interfaces, specifically the Advanced Segment Builder and the Query Tool.
By uploading a specific CSV file, you can "shallow-rename" fields (change their display label) or hide specific properties from users without altering the underlying technical schema.
Schema decorators are defined using a CSV file. Below are the specifications for the file format.
Separator: Comma (,)
Quotes: Double quotes (") must be used for strings that contain commas.
The CSV must contain the following headers.
Code snippet
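As an illustration, a minimal decorator file with the required headers could look like this (the object and field names are hypothetical):

```csv
OBJECT_NAME,FIELD_NAME,HIDDEN,LABEL,HELP_TEXT,LOCALE
UserPoint,creation_ts,false,Creation date,"Date the user point was created, in UTC",en-US
UserActivity,internal_score,true,,,en-US
```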
You can manage schema decorators either through the mediarithmics user interface (Navigator) or programmatically via the API.
In the Datamart > Object View Configuration section of the Navigator Settings, you can use the buttons under the Schema to manage your CSV files directly.
The available actions are:
Upload new Decorators: Allows you to upload a prepared CSV file to apply new labels and visibility rules. This will overwrite existing decorators for this schema.
Download Template: Downloads a blank CSV file containing the required headers (OBJECT_NAME, FIELD_NAME, etc.) to help you get started.
Download Decorators: Downloads the current active decorator CSV file. This is useful if you want to make edits to the existing configuration.
You will need the following values:
DATAMART_ID: The ID of your Datamart.
SCHEMA_ID: The ID of the Schema you wish to decorate.
MICS_API_TOKEN: Your API authentication token.
To fetch the existing decorator file for a specific schema:
Bash
To upload a new decorator CSV file (replacing the existing configuration):
Bash
These functions are triggered during the initialization of the Datamart Function and whenever a new activity, profile update, or computed field modification occurs, updating the State accordingly.
These methods must be commutative to ensure consistency during initial loading, where event order is not guaranteed. They should always produce the same result, regardless of the sequence in which events are processed.
To prevent system overload, the State size is restricted to a maximum of 1 MB. If this limit is exceeded, the update will be discarded and not saved.
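The commutativity requirement can be illustrated with a small sketch: keeping per-day totals in the state means updates can be applied in any order and still produce the same state. The names here are invented for illustration:

```typescript
interface TotalsState {
  totalsByDay: Record<string, number>;
}

// A commutative update: adding amounts is order-independent.
function addAmount(state: TotalsState, day: string, amount: number): TotalsState {
  return {
    totalsByDay: {
      ...state.totalsByDay,
      [day]: (state.totalsByDay[day] ?? 0) + amount,
    },
  };
}

const empty: TotalsState = { totalsByDay: {} };
const ab = addAmount(addAmount(empty, "2024-01-01", 10), "2024-01-02", 5);
const ba = addAmount(addAmount(empty, "2024-01-02", 5), "2024-01-01", 10);
// Same totals whichever order the updates arrived in.
console.log(ab.totalsByDay["2024-01-01"] === ba.totalsByDay["2024-01-01"]); // true
```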
onUpdateActivity(state: State, userActivity: UserActivity): State
Inputs:
state: Current state object.
userActivity: New user activity data.
onUpdateUserProfile(state: State, userProfile: UserProfile, operation: core.Operation): State
Inputs:
state: Current state object.
userProfile: Updated userProfile data.
onUpdateComputedFields(state: State | null): State
Inputs: state: Current state object or null.
Outputs: The state object, returned unchanged.
What It Does: Nothing yet; this hook has not been released.
Example:
This method must be implemented, even if it is not currently used. Please return the same state.
buildResult(state: State | null): Result
Inputs: state: Current state object or null.
Outputs: Result object.
What It Does: Computes the result from the current state.
Example:
Before deploying the plugin, create tests to ensure the function behaves as expected.
Test cases might include:
Adding new activities and profile updates, and verifying the state is updated correctly. Check that the order in which profile updates and activities are sent has no impact.
Checking that activities that are no longer relevant to compute the result are removed from the state.
Validating that the correct values are returned in the result.
How do you decide whether to create one or several plugins? (example with 2 scores)
If the required data in the state is the same for both scores → 1 plugin
If the required data in the state is different (state is polymorph) → 2 plugins
In case of error, always return the same state (or null). Otherwise, we delete the state, which resets the result.
To demonstrate how the computed field works, consider this use case:
Goal: Calculate the sales over 12 months for items in the IT category per user.
For simplicity, this example focuses on the IT category. However, the computed field can be scaled to compute the amount for all categories, not just IT.
In this scenario, the computed field will:

String
The mobile app id (previously created through Navigator / API)
$type
String enum
The activity type should only be APP_VISIT
$session_status
String enum
The sessions status should be:
IN_SESSION: This value should be used if you’re making one API call per tracked event (recommended way). The platform will automatically aggregate all the events of a session sent through many API calls when this value is used.
CLOSED_SESSION: should be used only if you do a single API call per session at its end. In this case, you should provide ALL the events of the session in the $events array in the call. Each API call with this value will generate a new User Activity in the Platform.
$user_agent_id
String (Optional)
The user agent identifier of the user device containing a unified representation of an advertising id.
Ex: mob:ios:raw:6d92078a-8246-4ba4-ae5b-76104861e7dc for a raw IDFA on iOS platform.
$app_id
Name
Name of the computed field
Description
(Optional) Describe the computed field
Technical name
Name of the computed field used in the schema
Compute period (in days)
Maximum duration, in days, before the periodic update of the Result.
Events filter
GraphQL selector query to filter the activities that will trigger State updates
The timestamp of the creation operation for this user profile. Automatically set by the system.
[any custom property]
Any
The value of a custom property
field
type
description
$user_account_id
String (Optional)
The associated UserAccount ID. If none, the profile remains anonymous.
$compartment_id
Integer (Optional)
The compartment ID. If none, the profile is imported into the default compartment_id.
$last_modified_ts
Timestamp
The timestamp of the last edit operation for this user profile. Automatically set by the system.
$creation_ts
Timestamp
Delete Decorators: Removes the current decorator file. The schema will revert to displaying raw technical field names, and hidden fields will become visible again.
OBJECT_NAME
The name of the object (resource) where the property is located.
FIELD_NAME
The original technical name of the property to decorate.
HIDDEN
true or false. Indicates if the field should be hidden from the UI.
LABEL
The user-friendly name to display in the Segment Builder and Query Tool.
HELP_TEXT
A short description displayed as a tooltip when hovering over the question mark (?) icon in the platform.
LOCALE
The locale for the label. Currently, this must be set to en-US.
What It Does: Updates the state with new user activities and removes outdated activities.
Example:
operation: Operation type (enum: UPDATE | DELETE).
Outputs: Updated state object.
What It Does: Handles updates to userProfile.
Example:
Maximize the number of scores calculated by a single Computed Field Function to optimize performance: the same state is then used to calculate multiple scores, and we avoid storing the same state multiple times.
Ensure that the state is designed to store relevant information efficiently.
Ensure that the state updates are commutative to maintain consistency while building the result, i.e. activity and profile data updates can happen in any order.
Be sure to have a cleaning rule to update the state based on the defined lookback window, for both userProfile and userActivities.
In other words, when updating the state, be sure to remove the data that is no longer useful to compute the result, in order to limit the state size.
When editing a live computed field, you must relaunch an Initial Loading through API.
Aggregate the order amounts over the last 12 months.
Filter the data based on the IT category (or any other category, depending on the use case).
By scaling this approach, you can calculate the sales amount for multiple categories, ensuring flexibility and extensibility in your calculations.
In this use case, we need to declare the State, Result, and UserActivity for computing the sales amount over the last 12 months for items in the IT category.
The State stores the order amounts for each day, categorized by IT items, for the last 12 months. Here's how the state structure looks:
The activities_for_the_last_12_months keeps the amount data for each day, where the date is represented by a numeric value (e.g., timestamp), and the amount is the total order value for that day.
The Result represents the total sales amount over the last 12 months for the IT category.
The result will return the computed IT_amount_for_the_last_12_months after summing up the values stored in the state.
The UserActivity defines the structure of the activity that triggers the update. In this example, we focus on the items bought, particularly in the IT category.
The items_bought array contains details about each item, such as category (IT, for example) and price.
With the context declared, you can implement your computed field logic. Here's how the MyComputedField class looks:
Goal: Update the state with new IT category purchases and remove activities older than 12 months.
It filters the UserActivity to ensure only IT items are included.
Removes old activities beyond the 12-month period.
Goal: Sum all the basket amounts stored in the state and return the total.
It checks each stored activity date to ensure it is within the last 12 months and sums the amount for each IT purchase.
The onUpdateActivity function is triggered only when a new activity occurs. Therefore, you need to handle outdated activities within your function if the UserPoint does not receive a new event to update the State.
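As an illustration, the two elided function bodies can be sketched as plain functions over the State, Result and UserActivity shapes declared earlier. This is a minimal sketch, not the actual plugin SDK code: the `$ts` activity timestamp field and the `now` parameter (injected for testability) are assumptions.

```typescript
interface State {
  activities_for_the_last_12_months: { [date: number]: { amount: number }[] };
}
interface Result { IT_amount_for_the_last_12_months: number; }
interface Item { category: string; price: number; }
interface UserActivity { $ts: number; items_bought: Item[]; }

const TWELVE_MONTHS_MS = 365 * 24 * 3600 * 1000;

// Filter the incoming activity down to IT items, store their total amount
// under the activity date, and purge entries older than the lookback window.
function onUpdateActivity(state: State, activity: UserActivity, now: number): State {
  const entries: State["activities_for_the_last_12_months"] = {};
  for (const [date, amounts] of Object.entries(state.activities_for_the_last_12_months)) {
    if (Number(date) >= now - TWELVE_MONTHS_MS) entries[Number(date)] = amounts;
  }
  const itAmount = activity.items_bought
    .filter((item) => item.category === "IT")
    .reduce((sum, item) => sum + item.price, 0);
  if (itAmount > 0) {
    entries[activity.$ts] = [...(entries[activity.$ts] ?? []), { amount: itAmount }];
  }
  return { activities_for_the_last_12_months: entries };
}

// Sum every stored amount still inside the 12-month window.
function buildResult(state: State | null, now: number): Result {
  let total = 0;
  for (const [date, amounts] of Object.entries(state?.activities_for_the_last_12_months ?? {})) {
    if (Number(date) >= now - TWELVE_MONTHS_MS) {
      total += amounts.reduce((sum, a) => sum + a.amount, 0);
    }
  }
  return { IT_amount_for_the_last_12_months: total };
}
```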
Before deploying the plugin, create tests to ensure the function behaves as expected. Test cases might include:
Adding new activities and verifying the state is updated correctly.
Checking that old activities are purged.
Validating that the correct basket amounts are returned in the result.
Once the computed field is implemented, you need to declare it in your schema as follows:
This will link the computed field (IT_amount_for_the_last_12_months) to your schema, making it available for querying.
When working with profile and activity information:
Be mindful that the computed field needs to be commutative during the initial loading. For cases where UserProfile and UserActivity need to be aggregated, it might be necessary to store more information in the State to ensure consistency and accuracy during this phase.
Example use-case: Basket amount by fidelity card
During initial loading, if you need to track the basket amount for current fidelity cards, you may need to store all activities grouped by fidelity card in the state. This is because, until the initial loading completes, you may not know which fidelity cards are currently active for the user.
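For instance, keying the state by fidelity card makes the updates commutative, since per-card additions can be applied in any order. The shape below is hypothetical, for illustration only:

```typescript
// Hypothetical state: basket amounts grouped by fidelity card ID.
// Additions commute, so activities can be replayed in any order.
interface CardState { [cardId: string]: number }

function addPurchase(state: CardState, cardId: string, amount: number): CardState {
  return { ...state, [cardId]: (state[cardId] ?? 0) + amount };
}
```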
| field | type | description |
| --- | --- | --- |
| datamartId* | String | The ID of the datamart you want to retrieve activities for |
| userpointId* | String | The ID of the UserPoint you want to retrieve activities for |
| filters | String | Filter(s) to select appropriate user activities |
{
// Response
}

DELETE https://api.mediarithmics.com/v1/datamarts/:datamartId/user_timelines/:userpointId/user_activities?filters=:filters
| field | type | description |
| --- | --- | --- |
| datamartId* | String | The ID of the datamart you want to delete activities for |
| userpointId* | String | The ID of the UserPoint you want to delete activities for |
| filters | String | Filter(s) to select appropriate user activities |
As mentioned above, filters are optional and give you the ability to narrow your query. You can apply filters on any custom field you have in your activity object as well as the following proprietary fields:
$unique_key
$type
$channel_id
$ts (through start_ts & end_ts keywords)
To apply a filter, mention the field on which you want the filter to be applied, followed by a double equals sign (==), then by the value(s) for the field to be matched against.
Note that:
You can provide multiple filters in your query by separating them with a comma (,). In that case, it is considered as an "AND" between those filters.
A filter can take multiple values by separating the values with a pipe (|). In that case, it is considered as an "OR" between those values.
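These rules can be captured by a small helper that assembles the filters value. This is a hypothetical convenience function, not part of the mediarithmics API:

```typescript
// Comma joins filters ("AND"); pipe joins the values of one filter ("OR").
function buildFilters(filters: { field: string; values: (string | number)[] }[]): string {
  return filters.map((f) => `${f.field}==${f.values.join("|")}`).join(",");
}

// e.g. site or app visits on channel 4307:
const query = buildFilters([
  { field: "$type", values: ["SITE_VISIT", "APP_VISIT"] },
  { field: "$channel_id", values: [4307] },
]);
// query === "$type==SITE_VISIT|APP_VISIT,$channel_id==4307"
```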
// Example in your schema
type UserPoint {
id: ID!
accounts: [UserAccount]
…
rfm_score: RfmScore @ComputedField(technical_name = "RfmScore") @TreeIndex(index:"USER_INDEX")
}
type RfmScore {
…
}

SELECT { rfm_score } FROM UserPoint

SELECT { id } FROM UserPoint WHERE rfm_score = "PASSIVE"

# UserProfile object
{
"$compartment_id": 1606,
"$user_account_id": "c9879698769OIUYOIY9879879",
"$last_modified_ts": 5467987654613,
"$creation_ts": 6579874654654654,
"firstname": "David",
"lastname": "Guetta",
"gender": 1,
"newsletter_options": {
"subscribed": true,
"preferred_periods": "MONTHLY"
}
}

# Complete payload when importing a profile
{
"operation": "UPSERT",
"compartment_id": "1600",
"user_account_id": "identifier_account_id",
"force_replace": true,
"user_profile": {
"$compartment_id": 1606,
"$user_account_id": "c9879698769OIUYOIY9879879",
"$last_modified_ts": 5467987654613,
"$creation_ts": 6579874654654654,
"firstname": "David",
"lastname": "Guetta",
"gender": 1,
"newsletter_options": {
"subscribed": true,
"preferred_periods": "MONTHLY"
}
}
}

OBJECT_NAME,FIELD_NAME,HIDDEN,LABEL,HELP_TEXT,LOCALE
UserPoint,id,false,User ID,Unique identifier for the user point,en-US
UserPoint,technical_hash,true,,,en-US
UserEvent,subcategory,false,Subcategory,The specific sub-category of the event,en-US
UserEvent,subcategory_id,true,,,en-US
UserEvent,title,false,Event Title,The main title or headline of the event,en-US
UserEvent,region,false,Region,The geographic region associated with the event,en-US

curl -H "Authorization:$MICS_API_TOKEN" \
-X GET \
--location "https://api.mediarithmics.com/v1/datamarts/$DATAMART_ID/graphdb_runtime_schemas/$SCHEMA_ID/schema_decorators"

curl -H "Authorization:$MICS_API_TOKEN" \
-X PUT \
--location "https://api.mediarithmics.com/v1/datamarts/$DATAMART_ID/graphdb_runtime_schemas/$SCHEMA_ID/schema_decorators" \
--data-raw "$(cat path/to/schema_decorators.csv)"

// Trigger by a new activity
onUpdateActivity(state, userActivity) {
// Logic to update state
return updatedState;
}

// Trigger by an update on the UserProfile
onUpdateUserProfile(state, userProfile, operation) {
// Logic to handle UserProfile updates
return updatedState;
}

// Trigger by the result computation of another computed field
onUpdateComputedFields(state) {
// Logic to compute result
return state;
}

buildResult(state) {
// Logic to compute result
return result;
}

export interface State {
activities_for_the_last_12_months: {
[date: number] : [{
amount: number;
}]
}
}

export interface Result {
IT_amount_for_the_last_12_months: number;
}

interface Items {
category: string;
price: number;
}
interface UserActivity {
items_bought: Items[];
}

export class MyComputedField extends core.ComputedFieldPlugin<State, Result, UserActivity, UserProfile, ComputedField> {
constructor() {
super();
}
// Function to Update the state
onUpdateActivity(state: State, userActivity: UserActivity): State { ... }
// Won't be used but needs to be declared
onUpdateUserProfile(state: State, userProfile: UserProfile, operation: core.Operation): State {
return state;
}
// Won't be used but needs to be declared
onUpdateComputedField(state: State, computedField: ComputedField): State {
return state;
}
// Function to compute the Result
buildResult(state: State | null): Result { ... }
}

type UserPoint {
id: ID!
accounts: [UserAccount]
…
IT_amount_for_the_last_12_months: Int! @ComputedField(technical_name = "IT_Amount") @TreeIndex(index:"USER_INDEX")
}

{
// Response
}

# As explained above, user activities query is always for a particular datamart (1649) & userpoint (dff34408-6cc1-4531-86c5-bac8da9ebac9)
# Retrieve all activities
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities"
# Retrieve all activities for a given channel (4307)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=$channel_id==4307"
# Retrieve all activities for a given activity type (SITE_VISIT)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=$type==SITE_VISIT"
# Retrieve all activities for several activity types (SITE_VISIT & APP_VISIT)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=$type==SITE_VISIT|APP_VISIT"
# Retrieve a given activity (17928440-e72f-11ec-aad0-d12e54ffd215)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=$unique_key==17928440-e72f-11ec-aad0-d12e54ffd215"
# Retrieve all activities ingested after a particular date (14/06/2022 10:38:47)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=start_ts==1655195927000"
# Retrieve all activities ingested before a particular date (13/07/2022 13:15:00)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=end_ts==1657717207000"
# Retrieve all activities ingested between 2 dates (between 14/06/2022 10:38:47 & 13/07/2022 13:15:00)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=start_ts==1655195927000,end_ts==1657717207000"
# Retrieve all activities ingested between 2 dates for a given channel (between 14/06/2022 10:38:47 & 13/07/2022 13:15:00 for channel 4307)
curl -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=start_ts==1655195927000,end_ts==1657717207000,$channel_id==4307"
# Delete a given activity (17928440-e72f-11ec-aad0-d12e54ffd215)
curl -X DELETE -k -H "Authorization: <MICS_API_KEY>" -H "content-Type: application/json" "https://api.mediarithmics.com/v1/datamarts/1649/user_timelines/dff34408-6cc1-4531-86c5-bac8da9ebac9/user_activities?filters=$unique_key==17928440-e72f-11ec-aad0-d12e54ffd215"
The feed details are divided into three sections to enhance understanding:
Stats: Provides a high-level summary of the feed. Shows the status of the feed instance and displays the identifiers sent to the partner for addition or deletion.
Configuration: Lists all plugin properties configured for the current instance.
Troubleshooting: Offers an in-depth view of processing steps, aiding in the investigation of anomalies.
The Stats tab is tailored for non-technical users to easily assess feed activity. It shows whether the feed is successfully transmitting identifiers to the destination platform.
Creation:
Waiting for Activation: The feed card is created on the segment but remains inactive.
Feed Activated: The feed creates the segment on the destination platform (triggers the onExternalSegmentCreation function).
Connection:
The feed tests the connection with the destination platform (triggers the onExternalSegmentConnection function).
Starting:
The feed begins transmitting identifiers to the destination platform.
Initial Loading:
Processes all users in the segment and sends their identifiers to the destination.
Live:
Continues to send new identifiers and requests deletion of those no longer present in the segment.
Successful Identifier Transactions: number of upserts or deletions of identifiers successfully sent during the selected period.
Identifier Coverage: percentage of UserPoint without any identifiers sent to the destination platform.
Daily Graph: visual representation of upserts or deletions sent daily.
Client-side feeds have simpler functionality but use a similar interface.
PAUSED: the feed card is created but not activated.
ACTIVE: the feed is activated on the segment and the feed is downloaded by the browser.
Daily Downloads: number of times the feed was downloaded by the browser during the selected period.
The Configuration tab displays the plugin properties for the feed instance:
Plugin properties layout is determined during the creation of the plugin version.
These values can only be modified when the feed is not activated. Once activated, the configuration becomes read-only.
The Troubleshooting tab is designed for technical users, providing detailed insights into:
Successful and failed operations (e.g., upserts and deletions).
Errors and processing steps for investigating anomalies.
Server-Side and Client-Side feeds:
Resource details, plugin information, and plugin version information.
Server-Side Feeds only:
Instance details and initial loading logs.
API Calls to the Audience Feed Plugin:
Displays response status for /user_segment_update calls.
Identifiers Sent to the Destination Platform:
Shows processed identifiers and their statuses for non-batching cases.
Push: Adding a UserPoint to the segment or an identifier to a UserPoint.
Remove: Removing a UserPoint from the segment or an identifier from a UserPoint.
Statuses:
PROCESSED: Successful with no destination platform response (e.g., batch/file delivery).
SUCCEEDED: Successful with a positive response from the destination platform.
FAILED: Error occurred within the plugin or at the destination platform.
NO_ELIGIBLE_IDENTIFIERS: No eligible identifiers to send.
API Calls to the Audience Feed Plugin (/batch_update route):
Number and status of batches created.
Records Sent to the Destination Platform:
Rows sent per batch (may contain multiple identifiers).
Files Sent by File Delivery Service:
Number of files sent and response statuses.
Records Sent to the Destination Platform:
Rows sent per file (may represent multiple identifiers).
Displays a daily graph of browser downloads, as shown in the Stats tab.
ndjson. Each line in the uploaded file can have the following properties:
field
type
description
operation
Enum
Either UPSERT or DELETE
compartment_id
String (Optional)
The Compartment ID, acting as an identifier in correlation with user_account_id
user_account_id
String (Optional)
The User Account ID, acting as an identifier in correlation with compartment_id
email_hash
You can, of course, upload multiple user choices at once. Note the uploaded data is in ndjson format and not json. That means the different choices are not separated by commas, but by a line separator \n
When importing choices with identifiers, only one identifier is allowed per line. For example, you shouldn't specify the user agent ID if the Email Hash is already used in a line.
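For example, a two-line ndjson payload might look as follows (placeholder values; each line is one independent command and uses a single identifier):

```json
{"operation": "UPSERT", "compartment_id": "<COMPARTMENT_ID>", "user_account_id": "<USER_ACCOUNT_ID>", "force_replace": true, "user_choice": {"$processing_id": "<PROCESSING_ID>", "$choice_ts": 1657717207000}}
{"operation": "UPSERT", "email_hash": "<EMAIL_HASH>", "force_replace": false, "user_choice": {"$processing_id": "<PROCESSING_ID>", "$choice_ts": 1657717207000}}
```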


Dashboards can have filters in the top action bar.
You can set up filters only in advanced mode using the available_filters property of your dashboard definition.
The user can select a value and all the queries in the dashboard adapt to the selected value.
{
// Using technical names of compartments, segments or channels
// will result in IDs being automatically replaced by names in the UI
"technical_name": String,
"title": String,
"values_retrieve_method": 'Query', // Only available value at the moment
// OTQL query to retrieve list of selectable values
// Use a query string, not the ID of a query
"values_query": String,
// How to adapt queries in the dashboard to the selected value(s)
"query_fragments": [QueryFragment],
"multi_select": Boolean, // If the user can select multiple values
}

A query fragment tells the dashboard how to adapt each query to the value(s) selected by the user.
{
// Any available data source such as 'activities_analytics' or 'OTQL'
"type": String,
// Only for OTQL type, chooses which queries should be transformed
// Select 'ActivityEvent' to transform queries FROM ActivityEvent
"starting_object_type": String,
// The query part to add
"fragment": String,
}

Here is a sample with one filter that enables the selection of compartments and another for channels.

This document import allows you to mark objects for deletion. Each line in the document represents a different object to remove from the platform.
The integration batch is a plugin type used for customers' integrations. It can be a periodic or non-periodic plugin. You can choose to pause a recurring plugin so that all upcoming executions are canceled.
Imagine you want to create a script that imports data every night for a customer:
You declare a new integration batch plugin called
To configure this feature, please follow the next steps IN ORDER:
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_CHOICE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'

# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/1162/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '{
"operation": "UPSERT",
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"force_replace": true,
"user_choice": {
"$processing_id": "<PROCESSING_ID>",
"$choice_ts": "<CHOICE_TS>"
}
}'

{
"available_filters": [
{
"values_retrieve_method": "query",
"values_query": "SELECT {compartment_id @map} FROM UserProfile",
"technical_name": "compartments",
"query_fragments": [
{
"type": "OTQL",
"starting_object_type": "UserPoint",
"fragment": "profiles {compartment_id IN $values}"
},
{
"type": "OTQL",
"starting_object_type": "UserProfile",
"fragment": "compartment_id IN $values"
}
],
"multi_select": true,
"title": "Data provider"
},
{
"values_retrieve_method": "query",
"values_query": "SELECT {channel_id @map} FROM UserEvent",
"technical_name": "channels",
"query_fragments": [
{
"type": "OTQL",
"starting_object_type": "UserPoint",
"fragment": "events {channel_id IN $values}"
},
{
"type": "OTQL",
"starting_object_type": "UserEvent",
"fragment": "channel_id IN $values"
},
{
"type": "activities_analytics",
"fragment": [
{
"dimension_name": "channel_id",
"operator": "IN_LIST",
"not": false,
"expressions": "$values"
}
]
}
],
"multi_select": true,
"title": "Channels"
}
],
"sections": ...
}

String (Optional)
The Email Hash, acting as an identifier
user_agent_id
String (Optional)
The User Agent ID, acting as an identifier
force_replace
Boolean (Optional)
Mandatory when the operation is UPSERT.
If true, then the User Choice will be completely replaced by the object passed in the user_choice field.
If false, the object passed in the user_choice field will be merged with the existing User Choice of the UserPoint.
user_choice
JSON Object (Optional)
Mandatory when operation is UPSERT.
This is a JSON Object representing the User Choice.
Please refer to the User choices page for more
information.
Note that the $processing_id field is always mandatory, and $choice_ts is mandatory when operation is UPSERT.
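The force_replace behaviour described above can be sketched as follows. This is an assumed model of the merge, for illustration only:

```typescript
type UserChoice = Record<string, unknown>;

// force_replace = true: the stored choice becomes exactly the payload.
// force_replace = false: payload fields are merged over the existing choice.
function applyUpsert(existing: UserChoice, payload: UserChoice, forceReplace: boolean): UserChoice {
  return forceReplace ? { ...payload } : { ...existing, ...payload };
}
```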



Use the bulk import endpoints to create a document import with the USER_IDENTIFIERS_ASSOCIATION_DECLARATIONS document type and APPLICATION_X_NDJSON mime type. Only ndjson data is supported.
Create an execution with your commands formatted in ndjson.
Each line will create/merge a UserPoint that has all the specified identifiers.
field
type
description
identifiers
UserIdentifierResource[]
An array of User Identifier Resource of any type
A user identifier resource can take one of three shapes: email, user agent, or user account ID. They correspond to the different types of user identifiers.
field
type
description
type
"USER_EMAIL"
The type of the identifier.
hash
String
A hash of the email. The hashing function should be unique per datamart.
String (optional)
the email address
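For example, a datamart might standardize on SHA-256 over the normalized address. This exact choice is an assumption; what matters is that the same hashing function is used consistently across the datamart.

```typescript
import { createHash } from "crypto";

// Normalize (trim + lowercase) so the same address always yields the same
// hash, then apply SHA-256 and return the hex digest.
function emailHash(email: string): string {
  return createHash("sha256").update(email.trim().toLowerCase()).digest("hex");
}
```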
field
type
description
type
"USER_AGENT"
The type of the identifier.
user_agent_id
String
The user agent ID
field
type
description
type
"USER_ACCOUNT"
The type of the identifier.
user_account_id
String
The User Account ID
compartment_id
String (optional)
The Compartment ID. If you don't provide the compartment ID, it falls back to the default one.
You can, of course, add different identifier types at the same time. Please note that the uploaded data is in ndjson and not json. That means the different additions are not separated by commas, but by a line separator \n
import-data-for-customer

You declare a first 1.0.0 version for this plugin with the code of the script and the declaration of the script parameters.
Your script is now available for usage
To execute the script, you can :
Create an integration batch instance that will use the code from the 1.0.0 version with specific input parameters
Either program the instance to automatically create executions at a specified cron, or manually create a new execution to start now or later.
Use the plugin creation endpoints to create a new plugin with the plugin type as INTEGRATION_BATCH. Everything else remains the same.
Use the plugin version creation endpoints to create a new plugin version with all the properties. The call and format are the same as usual.
For the integration batch plugin, the instance is called integration_batch_instances.
There are five properties that are used for this plugin type: cron , cron_status, ram_size, cpu_size and disk_size.
The cron and the cron_status are not mandatory as you can create non-periodic jobs. If used, you should use them together.
The ram_size, cpu_size and disk_size properties are mandatory and their default values are set to LOW.
You can perform the operations POST / PUT / GET and DELETE on the instances.
Executions can be created either automatically by the scheduler using the cron defined in the instance or manually using the API or the interface.
When creating an execution you have to set the execution_type and expected_start_date properties.
You can perform the operations POST / PUT / GET and DELETE on the executions.
The execution_type can be either MANUAL when created using the interface or CRON when created by the instance using the cron value set in the instance.
The expected_start_date is set by the timestamp chosen in the interface or by the cron set in the instance.
Schema update
Event rule configuration
Contextual snippet installation
Contextual Targeting only works as of now with standard schema. Please reach out to your Account manager if you do not have UserEvent type in your schema.
For Panel-based as well as Semantic contextual targeting, you'll need to update your runtime schema to add the following line in the UserEvent type:
The $url attribute needs to be populated with a proper URL in order to extract the contextual_key (URL without protocol or query strings).
As well, ensure that the field ts in UserEvent is indexed:
For Semantic contextual targeting, and particularly if you want to create Audience segments based on semantic information, you'll need to add the following lines in UserEvent type:
As well as the following types:
Don't forget to add @EdgeAvailability on semantic_tagging & targeting_list_ids properties if you want to use those characteristics for building Edge segments. More information about Edge segments.
You need to add Contextual Targeting Extractor event rule to channels on which you want to use the contextual targeting feature. This event rule will have various objectives:
Specify which JavaScript tag will be responsible for URL & hit collection:
User event tag: uses the URL explicitly passed in the event
Contextual targeting tag: automatically captures the page URL from the browser
Calculate and add the contextual key in the event. This is done automatically when adding the event rule
Add the semantic information extracted from a URL in the event. This is required if you want to create Audience segments based on semantic information. Check Tag event with semantic extract in that case.
Specify URLs (Blacklist URLs) or groups of URLs (Blacklist RegEx) to blacklist. No semantic extraction nor targeting will be performed on those URLs. If we take the example of a news website, we recommend blacklisting pages that list articles, usually related to the same theme (e.g. "politics") and that are frequently refreshed.
Add one URL to blacklist per line. URLs provided should NOT include the protocol (http://, https://). For example:
Add one regexp to blacklist per line. Note that only * is authorized, and * will automatically be applied before and after the input regexp. For example:
Will blacklist the following URLs:
www.mediarithmics.io/homepage
mediarithmicsTEST.io/homepage
mediarithmics*io/homepage/page.
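One way to reproduce the matching behaviour shown above is to translate * into a wildcard and wrap the pattern in implicit wildcards. This is a sketch of our interpretation, not the platform's actual implementation; note that the remaining characters are used verbatim in the regex, so . also matches any character:

```typescript
// '*' becomes '.*'; the pattern is implicitly wrapped in '.*' on both sides.
function blacklistMatches(pattern: string, url: string): boolean {
  const regex = new RegExp(`^.*${pattern.split("*").join(".*")}.*$`);
  return regex.test(url);
}
```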
You need to deploy a new mediarithmics snippet dedicated to the Contextual targeting feature on every page where you want contextual targeting activation to be performed. The mediarithmics contextual targeting tag is made of two parts:
The configuration that you should fill according to your context (site token & snippet name)
A technical part which contains the JavaScript code that asynchronously loads the Contextual targeting tag on the page. This part should not be edited.
Here is an example of the contextual targeting tag you should implement:
You can find the site token, for each channel created, in Settings > Datamart > Channel.
As mentioned earlier, the Contextual targeting tag loaded through the snippet can also capture new URLs and associated hits.
In the case of a Single-page application, you will need to execute the following function whenever the URL changes, and more precisely on every page where you want contextual targeting to be performed:
This call will result in extracting the current URL to look up associated Targeting lists and synchronize their IDs with configured activation platforms. Please make sure to call this function as soon as possible to ensure that the Targeting list IDs are calculated before the auction is executed.
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_IDENTIFIERS_ASSOCIATION_DECLARATIONS",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'

# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/1162/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '
{
"identifiers":[
{ "type": "USER_EMAIL", "hash":"<EMAIL_HASH>" },
{ "type": "USER_AGENT", "user_agent_id": "<USER_AGENT_ID>" },
{ "type": "USER_ACCOUNT", "user_account_id": "<USER_ACCOUNT_ID>", "compartment_id": "<COMPARTMENT_ID>" }
]
}
'

# Create the plugin definition
curl -X POST \
https://api.mediarithmics.com/v1/plugins \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"organisation_id": "my_organisation_id",
"plugin_type": "INTEGRATION_BATCH",
"group_id": "com.my-client.integration_batch",
"artifact_id": "integration-batch-my-client"
}'

# Create the plugin version
curl -X POST \
https://api.mediarithmics.com/v1/plugins/<PLUGIN_ID>/versions \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"version_id":"1.0.0",
"plugin_properties":[
{
"technical_name": "one_more_property",
"value": {
"value": ""
},
"property_type": "STRING",
"origin": "PLUGIN",
"writable": true,
"deletable": true
},
{
"technical_name": "provider",
"value": {
"value": "mediarithmics"
},
"property_type": "STRING",
"origin": "PLUGIN_STATIC",
"writable": false,
"deletable": false
},
{
"technical_name": "name",
"value": {
"value": "My Plugin Name"
},
"property_type": "STRING",
"origin": "PLUGIN_STATIC",
"writable": false,
"deletable": false
}
]
}'

# Create the plugin instance
curl -X POST \
https://api.mediarithmics.com/v1/integration_batch_instances \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"group_id": "com.my-client.integration_batch",
"artifact_id": "integration-batch-my-client",
"version_id": "my-version-id",
"organisation_id": "my-plugin-id",
"name": "The name of my instance",
"archived": false,
"cron" :"* * * 7 *",
"cron_status": "ACTIVE | PAUSED",
"ram_size": "LOW | MEDIUM | LARGE | EXTRA_LARGE",
"disk_size": "LOW | MEDIUM | LARGE | EXTRA_LARGE",
"cpu_size": "LOW | MEDIUM | LARGE | EXTRA_LARGE"
}'

# Create an execution
curl -X POST \
https://api.mediarithmics.com/v1/integration_batch_instances/<INSTANCE_ID>/executions \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"parameters": {
"execution_type": "MANUAL | CRON",
"expected_start_date": 1562595783663
},
"organisation_id": "1185",
"user_id": "1007",
"error": null,
"status": "PENDING",
"external_model_id": "42",
"external_model_name": "PUBLIC_INTEGRATION_BATCH",
"start_date": 1562595789171,
"job_type": "BATCH_INTEGRATION"
}'
contextual_key:String @TreeIndex(index:"USER_INDEX")

ts:Timestamp! @TreeIndex(index:"USER_INDEX")

semantic_tagging:[SemanticTagging] @Property(path:"$properties.$semantic_tagging")
targeting_list_ids:[String] @TreeIndex(index:"USER_INDEX") @ReferenceTable(type:"CORE_OBJECT", model_type:"CONTEXTUAL_TARGETING_LISTS") @Property(path:"$properties.$targeting_list_ids")

type SemanticTagging {
provider_id:String @Property(path:"$provider_id") @TreeIndex(index:"USER_INDEX")
provider_version:String @Property(path:"$provider_version") @TreeIndex(index:"USER_INDEX")
entities:[SemanticEntity] @Property(path:"$entities")
iab_categories:[SemanticIABCategory] @Property(path:"$iab_categories")
}
type SemanticEntity {
entity_id:String @Property(path:"$entity_id") @TreeIndex(index:"USER_INDEX")
name:String @Property(path:"$name") @TreeIndex(index:"USER_INDEX")
type:String @Property(path:"$type")
wikidata_id:String @Property(path:"$wikidata_id")
}
type SemanticIABCategory {
category_id:String @TreeIndex(index:"USER_INDEX") @Property(path:"$category_id")
name:String @TreeIndex(index:"USER_INDEX") @Property(path:"$name")
}

www.mediarithmics.io/homepage
actu.mediarithmics.io/
mediarithmics.io/actu

mediarithmics*.io/homepage

<script type="text/javascript">
/* YOU CAN EDIT THIS PART */
const siteToken = "<SITE_TOKEN>" // token to change
const snippetName = "ctMics" // snippet name that can be changed
/* YOU SHOULD NOT EDIT THIS PART */
!function(e,t,s,i){"use strict";var a=e.ctscimhtiraidem||{};var r="call".split(" ");a._queue=a._queue||{},a._names=a._names||[],a._names.push(s),a._queue[s]=a._queue[s]||[],a._startTime=(new Date).getTime(),a._snippetVersion="ct-1.0";for(var n=0;n<r.length;n++)!function(e){var t=a[s]||{};(a[s]=t)[e]||(t[e]=function(){a._queue[s].push({method:e,args:Array.prototype.slice.apply(arguments)})})}(r[n]);e.ctscimhtiraidem=a,e[s]=a[s];e=t.createElement("script");e.setAttribute("type","text/javascript"),e.setAttribute("src",`https://events.mics-notrack.com/v1/sites/${i}/contextual_targeting.js`),e.setAttribute("async","true"),t.getElementsByTagName("script")[0].parentNode.appendChild(e)}(window,document,snippetName,siteToken);
</script>
ctMics.forceSetTargeting() // replace "ctMics" with the snippetName you used when implementing the contextual snippet
Use the bulk import endpoints to create a document import with the USER_IDENTIFIERS_DELETION document type and the APPLICATION_X_NDJSON mime type. Only ndjson data is supported for this import type.
Create an execution with your commands formatted in ndjson. Each command can either be a user account deletion, a user email deletion or a user agent deletion.
You can, of course, remove different identifier types in the same upload. Please note that the uploaded data is ndjson, not json: the different deletion commands are not separated by commas, but by a line separator \n
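As a minimal sketch, the NDJSON body for such an execution can be built by joining one JSON object per line (the identifier values below are the sample ones used later on this page):

```python
import json

def build_deletion_body(commands):
    # NDJSON: one JSON object per line, joined by "\n" -- not a JSON array
    return "\n".join(json.dumps(c) for c in commands)

body = build_deletion_body([
    {"type": "USER_ACCOUNT", "compartment_id": "1000", "user_account_id": "8541254132"},
    {"type": "USER_EMAIL", "hash": "982f50d88d437d13bdbd541edfv4fe5176cc8d862f8cbe7ca4f0dc8ea"},
    {"type": "USER_AGENT", "user_agent_id": "vec:89998434"},
])
```

The resulting string is what you would send as the body of the execution request, with the application/x-ndjson content type.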
field | type | description
type | String | USER_ACCOUNT
user_account_id | String | The User Account Id.
compartment_id | Number (Optional) | The Compartment Id associated with the User Account Id.
field | type | description
type | String | USER_EMAIL
hash | String | Hash of the Email address.
field | type | description
type | String | USER_AGENT
user_agent_id | String | A user agent id other than a device point id. Ex: "vec:123456" or "net:9:12345".
User device point ids ("udp:123456") are not supported by this document import job and will cause the job to be rejected.
Use the Device points deletion job type if you want to delete device points along with all their associated technical ids.
Example:

Use this feature to add or remove UserPoints from segments.
Use the bulk import endpoints to create a document import with the USER_SEGMENT document type and the APPLICATION_X_NDJSON or TEXT_CSV mime type.
Create an execution with your user segment commands formatted in ndjson or csv depending on the mime type you chose.
Each line in the uploaded file can have the following properties:
You can have UPDATE and DELETE operations in the same file upload.
Please note, if not using csv, that the uploaded data is ndjson, not json: the different lines are not separated by commas, but by a line separator \n
Use segment_ids or segment_technical_names if you need to handle multiple segments for a single user.
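As a sketch, the NDJSON body can be assembled as below. The field names follow the USER_SEGMENT command fields described on this page; the identifier and segment values are hypothetical:

```python
import json

def segment_command(operation, segment_ids, compartment_id, user_account_id,
                    expiration_duration=0):
    """Build one user/segment command line.
    expiration_duration=0 means the user never leaves the segment."""
    return {
        "operation": operation,            # UPDATE or DELETE
        "expiration_duration": expiration_duration,
        "compartment_id": compartment_id,
        "user_account_id": user_account_id,
        "segment_ids": segment_ids,        # use segment_ids for multiple segments
    }

# UPDATE and DELETE operations can be mixed in the same file
body = "\n".join(json.dumps(c) for c in [
    segment_command("UPDATE", ["segment_id_1", "segment_id_2"], "1000", "8541254132"),
    segment_command("DELETE", ["segment_id_1"], "1000", "4215354187"),
])
```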
You can use raw data stored in a file using the data_file datasource.
It works this way:
You upload a data file in the platform using the data_file API
You reference this file in the dashboard as a dataset
This source can then be transformed and displayed like all other data sources.
Use the data_file API to upload any JSON file containing your raw data. Its structure is not fixed.
There are two types of datasets that you can use:
A default key / value dataset is an array of key / value objects.
The whole structure of the dashboard is exactly the same as with other data sources.
For more information on datasets and datasources, see .
For a quick start on how to upload a dashboard, see .
The data source declaration is:
Here is an example with the JSON file we used previously
You can use a {SEGMENT_ID} token in uri and/or JSON_path properties. It will be replaced by the current segment if the dashboard is loaded on a segment's page. If the dashboard is loaded at any other scope, the token will not be replaced.
The GraphQL API gives you the ability to query all kinds of UserPoint data as collected by the platform (profile, activities, segments, identifiers, etc.) with a single request.
Each GraphQL query is meant to fetch data for a single UserPoint. To query multiple UserPoints in a single query, depending on your use case, please refer to the and/or to the .
You can track read and click actions in emails sent to your customers.
To track the opening of the emails, you have to include a pixel
To track the clicks on the links included in your emails, you have to replace each of your links with a Click Tracking URL
For each event, the datamart will identify the users reading/clicking the email by:
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_IDENTIFIERS_DELETION",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'

# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/1162/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '
{
"type": "USER_ACCOUNT",
"compartment_id": "1000",
"user_account_id": "8541254132"
}
{
"type": "USER_EMAIL",
"hash": "982f50d88d437d13bdbd541edfv4fe5176cc8d862f8cbe7ca4f0dc8ea"
}
{ "type": "USER_AGENT", "user_agent_id": "vec:89998434" }
'
{
"type": "USER_ACCOUNT",
"compartment_id": "1000",
"user_account_id": "8541254132"
}{
"type": "USER_EMAIL",
"hash": "982f50d88d437d13bdbd541edfv4fe5176cc8d862f8cbe7ca4f0dc8ea"
}
{ "type": "USER_AGENT", "user_agent_id": "vec:89998434" }

The schema used in the GraphQL endpoint is based on the customer defined schema. To add/remove fields from the GraphQL API, the schema has to be updated accordingly.
POST /v1/datamarts/:datamartId/query_executions/graphql
datamartId
string
The datamartId in which the UserPoint data will be looked up
query
string
This parameter should include your GraphQL query, starting with "query MyQuery {...". Do not forget to escape double quotes if needed.
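As an illustration of the escaping requirement, serializing the request body with a JSON library handles the inner double quotes for you (the UserPoint id below is a placeholder):

```python
import json

# The GraphQL query contains double quotes around the user_point_id value
query = 'query MyQuery { user_point(user_point_id: "xxxxxxxxxx") { id creation_ts } }'

# json.dumps escapes the inner double quotes, producing a valid request body
# for POST /v1/datamarts/:datamartId/query_executions/graphql
payload = json.dumps({"query": query})
```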
{
"data": { ...GraphQL response... }
}
{
"error": "..."
}
To test/write GraphQL queries, it is best to use a dedicated editor.
We recommend using the GraphQL Playground that is available in our computing console.
Through the GraphQL endpoint, only queries (reads) are supported. Mutations (writes) are not supported.
This endpoint is rate-limited and will respond with a 429 HTTP status code if the QPS exceeds its limits. Please contact your account manager for more information about this rate limiting or to request a limit increase.
The number of user activities returned is also limited: a query returns at most 100 activities. Be careful: there is no warning when the limit is hit, and you can't change this limit yet.
Moreover, a user event always belongs to an activity, so this limitation indirectly applies to events as well: the query only returns the events contained in the first hundred activities.
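Because the endpoint can answer 429, a client typically retries with backoff. The sketch below is a generic client-side pattern (the function names and backoff policy are my own, not part of the platform); the real call would POST to the GraphQL endpoint:

```python
import time

def call_with_backoff(call, max_retries=5, sleep=time.sleep):
    """Retry `call` with exponential backoff while it reports HTTP 429.
    `call` must return an (http_status, body) tuple."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("still rate-limited after %d retries" % max_retries)

# Simulated endpoint: rate-limited twice, then a successful answer
responses = iter([(429, ""), (429, ""), (200, '{"data": {}}')])
status, body = call_with_backoff(lambda: next(responses), sleep=lambda s: None)
```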
You can select a UserPoint by any identifier using different functions.
For more information, see User identifiers
To only select elements matching a specific clause, use @filter. The clause behaves like a WHERE expression on the object tree, so you can use anything you would use in such an expression.
Be careful: the @filter is applied after the activities limit. So if your query does not return activities, it only means that no activities matching your condition were found within the first hundred activities of the selected UserPoint.
You can create any use case you want using the list of dimensions and metrics that are available.
{
"date_ranges": [
{
"start_date": "2021-09-26T00:00:00",
"end_date": "2021-10-28T23:59:59"
}
],
"dimensions": [],
"metrics": [
{
"expression": "users"
}
]
}
{
"id": "1",
"name": "Demographics",
"other_metadata_as_you_wish": "SEGMENT",
"genders": [
{
"key": "male",
"value": 358
},
{
"key": "female",
"value": 66
}
],
"ages": [
{
"key": "18-24",
"count": 277
},
{
"key": "45-54",
"count": 8
},
{
"key": "65+",
"count": 9
},
{
"key": "25-34",
"count": 12
},
{
"key": "35-44",
"count": 9
},
{
"key": "55-64",
"count": 3
}
],
"total": 666
}
{
"key_value_dataset": [
{
"key": "Dimension 1",
"value": 666
}
...
{
"key": "Dimension X",
"value": 999
}
]
}
{
...
"total": 666
}
{
"type": "data_file",
// URI of the JSON data file containing data
// Format "mics://data_file/tenants/1426/dashboard-1.json"
"uri": String,
// Path of the property in the JSON that should be used as dataset
// This allows you to have multiple datasets in the same JSON file
// Should use the JSONPath syntax. See https://jsonpath.com/
// For example, "$[0].components[1].component.data"
"JSON_path": String,
// Optional. Title of the series for tooltips and legends
"series_title": String
}
{
"sections": [
{
"title": "Section",
"cards": [
{
"x": 0,
"charts": [
{
"title": "Gender",
"type": "Bars",
"dataset": {
"type": "data_file",
"uri": "mics://data_file/tenants/XXX/dashboard-1.json",
"JSON_path": "$.genders"
}
}
],
"y": 0,
"h": 3,
"layout": "vertical",
"w": 4
},
{
"x": 4,
"charts": [
{
"options": {
"legend": {
"enabled": true,
"position": "right"
}
},
"dataset": {
"type": "data_file",
"uri": "mics://data_file/tenants/XXX/dashboard-1.json",
"JSON_path": "$.ages",
"series_title": "count"
},
"title": "Age range",
"type": "Pie"
}
],
"y": 0,
"h": 3,
"layout": "vertical",
"w": 5
},
{
"x": 9,
"charts": [
{
"title": "Totals",
"type": "Metric",
"dataset": {
"type": "data_file",
"uri": "mics://data_file/tenants/XXX/dashboard-1.json",
"JSON_path": "$.total"
}
}
],
"y": 0,
"h": 3,
"layout": "vertical",
"w": 3
}
]
}
]
}

# Return the UserPoint creation_ts field
query MyQuery {
user_point(user_point_id: "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx") {
creation_ts
}
}

# Return profiles, accounts, segments and scenarios data of a UserPoint
# Warning: This query relies on a customer defined schema so it may not
# work as-is on your datamart.
query MyQuery {
user_point(user_point_id: "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx") {
id
creation_ts
profiles {
gender
birth_date
}
accounts {
compartment_id
user_account_id
}
segments {
id
}
scenarios {
scenario_id
}
}
}

# Select by UserPoint ID
query MyQuery {
user_point(user_point_id: "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx") {
id
creation_ts
}
}
# Select by email hash
query MyQuery {
user_point_by_email_hash(email_hash: "xxxxxxxxxxxxxxxxxxx") {
id
creation_ts
}
}
# Select by user account
# Both compartment ID and user account ID are mandatory
query MyQuery {
user_point_by_user_account_id(
compartment_id: "XXXX",
user_account_id: "xxxx-xxx-xx-xxxxx"
) {
id
creation_ts
}
}
# Select by user agent ID
query MyQuery {
user_point_by_user_agent_id(user_agent_id: "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"){
id
creation_ts
}
}

query micsQuery {
user_point(user_point_id:"xxx") {
activities {
#only return events with the name "$home_view"
events @filter(clause: "name == \"$home_view\""){
ts
}
}
}
}

"report_view": {
"items_per_page": 100,
"total_items": 1,
"columns_headers": [
"users"
],
"rows": [
[
114353272
]
]
}
{
"date_ranges": [
{
"start_date": "2021-10-20T00:00:00",
"end_date": "2021-10-28T23:59:59"
}
],
"dimensions": [
{"name": "date_yyyymmdd"}
],
"metrics": [
{
"expression": "sessions"
}
]
}

"report_view": {
"items_per_page": 100,
"total_items": 9,
"columns_headers": [
"date_yyyymmdd",
"sessions"
],
"rows": [
[
"20211020",
1372624
],
[
"20211021",
1368085
],
...
]
}
{
"date_ranges": [
{
"start_date": "2021-10-20T00:00:00",
"end_date": "2021-10-28T23:59:59"
}
],
"dimensions": [
{"name": "type"}
],
"metrics": [
]
}

"report_view": {
"items_per_page": 100,
"total_items": 4,
"columns_headers": [
"type"
],
"rows": [
[
"DISPLAY_AD"
],
[
"SITE_VISIT"
],
[
"USER_SCENARIO_NODE_ENTER"
],
[
"USER_SCENARIO_NODE_EXIT"
]
]
}
{
"date_ranges": [
{
"start_date": "2021-10-20T00:00:00",
"end_date": "2021-10-28T23:59:59"
}
],
"dimensions": [
{"name": "date_yyyymmdd"},
{"name": "device_form_factor"}
],
"dimension_filter_clauses": {
"operator": "AND",
"filters": [
{
"dimension_name": "channel_id",
"operator": "EXACT",
"not": false,
"expressions": [
"666"
]
}
]
},
"metrics": [
{"expression": "users"},
{"expression": "revenue"}
]
}

"report_view": {
"items_per_page": 100,
"total_items": 54,
"columns_headers": [
"date_yyyymmdd",
"device_form_factor",
"users",
"revenue"
],
"rows": [
[
"20211020",
"OTHER",
141,
222.74
],
[
"20211020",
"PERSONAL_COMPUTER",
821923,
87656567.1
],
[
"20211020",
"SMART_TV",
11,
null
],
[
"20211020",
"SMARTPHONE",
1901978,
98435875.79
],
...
]
}
field | type | description
segment_id | String (Optional) | The Id of the segment in which the User is inserted/deleted.
segment_ids | List of strings (Optional) | The list of ids of segments in which the User is inserted/deleted. For instance ["segment_id_1","segment_id_2"]
segment_technical_name | String (Optional) | The technical name of the segment in which the User is inserted/deleted.
segment_technical_names | List of strings (Optional) | The list of technical names of segments in which the User is inserted/deleted. For instance ["technical_name_1","technical_name_2"]
expiration_duration | Integer (Mandatory) | The number of minutes before the user is removed from the segment. 0 means that the User never leaves the segment.
expiration_ts | Number (Optional) | The timestamp of the expiration date of the User in the segment. A value of 0 means that the user never leaves the segment.
data_bag | Escaped JSON String (Optional) | The data bag associated with the user/segment relationship.
operation | Enum (Mandatory) | Either UPDATE or DELETE.
compartment_id | String (Optional) | The Compartment ID acting as a user identifier in correlation with the user account ID.
user_account_id | String (Optional) | The User Account ID acting as a user identifier in correlation with the compartment ID.
email_hash | String (Optional) | The Email Hash acting as a user identifier.
user_agent_id | String (Optional) | The User Agent ID acting as a user identifier.
Run your query in the query tool and save it as a technical query. Note the ID of the query.
In the computing console, go to dashboards and add/edit a dashboard
Choose a name and save your dashboard.
Switch to the Advanced tab.
Edit the JSON
See the DashboardContent object for a quick reference.
Run your query in the query tool and save it as a technical query. Note the ID of the query.
In the computing console, go to dashboards and add/edit a dashboard
Add or edit a chart and go to the Advanced tab
Edit the JSON and preview your changes.
See the Chart object for a quick reference.
This quickstart guide uses the Long term access tokens authentication method. Choose and configure your own authentication method. For more information, see Authentication.
Your dashboard could use OTQL queries or activities analytics queries to retrieve data. We will use both in this tutorial, and OTQL queries need to be registered using the Creating a query endpoint.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/queries
Register an OTQL query in the platform
We will create two OTQL queries for this tutorial. The first one counts the UserPoint in the datamart, the second one lists the devices they use.
You first create a DashboardRegistration object to reference your dashboard and define where it is visible.
POST https://api.mediarithmics.com/v1/dashboards
Here is a sample body payload for a home dashboard with all the important properties
You can now upload content in your dashboard using the DashboardContent object.
PUT https://api.mediarithmics.com/v1/dashboards/:id/content?organisation_id=:organisation_id
Here is a sample body payload for a content using the queries we previously created.
Go to your datamart's home page: your dashboard is now displayed with the two charts we created!
Bars
The available options for the bars chart are
See Highcharts.filter for the data_labels.filter format.
Here are the different states of an Area chart depending on its options and the dataset
The available options for the area chart are
See Highcharts.filter for the data_labels.filter format.
Here are the different states of the Pie charts depending on its options and the dataset
The available options for the pie charts are
See Highcharts.filter for the data_labels.filter format.
Here are the different states of the Radar charts depending on its options and the dataset.
The available options for the Radar charts are
See Highcharts.filter for the data_labels.filter format.
Metric charts are displayed either as percentages or as counts, depending on their options.
Here are the available options for the Metric charts
Recommended option: using the user's Email Hash, included by you in the pixel/tracking URL
Reading their cookies if possible:
It always works when users click (e.g. Click Tracking URL)
It is sometimes possible when users open the emails, depending on their email client (e.g. Pixel)
Calls to those URLs generate events.
or if you are using the $uids field (see Passing user identifiers in pixel-based tracking)
or if you are using a custom domain:
It is possible to set custom properties
$ev
String
The event name. $email_view for email opening tracking
$dat_token
String
The id of the audience datamart in the mediarithmics platform.
$cuid
String (Optional)
The user account id.
Calls to those URLs generate click events
It is also possible to set custom properties.
The $redirect parameter is used to define the destination of the url redirection. The URL put in the $redirect parameter should be URL Encoded (RFC 3986).
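A minimal sketch of building such a Click Tracking URL in Python, applying RFC 3986 percent-encoding to the redirect URL and keeping $redirect last (the token and landing URL are hypothetical; the email hash is the sample one used on this page):

```python
from urllib.parse import quote

def click_tracking_url(datamart_token, email_hash, redirect_url):
    base = "https://events.mediarithmics.com/v1/touches/click"
    # $redirect must be the last parameter and must be percent-encoded
    # (RFC 3986); anything placed after it would not be saved.
    return (
        f"{base}?$ev=$email_click"
        f"&$dat_token={datamart_token}"
        f"&$email_hash={email_hash}"
        f"&$redirect={quote(redirect_url, safe='')}"
    )

url = click_tracking_url("<DATAMART_TOKEN>", "78b04074e616166938cf672f70f41b4d",
                         "https://www.example.com/landing?a=1&b=2")
```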
$ev
String
The event name. $email_click for email click tracking
$dat_token
String
The id of the audience datamart in the mediarithmics platform.
$redirect
String
The redirect url. This string should be URL encoded (RFC 3986). Warning: this parameter must be placed at the end of the URL. Any parameter placed after the $redirect parameter will not be saved.


Every time an event is sent, we run the event rules associated with its channel. They are predefined actions to extract properties, reshape data or identify a user from a property.
If the action you wish to do on each event is not possible with an event rule, you should have a look at Activity Analyzers.
You can go to a channel's settings, edit, and then scroll down to event rules.
This Event Rule will help you take an existing property from an event and copy it into the origin of the activity. You can copy:
The URL of the Event
The Referrer of the Event
Any Event Property
This Event Rule allows you to write a pattern that will be matched against all the $url values of incoming $page_view events. The pattern can extract some values from the Url Path and from the Url Query String.
If you want more information about the different parts of a URL,
Url Match only works for $page_view events. Any other event won't be processed by the Url Match event rule.
Please note that $page_view events will be deleted at the end of the activity processing stage.
Useful if, for technical and/or organizational reasons, you can't customize the Tracking JS Snippet on a web page to retrieve information from the data layer.
Use the that will automatically track $page_view events with the URL of the page.
Use the Url Match event rule to extract properties from the page's URL and add them to the event.
For example, if a $page_view event is tracked with the URL https://foo.bar/category/0001/article/super-awesome-article, you would be able to generate this kind of event
The URL Match is taking 2 parameters:
The URL Pattern that will be used to match the $url value of $page_view events
The event template that will be used to generate the event if the $url is matching the URL pattern
Patterns are URL in which you can add:
A variable extraction rule for path parameters with :variableName
A wildcard with *
Example: *//foo.bar/category/:categoryId/article/:articleId
* at the beginning will match both URLs in https and in http
:categoryId will extract the value in the URL corresponding to the categoryId. As it is a named variable, it can be used as a value in the event that will be generated.
:articleId works as :categoryId
The Query String values are automatically extracted in variables that have the name of the parameter in the Query String. https://foo.bar/a/b/c?var=value&var2=value2 extracts both value and value2 in variables named var and var2.
The event template contains the following information:
The $event_name that will be used for the generated event as a static string
A list of properties in a key-value way that will be added in the event properties
In the properties values you can either pass:
A static string directly. Ex: sport
A string in the format {{variableName}} that will be replaced by a value extracted in the URL pattern
Let's take the following event rule:
Url pattern: *//foo.bar/category/:categoryId/article/:articleId
Event template:
$page_view event with Url: https://foo.bar/category/0001/article/super-awesome-article?origin=email will generate
$page_view event with Url http://foo.bar/category/0001/article/super-awesome-article will generate
$page_view event with Url https://oof.bar/category/0001/article/super-awesome-article will generate nothing as the Url domain (oof.bar) is not matching the pattern.
page_view event with Url https://foo.bar/category/0001/article/super-awesome-article will generate nothing as the "source" event name is not $page_view.
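The matching behavior described above can be sketched as follows. This is a toy re-implementation for illustration only (the platform's real matcher may differ in details): '*' is a wildcard, ':name' captures one path segment, and query-string parameters become variables automatically.

```python
import re
from urllib.parse import parse_qsl

def match_url(pattern, url):
    """Return the variables extracted from `url` by `pattern`, or None
    if the URL does not match."""
    base, _, query = url.partition("?")
    # '*' becomes a regex wildcard, ':name' a named capture of one segment
    regex = re.escape(pattern).replace(r"\*", ".*")
    regex = re.sub(r"\\?:([A-Za-z_]\w*)", r"(?P<\1>[^/?#]+)", regex)
    m = re.fullmatch(regex, base)
    if m is None:
        return None
    variables = dict(m.groupdict())
    variables.update(parse_qsl(query))  # query-string params become variables
    return variables

vars_ = match_url("*//foo.bar/category/:categoryId/article/:articleId",
                  "https://foo.bar/category/0001/article/super-awesome-article?origin=email")
```

With the example above, `vars_` contains categoryId, articleId and origin, while a URL on another domain returns None.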
This event rule allows you to extract a property from an event to convert it into a user identifier, such as an Email Hash or a User account ID.
You can apply a hash on this extraction. We currently support the following hash methods:
SHA_256
SHA_1
SHA_384
SHA_512
This event rule is useful when you don't want to pass an identifier as a global property of the activity, but rather have it computed directly by your datamart.
If several values of an identifier are found within an activity, we keep only the last one.
In the scenario above, you will be able to extract the user_id and store it as a User account ID.
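For the Email Hash case, hashing can be sketched with the standard library. Note that the normalization step (trim + lowercase) is an assumption of this sketch, not a documented platform behavior:

```python
import hashlib

def email_hash(email, method="SHA_256"):
    """Hash an email with one of the supported methods.
    Normalization (strip + lowercase) is an assumption for illustration."""
    normalized = email.strip().lower()
    algo = {"SHA_256": "sha256", "SHA_1": "sha1",
            "SHA_384": "sha384", "SHA_512": "sha512"}[method]
    return hashlib.new(algo, normalized.encode("utf-8")).hexdigest()
```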
This event rule is used for Contextual targeting purpose. You can find more information about it in the section
mediarithmics modules can trigger alerts to grab the attention of users/integrators on specific points to improve or fix.
Alerts are displayed in the UI, but can also be accessed by API if you want to automate actions or grab them in your own reports.
Here are the various alert types that exist:
Alerts have several properties associated:
type: The type of the alert (e.g., SEGMENT_COMPUTATION_ERROR, SEGMENT_VOLUME_DROP...)
id: The unique identifier of the alert
datamart_id: The identifier of the datamart associated with the alert
The alerting system supports polymorphism to accommodate specific fields for various alert types.
To provide a familiar terminology to users, alerts can be opened or closed. However, in the system, the open/closed state is represented by the archived field. Opening an alert sets the archived field to false, while closing an alert sets it to true. The closed state implies that the alert is no longer active or visible to users.
To avoid having multiple instances of the same alert, the system employs a prevention mechanism. Each alert has count and count_last_ts properties. When triggering a new alert, we check if there is already an active alert for the same target. If such an alert exists, the system increments the count property and updates the count_last_ts to reflect the latest trigger. This prevents the proliferation of identical alerts and ensures that only one alert remains active with an incremented counter.
For example, if a segment has a query error and the issue persists without resolution, the system will increment the count property of the existing alert rather than creating multiple duplicate alerts.
The count_last_ts property stores the timestamp of the most recent trigger, while the created_ts property stores the timestamp of the initial trigger.
To manage the storage of alerts and ensure their relevance, the system implements an expiration mechanism. Alerts have an expiration duration associated with them. It is set to the created_ts + 1 month. You can't modify this behavior.
A cleaning job runs regularly to identify and delete all expired alerts from the database. This prevents the accumulation of unnecessary historical data.
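The dedup and expiration mechanisms described above can be sketched as follows (class and method names are my own; timestamps are plain seconds for brevity):

```python
class AlertStore:
    """One active alert per (type, target): re-triggering increments `count`
    and refreshes `count_last_ts`; alerts expire created_ts + 1 month."""
    ONE_MONTH = 30 * 24 * 3600  # approximation in seconds

    def __init__(self):
        self.alerts = {}

    def trigger(self, alert_type, target, now):
        key = (alert_type, target)
        alert = self.alerts.get(key)
        if alert and not alert["archived"]:
            # Active alert already exists: increment instead of duplicating
            alert["count"] += 1
            alert["count_last_ts"] = now
        else:
            self.alerts[key] = {"type": alert_type, "archived": False,
                                "count": 1, "created_ts": now,
                                "count_last_ts": now}
        return self.alerts[key]

    def clean(self, now):
        # The cleaning job deletes expired alerts
        self.alerts = {k: a for k, a in self.alerts.items()
                       if now < a["created_ts"] + self.ONE_MONTH}

store = AlertStore()
store.trigger("SEGMENT_COMPUTATION_ERROR", "segment:42", now=0)
a = store.trigger("SEGMENT_COMPUTATION_ERROR", "segment:42", now=60)
```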
The API allows users and integrators to interact with alerts through the following functionalities:
GET https://api.mediarithmics.com/v1/alerts
You have to fill in either the organisation_id, datamart_id or community_id parameters.
Archived alerts are not returned by default. You need to request them through the archived parameter.
PUT https://api.mediarithmics.com/v1/alerts/:alertId
DELETE https://api.mediarithmics.com/v1/alerts/:alertId
No other operations or modifications are permitted through the API.
Users and integrators are restricted from editing any field other than the archived flag for an alert.
This page shows you how to get started using the activities analytics API to query your data in mediarithmics.
This quickstart guide uses the Long term access tokens authentication method. Choose and configure your own authentication method. For more information, see Authentication.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/user_activities_analytics
Here is a sample body payload with all the important properties
The API will answer with a containing a .
Congratulations! You've sent your first request to the Activities analytics API.
To configure this feature, please follow the next steps IN ORDER:
Attributes definition
ML function creation
ML function activation
Schema update
A designated cohort is assigned to a user depending on attributes (also named features in DataScience) you have defined to characterize your users. You will need to format those attributes using JSON format (see below).
For instance, let's imagine that you want to create cohorts based on:
os_family - defined on UserAgentInfo nested in UserAgent
age - defined on UserProfile
city - defined on UserEvent
You will therefore define the following JSON:
There are 3 types of attributes available:
FREQUENCY_ENUM: use this type for a finite list of values like operating systems.
FREQUENCY_NUMBER: use this type for classifying number buckets like age. Using the above example:
First bucket: >= 0 & < 10
The field_path must contain the path of the attribute from the UserPoint definition (see for more info)
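The FREQUENCY_NUMBER bucketing can be illustrated with a small classifier. The boundary values below are hypothetical, matching the ">= 0 & < 10" example above:

```python
def bucket_for(value, boundaries):
    """Classify a number into buckets delimited by ascending boundaries.
    boundaries=[0, 10, 20] gives '>= 0 & < 10', '>= 10 & < 20', '>= 20'."""
    if value < boundaries[0]:
        return None  # below the first bucket
    for low, high in zip(boundaries, boundaries[1:]):
        if low <= value < high:
            return f">= {low} & < {high}"
    return f">= {boundaries[-1]}"
```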
A ML function requires a query to fetch data used in its configuration. In the case of cohort-based lookalike, it requires an appropriate query to fetch fields used as attributes and specified in the JSON.
Following our previous example, the GraphQL query will be:
Please follow the next steps to instantiate the ML function developed by mediarithmics to assign a cohort to your userpoints:
Head to Settings > Datamart > ML Functions
Click on New Ml Function, pick the datamart where to apply the ML function then choose simhash-cohorts-calulation
Enter the following information on the ML function configuration panel:
Note that only one Cohort-based Lookalike model can be set up at a time in an organisation.
Once the ML function has been instantiated, you will need to update the batch_mode parameter to true and activate the ML function by running the following API call:
Two changes have to be made in your runtime schema:
Add a field clustering_cohort in UserPoint as follows:
Create a new ClusteringCohort type as follows:
Don't hesitate to have a look at to learn more about how to update your schema.
You can ask your Account manager to run an initial loading on your datamart to calculate cohorts on existing userpoints.
# Create the document import
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_SEGMENT",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'

# Create the execution
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '{
"operation": "UPDATE",
"expiration_duration": <INTEGER>,
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"segment_id": "<SEGMENT_ID>"
}'

{
"datamart_id": {{datamartId}},
"query_language": "OTQL",
"query_text": "select @count{} FROM UserPoint"
}
{
"datamart_id": {{datamartId}},
"query_language": "OTQL",
"query_text": "SELECT {agents {user_agent_info{form_factor @map}}} FROM UserPoint"
}
{
"title": "My awesome dashboards",
"scopes": [
"home"
],
"segment_ids": [],
"builder_ids": [],
"archived": false,
"organisation_id": "{{orgId}}",
"community_id": "{{communityId}}"
}
{
"sections": [
{
"title": "",
"cards": [
{
"x": 0,
"y": 0,
"h": 3,
"layout": "vertical",
"w": 8,
"charts": [
{
"title": "User points",
"type": "Metric",
"dataset": {
"type": "OTQL",
"query_id": "{{ID of query 1}}"
}
},
{
"title": "Device form factors",
"type": "Bars",
"dataset": {
"type": "OTQL",
"query_id": "{{ID of query 2}}"
},
"options": {}
}
]
}
]
}
]
}
{
"title": "Device form factors", // Could be an empty string
"type": "Bars", // Bars || Pie || Metric || Radar
"dataset": { // See Datasets and datasources page
"type": "OTQL",
"query_id": "50171"
},
"options": {} // Options depending on the type of chart
}

"options": {
"legend": {
enabled: boolean; # Show or hide the legend. Defaults to FALSE.
position: ‘bottom’ | ‘right’; # Display legend on the bottom or on the right of the chart. Defaults to bottom
},
"colors": string[]; # Defaults to current theme colors
"format": ‘count’ | ‘percentage’ | ‘index’; # Defaults to count. Pass percentage or index if dataset is compatible (comes from a percentage or index transformation) to automatically change labels, and tooltips and have more info.
"drilldown": boolean; # Enables drill down if dataset is compatible. Defaults to FALSE
"stacking": boolean; # Enables stacking if dataset is compatible. Prioritized over drilldown. Defaults to FALSE
"type": ‘bar’ || ‘column’; # Set to ‘bar’ to display horizontal bars instead of columns. Defaults to ‘column’
"plot_line_value": int; # Set to draw a line at the specified value
"hide_x_axis": boolean; # TRUE to hide X axis. Defaults to FALSE
"hide_y_axis": boolean; # TRUE to hide Y axis. Defaults to FALSE
"tooltip": {format: string} # Highcharts tooltip pointFormat. Defaults to “{point.y}” if dataset don’t have a -count field, “{point.y}% ({point.count})” otherwise.
"big_bars": boolean; # TRUE displays large bars close to each other, FALSE smaller bars with more space between them. Defaults to TRUE
}

"options": {
"legend": {
enabled: boolean; # Show or hide the legend. Defaults to FALSE.
position: ‘bottom’ | ‘right’; # Display legend on the bottom or on the right of the chart. Defaults to bottom
},
"colors": string[]; # Defaults to current theme colors
"format": ‘count’ | ‘percentage’; # Defaults to count. Pass percentage if dataset is compatible (comes from a percentage transformation) to automatically change labels, and tooltips and have more info.
"type": ‘area’ || ‘line’; # Set to ‘line’ to only display lines and no area. Defaults to ‘area’
"plot_line_value": int; # Set to draw a line at the specified value
"hide_x_axis": boolean; # TRUE to hide X axis. Defaults to FALSE
"hide_y_axis": boolean; # TRUE to hide Y axis. Defaults to FALSE
"double_y_axis": boolean; # TRUE for each serie to have its own scale (and shows 2 vertical axis). Defaults to FALSE
"tooltip": {format: string} # Highcharts tooltip pointFormat. Defaults to “{point.y}” if dataset don’t have a -count field, “{point.y}% ({point.count})” otherwise.
}

"options": {
"legend": {
"enabled": boolean; # Show or hide the legend. Defaults to FALSE.
"position": 'bottom' | 'right'; # Display legend on the bottom or on the right of the chart. Defaults to bottom
},
"colors": string[]; # Defaults to current theme colors
"drilldown": boolean; # Enables drill down if the dataset is compatible. Defaults to FALSE
"inner_radius": boolean; # TRUE displays the chart as a donut, FALSE as a pie. Defaults to TRUE
"is_half": boolean; # TRUE to only display half of a donut or half of a pie. Defaults to FALSE.
"size": number | string | undefined; # The diameter of the pie relative to the plot area. Can be a percentage (75%) or a value.
"data_labels": {
"enabled": boolean; # Whether labels are displayed. Defaults to TRUE
"distance": int; # Distance between labels and the figure. Defaults to 10 if is_half:true, else 0
"format": string; # Labels text. See Labels and string format. Defaults to "{point.percentage:.2f}%" if legend.enabled:false, "{point.name} {point.percentage:.2f}%" otherwise.
"filter": Highcharts.filter; # To hide labels under certain values. Defaults to undefined.
};
"tooltip": {"format": string} # Highcharts tooltip pointFormat. Defaults to "{point.percentage:.2f}%" if legend.enabled:false, "{point.name} {point.percentage:.2f}%" otherwise.
}

"options": {
"legend": {
"enabled": boolean; # Show or hide the legend. Defaults to FALSE.
"position": 'bottom' | 'right'; # Display legend on the bottom or on the right of the chart. Defaults to bottom
},
"colors": string[]; # Defaults to current theme colors
"format": 'count' | 'percentage'; # Defaults to count. Pass percentage if the dataset is compatible to automatically change labels and tooltips.
"data_labels": {
"enabled": boolean; # Whether labels are displayed. Defaults to TRUE
"format": string; # Labels text. See Labels and string format. Defaults to "{point.y}" if the dataset doesn't have a -count field, "{point.y}%" otherwise
"filter": Highcharts.filter; # To hide labels under certain values. Defaults to undefined.
};
"tooltip": {"format": string} # Highcharts tooltip pointFormat. Defaults to "{point.y}" if the dataset doesn't have a -count field, "{point.y}% ({point.count})" otherwise.
}

"options": {
"format": 'count' | 'percentage' | 'float'; # Defaults to count. If count, simply displays the number, else displays the number with %
}

https://events.mediarithmics.com/v1/touches/pixel?
$ev=$email_view&
$dat_token=<DATAMART_TOKEN>&
$email_hash=78b04074e616166938cf672f70f41b4d&
Any Custom Properties ...

https://events.mediarithmics.com/v1/touches/pixel?
$ev=$email_view&
$dat_token=<DATAMART_TOKEN>&
$uids=jso-[{"$tpe":"EM","$eh":"78b04074e616166938cf672f70f41b4d"}]&
Any Custom Properties ...

https://analytics.custom-domain.com/v1/touches/pixel?
$ev=$email_view&
$dat_token=<DATAMART_TOKEN>&
$email_hash=78b04074e616166938cf672f70f41b4d&
Any Custom Properties ...

https://events.mediarithmics.com/v1/touches/click?
$ev=$email_click&
$dat_token=<DATAMART_TOKEN>&
$email_hash=78b04074e616166938cf672f70f41b4d&
Any Custom Properties &
$redirect=<CLICK_URL>

$email
String (Optional)
The user email.
$email_hash
String (Optional)
The user email hash.
$cb
String (Optional)
The cache buster parameter. It should contain a random string.
$uids
JSON as string (Optional)
The list of user identifiers of the user.
any custom property name
Any Type
Any custom property
$cuid
String (Optional)
The user account id.
$email
String (Optional)
The user email.
$email_hash
String (Optional)
The user email hash.
$cb
String (Optional)
The cache buster parameter. It should contain a random string.
$uids
JSON as string (Optional)
The list of user identifiers of the user.
any custom property name
Any Type (Optional)
Any custom property.
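For illustration, a pixel URL like the ones above can be assembled as follows. This is a sketch: the helper name is hypothetical, and MD5 is inferred from the 32-character hash in the examples — use whatever hashing function your datamart integrations agreed on.

```python
import hashlib
import random
from urllib.parse import quote

def build_email_view_pixel(datamart_token, email, custom_props=None):
    """Assemble an $email_view tracking pixel URL (illustrative helper)."""
    # Normalization (trim + lowercase) is an assumption: align it across integrations
    email_hash = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    params = {
        "$ev": "$email_view",
        "$dat_token": datamart_token,
        "$email_hash": email_hash,
        # $cb: random cache buster so email clients don't serve a cached pixel
        "$cb": "".join(random.choices("abcdefghijklmnopqrstuvwxyz0123456789", k=12)),
    }
    params.update(custom_props or {})
    query = "&".join(f"{k}={quote(str(v), safe='$')}" for k, v in params.items())
    return "https://events.mediarithmics.com/v1/touches/pixel?" + query

url = build_email_view_pixel("<DATAMART_TOKEN>", " Jane.Doe@Example.com ", {"campaign": "spring"})
```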

datamartId*
number
The ID of the datamart to query
metrics*
array
Array of Metric to retrieve.
dimension_filter_clauses
object
Filters to apply on dimensions before calculating the metric. For more information, see FilterClause.
dimensions*
array
Dimensions to group metrics by.
date_ranges*
array
Periods to analyze. Each date range is an object with a start_date and an end_date. See DateRange.
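The request parameters above can be assembled into a body like this (illustrative sketch; authentication and the HTTP call itself are omitted):

```python
import json

def build_analytics_request(metrics, dimensions, date_ranges, dimension_filter_clauses=None):
    """Assemble a body for POST /v1/datamarts/:datamartId/user_activities_analytics.

    Field names come from the parameter table above; this helper itself
    is not part of the platform API.
    """
    body = {
        "metrics": [{"expression": m} for m in metrics],
        "dimensions": [{"name": d} for d in dimensions],
        "date_ranges": date_ranges,
    }
    if dimension_filter_clauses:
        body["dimension_filter_clauses"] = dimension_filter_clauses
    return body

body = build_analytics_request(
    metrics=["users"],
    dimensions=["channel_id"],
    # Relative "now-Xd/d" dates are accepted, as in OTQL queries
    date_ranges=[{"start_date": "now-7d/d", "end_date": "now"}],
    dimension_filter_clauses={
        "operator": "OR",
        "filters": [
            {"dimension_name": "type", "operator": "EXACT", "expressions": ["SITE_VISIT"]}
        ],
    },
)
print(json.dumps(body, indent=2))
```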
{
"status": "ok",
"data": {
"report_view": {
"items_per_page": 100,
"total_items": 7,
"columns_headers": [
"type"
],
"rows": [
[
"DISPLAY_AD"
],
[
"EMAIL"
],
[
"SITE_VISIT"
],
[
"USER_SCENARIO_NODE_ENTER"
],
[
"USER_SCENARIO_NODE_EXIT"
],
[
"USER_SCENARIO_START"
],
[
"USER_SCENARIO_STOP"
]
]
}
}
}

ML function initial loading
Pick attributes from various typologies (UserEvent, UserProfile, …)
Select between 3 and 10 attributes
Have between 50 and 300 attribute values (across all attributes)
Keep the default of 1024 cohorts (Cohort Id Bit Size = 10, see below for more information about this)
Third bucket: anything that didn't fall into the 2 defined buckets
FREQUENCY_TEXT: use this type for an infinite (or long) list of values such as keywords, cities, etc. Choose the vector_size parameter wisely, as it is used as a modulo on values to reduce the disparity of values to a fixed number
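The modulo reduction behind FREQUENCY_TEXT can be sketched as follows. This is illustrative only: the hash function the platform actually uses is not documented here, so MD5 is just a stable stand-in.

```python
import hashlib

def text_feature_bucket(value, vector_size):
    """Map an open-ended text value to one of vector_size buckets.

    A stable hash reduced modulo vector_size collapses an unbounded set
    of values (cities, keywords, ...) into a fixed number of features.
    """
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % vector_size

# Distinct cities may share a bucket once reduced modulo vector_size
buckets = {city: text_feature_bucket(city, 100) for city in ["Paris", "Lyon", "Lille"]}
```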
Name: Cohort ML Function
Hosting Object Type: UserPoint
Field Type Name: ClusteringCohort
Field Name: clustering_cohort
Query: <Insert here the graphQL query that need to be run to extract attributes used to calculate your cohort>
Properties
Features: <Insert here the one-line JSON>
Cohort Id Bit Size: <Will be used to define the number of cohorts in your datamart as 2^(Cohort Id Bit Size)>
Click on the Save button
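As noted above, the Cohort Id Bit Size determines the number of cohorts as a power of two:

```python
def cohort_count(cohort_id_bit_size):
    """Number of cohorts created in the datamart for a given Cohort Id Bit Size."""
    return 2 ** cohort_id_bit_size

# The default bit size of 10 yields the default of 1024 cohorts
default_cohorts = cohort_count(10)
```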
{
// Retrieve the data in the specified date range
// Mandatory. The data is only queryable for the last 4 months
// Only one range is allowed now, but the API is prepared to accept
// multiple ranges in the future.
// Tip: you can use dates in "now-Xd/d" format as in OTQL queries
"date_ranges": [
{
"start_date": "2021-10-10T00:00:00",
"end_date": "2021-10-25T23:59:59"
}
],
// List of dimensions to retrieve
"dimensions": [
{
"name": "date_yyyy_mm_dd"
},
{
"name": "channel_id"
}
],
// Filters on dimensions
"dimension_filter_clauses": {
"operator": "OR",
"filters": [
{
"dimension_name": "type",
"operator": "EXACT",
"expressions": [
"SITE_VISIT"
]
}
]
},
// Order by dates, beginning with the most recent
"order_by": {
"field_name": "-date_yyyy_mm_dd"
},
// List of metrics to retrieve
"metrics": [
{
"expression": "users"
},
{
"expression": "number_of_transactions"
}
]
}

{
"status": "ok",
"data": {
"report_view": {
// Note: pagination not implemented yet
"items_per_page": 100,
"total_items": 100,
// To know which data is in which column
"columns_headers": [
"date_yyyy_mm_dd",
"channel_id",
"users",
"number_of_transactions"
],
// Data
"rows": [
[
"2021-10-10",
666,
3881,
17800.0
],
[
"2021-10-10",
555,
1838,
4200.0
],
[
"2021-10-11",
666,
532,
3900.0
],
[
"2021-10-11",
555,
8,
100.0
]
// ...
]
}
}
}

[
{
"type": "FREQUENCY_ENUM",
"field_path": "agents.user_agent_info.os_family",
"values": [
"OTHER",
"WINDOWS",
"MAC_OS",
"LINUX",
"ANDROID",
"IOS"
]
},
{
"type": "FREQUENCY_NUMBER",
"field_path": "profiles.age",
"intervals": [
{
"from": 0,
"to": 10
},
{
"from": 10,
"to": 100
}
]
},
{
"type": "FREQUENCY_TEXT",
"field_path": "events.city",
"vector_size": 100
}
]

{agents {user_agent_info {os_family}} profiles {age} events{city}}

PUT https://api.mediarithmics.com/v1/ml_functions/<id_ml_function>
{
"batch_mode": true,
"status": "ACTIVE"
}

type UserPoint @TreeIndexRoot(index:"USER_INDEX") {
...
clustering_cohort:ClusteringCohort
...
}

type ClusteringCohort {
id:ID! @TreeIndex(index:"USER_INDEX")
expiration_ts:Timestamp @TreeIndex(index:"USER_INDEX")
cohort_id:String! @TreeIndex(index:"USER_INDEX")
last_modified_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
}

MD5
MD2
article_id
{{articleId}}
category_id
{{categoryId}}
visit_origin
{{origin}}
source_event_rule
1
organisation_id: The identifier of the organisation associated with the alert
community_id: The identifier of the community associated with the alert
created_ts: The timestamp indicating when the alert was created
archived: A flag indicating whether the alert is closed/archived (true) or open (false)
archived_ts: The timestamp indicating when the alert was closed/archived
archived_by: The identifier of the user that closed/archived the alert
expiration: The expiration timestamp for the alert
count: The number of times the alert has been triggered
last_count_ts: The timestamp of the most recent trigger of the alert
Total drop (in percentage) of the segment volume since the alert was first triggered
archived
Boolean
true to return archived alerts.
SEGMENT_DEFINITION_ERROR
Error in the segment definition. More information here
SEGMENT_VOLUME_DROP
Segment volume drops by more than a configured threshold (in percentage).
In the segment computation process, after a segment has been computed, we check:
If volumes have dropped by more than XX%
If none of the segment labels are in the blocklist when in blocklist mode
If any of the segment labels are in the allowlist when in allowlist mode
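The checks above can be sketched as follows (all names are illustrative, not platform API; the 10% default threshold comes from the volume_drops_threshold configuration described later on this page):

```python
def should_trigger_volume_drop_alert(previous_volume, current_volume,
                                     threshold_pct=10,
                                     segment_label_ids=(),
                                     configured_label_ids=(),
                                     mode=None):
    """Decide whether a volume drop alert fires for a segment.

    mode is None (alert on all segments), "blacklist" (skip segments whose
    labels are in the configured list) or "whitelist" (only alert on
    segments whose labels are in the configured list).
    """
    if previous_volume == 0:
        return False
    drop_pct = (previous_volume - current_volume) / previous_volume * 100
    if drop_pct <= threshold_pct:
        return False
    labels, configured = set(segment_label_ids), set(configured_label_ids)
    if mode == "blacklist":
        return not (labels & configured)   # none of the labels is blocklisted
    if mode == "whitelist":
        return bool(labels & configured)   # at least one label is allowlisted
    return True
```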
INITIAL_LOADING_ERROR
Error during initial loading of a feed attached to a segment. More information here
SEGMENT_COMPUTATION_ERROR
Error during the calculation of a segment. More information here
-
segment
{
'segment_id': 'xxx',
'segment_name': 'The segment name',
'segment_type': 'USER_QUERY',
'user_points_count': 71989,
'feeds_count': 0
}
SEGMENT_INITIAL_LOADING
alert_sub_type
sub-type of the error:
INITIAL_LOADING_EXECUTION_ON_ERROR
INITIAL_LOADING_RECORDS_ERROR
INITIAL_LOADING_NOT_STARTING
INITIAL_LOADING_RUNNING_TOO_LONG
SEGMENT_INITIAL_LOADING
feed_id
ID of the feed concerned by the error
SEGMENT_VOLUME_DROP
organisation_id
Int
ID of the organisation in which to find alerts
datamart_id
Int
ID of the datamart in which to find alerts
community_id
Int
ID of the community in which to find alerts
type
AlertType
Such as SEGMENT_COMPUTATION_ERROR. See Alert types.
alertId
Int
ID of the alert to edit
archive
boolean
true to close an alert, false to open it.
alertId
Int
ID of the alert to delete
drop_rate



Alert Configurations allow customization of settings for different alert types within the Alerting module.
Configurations are used to define specific behavior, thresholds, and rules associated with each alert type. They are dedicated resources.
Each alert type can have a different set of configurations. By configuring alert types, you can tailor the behavior of the alerts to meet specific requirements.
Alert configurations can be personalized for a specific organization. This means that each organization can have its own set of configuration values for the alert types. If a configuration is not set, a default value set by mediarithmics will be used.
Alert configurations are identified by combining three values: config_key, organisation_id, and alert_type. The config_key uniquely identifies a specific configuration setting, while the organisation_id and alert_type specify the organization and alert type to which the configuration belongs.
Here is a list of available configuration keys and their sample values:
You can list/edit configurations for your organizations by API.
GET https://api.mediarithmics.com/v1/alert_type_configs
If a configuration from the allowed list is not set up, it won't be returned by this call; the platform falls back to the default value in usage.
POST https://api.mediarithmics.com/v1/alert_type_configs/config_key=:configKey/organisation_id=:organisationId/alert_type=:alertType
PUT https://api.mediarithmics.com/v1/alert_type_configs/config_key=:configKey/organisation_id=:organisationId/alert_type=:alertType
DELETE https://api.mediarithmics.com/v1/alert_type_configs/config_key=:configKey/organisation_id=:organisationId/alert_type=:alertType
Configurations can be archived using the PUT request. An archived configuration is no longer used by the platform (it falls back to the default value) but can easily be reactivated later.
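The endpoint paths above identify a configuration by its three-part key. A sketch of how they are assembled, with a sample body for the volume_drops_threshold configuration (the helper function is hypothetical; config_value comes from the parameter tables below):

```python
import json
from urllib.parse import quote

def alert_config_url(base, config_key, organisation_id, alert_type):
    # Mirrors /v1/alert_type_configs/config_key=:configKey/organisation_id=:organisationId/alert_type=:alertType
    return (f"{base}/v1/alert_type_configs/config_key={quote(config_key)}"
            f"/organisation_id={organisation_id}/alert_type={quote(alert_type)}")

url = alert_config_url("https://api.mediarithmics.com",
                       "volume_drops_threshold", "1234", "SEGMENT_VOLUME_DROP")
# Body for POST/PUT: trigger the alert on drops above 15%
body = json.dumps({"config_value": "15"})
```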
{
"$ts": 1568296040000,
"$event_name": "article_view",
"$properties": {
"category": "0001",
"article_id": "super-awesome-article",
"source": "web"
}
}

{
"$ts": 1568296040000,
"$event_name": "article_view",
"$properties": {
"category": "0001",
"article_id": "super-awesome-article",
"visit_origin": "email",
"source_event_rule": "1"
}
}

{
"$ts": 1568296040000,
"$event_name": "article_view",
"$properties": {
"category": "0001",
"article_id": "super-awesome-article",
"visit_origin": "{{origin}}",
"source_event_rule": "1"
}
}

{
"$event_name": "PageView",
"$properties": {
"user_id": "<USER_ID>"
}
}

Value is an integer and represents a number of hours. Defaults to 24
initial_loading_running_too_long_hours_threshold
alert_type: SEGMENT_INITIAL_LOADING
Threshold in hours after which an alert will be triggered if the initial loading is taking more time than expected
Value is an integer and represents a number of hours. Defaults to 24
initial_loading_records_error_threshold
alert_type: SEGMENT_INITIAL_LOADING
An alert will be triggered if the percentage of errors is above the one defined in this config.
Value is an integer. Defaults to 10
volume_drops_segment_labels_mode:
alert_type: SEGMENT_VOLUME_DROP
Defines whether segment labels in volume_drops_segment_labels_ids config are whitelisted or blacklisted.
To edit in the UI, go to alerts on the segment list. For more information, see Using the Segments page.
blacklist | whitelist. Volume drops apply to all segments if not defined
volume_drops_segment_labels_ids:
alert_type: SEGMENT_VOLUME_DROP
List of segment labels that will or won't receive volume drop alerts depending on the volume_drops_segment_labels_mode configuration
To edit in the UI, go to alerts on the segment list. For more information, see Using the Segments page.
Sample value: 1,2,3
Volume drops apply to all segments if not defined
volume_drops_threshold
alert_type: SEGMENT_VOLUME_DROP
Percentage of volume drop in a segment that triggers the alert.
Value is an integer such as 15. Defaults to 10
organisation_id*
Int
ID of the organisation
configKey*
string
Configuration key. Use the list of allowed configurations. Other keys won't have an impact.
organisationId*
string
ID of the organisation for which to create the configuration
alertType*
string
AlertType. Use the list of allowed configurations.
config_value*
string
The value of the configuration. Use the list of allowed configurations for the correct value format, depending on your configuration.
configKey*
string
Configuration key. Use the list of allowed configurations. Other keys won't have an impact.
organisationId*
string
ID of the organisation for which to create the configuration
alertType*
string
AlertType. Use the list of allowed configurations.
config_value*
string
The value of the configuration. Use the list of allowed configurations for the correct value format, depending on your configuration.
archived
Boolean
true to archive a configuration. false to reactivate it. For more information, see Archived configurations.
configKey*
string
Configuration key. Use the list of allowed configurations. Other keys won't have an impact.
organisationId*
string
ID of the organisation for which to create the configuration
alertType*
string
AlertType. Use the list of allowed configurations.
initial_loading_not_starting_hours_threshold
alert_type: SEGMENT_INITIAL_LOADING
Threshold in hours after which an alert will be triggered if the initial loading hasn't started
GET https://api.mediarithmics.com/v1/dashboards?organisation_id=:organisation_id
Returns a paginated resource list wrapper of DashboardRegistration objects.
POST https://api.mediarithmics.com/v1/dashboards
Receives a DashboardRegistration object as body.
PUT https://api.mediarithmics.com/v1/dashboards/:id?organisation_id=:organisation_id
Receives a DashboardRegistration object as body.
GET https://api.mediarithmics.com/v1/dashboards/:id?organisation_id=:organisation_id
Returns a DashboardRegistration object.
DELETE https://api.mediarithmics.com/v1/dashboards/:id?organisation_id=:organisation_id
Dashboard content endpoints let you manage the sections, cards and charts in a specific dashboard.
GET https://api.mediarithmics.com/v1/dashboards/:id/content?organisation_id=:organisation_id
PUT https://api.mediarithmics.com/v1/dashboards/:id/content?organisation_id=:organisation_id
This object represents a dashboard and where it should be displayed.
title string
The title of the dashboard, as displayed in the UI
scopes[] enum(home,segments,builders)
The list of scopes where the dashboard is visible. Mandatory, but can be an empty array.
segment_ids[] string
When scopes property contains segments, you can specify a list of segment IDs to only display the dashboard on those specific segments. Mandatory, but can be an empty array.
builder_ids[] string
When scopes property contains builders, you can specify a list of standard segment builder IDs to only display the dashboard on those specific builders. Mandatory, but can be an empty array.
archived boolean
Set to true to hide a dashboard from the UI without deleting it.
dashboard_content_id string
Identifier of the DashboardContentWrapper that's been associated with the dashboard registration.
community_id string
ID of the community on which the dashboard is visible.
organisation_id string
ID of the organisation on which the dashboard is visible. Must be on the community_id community.
created_ts timestamp
When the dashboard registration was created. ReadOnly.
created_by user ID
By who the dashboard registration was created. ReadOnly.
last_modified_ts timestamp
When the dashboard registration was last modified. Not updated when dashboard content is updated as DashboardContent object has its own created_ts field. ReadOnly.
last_modified_by user ID
By who the dashboard was last modified. Not updated when dashboard content is updated as DashboardContent object has its own created_by field. ReadOnly.
This object is returned when doing a GET request to get the content of a dashboard. It returns useful metadata as well as the dashboard's content.
id string
Content's identifier, used in DashboardRegistration to associate a dashboard and its content.
content object(DashboardContent)
Dashboard's JSON representation.
organisation_id string
ID of the organisation on which the dashboard is visible.
created_ts timestamp
When the dashboard content was created. ReadOnly.
created_by user ID
By who the dashboard content was created. ReadOnly.
This object represents the sections, cards and charts displayed in a dashboard.
available_filters[] object(Filter)
The list of filters activated for the dashboard.
sections[] object(Section)
The list of sections inside a dashboard.
A filter is displayed at the top of a dashboard. The user can select a value and all the queries in the dashboard adapt to the selected value.
A query fragment tells the dashboard how to adapt each query to the value(s) selected by the user.
A section gives you a title and a new grid to display cards.
title string
The title of the section, displayed in the UI.
cards[] object(Card)
The list of cards to display in the section.
A white zone in the section, that displays and organizes charts.
x,y,h,w int
The position of the card in the section's grid. See Sections, cards and charts for a guide on how to use it.
layout enum(vertical, horizontal)
Whether charts in the card will stack horizontally or vertically.
charts[] object(Chart)
The list of charts to display in the card.
A chart displayed in a card.
title string
The chart's title, displayed in the UI.
type enum(Pie, Bars, Radar, Metric)
The type of chart to display
colors[] string
Optional. You can use this property to override default chart colors, which are defined by the theme of the site. Define as many color codes (in #FFFFFF format) as needed by the chart.
dataset object(Dataset)
How to get data for the chart
options object(PieOptions, BarsOptions, RadarOptions, MetricOptions, AreaOptions)
Optional. Options specific to the type of chart that has been selected.
{
"title": String,
"scopes": [Scope],
"segment_ids": [String],
"builder_ids": [String],
"archived": Boolean,
"dashboard_content_id": String,
"organisation_id": String,
"community_id": String,
// Readonly fields
"created_ts": Timestamp,
"created_by": String,
"last_modified_ts": Timestamp,
"last_modified_by": String
}

{
"id": String,
"content": DashboardContent,
"organisation_id": String,
"created_ts": Timestamp,
"created_by": String
}

{
"available_filters": [Filter],
"sections": [Section]
}

{
// Using technical names of compartments, segments or channels
// will result in IDs being automatically replaced by names in the UI
"technical_name": String,
"title": String,
"values_retrieve_method": 'Query', // Only available value at the moment
// OTQL query to retrieve list of selectable values
// Use a query string, not the ID of a query
"values_query": String,
// How to adapt queries in the dashboard to the selected value(s)
"query_fragments": [QueryFragment],
"multi_select": Boolean, // If the user can select multiple values
}

{
// Any available data source such as 'activities_analytics' or 'OTQL'
"type": String,
// Only for OTQL type, chooses which queries should be transformed
// Select 'ActivityEvent' to transform queries FROM ActivityEvent
"starting_object_type": String,
// The query part to add
"fragment": String,
}

{
"title": String,
"cards": [Card],
}

{
"x": Int,
"y": Int,
"h": Int,
"w": Int,
"layout": "vertical" || "horizontal",
"charts": [Chart],
}

{
"title": String,
"type": "Pie" || "Bars" || "Radar" || "Metric" || "Area",
"dataset": Dataset,
"options": PieOptions || BarsOptions || RadarOptions || MetricOptions || AreaOptions
}

join

This joins key / value datasets into a single key / values dataset. It is important to set the name of each series with the series_title property. It accepts from one to any number of sources.
This calculates the representation of each value in the complete dataset. Only one source is accepted.
You usually want to use the format: percentage option of the associated data visualisation to automatically change the labels, tooltips and formats to display percentage% (count).
This calculates the ratio between two numbers (source 1 / source 2 * 100). It only accepts two sources that should each return numbers.
This calculates the representation of values from a key / value dataset in comparison to another key / value dataset.
For example, if 10% of the users in a segment viewed content associated with tag 1, while 5% of the users in the whole datamart viewed content associated with this same tag, the index of tag 1 in segment in comparison to the whole datamart is 10 / 5 * 100 = 200.
This is typically used to see which values are more/less represented in the first data source compared to the second one. An index above 100 means the value is more represented in the first data source than in the second, a value under 100 means the value is less represented in the first data source than in the second.
This is usually represented in a Bars chart with a plot_line_value of 100 and an index format:
For each value in the first dataset, it automatically calculates its percentage representation in the first and the second source, then applies the formula source value (in percentages) / comparison value (in percentages) * 100.
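The index formula can be reproduced in a few lines (an illustrative sketch, using the tag example above, not the platform implementation):

```python
def to_percentages(counts):
    """Share of each key in the dataset, in percent."""
    total = sum(counts.values())
    return {k: v / total * 100 for k, v in counts.items()}

def index_dataset(source_counts, comparison_counts):
    """For each key: share in source / share in comparison * 100."""
    src = to_percentages(source_counts)
    ref = to_percentages(comparison_counts)
    return {k: src[k] / ref[k] * 100 for k in src if k in ref}

# Mirrors the example above: tag_1 represents 10% of the segment but
# only 5% of the datamart, so its index is 200.
segment = {"tag_1": 10, "tag_2": 90}
datamart = {"tag_1": 5, "tag_2": 95}
```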
This formats timestamps and date fields to the specified date format. Available date formats are Moment.js date formats.
Use this transformation to display friendly dates to the user or to allow joining multiple data sources into the same dataset by returning dates in the same format.
Dates must be in the 2021-11-05T00:00:00.000Z format or a timestamp to be formatted.
Typical compatible queries are:
OTQL queries returning timestamps or @date_histogram.
Activities analytics queries returning the date_time dimension
Collection volumes queries returning the date_time dimension.
This transforms a key / value or key / values dataset into a single number to be displayed in Metric charts.
avg calculates the average of values
count calculates the number of values
first returns the first value
last returns the last value
max returns the maximum value
min returns the minimum value
sum returns the sum of all values
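The reduce operations listed above can be sketched as follows (illustrative only, not the platform implementation), collapsing a key / value dataset into the single number a Metric chart displays:

```python
# One reducer per reduce_options.type value listed above
REDUCERS = {
    "avg": lambda values: sum(values) / len(values),
    "count": len,
    "first": lambda values: values[0],
    "last": lambda values: values[-1],
    "max": max,
    "min": min,
    "sum": sum,
}

def reduce_dataset(dataset, reduce_type):
    """Apply one of the reduce operations to a key / value dataset."""
    values = list(dataset.values())
    return REDUCERS[reduce_type](values)

points = {"2021-10-10": 3881, "2021-10-11": 532}
```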
This transforms identifiers such as channel IDs, compartment IDs and segment IDs into the corresponding channel names, compartment names and segment names.





Users can have multiple UserEmail identifiers registered on the platform. They have the following properties:
hash
ID
Hashed user email. Always use the same hashing function (e.g. SHA-256) in your datamart and all its integrations to allow proper matching between data flows.
String
Optional. User's email, not hashed.
The hash property is mandatory and is the property used to identify a user (provided in $hash property in User activities request).
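A sketch of consistent hashing, assuming SHA-256 as in the example above; the normalization step (trim + lowercase) is an assumption you must align across all integrations, or identical emails will produce different hashes and matching will fail:

```python
import hashlib

def email_hash(email):
    """Normalize then hash an email with SHA-256 (hash function is a choice:
    whichever you pick, use the same one everywhere)."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```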
"dataset": {
"type": "transformation-1",
// Transformations always take a list of sources
// Even if only one is used
"sources": [
{
// Transformations can be chained
"type": "transformation-2",
"sources": [
{
// At the end we have one or more
// query data sources
"type": "OTQL"
...
}
]
}
]
}

"dataset": {
"type": "to-list",
"sources": [
{
"type": "OTQL",
"query_id": "666", // SELECT @count{} FROM UserPoint WHERE...
"series_title": "Unknown"
},
{
"type": "OTQL",
"query_id": "777", // SELECT @count{} FROM UserPoint WHERE...
"series_title": "With online account"
},
{
"type": "OTQL",
"query_id": "888", // SELECT @count{} FROM UserPoint WHERE...
"series_title": "With fidelity program"
}
]
}

"dataset": {
"type": "join",
"sources": [
{
"type": "OTQL",
"query_id": 666, // Select {interests @map} FROM UserPoint WHERE...
"series_title": "Unknown"
},
{
"type": "OTQL",
"query_id": 777, // Select {interests @map} FROM UserPoint WHERE...
"series_title": "With fidelity program"
}
]
}

"type": "Radar",
"dataset": {
"type": "to-percentages",
"sources": [{
"type": "OTQL",
"query_id": 666 // Select {interests @map} FROM UserPoint WHERE...
}]
},
"options": {
"format": "percentage"
}

"dataset": {
"type": "ratio",
"sources": [
{
"type": "OTQL",
"query_id": "666" // SELECT @count{} FROM UserPoint WHERE...
}, // Returns 100k
{
"type": "OTQL",
"query_id": "777" // SELECT @count{} FROM UserPoint
} // Returns 200k
]
}
// Result is 100k/200k*100 = 50

"type": "Bars",
"dataset": {
"type": "index",
// Use limits like "limit:20" wisely
// as the index will be calculated for each returned value, then ordered.
// If you only do a @map with a limit of 10 elements returned and you are asking
// to show the top 10 indexes, you will have the top 10 indexes from the top 10 values
// A value could be in position 20 by numbers, but in position 2 by index
"sources": [
{
// This query adapts to the current segment
"type": "OTQL",
"query_id": 666, // SELECT {interests @map} FROM UserPoint
"series_title": "Segment"
},
{
// Same query without adapting to the current segment
// and always returns data for the whole datamart
"type": "OTQL",
"query_id": 666,
"series_title": "Datamart"
"adapt_to_scope": false
}
],
"options": {
"limit": 10, // Number of elements to display. 10 by default
"order": "Ascending" | "Descending", // Descending by default
// This means that indexes will only be calculated for values
// representing 0.65% of values in source 1.
"minimum_percentage": 0.65 // 0 by default. Values between 0 and 100
}
},
"options": {
"type": "bar",
"plot_line_value": 100,
"format": "index" // So that the index is correctly displayed in tooltips
}

"dataset": {
"type": "format-dates",
"sources": [ // Only one source allowed
{
"type": "OTQL",
"query_id": "666" // SELECT {date @date_histogram} FROM UserEvent WHERE...
},
],
"date_options": {
"format": "YYYY-MM-DD"
}
}

"type": "Metric",
"dataset": {
"type": "reduce",
"sources": [{
"type": "OTQL",
"query_id": 666 // Select {interests @map} FROM UserPoint WHERE...
}],
"reduce_options": {
// avg || count || first || last || max || min || sum
"type": "count"
}
},

// This returns channel IDs associated with the value
"dataset":
{
"type": "OTQL",
"query_id": "666" // SELECT {channel_id @map} FROM UserEvent WHERE...
}
}
// This returns channel names associated with the value
"dataset": {
"type": "get-decorators",
"sources": [ // Only one source allowed
{
"type": "OTQL",
"query_id": "666" // SELECT {channel_id @map} FROM UserEvent WHERE...
},
],
"decorators_options": {
"model_type": "CHANNELS", // CHANNELS || COMPARTMENTS || SEGMENTS
// Optional if the data source returns sub buckets,
// to define the transformation for those sub buckets
"buckets": {
// Recursive
"buckets": {
"model_type": "SEGMENTS"
}
}
}
}

creation_ts
Timestamp
When the email was registered on the platform
expiration_ts
Timestamp (optional)
The email's expiration timestamp, if any


An Activity Analyzer is a Plugin that allows you to modify an activity on the fly before storing it. It runs as a part of the processing pipeline, for each activity of the channel it is associated with.
This feature is useful for:
Reformatting data (adapting the ingestion data model to the datamart schema)
Enriching events (for instance by fetching product information based on a product id)
Improving data quality (filtering unwanted events, matching input values to standard catalogs, parsing URLs into categories, etc.)
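To illustrate the kind of transformation an activity analyzer performs, here is a sketch in Python (the real plugin is written in Node.js with the mediarithmics SDK, as described below; the event field names mirror the event examples elsewhere in this documentation, and the "heartbeat" / url enrichment logic is purely hypothetical):

```python
def analyze_activity(activity):
    """Sketch of an analyzer: filter unwanted events and enrich the rest."""
    analyzed = dict(activity)
    events = []
    for event in activity.get("$events", []):
        # Filtering unwanted events (hypothetical event name)
        if event.get("$event_name") == "heartbeat":
            continue
        # Enriching: derive a category from the URL's last path segment
        props = dict(event.get("$properties", {}))
        if "url" in props:
            props["page_category"] = props["url"].rstrip("/").split("/")[-1]
        events.append({**event, "$properties": props})
    analyzed["$events"] = events
    return analyzed
```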
If you don't know what a plugin is, see the plugin documentation first.
Activity analyzers have only one predefined endpoint to implement
POST myworker/v1/activity_analysis
This entry point is called any time an activity is processed by the platform. The activity analyzer receives an activity and responds to the request by returning a new activity. It cannot modify the identifiers that are passed in the incoming activities.
You should use the activity_analyzer_id parameter to retrieve the instance properties.
The code of the activity analyzer can call the following API endpoints to retrieve its configuration.
GET https://api.mediarithmics.com/v1/activity_analyzers/:id
Use the activity_analyzer_id from the incoming request to retrieve the activity analyzer instance that has been called.
GET https://api.mediarithmics.com/v1/activity_analyzers/:id/properties
Get the properties associated with the activity analyzer instance
See the plugins documentation for more details.
An activity analyzer has the ACTIVITY_ANALYZER plugin type. Its group id should be {domain.organisation.activity-analyzer} (for example com.mediarithmics.activity-analyzer). Its artifact id should be the name of the activity analyzer, e.g. update-product-infos.
Use our SDK to create your activity analyzer in Node.js: the required routes are already defined and you only have to override specific functions.
We can provide you with a hello world project using our SDK. Please contact your Account manager in order to have access to it.
The project structure and files work as described in the plugins documentation.
Your plugin should extend the ActivityAnalyzerPlugin class and implement the instanceContextBuilder and onActivityAnalysis functions from the plugins SDK.
The onActivityAnalysis function is called every time an activity runs through the activity analyzer. It is responsible for the activity transformation.
The instance context built in instanceContextBuilder is cached to improve performance. It should retrieve and store the plugin properties and configuration files used by the code.
Don't forget to catch your errors. You should log / respond with the appropriate message to facilitate debugging.
Your instance context interface should extend ActivityAnalyzerBaseInstanceContext
Like other plugins, activity analyzers need to be instantiated. To create an instance, connect to Navigator and head to Settings > Datamart > Activity Analyzers. You will get a list of existing instances and a button to create new ones.
Click on New Activity Analyzer.
Select the activity analyzer you want to instantiate.
Enter a name to easily recognize the instance, select an Error recovery strategy and fill Properties if you need to overwrite some of them. Save your modifications to create a new instance of your activity analyzer.
The error recovery strategy determines how the activity is processed when the plugin fails.
Once your activity analyzer instance is created, you can link it to one or multiple channels. To do so, connect to Navigator and head to Settings > Datamart > Channels and select the channel where you want your activity analyzer to be executed.
Go to the Activity Analyzers category.
Click on Add an Activity Analyzer and select your instance.
Several activity analyzers can be used on the same channel. In this case, they will process the same activity in a sequence of your choice: the second analyzer will process the activity as rendered by the first one and so on...
Make sure to define the right order and error recovery strategies.
As activity analyzers are plugins, you can monitor them.
Go to Navigator > Monitoring and search for the UserPoint associated with the activity.
Click on the View JSON button on any activity on a timeline.
You can check if all the properties are OK and if your activity analyzers processed the activity as expected.
In case of a problem, you can look at two properties added to the activity. processed_by will tell you if the activity has been processed by your activity analyzer, and $error_analyzer_id will give you an error ID if the activity analyzer returned an error response.
The dimensions and metrics allowed in the activities analytics API.
The following dimensions can be requested in reports
You can query dimensions specific to the events that happened during each activity.
Information about the device used during each activity.
The following dimensions are populated by the :
origin_campaign_name / origin_campaign_technical_name / origin_campaign_id
origin_sub_campaign_technical_name / origin_sub_campaign_id
The following dimensions are populated by the :
location_source
location_country
location_region
The following metrics can be displayed in reports.
Audience segment metrics are a way to offer custom metrics on segments to users. They are visible on the segment listing and segment details pages.
The value of each metric is calculated regularly for each segment and saved to offer a historic view of its values.
The Data Studio > Funnel page in the navigator uses an API that you can leverage to analyze funnel conversions in your own tools. For more information on the feature, see .
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/user_activities_analytics
Use this call to get suggestions or autocomplete values for a dimension

String
Usually the same as the ID in the source system, such as your CRM.
compartment_id
String
Compartment associated with the user account
creation_ts
Timestamp
Account's creation timestamp
expiration_ts
Timestamp (optional)
Account's eventual expiration timestamp
Always use the user_account_id in combination with a compartment_id to identify a user by its account. If you don't specify a compartment_id, the default compartment will be used.
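The fallback to the default compartment can be sketched like this. The helper and the DEFAULT_COMPARTMENT_ID value are illustrative assumptions, not part of the platform API:

```typescript
// Illustrative sketch: always pair a user_account_id with a compartment_id.
// DEFAULT_COMPARTMENT_ID is a hypothetical placeholder for your datamart's default.
const DEFAULT_COMPARTMENT_ID = "1";

interface UserAccountIdentifier {
  user_account_id: string;
  compartment_id: string;
}

function buildAccountIdentifier(
  userAccountId: string,
  compartmentId?: string
): UserAccountIdentifier {
  return {
    user_account_id: userAccountId,
    // If no compartment is specified, the default compartment is used.
    compartment_id: compartmentId ?? DEFAULT_COMPARTMENT_ID,
  };
}

const withDefault = buildAccountIdentifier("crm-42");
const withExplicit = buildAccountIdentifier("crm-42", "20");
```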
A Compartment is a group that organizes specific user account identifiers. Each compartment has a unique ID that is used to identify the corresponding user account identifier. For instance, “UTIQ Martechpass” might have the compartment id 20.
There are two categories of Compartments:
First-party user account compartments, which contain user account identifiers created within your organisation, such as CRM identifiers.
Shared user account compartments which include user account identifiers shared from another organisation.
For instance, if you wish to add UTIQ to your datamart, first subscribe to it within your community organisation. It will then be automatically shared with your other organisations, but you’ll still need to activate it within your datamart.
user_account_id

activity_analyzer_id
string
The ID of the activity analyzer instance that should be used to process the activity. Used to retrieve the instance properties.
datamart_id
string
The ID of the datamart
activity
object
The UserActivity Object to analyze
id
string
ID of the activity analyzer, typically the activity_analyzer_id from the incoming request.
id
string
ID of the activity analyzer, typically the activity_analyzer_id from the incoming request
STORE_WITH_ERROR_ID
The activity will be sent without any modification to the next activity analyzer.
STORE_WITH_ERROR_ID_AND_SKIP_UPCOMING_ANALYZERS
The activity will be saved without the modifications of the failing activity analyzer. It won't be sent to the next plugin.
DROP
The activity won’t be saved
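The three strategies can be summarized as a small decision sketch. This is an illustrative summary of the table above, not platform code; the Outcome shape is an assumption:

```typescript
// Illustrative sketch of the three error recovery strategies.
type Strategy =
  | "STORE_WITH_ERROR_ID"
  | "STORE_WITH_ERROR_ID_AND_SKIP_UPCOMING_ANALYZERS"
  | "DROP";

interface Outcome {
  stored: boolean;        // is the activity saved?
  continueChain: boolean; // do upcoming analyzers still run?
}

function onAnalyzerFailure(strategy: Strategy): Outcome {
  switch (strategy) {
    case "STORE_WITH_ERROR_ID":
      // Activity goes unmodified to the next analyzer and is stored.
      return { stored: true, continueChain: true };
    case "STORE_WITH_ERROR_ID_AND_SKIP_UPCOMING_ANALYZERS":
      // Activity is stored as-is, but the rest of the chain is skipped.
      return { stored: true, continueChain: false };
    case "DROP":
      // Activity is discarded entirely.
      return { stored: false, continueChain: false };
  }
}
```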




{
"status": "ok",
"data": {
// New UserActivity object
}
}
{
"status": "error",
"error": "Your error message"
}
{
"status": "ok",
"data": {
"id": "1000",
"name": "my analyzer",
"organisation_id": "1000",
"visit_analyzer_plugin_id": "1001",
"group_id": "com.mediarithmics.visit-analyzer",
"artifact_id": "default"
}
}
{
"status": "ok",
"data": [
{
"technical_name": "debug",
"value": { "value": false },
"property_type": "BOOLEAN",
"origin": "PLUGIN",
"writable": true,
"deletable": false
},
{
"technical_name": "topic_properties",
"value": { "value": "vertical" },
"property_type": "STRING",
"origin": "PLUGIN",
"writable": true,
"deletable": false
}
],
"count": 2
}
import { core } from "@mediarithmics/plugins-nodejs-sdk";
import { CustomInstanceContext } from "./interfaces/InstanceContextInterface";
export class ActivityAnalyzerPlugin extends core.ActivityAnalyzerPlugin {
// Called to update a UserActivity
// Uses the instance context built with instanceContextBuilder
// to adapt to the properties and technical files
protected async onActivityAnalysis(
request: core.ActivityAnalyzerRequest,
instanceContext: CustomInstanceContext)
: Promise<core.ActivityAnalyzerPluginResponse> {
try{
const updatedActivity = request.activity;
// Your code to modify the activity.
// Example: adding product info to each event
// If the technical configuration allows it
if (instanceContext.technicalConfig.updateActivities){
updatedActivity.$events.forEach(event => {
if (event.$properties && event.$properties.$items && event.$properties.$items.length > 0) {
event.$properties.$items.forEach((item: any) => {
// Products is assumed to be defined elsewhere (e.g. a loaded catalog)
const product = Products.find(p => p.$id == item.$id);
// Guard against items missing from the catalog
if (product) {
item.$name = product.$name;
item.categories = product.categories;
item.inStock = product.inStock;
}
});
}
});
}
const response: core.ActivityAnalyzerPluginResponse = {
status: "ok",
data: updatedActivity
};
return Promise.resolve(response);
}
catch (err) {
const errorResponse: core.ActivityAnalyzerPluginResponse = {
status: 'error',
data: request.activity
};
this.logger.error(`TRANSFORMATION ERROR while processing activity: ${JSON.stringify(request.activity)}`);
return Promise.resolve(errorResponse)
}
}
// Build the instance context
// by fetching properties and configuration files
protected async instanceContextBuilder(activityAnalyzerId: string)
: Promise<CustomInstanceContext> {
const baseInstanceContext = await super.instanceContextBuilder(activityAnalyzerId);
try {
// Retrieve a technical configuration file
const validator = new Jsonschema.Validator();
const technicalConfig: ITechnicalConfig = await this.validateJSONSchema(TECH_CONFIG_FILE, validator, technicalConfigurationSchema, activityAnalyzerId);
// Retrieve a property from the plugin instance
const eventExclusionList = baseInstanceContext.properties.findStringProperty("events_exclusion_list");
// Return the completed instance context
const result: CustomInstanceContext = {
...baseInstanceContext,
event_exclusion_list: eventExclusionList,
technicalConfig: technicalConfig
};
this.logger.debug(`Loaded InstanceContext with: ${JSON.stringify(result,null,4)}`);
return Promise.resolve(result);
} catch (err) {
this.logger.error(`Something bad happened during the build of the Instance Context ${err}`);
return Promise.reject(`Something bad happened during the build of the Instance Context ${err}`);
}
};
}
import { core } from "@mediarithmics/plugins-nodejs-sdk";
export interface CustomInstanceContext
extends core.ActivityAnalyzerBaseInstanceContext
{
event_exclusion_list: string[];
technicalConfig: ITechnicalConfig;
}
{
"processed_by": "<YOUR_ANALYZER_ID>",
"$error_analyzer_id": "<ERROR_ID>"
}
Duration of the session in seconds
segment_id
Segment ID
IDs of the segments in which the user was when doing the activity. Note : querying this dimension can throw an error if date ranges of the query are too big.
date_yyyymmdd
Date
Date in the YYYYMMDD format
date_yyyymmddhh
Date + Hour
Date in the YYYYMMDDHH format
date_yyyy_mm_dd
Date
Date in the YYYY_MM_DD format
date_yyyy_mm_dd_hh_mm
Date + Hour + minutes
Date in the YYYY_MM_DD_HH_mm format
has_conversion
Has conversion
Boolean. If a $conversion event happened during the activity. For more information, see .
goal_id
Goal ID
IDs of the goals triggered during the activity
has_bounced
Has bounced
Boolean. If the user only visited one page during the activity.
transaction_amount
Transaction amount
Amount spent by the user during the activity.
number_of_user_events
Number of events
Total number of events the user triggered during the activity.
All events (custom and predefined) are counted in this total, except : $conversion $ad_click $ad_view $email_view $email_click $email_sent $email_delivered $email_soft_bounce $email_hard_bounce $email_unsubscribe
number_of_ad_views
Number of $ad_view events
Total number of events named $ad_view during the activity.
number_of_ad_clicks
Number of $ad_click events
Total number of events named $ad_click during the activity.
number_of_email_views
Number of $email_view events
Total number of events named $email_view during the activity.
number_of_email_clicks
Number of $email_click events
Total number of events named $email_click during the activity.
number_of_confirmed_transactions
Number of $transaction_confirmed events
Total number of events named $transaction_confirmed during the activity
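The number_of_user_events counting rule described above can be sketched as a filter over event names. This is illustrative, not the engine's implementation:

```typescript
// Illustrative sketch of the number_of_user_events counting rule.
const EXCLUDED_EVENTS = new Set([
  "$conversion", "$ad_click", "$ad_view", "$email_view", "$email_click",
  "$email_sent", "$email_delivered", "$email_soft_bounce",
  "$email_hard_bounce", "$email_unsubscribe",
]);

function countUserEvents(eventNames: string[]): number {
  // Custom and predefined events both count, except the excluded ones.
  return eventNames.filter((name) => !EXCLUDED_EVENTS.has(name)).length;
}

const n = countUserEvents(["$page_view", "my_custom_event", "$ad_view", "$conversion"]);
// n === 2: the $ad_view and $conversion events are excluded from the total
```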
Device model. For example 10 plus 4K Ultraslim
device_agent_type
Agent type : MOBILE_APP WEB_BROWSER
origin_message_id / origin_message_technical_name
origin_keywords
origin_creative_name / origin_creative_technical_name / origin_creative_id
origin_engagement_content_id
origin_social_network
origin_referral_path
location_iso_region
location_city
location_iso_city
location_latitude
location_longitude
Average number of sessions per UserPoint
Calculated from the number of sessions and the number of UserPoint.
Note : this metric cannot be used with other metrics
avg_revenue_per_user_point
Average revenue per UserPoint
Revenue divided by the number of distinct UserPoint
Note : this metric cannot be used with other metrics
avg_number_of_transactions_per_user_point
Average number of transactions per UserPoint
Number of transactions divided by the number of distinct UserPoint
Note : this metric cannot be used with other metrics
avg_session_duration
Average session duration
Calculated by doing an average of the session_duration dimension.
revenue
Revenue
Sum of transaction_amount dimension
avg_transaction_amount
Average transaction amount per activity
Sum of transaction_amount dimension divided by the number of transactions.
Note : this metric cannot be used with other metrics
avg_number_of_user_events
Average number of user events per activity
Sum of number_of_user_events dimension divided by the number of activities.
number_of_user_events
Total number of user events
Sum of number_of_user_events dimension
type
Activity type
See User activity object for a list of all activity types.
date_time
Date + time
The combined value of date and time of the activity in timestamp format
channel_id
Channel ID
The ID of the channel on which the activity was registered
session_duration
event_type
Event type. Only $transaction_confirmed $item_view $list_item_view $basket_view events are stored at the moment
brand
Brands of items related to the events
category1
Category 1 of items related to the events
category2
Category 2 of items related to the events
category3
Category 3 of items related to the events
category4
Category 4 of items related to the events
device_form_factor
Type of device : PERSONAL_COMPUTER SMART_TV GAME_CONSOLE SMARTPHONE TABLET WEARABLE_COMPUTER OTHER
device_os_family
OS of the device : WINDOWS MAC_OS LINUX ANDROID IOS OTHER
device_os_versions
Version of the OS, for example Windows 8 ios10
device_browser_family
Browser used during the activity : CHROME IE FIREFOX OPERA STOCK_ANDROID BOT EMAIL_CLIENT MICROSOFT_EDGE OTHER
device_browser_version
Browser's version. For example 10.3.4, 2.2
device_brand
Device brand. For example Acer Free
users
Active users
The number of distinct active users
sessions
Activities / Sessions
The number of activities / sessions
conversion_rate
Conversion rate
Calculated with (Number of activities with conversions / Total number of activities)
Session duration
device_model
avg_number_of_sessions_per_user_point
The total number of UserPoint is always calculated and displayed, even if there are no custom metrics.
Audience segment metrics are configured per datamart and built on top of OTQL queries.
Each metric has:
An associated OTQL Query
A technical name, possible values being emails, user_accounts, desktop_cookie_ids, mobile_cookie_ids or mobile_ad_ids. You can't use a custom value, and each of these values can only be used once per datamart.
A display name shown in the UI
A status: DRAFT, LIVE or ARCHIVED.
An icon, from a set of possible icons.
A metric goes from DRAFT status to LIVE and from LIVE status to ARCHIVED. You cannot republish an ARCHIVED metric. You can only remove it.
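The lifecycle can be sketched as a small transition check. This is an illustrative model of the rule above, not platform code:

```typescript
// Illustrative sketch of the audience segment metric lifecycle:
// DRAFT -> LIVE -> ARCHIVED, with no way back from ARCHIVED.
type MetricStatus = "DRAFT" | "LIVE" | "ARCHIVED";

const ALLOWED: Record<MetricStatus, MetricStatus[]> = {
  DRAFT: ["LIVE"],    // publishing a draft
  LIVE: ["ARCHIVED"], // archiving a live metric
  ARCHIVED: [],       // terminal: an archived metric can only be removed
};

function canTransition(from: MetricStatus, to: MetricStatus): boolean {
  return ALLOWED[from].includes(to);
}
```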
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/audience_segment_metrics
datamartId
string
The ID of the datamart
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/audience_segment_metrics
This creates a DRAFT metric.
datamartId
integer
The ID of the datamart
Body
object
The metric you wish to create
Here is a sample body payload:
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/audience_segment_metrics/:metricId/action
This action transitions the metric from DRAFT to LIVE status. Any existing LIVE metric with the same technical name is ARCHIVED.
datamartId
integer
The ID of the datamart
metricId
integer
The ID of the metric to publish
Body
object
{ "action": "PUBLISH" }
DELETE https://api.mediarithmics.com/v1/datamarts/:datamartId/audience_segment_metrics/:metricId
datamartId
integer
The ID of the datamart
metricId
integer
The ID of the metric to remove
Only five custom audience segment metrics are allowed per datamart, one per available technical name.
@cardinality aggregations are not supported in the queries.
Each metric is associated with an icon taken from the following catalogue.
display
users
email-inverted
phone
adGroups
ads
automation
bell
bolt
check-rounded-inverted
check-rounded
check
chevron-right
chevron
close-big
close-rounded
close
code
creative
data
delete
display
dots
download
email-inverted
extend
filters
full-users
file
gears
goals-rounded
goals
image
info
laptop
library
magnifier
menu-close
minus
optimization
options
partitions
pause
pen
phone
play
plus
query
question
refresh
settings
smartphone
status
tablet
user
users
user-query
user-pixel
user-list
video
warning
You can import UserActivity objects using our dedicated API endpoint.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/user_activities
The body of the request must be a UserActivity object.
datamartId
integer
The ID of the datamart in which the UserActivity should be imported
Content-Type
string
application/json
body
object
The UserActivity object to import
The body must be a valid UserActivity object.
Identification of the user or of the device is achieved through the $user_identifiers property. We encourage you to use as many identifiers as available in your environment at the time of the capture.
A UserProfile can be created / updated by registering a UserActivity containing a $set_user_profile_properties event. In that case you would need to use $user_account_id and $compartment_id inside $properties to identify the UserProfile to update:
A UserChoice can be created / updated by registering a UserActivity containing a $set_user_choice event.
This method is used when you want to achieve real-time tracking but can't use the mediarithmics JavaScript Tag. In mobile applications for example.
Please note that those events will go through the processing pipeline before being stored as a UserChoice. You must ensure no activity analyzer removes them during that process.
The attribute :userPointSelector is used to select the UserPoint on which to apply the query. You can provide the following values:
Another way to create / update a UserProfile is to use the user_profiles API endpoint. Prefer this method if you are able to integrate various API endpoints and if you don't need to track the UserProfile update as an event for further retrieval.
PUT https://api.mediarithmics.com/v1/datamarts/:datamartId/user_points/:userPointSelector/user_profiles/compartment_id=:compartmentId/user_account_id=:userAccount
The body of the request must be a UserProfile object.
userAccount
string
The user_account_id linked to the user_profile that should be imported
compartmentId
integer
The ID of the compartment in which the UserProfile should be imported
datamartId
integer
The ID of the datamart in which the UserProfile should be imported
userSelector
string
The identifier of the user for whom the UserProfile should be imported. See the options of the user selector.
update_strategy
Enum (Optional)
Values are PARTIAL_UPDATE, PARTIAL_DELETE, FORCE_REPLACE
Legacy parameters (use update_strategy instead)
force_replace
boolean (optional)
If true, then the UserProfile will be completely replaced by the object passed in the user_profile field.
If false, the object passed in the user_profile field will be merged with the existing UserProfile of the UserPoint.
merge_objects
boolean (optional)
Only considered if force_replace is false.
Manages the behavior when the new and existing objects share a property.
If false (default value), the new object overrides the existing one.
If true, the new object is deep-merged into the existing one (see ).
Content-Type
string
application/json
body
object
The UserProfile object to import
The body must be a valid UserProfile object.
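The difference between the two merge_objects behaviors can be sketched as follows. The helper below is illustrative, not the platform's code:

```typescript
// Illustrative sketch of the merge_objects flag for properties that exist
// both in the stored profile and in the incoming payload.
type Profile = Record<string, unknown>;

function mergeProfiles(existing: Profile, incoming: Profile, mergeObjects: boolean): Profile {
  const result: Profile = { ...existing };
  for (const [key, value] of Object.entries(incoming)) {
    const current = result[key];
    const bothObjects =
      typeof current === "object" && current !== null && !Array.isArray(current) &&
      typeof value === "object" && value !== null && !Array.isArray(value);
    if (mergeObjects && bothObjects) {
      // merge_objects = true: the two objects are deep-merged.
      result[key] = mergeProfiles(current as Profile, value as Profile, true);
    } else {
      // merge_objects = false (default): the new value overrides the old one.
      result[key] = value;
    }
  }
  return result;
}

const existing = { address: { city: "Paris", zip: "75001" } };
const incoming = { address: { zip: "75002" } };
const overridden = mergeProfiles(existing, incoming, false);
// overridden.address === { zip: "75002" }  (city is lost)
const merged = mergeProfiles(existing, incoming, true);
// merged.address === { city: "Paris", zip: "75002" }
```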
Beware of :
<COMPARTMENT_ID> & <USER_ACCOUNT_ID> which are used to select the UserPoint
:compartmentId & :userAccountId which are used to select the profile to update
Another way to create / update a UserChoice is to use the user_choices API endpoint. Prefer this method if you are able to integrate various API endpoints.
PUT https://api.mediarithmics.com/v1/datamarts/:datamartId/user_points/:userSelector/user_choices/processing_id=:processingId
datamartId
integer
The datamart ID
userSelector
integer
An identifier of the UserPoint to which the UserChoice should be added.
processingId
integer
The ID of the associated processing
Body
object
The payload
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/user_points/:userSelector/user_choices/processing_id=:processingId
datamartId
integer
The datamart ID
userSelector
integer
An identifier of the UserPoint for which UserChoices should be listed.
processingId
integer
Optional. The ID of the processing for which you want to list UserChoices.
Body
object
The payload
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/user_points/:userSelector/user_choices/processing_id=:processingId/change_log
datamartId
integer
The datamart ID
userSelector
integer
An identifier of the UserPoint for which the UserChoice history should be retrieved.
processingId
integer
The ID of the processing for which you want to get UserChoice history.
Body
object
The payload
datamartId
number
The ID of the datamart
metrics
array
Empty array
dimension_filter_clauses
object
Dimensions filters clause to apply.
dimensions
array
Names of the dimensions to retrieve. Usually only one dimension. Use multiple dimensions to get the possible values of a dimension when another dimension is set. For example, using the dimensions TYPE and EVENT_TYPE, you can ask for the possible values of EVENT_TYPE when TYPE is SITE_VISIT.
date_ranges
array
Periods to analyze. Each date range is an object with a start_date and an end_date.
{
"status": "ok",
"data": {
"report_view": {
"items_per_page": 100,
"total_items": 7,
"columns_headers"
Here is a sample body payload
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/user_activities_funnel
datamartId
number
The ID of the datamart
limit
number
When splitting a step on a specific field, sets the maximum number of values to retrieve so the query only computes what would be displayed. For example, set it to 5 if you only show the 5 best channel IDs in the UI when splitting by channel ID.
in
object
Period to query the funnel. It must be within the last 4 months.
for
object
List of steps in the funnel
Here is a sample payload:
You can build queries with the following dimensions:
Activity Date DATE_TIME
Activity Type TYPE
Ad Group Id ORIGIN_SUB_CAMPAIGN_ID
Brand BRAND
Channel Id CHANNEL_ID
Campaign Id ORIGIN_CAMPAIGN_ID
Category 1 CATEGORY1
Category 2 CATEGORY2
Category 3 CATEGORY3
Category 4 CATEGORY4
Creative Id ORIGIN_CREATIVE_ID
Device Brand DEVICE_BRAND
Device Browser DEVICE_BROWSER_FAMILY
Device Carrier DEVICE_CARRIER
Device Form Factor DEVICE_FORM_FACTOR
Device Model DEVICE_MODEL
Device OS DEVICE_OS_FAMILY
Has conversion HAS_CONVERSION
Has clicked HAS_CLICKED
Has bounced HAS_BOUNCED
Event type EVENT_TYPE
Is in segment SEGMENT_ID
Campaign Id CAMPAIGN_ID
Goal Id GOAL_ID
Product Id PRODUCT_ID
This object represents a group of filters to apply in a request.
It has:
An operator field to apply either an AND or an OR between the filters
A filters array for the list of filters to apply. For more information, see Dimensions filters.
This object represents a filter in a filters clause.
It has:
A dimension_name field to select the dimension it applies to. For more information, see Dimensions.
A not boolean field to negate the filter
An operator field to select one of the following match types:
EXACT matches when the dimension equals the first expression
LIKE matches when the dimension contains the first expression
IN_LIST matches when the dimension equals one of the expressions
A list of expressions representing the keywords to search for.
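The operators and the not flag can be sketched as a predicate over a dimension value. This is an illustrative evaluator of the semantics above, not the API's implementation:

```typescript
// Illustrative evaluator for a dimension filter.
interface DimensionFilter {
  dimension_name: string;
  not: boolean;
  operator: "EXACT" | "LIKE" | "IN_LIST";
  expressions: string[];
}

function evaluate(filter: DimensionFilter, value: string): boolean {
  switch (filter.operator) {
    case "EXACT":   // value must equal the first expression
      return value === filter.expressions[0];
    case "LIKE":    // value must contain the first expression
      return value.includes(filter.expressions[0]);
    case "IN_LIST": // value must be one of the expressions
      return filter.expressions.includes(value);
  }
}

function matches(filter: DimensionFilter, value: string): boolean {
  const hit = evaluate(filter, value);
  return filter.not ? !hit : hit; // the not flag negates the result
}

const likeSite: DimensionFilter =
  { dimension_name: "TYPE", not: false, operator: "LIKE", expressions: ["SITE"] };
// matches(likeSite, "SITE_VISIT") === true
```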
POST https://api.mediarithmics.com/v1/communities/:communityId/contextual/semantic_extraction
GET https://api.mediarithmics.com/v1/organisations/:organisationId/contextual/targeting_lists
GET https://api.mediarithmics.com/v1/organisations/:organisationId/contextual/targeting_lists/:targetingListId
GET https://api.mediarithmics.com/v1/organisations/:organisationId/contextual/queries/:queryId
GET https://api.mediarithmics.com/v1/communities/:communityId/contextual/categories/:categoryId
GET https://api.mediarithmics.com/v1/communities/:communityId/contextual/entities/:entityId
POST https://api.mediarithmics.com/v1/communities/:communityId/contextual/analytics/overall_metrics
GET https://api.mediarithmics.com/v1/audience_segments/:segmentId/contextual_targetings
POST https://api.mediarithmics.com/v1/audience_segments/:segmentId/contextual_targetings/:contextualTargetingId/actions
POST https://api.mediarithmics.com/v1/audience_segments/:segmentId/contextual_targetings/:contextualTargetingId/actions
{
"status": "ok",
"data": [
{
"id": "1555",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"query_id": "50659",
"technical_name": "user_accounts",
"display_name": "User Profiles",
"icon": "users",
"status": "LIVE",
"creation_date": 1613125152462,
"last_modified_date": 1613125152462,
"last_published_date": null
},
{
"id": "1558",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"query_id": "50659",
"technical_name": "mobile_cookie_ids",
"display_name": "User Profiles",
"icon": "users",
"status": "LIVE",
"creation_date": 1613125314757,
"last_modified_date": 1613125314757,
"last_published_date": null
},
{
"id": "1566",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"query_id": "50659",
"technical_name": "mobile_ad_ids",
"display_name": "User Profiles 7",
"icon": "users",
"status": "ARCHIVED",
"creation_date": 1613128930707,
"last_modified_date": 1613128930707,
"last_published_date": null
},
{
"id": "1569",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"query_id": "50659",
"technical_name": "desktop_cookie_ids",
"display_name": "User Profiles 4",
"icon": "gears",
"status": "LIVE",
"creation_date": 1613129103522,
"last_modified_date": 1613129103522,
"last_published_date": null
},
{
"id": "1570",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"query_id": "50659",
"technical_name": "desktop_cookie_ids",
"display_name": "User Profiles 4",
"icon": "gears",
"status": "DRAFT",
"creation_date": 1613129261878,
"last_modified_date": 1613129261878,
"last_published_date": null
}
],
"count": 5,
"total": 5,
"first_result": 0,
"max_result": 50,
"max_results": 50
}
{
"status": "ok",
"data": {
"id": "1571",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"query_id": "50659",
"technical_name": "emails",
"display_name": "User Profiles 7",
"icon": "users",
"status": "DRAFT",
"creation_date": 1613130322659,
"last_modified_date": 1613130322659,
"last_published_date": null
}
}
{
"status": "error",
"error": "Json object is not structured as expected",
"error_code": "BAD_REQUEST_FORMAT",
"error_id": "e18c26a1-7497-470d-8480-2bcb66fc8f16"
}
{
"datamart_id": "<<DATAMART ID>>",
"query_id": "<OTQL QUERY ID>",
"technical_name": "<TECHNICAL_NAME>",
"display_name": "User Profiles",
"icon": "users"
}
{
"$ts" : 3489009384393,
"$type" : "APP_VISIT",
"$session_status" : "IN_SESSION",
"$user_identifiers" : [{
"$type": "USER_ACCOUNT",
"$compartment_id" : "<COMPARTMENT_ID-1>",
"$user_account_id" : "<ACCOUNT_ID-1>"
},
{
"$type": "USER_ACCOUNT",
"$compartment_id" : "<COMPARTMENT_ID-1>",
"$user_account_id" : "<ACCOUNT_ID-2>"
},
{
"$type": "USER_AGENT",
"$user_agent_id" : "<USER_AGENT_ID>"
},
{
"$type": "USER_EMAIL",
"$hash" : "<USER_EMAIl_HASH>",
"$email" : "<USER_EMAIl>"
}],
"$app_id" : "1023",
"$events" : [
{
"$ts" : 3489009384393,
"$event_name" : "$app_open",
"$properties" : {}
}]
}
{
"$ts" : 3489009384393,
"$type" : "APP_VISIT",
"$session_status" : "IN_SESSION",
"$user_agent_id" : "<USER_AGENT_ID>",
"$compartment_id" : "<COMPARTMENT_ID>",
"$user_account_id" : "<ACCOUNT_ID>",
"$app_id" : "1023",
"$events" : [
{
"$ts" : 3489009384393,
"$event_name" : "$app_open",
"$properties" : {}
}]
}
{
"$ts" : 3489009384393,
"$type" : "APP_VISIT",
"$session_status" : "IN_SESSION",
"$user_identifiers" : [{
"$type": "USER_ACCOUNT",
"$compartment_id" : "<COMPARTMENT_ID>",
"$user_account_id" : "<ACCOUNT_ID>"
}],
"$app_id" : "1023",
"$events" :[{
"$ts" : 1679588413000,
"$event_name" : "$set_user_profile_properties",
"$properties" : {
"$compartment_id" : "<COMPARTMENT_ID>",
"$user_account_id" : "<ACCOUNT_ID>",
"gender" : "Male",
"zipcode" : "78000"
}
}]
}
// Sample UserActivity to add using the tracking API
{
"$user_account_id":"<your_user_account_id>",
"$compartment_id":<your_compartment_id>,
"$type":"<your_activity_type (ex: SITE_VISIT)>",
"$site_id": "<your_site_id>",
"$session_status":"NO_SESSION",
"$ts":<a_timestamp (ex:1572947762)>,
"$events": [
{
"$event_name":"$set_user_choice",
"$ts":<a_timestamp (ex:1572948120)>,
"$properties":{
"$processing_id": "<your_processing_id>", // Mandatory
"$choice_acceptance_value":<true/false>, // Mandatory
"<your_custom_field>" : "<your_custom field_value>"
}
}
]
}
// Select a UserPoint using a user_point_id
/v1/datamarts/<DATAMART_ID>/user_points/<USER_POINT_ID>/user_profiles/compartment_id=:compartmentId/user_account_id=:userAccountId
/v1/datamarts/<DATAMART_ID>/user_points/user_point_id=<USER_POINT_ID>/user_profiles/compartment_id=:compartmentId/user_account_id=:userAccountId
// Select a UserPoint using a user_agent_id
/v1/datamarts/<DATAMART_ID>/user_points/user_agent_id=<USER_AGENT_ID>/user_profiles/compartment_id=:compartmentId/user_account_id=:userAccountId
// Select a UserPoint using a user_account_id + compartment_id
/v1/datamarts/<DATAMART_ID>/user_points/compartment_id=<COMPARTMENT_ID>,user_account_id=<USER_ACCOUNT_ID>/user_profiles/compartment_id=:compartmentId/user_account_id=:userAccountId
// Select a UserPoint using an email_hash
/v1/datamarts/<DATAMART_ID>/user_points/email_hash=<EMAIL_HASH>/user_profiles/compartment_id=:compartmentId/user_account_id=:userAccountId
{
"$compartment_id" : ":compartment_id",
"$user_account_id" : ":user_account_id",
"gender" : "female",
"zipcode" : "75001"
}
// Sample payload
{
"$choice_ts": "<a_timestamp (ex:1573135588140)>", // Mandatory
"$choice_acceptance_value":<true/false>, // Mandatory
"<your_custom_field>" : "<your_custom_field_value>" // Optional
}
{
"status": "ok",
"data": {
"global": {
"total": 879879879,
"steps": [
{
"name": "Step 1",
"count": 546546546,
"interaction_duration": 0
},
{
"name": "Step 2",
"count": 897987984651,
"amount": 1213213.27,
"conversion": 11221,
"interaction_duration": 515151
}
]
},
"grouped_by": []
}
}
{
"date_ranges": [
{
"start_date": "2021-04-22T00:00:00",
"end_date": "2021-04-29T23:59:59"
}
],
"dimensions": [
{
"name": "TYPE"
}
],
"dimension_filter_clauses": {
"operator": "OR", // OR or AND
"filters": [
{
"dimension_name": "TYPE",
"operator": "LIKE", // LIKE, EXACT or IN_LIST
"expressions": [
""
]
}
]
},
"metrics": []
}
{
"for": [
{
"name": "Step 1",
"filter_clause": {
"operator": "OR",
"filters": [
{
"dimension_name": "TYPE",
"not": false,
"operator": "EXACT",
"expressions": [
"DISPLAY_AD"
]
}
]
}
},
{
"name": "Step 2",
"filter_clause": {
"operator": "AND",
"filters": [
{
"dimension_name": "EVENT_TYPE",
"not": false,
"operator": "EXACT",
"expressions": [
"$transaction_confirmed"
]
},
{
"dimension_name": "CHANNEL_ID",
"not": false,
"operator": "IN_LIST",
"expressions": [
"8888",
"6666"
]
}
]
}
}
],
"in": {
"type": "DATES",
"start_date": "2021-04-23",
"end_date": "2021-05-01"
},
"limit": 5
}
"filter_clause": {
"operator": "OR", // OR or AND
"filters": [
...
]
}
// TYPE should be DISPLAY_AD
{
"dimension_name": "TYPE",
"not": false,
"operator": "EXACT",
"expressions": [
"DISPLAY_AD"
]
}
// TYPE should contain SITE
// SITE_VISIT activities will be used
{
"dimension_name": "TYPE",
"not": false,
"operator": "LIKE",
"expressions": [
"SITE"
]
}
// TYPE should not contain SITE
{
"dimension_name": "TYPE",
"not": true,
"operator": "LIKE",
"expressions": [
"SITE"
]
}
// CHANNEL_ID should be either 8888 or 6666
{
"dimension_name": "CHANNEL_ID",
"not": false,
"operator": "IN_LIST",
"expressions": [
"8888",
"6666"
]
}
$email_complaint $set_user_profile_properties $set_user_consent $content_correction $quit_while_running $cleaned_referrer
communityId*
integer
The ID of your community or organisation
url*
string
The URL for which you want to retrieve the list of attached targeting lists.
channel_id*
string
The ID of the channel associated with the URL.
organisationId*
integer
The ID of the organisation for which you want to retrieve the targeting lists
organisationId*
integer
The ID of the organisation
targetingListId*
Integer
The ID of the targeting list to retrieve
organisationId*
integer
The ID of the organisation
queryId*
Integer
The ID of the query retrieved in a semantic targeting list
communityId*
integer
The ID of the community
categoryId*
Integer
The ID of the category to be retrieved
organisationId*
integer
The ID of the organisation
entityId*
Integer
The ID of the entity to be retrieved
organisationId*
integer
The ID of the organisation
segmentId*
integer
The ID of the audience segment the contextual targeting applies to
segmentId*
integer
The ID of the audience segment the contextual targeting applies to
contextualTargetingId*
Integer
The ID of the contextual targeting retrieved in creation response
type*
string
"PUBLISH"
volume_ratio*
float
% of audience segment reach (30-day page views) that will be used to calculate lift
activation_platforms*
array of strings
List of activation platforms on which to activate the contextual targeting. For instance ["XANDR"]
segmentId*
integer
The ID of the audience segment the contextual targeting applies to
contextualTargetingId*
Integer
The ID of the contextual targeting retrieved in creation response
type*
string
"ARCHIVE"
{
"status": "ok",
"data": {
"targeting_lists": [
{
"id": "123"
},
{
"id": "456"
}
]
}
}
{
"status": "ok",
"data": [
{
"type": "SEMANTIC",
"id": "1",
"organisation_id": "1",
"status": "INIT",
"live_activation_ts": null,
"name": "Targeting_List_Name",
"short_description": "",
"token": "cardinal-ack",
"url_count": 21860,
"last_30_days_page_views": 8066996,
"created_ts": 1692607279222,
"created_by": "1",
"last_modified_ts": null,
"last_modified_by": null,
"archived": false,
"query_id": "1"
}
],
"count": 1,
"total": 1,
"first_result": 0,
"max_results": 10
}
{
"status": "ok",
"data": {
"type": "SEMANTIC", // SEMANTIC, PANEL_BASED
"id": "1",
"organisation_id": "1",
"segment_id": "7105867", // Only for PANEL_BASED type
"volume_ratio": 0.8313403, // Only for PANEL_BASED type
"status": "INIT",
"live_activation_ts": null,
"last_lift_computation_ts": 1700100423716, // Only for PANEL_BASED type
"name": "Targeting_List_Name",
"short_description": "",
"token": "cardinal-ack",
"url_count": 21860,
"last_30_days_page_views": 8066996,
"created_ts": 1692607279222,
"created_by": "1",
"last_modified_ts": null,
"last_modified_by": null,
"archived": false,
"query_id": "1" // Only for SEMANTIC type
}
}
{
"status": "ok",
"data": {
"id": "1",
"organisation_id": "1",
"query_data": {
"language_name": "JSON_SEMANTIC",
"language_version": "1",
"query": {
"include": {
"entity_ids": [
"1",
"2"
],
"iab_category_ids": [
"1"
]
},
"exclude": {
"entity_ids": [],
"channel_ids": [
"1",
"2",
"3",
"4",
"5"
],
"iab_category_ids": []
}
}
},
"created_by": "1",
"created_ts": 1692607279084,
"last_modified_by": null,
"last_modified_ts": null
}
}
{
"status": "ok",
"data": {
"id": "25",
"name": "News and Politics>Politics>War and Conflicts",
"url_count": 13549,
"last_30_days_page_views": 4914430
}
}
{
"status": "ok",
"data": {
"id": "287",
"name": "Paris Saint-Germain Football Club",
"type": "SoccerClub",
"wikidata_id": "Q483020",
"url_count": 4601,
"last_30_days_page_views": 2095819
}
}
{
"status":"ok",
"data":
{
"url_count":13549,
"last_30_days_page_views":4914430
}
}
{
"status": "ok",
"data": [
{
"type": "PANEL_BASED",
"id": "1",
"organisation_id": "1",
"segment_id": "1",
"volume_ratio": 0.8313403,
"status": "LIVE",
"live_activation_ts": 1678193619193,
"last_lift_computation_ts": 1700100423716,
"name": "Targeting_List_Name",
"short_description": null,
"token": "double-beer-maryland",
"url_count": 3746,
"last_30_days_page_views": 63824027,
"created_ts": 1678193048401,
"created_by": "1",
"last_modified_ts": 1700055578893,
"last_modified_by": "1",
"archived": false
}
],
"count": 1,
"total": 1,
"first_result": 0,
"max_results": 2147483647
}
{
"status":"ok",
"data": {
"id":"143",
"segment_id":"128765",
"volume_ratio":0.33580595,
"status":"PUBLISHED",
"activation_platforms":["XANDR"],
"live_activation_ts":null,
"last_lift_computation_ts":1676542526247,
"created_ts":1676542452678,
"created_by":"3594",
"last_modified_ts":1676543040147,
"last_modified_by": "3594",
"archived":false
}
}
{
"status": "ok",
"data": {
"id": "143",
"segment_id": "128765",
"volume_ratio": 0.33580595,
"status": "LIVE",
"activation_platforms": [
"XANDR"
],
"live_activation_ts": 1676543074426,
"last_lift_computation_ts": 1676542526247,
"created_ts": 1676542452678,
"created_by": "3594",
"last_modified_ts": 1676548450358,
"last_modified_by": "3594",
"archived": true
}
}
You can do the same with the result of a join.
With this technique, you can also combine data from different data sources where the date would be returned in different formats.
A nice way to display collection volumes is by showing the actual number of elements in the collection with a quick history of the volumes.
Another tip when showing collection volumes is to replace lists of metrics with bar charts. This makes it easier to visualize proportions, especially if you have a reference number like the total number of UserPoint.
You may want to compare a particular audience you are building or that's been built to the whole datamart or to a specific reference audience.
For example, to answer the question "Do users in this audience have different viewing modes than all users?", you can build a dashboard at the builders and/or the segments scope with:
The number of UserPoint visiting through each viewing mode for your audience
The number of UserPoint visiting through each viewing mode for all users
Index calculation to visualize which viewing modes are more/less used in your audience
When doing any chart that returns channels, compartments or segments, you will usually want to display names instead of IDs in the UI.
For this, use the get-decorators transformation to replace IDs with names.
@cardinality OTQL queries return a key / value dataset. In lots of cases, this dataset only has one value but can't be displayed as a metric as it is not in the correct format.
We can use the reduce transformation to put the dataset in the correct format.
Generate a credentials JSON file
In the Google Cloud Console, navigate to the APIs & Services > Library page.
In the search bar, type "Cloud Resource Manager API".
Click on the search result for the Cloud Resource Manager API and click the Enable button
We recommend using a dedicated service account with the appropriate set of permissions.
To create a service account, follow these steps:
Select your project
Click on “Create service account”
Input the necessary information: service account name, service account ID (automatically generated), description. Click on “Create and continue”
You can either define access rights now or do it from the IAM menu later. If you do it now, the recommended roles are:
bigquery.jobUser and bigquery.dataEditor
This will allow the service account to query data in the tables of the project the service account has access to as well as create temporary tables for computation in your project.
You can also grant only read permissions with roles :
bigquery.jobUser and bigquery.dataViewer
This will allow the service account to query data in the tables of the project the service account has access to but not create temporary tables for computation. Therefore it will limit what will be possible in terms of no copy use cases and differential ingestion.
In both cases, validate by clicking “Done”
If you want to edit the service account roles, go to https://console.cloud.google.com/iam-admin/iam, select the service account from the list and edit its roles by clicking the pencil icon on the right side.
(If the service account does not appear in the list, try clicking on "Grant access", copy-paste the service account email and assign roles from there)
Note that you need at least the roles/iam.serviceAccountAdmin role to perform these actions
To export a credentials JSON file, follow these steps:
Select your project
Select your service account and on the “…” menu select “Manage keys”
Then click on “Add key”>”Create new key”
Select JSON then “Create”
The key is automatically downloaded
Note that you need at least the roles/iam.serviceAccountKeyAdmin role to perform these actions


Schemas are associated with datamarts by managing objects with the following properties:
Make sure you have learned about schema concepts and how schemas are structured.
The process for publishing a schema is as follows:
Create a new schema definition
Upload the schema associated with the definition
Validate the schema
Publish the schema
After updating a schema, you can immediately use all its properties in the SELECT part of your queries and with any operator that doesn't require indexing.
If you add a new indexed property, only new elements going into your datamart will be indexed. You will be able to run WHERE queries and operators needing indexing, but your queries will not return values for elements already in your datamart. You can add a new indexed property either by adding a new property carrying the @TreeIndex directive or by adding the @TreeIndex directive to an existing property.
If you remove an indexed property, you will instantly stop being able to run WHERE queries and operators needing indexing on this property.
You can ask your mediarithmics contact to start a complete reindexing of your datamart if required
You can use the CLI to power up your schema definitions workflow.
Retrieve the LIVE schema or the specified one for a given datamart, and save it as a .gql file named schema-<DATAMARTID>-<SCHEMAID>.gql.
Use it to download the schema you want to start with, usually the current LIVE version.
If you want to show the schema in the output, use the --stdout flag.
Create a draft, upload a file, validate it, and publish the schema with this all-in-one command.
Automatically handles statuses and cloning:
If a draft schema already exists, it is updated before publishing.
The latest LIVE schema is cloned, if available and the --noClone flag is not set
You can keep your schema as a DRAFT and not publish it by using the --skipPublish flag.
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas
Returns all schemas and their status.
Archived schemas are also returned, so you can browse your schema history
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas/:schemaId
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas/:schemaId/text
Lets you visualize the schema text in its current version, or as it was in archived versions
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas/:schemaId/clone
Creates a new DRAFT schema, if none already exists, by cloning an existing one.
This is the preferred method to create a DRAFT schema, as it keeps all settings from the previous version, like cluster versions and index sizes.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas
Creates a new DRAFT schema, if none already exists, without cloning.
For the schema to keep its initial settings (for example, Elasticsearch version and index size), you need to clone the previous LIVE version. Only use this endpoint if you know the required settings and can set them up before publishing.
PUT https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas/:schemaId/text
Add the schema content to the raw body of the request
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas/:schemaId/validation
Tells you if the uploaded schema is valid, and shows errors if there are any.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/graphdb_runtime_schemas/:schemaId/publication
The selected schema goes LIVE, and the current LIVE schema is ARCHIVED. An error is returned if you didn't validate the schema first.
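The clone, upload, validate, and publish endpoints above can be chained in a short script. The sketch below is not an official client: the endpoint paths come from this page, while the authorization header name, the response shape (`data.id`), and the absence of error handling are assumptions to adapt to your setup.

```javascript
// Sketch of the schema publication flow. Endpoint paths are from this page;
// the Authorization header and response shapes are assumptions.
const BASE = "https://api.mediarithmics.com/v1/datamarts";

// Pure helper building the endpoint URLs for a given schema.
function schemaEndpoints(datamartId, schemaId) {
  const root = `${BASE}/${datamartId}/graphdb_runtime_schemas`;
  return {
    clone: `${root}/${schemaId}/clone`,
    text: `${root}/${schemaId}/text`,
    validation: `${root}/${schemaId}/validation`,
    publication: `${root}/${schemaId}/publication`,
  };
}

async function publishSchema(datamartId, liveSchemaId, schemaText, apiToken) {
  const call = (url, options = {}) =>
    fetch(url, { ...options, headers: { Authorization: apiToken } }).then((r) => r.json());

  // 1. Clone the LIVE schema into a DRAFT, keeping cluster and index settings.
  const draft = await call(schemaEndpoints(datamartId, liveSchemaId).clone, { method: "POST" });
  const endpoints = schemaEndpoints(datamartId, draft.data.id);

  // 2. Upload the new schema text as the raw request body.
  await call(endpoints.text, { method: "PUT", body: schemaText });
  // 3. Validate, then 4. publish (publication fails if validation was skipped).
  await call(endpoints.validation, { method: "POST" });
  return call(endpoints.publication, { method: "POST" });
}
```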
Data coming into your datamart is stored in a multi-model database, optimizing it for different usages. To display performance analytics for elements like session duration, conversions, and funnels, the platform duplicates some user activities and information into structures optimized for analytics.
For that purpose, it is important to use predefined event names and properties when possible, as custom events are not taken into account when calculating metrics. For example, when tracking an e-commerce site, don't create custom order events; use the predefined $transaction_confirmed event instead. $transaction_confirmed events are used when calculating conversions and amounts, while custom order events are not.
Here is a sample event that can be used in analytics:
The list of predefined events that are used in analytics are as follows.
Activities analytics data is kept for 4 months in order to optimize performance.
While it is better to use predefined events when possible, it isn't always the best solution for you. To keep analytics correctly stored, you can transform your custom events into predefined ones.
An event transformation is linked to a datamart. Here is a sample transformation:
Event transformations use property mappings to choose which property in your custom event becomes which property in the predefined event.
Here is a sample property mapping to start with for $transaction_confirmed events.
Map the event JSON as it is stored in the database and visible in users' timelines.
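As an illustration, an event transformation object built from the property table on this page could look as follows. All IDs are placeholders, and the exact shape of the property mapping object is an assumption to verify against the API.

```javascript
// Illustrative event transformation, using the properties documented on this
// page (datamart_id, channel_id, source_event_name, target_event_name,
// mapping_id). All ID values are placeholders.
const transformation = {
  datamart_id: 1509,
  channel_id: 1234,                  // channel where the transformation applies
  source_event_name: "order",        // custom event to transform
  target_event_name: "$transaction_confirmed", // predefined analytics event
  mapping_id: 42,                    // property mapping applied to the event
};

// Hypothetical property mapping: which custom property becomes which
// predefined property (exact field names are an assumption).
const mapping = {
  source_property: "products",
  target_property: "$items",
};
```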
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_event_transformation
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_event_transformation
PUT https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_event_transformation/:transformationId
DELETE https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_event_transformation/:transformationId
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_mapping
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_mapping
DELETE https://api.mediarithmics.com/v1/datamarts/:datamartId/analytics_mapping/:mappingId
A dataset is built from at least one data source and optional transformations, and is processed for visualisation in charts.
You can retrieve data from the following data sources:
Activities analytics data cube
data cube
data cube
data cube
Depending on the query you run and the transformations you apply, you can build different types of datasets. Here is a recap of which datasets are created from which data sources and transformations and the available visualisations for each.
Here is an example dataset with only one data source, that returns a number :
You can build the same kind of dataset with a different data source, like :
Use this type of dataset in charts to display a single number.
The queries in the preceding paragraph only returned numbers, but you can build key / value datasets with more complex queries.
You can pass this kind of dataset to charts to visualize the content.
Key / value datasets can also come from transformations like to-list, which creates a list from multiple numbers. Note the series_title property, which gives you control over the title displayed in tooltips and legends.
You can go further by adding up to three levels of buckets in your dataset.
This can then be displayed with charts supporting drill down or multiple / stacked bars.
Applying the join transformation to multiple key / value datasets with common keys creates a single dataset with multiple values associated with each key.
The two groups can be displayed together in charts to efficiently compare their data.
A dataset is formed by chaining a tree of data sources and transformations.
series_title property
All data sources have a series_title property. This is useful when combining multiple sources, to set the title associated with each source. It is reflected in tooltips and legends. Here is an example of Datamart and Segment data sources combined together.
datamart_id property
All data sources have a datamart_id property allowing you to specify the datamart on which to run the query. It defaults to the current datamart. This allows you to bring in data from another datamart, or to create a dashboard at the community level that aggregates data from sub-organisations.
The user loading the dashboard must have permission to query the specified datamart, or the chart will throw an error for this user.
adapt_to_scope property
By default, all data sources will try to adapt to the page on which they are executed, with the adapt_to_scope property set to TRUE.
The goal is to:
Filter data for the current segment when a dashboard is displayed on a segments page
Filter data based on the current query when a dashboard is displayed on a builder.
For OTQL data sources:
On home scopes, nothing is changed and the query is run as is.
On segments scopes, the current segment's query is added at the end of the OTQL query. That means that only OTQL queries FROM UserPoint will adapt to the scope.
On
For activities analytics data sources:
On home and builders scopes, nothing changes and the query is run as is.
On segments scopes, activities are filtered so that only those of users who were in the segment at the time of the activity are kept.
The mediarithmics tag is used to track visitors' navigation on a website. It is imported in a JavaScript snippet that needs to be executed on all the pages you wish to track.
You can track users' exposure to the different kinds of ad formats that exist today:
Display Ads: All Ads that are either static image/animations or HTML5 animated Ads
Video Ads: All Video Ads that the User can see through a marketing campaign
All Ads tracking is done using Pixels and Click Tracking URLs.
In each datamart, all device information is stored within a device graph:
Devices are represented by UserDevicePoint
Device identifiers are represented by UserDeviceTechnicalId
A UserPoint can have multiple user device points. A user device point can have multiple user device technical identifiers, and one device info.
{
"title": "Application events (last 6 months)",
"type": "Bars",
"dataset": {
"type": "format-dates",
"sources": [
{
// @date_histogram query
"type": "OTQL",
"query_id": "666"
}
],
"date_options": {
"format": "YYYY-MM-DD"
}
}
}
{
"title": "Monthly events per channel or type",
"type": "Bars",
"dataset": {
"type": "format-dates",
"sources": [
{
// This works with a join but this can also work from a single source
// without the join
"type": "join",
"sources": [
{
"type": "OTQL",
// Select {date @date_histogram } FROM UserEvent
// WHERE channel_id = XXX
"query_id": "666",
"series_title": "Group 1"
},
{
"type": "OTQL",
// Select {date @date_histogram } FROM UserEvent
// WHERE channel_id = YYY
"query_id": "777",
"series_title": "Group 2"
},
{
"type": "OTQL",
// Select {date @date_histogram } FROM UserEvent
// WHERE channel_id = ZZZ
"query_id": "888",
"series_title": "Group 3"
}
]
}
],
"date_options": {
"format": "YYYY-MM-DD" // The date format we want to return
}
},
// Show the legend for a better event display
"options": {
"legend": {
"enabled": true,
"position": "bottom"
},
"big_bars": false // Allow space between dates
}
}
{
"title": "Events",
"type": "Bars",
"dataset": {
"type": "join",
"sources": [
// Get some counts from activities analytics by month
{
"type": "activities_analytics",
"query_json": {
"dimensions": [
{
"name": "date_YYYYMMDD"
}
],
"metrics": [
{
"expression": "number_of_user_events"
}
]
},
"series_title": "activities_analytics"
},
// Get other counts from OTQL by month with @date_histogram
// and format the result in the same format as activities analytics
{
"type": "format-dates",
"sources": [
{
"type": "OTQL",
"query_id": "666"
}
],
"series_title": "OTQL",
"date_options": {
"format": "YYYYMMDD"
}
}
]
},
"options": {
"hide_x_axis": true // We hide the x axis as there are a lot of values
}
}
{
// Card display options.
// Here we show a small card with a vertical layout
"x": 0,
"y": 0,
"h": 2,
"w": 3,
"layout": "vertical",
// The two charts in the card
"charts": [
{
// The number of UserPoint as a metric
"title": "UserPoint",
"type": "Metric",
"dataset": {
"type": "OTQL",
"query_id": "666" // SELECT @count FROM UserPoint
}
},
{
// Bars showing a history of the number of UserPoint
"title": "",
"type": "Bars",
"dataset": {
// We do use the format-dates transformation to display
// friendly dates instead of timestamps
"type": "format-dates",
"sources": [
{
"type": "collection_volumes",
"query_json": {
"dimensions": [
{
"name": "date_time"
},
{
"name": "collection"
}
],
"dimension_filter_clauses": {
"operator": "AND",
"filters": [
{
"dimension_name": "datamart_id",
"operator": "EXACT",
"expressions": [
YOUR_DATAMART_ID
]
},
{
"dimension_name": "collection",
"operator": "EXACT",
"expressions": [
"UserPoint"
]
}
]
},
"metrics": [
{
"expression": "count"
}
]
}
}
],
"date_options": {
"format": "YYYY-MM-DD HH:mm"
}
},
// We hide axis to have a nice little chart only showing trends
// with the ability for the user to get values by hovering the bars
"options": {
"hide_x_axis": true,
"hide_y_axis": true
}
}
]
}
{
"x": 0,
"charts": [
{
"title": "UserPoint",
"type": "Metric",
"dataset": {
"type": "OTQL",
"query_id": "666"
}
},
{
"title": "Activability",
"type": "Bars",
"dataset": {
"type": "to-list",
"sources": [
{
"type": "OTQL",
"query_id": "111",
"series_title": "Total UserPoint"
},
{
"type": "OTQL",
"query_id": "222",
"series_title": "With accounts"
},
{
"type": "OTQL",
"query_id": "333",
"series_title": "With emails"
},
{
"type": "OTQL",
"query_id": "444",
"series_title": "With web cookies"
},
{
"type": "OTQL",
"query_id": "555",
"series_title": "With apple Mobile ID"
},
{
"type": "OTQL",
"query_id": "666",
"series_title": "With google Mobile ID"
}
]
},
"options": {
"type": "bar",
"hide_y_axis": true,
"colors": [
"#333333"
]
}
}
],
"y": 0,
"h": 5,
"layout": "vertical",
"w": 4
}
{
"title": "Viewing modes",
"type": "Bars",
"dataset": {
"type": "index",
"sources": [
{
"type": "OTQL",
"query_id": "666", // SELECT {events {session_mode @map}} FROM UserPoint
"series_title": "Segment"
},
{
"type": "OTQL",
"query_id": "666", // SELECT {events {session_mode @map}} FROM UserPoint
"series_title": "Datamart",
"adapt_to_scope": false
}
],
"options": {
"limit": 10,
"minimum_percentage": 1,
"sort": "Descending"
}
},
"options": {
"type": "bar",
"plot_line_value": 100,
"format": "index"
}
}
{
"title": "Data by channels",
"type": "Bars",
"dataset": {
"type": "get-decorators",
"sources": [
{
"type": "to-percentages",
"sources": [
{
"type": "OTQL",
"query_id": "666" // SELECT { channel_id @map} FROM UserEvent WHERE ...
}
]
}
],
"decorators_options": {
"model_type": "CHANNELS"
}
},
"options": {
"format": "percentage"
}
}
{
"title": "Number of different event names retrieved",
"type": "Metric",
"dataset": {
"type": "reduce",
"sources": [
{
"type": "OTQL",
"query_id": "666" // SELECT {nature @cardinality} FROM ActivityEvent
}
],
"reduce_options": {
"type": "first"
}
}
}
// DRAFT Schema
{
"id": "1385",
"datamart_id": "1509",
"status": "DRAFT",
"creation_date": 1609888013947,
"last_modification_date": 1610443926552,
"publication_date": null,
"suspension_date": null
}
// LIVE Schema
{
"id": "1281",
"datamart_id": "1509",
"status": "LIVE",
"creation_date": 1603198414771,
"last_modification_date": 1603198415117,
"publication_date": 1603198415733,
"suspension_date": null
}
// ARCHIVED Schema
{
"id": "1276",
"datamart_id": "1509",
"status": "ARCHIVED",
"creation_date": 1603102827888,
"last_modification_date": 1603102828087,
"publication_date": 1603102828479,
"suspension_date": 1603102848269
}



The name of the predefined event it is transformed into. Allowed values are:
$transaction_confirmed
$item_view
$basket_view
mapping_id
Integer
The ID of the property mapping to apply when transforming the event
Event name
Usage
Important properties
$transaction_confirmed
Home dashboards (E-Commerce Engagement)
Segment dashboards (E-Commerce Engagement)
Funnel Analytics
$items : list of products in the transaction
$items.$qty : for the conversion amounts
$items.$price: for the conversion amounts
$items.$id : for the product IDs in funnel analytics
$items.$brand: for the brand filter in funnel analytics
$items.$category1, $items.$category2, $items.$category3 and $items.$category4 : for the categorization in funnel analytics
$item_view
Funnel Analytics
$items : contains only one product
$items.$price: for the conversion amounts
$items.$id : for the product IDs in funnel analytics
$items.$brand: for the brand filter in funnel analytics
$items.$category1, $items.$category2, $items.$category3 and $items.$category4 : for the categorization in funnel analytics
$basket_view
Funnel Analytics
$items : list of products in the basket
$items.$qty : for the conversion amounts
$items.$price: for the conversion amounts
$items.$id : for the product IDs in funnel analytics
$items.$brand: for the brand filter in funnel analytics
$items.$category1, $items.$category2, $items.$category3 and $items.$category4 : for the categorization in funnel analytics
Property
Type
Description
datamart_id
Integer
The ID of the datamart where the transformation is applied
channel_id
Integer
The ID of the channel where the transformation is applied
source_event_name
String
The name of your custom event you wish to transform into a predefined event
target_event_name
datamartId
integer
The datamart ID
datamartId
integer
The datamart ID
Body
string
The event transformation object
transformationId
integer
The transformation ID
datamartId
integer
The datamart ID
Body
string
The event transformation object
transformationId
integer
The transformation ID
datamartId
integer
The datamart ID
datamartId
integer
The datamart ID
datamartId
integer
The datamart ID
Body
object
The property mapping object
mappingId
integer
The mapping ID
datamartId
integer
The datamart ID
String
Queries
None
Transformations
Charts
Transformations
For a list of available transformations, see Transformations.
Single number
Queries
Activities analytics without dimensions
Collection volumes without dimensions
Transformations
None
Key / value
Queries
Activities analytics queries with dimensions
Collection volumes queries with dimensions
Transformations
Key / value / buckets
Queries
Activities analytics queries with multiple dimensions
Collection volumes queries with multiple dimensions
Transformations
None
Key / values
By inserting the code snippet in a general-purpose template (e.g. the header template of an e-shop) that is used on all the website's pages.
The code of the Visit Tracking snippet is non-blocking: it does not impact the page rendering time. The snippet can be inserted in the <head> part of the web page.
You can use the snippet to track, in real time, what your users are doing on your website. If you want to track users from your backend, please consider using our API. If you want to import bulk events or activities, please consider using the Bulk Import feature.
Cookies used by the tag are considered as advertising cookies. You must obtain user consent before using it.
The mediarithmics tracking snippet is made of two parts:
A technical tag which contains JavaScript code to asynchronously load the TAG in the page. This part should not be edited, except when customizing the TAG name (see below).
The configuration that you should fill according to your context (site token / event name / event properties / etc.)
Here is an example of the tracking snippet you should implement on every page.
To implement multiple tags or customize the tag for your own needs, you can do the following. Here is an example of a tag called umbrella_corp.
If you want to fully remove mediarithmics from your website, contact your technical support to get your own domain name.
Here is an example of the full transparent snippet implementation.
You don't have to push data on page load. You can, for example, bind an event to a button-click.
If you want to pass one or multiple identifiers for your user, you can leverage the addIdentifier(type: string, identifier: object) method for each identifier you want to pass.
Hereafter are the formats:
If type == "USER_ACCOUNT" then the identifier object must have the following structure: {$user_account_id: string} or {$user_account_id: string, $compartment_token: string}
If type == "USER_EMAIL" then the identifier object must have the following structure: {$email_hash: string} or {$email_hash: string, $email: string}
If type == "USER_AGENT" then the identifier object must have the following structure: {$user_agent_id: string}
If invalid arguments are passed to addIdentifier, no error is thrown in the browser console
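The accepted structures can be summarized in a small validation helper. Note that isValidIdentifier is not part of the mediarithmics tag; it is a sketch that simply encodes the rules listed above, which can be useful since addIdentifier itself throws no error on invalid arguments.

```javascript
// Hypothetical helper encoding the identifier formats accepted by
// addIdentifier(type, identifier); not part of the mediarithmics tag itself.
function isValidIdentifier(type, id) {
  if (typeof id !== "object" || id === null) return false;
  switch (type) {
    case "USER_ACCOUNT":
      // {$user_account_id} with an optional {$compartment_token}
      return typeof id.$user_account_id === "string" &&
        (id.$compartment_token === undefined || typeof id.$compartment_token === "string");
    case "USER_EMAIL":
      // {$email_hash} with an optional clear {$email}
      return typeof id.$email_hash === "string" &&
        (id.$email === undefined || typeof id.$email === "string");
    case "USER_AGENT":
      return typeof id.$user_agent_id === "string";
    default:
      return false;
  }
}
```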
The main purpose of the job is to send the right events with the right properties. This ensures good usability of the data for the users.
We provide out-of-the-box tracking that will generate a $page_view event. This is the behavior if you are using our snippet as is.
$page_view events will be dropped when the session closes (and disappear from the monitoring timeline), so they are only useful to test that events are correctly sent to mediarithmics.
If you need to trigger a custom event on a particular page, you can use the mics.push function to specify an event name and an object containing the event properties.
In this example, we send the event named view my form with the property form subject set to auto trial:
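A minimal sketch of that call follows, with mics stubbed so it can run outside a browser; on a real page, mics is the global installed by the tracking snippet.

```javascript
// Stub of the `mics` global for illustration; on a real page the tag
// provides it and mics.push(eventName, properties) queues the event.
var mics = { _queue: [], push: function (name, props) { this._queue.push([name, props]); } };

// Send a custom event named "view my form" with one property.
mics.push("view my form", { "form subject": "auto trial" });
```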
Refer to the list of base event names and prefer using them to custom event names, as the platform will handle some automatic processing for them.
It is recommended to implement the mediarithmics snippet at several key places, with specific event names:
Home page $home_view
Category or search results page $item_list_view
Product page $item_view
Basket page $basket_view
Transaction confirmation page $transaction_confirmed
A list of products should be associated with any event with the reserved property $items.
When the user views a list of products on a category page or in search results, only the first 3 items in the list should be declared.
The $id field corresponds to the product id.
The $id field corresponds to the product id.
The $id field corresponds to the product id.
The $price field should contain the product price without the currency
The $qty field should contain the quantity of this product in the basket
The $currency field is optional. If there is only one catalog, the catalog currency is used by default
The $id field corresponds to the product id. It should be the same id as the one used in the product feed
The $price field should contain the product price without the currency
The $qty field should contain the quantity of this product in the basket
The $transaction_id field should contain the ID of the transaction
The $currency field is optional. If there is only one catalog, the catalog currency is used by default
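Putting the rules above together, a $transaction_confirmed push could look like this. mics is stubbed for illustration, and the IDs, prices, and quantities are placeholders.

```javascript
// Stubbed `mics` global for illustration; on a real page the tag provides it.
var mics = { _queue: [], push: function (name, props) { this._queue.push([name, props]); } };

mics.push("$transaction_confirmed", {
  $transaction_id: "T-0001",   // ID of the transaction
  $items: [
    {
      $id: "SKU-123",          // same id as in the product feed
      $price: 49.9,            // price without the currency
      $qty: 2,                 // quantity of this product in the basket
      $currency: "EUR",        // optional if there is only one catalog
      $brand: "ExampleBrand",  // used by the brand filter in funnel analytics
      $category1: "Clothing",  // categorization in funnel analytics
    },
  ],
});
```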
If the user is identified, you can register updates to their profile from the JS snippet. This is generally used on the user profile pages of your site.
Use a special event called $set_user_profile_properties. The properties and values associated with the event will be written as key-value pairs on the UserProfile. By default, the anonymous UserProfile associated with the default compartment of the datamart is updated, and the property force_replace is set to false when sending a $set_user_profile_properties event.
The following special properties are available to specify which UserProfile should be updated:
$set_user_profile_comp_token
(Optional) The compartment token representing the compartment in which the profile should be written. If not provided, this will be the default compartment.
$set_user_profile_user_account_id
(Optional) The user account id under which the profile should be written. If not provided, it is the anonymous profile.
Example 1: Write the gender to the anonymous UserProfile of the default compartment.
Example 2: Write the gender to the anonymous UserProfile of a given compartment.
Example 3: Write the gender to the UserProfile identified by a User Account Id of a given compartment.
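The three examples can be sketched as follows, with mics stubbed for illustration; the compartment token and account id values are placeholders.

```javascript
// Stubbed `mics` global; on a real page the tag provides it.
var mics = { _queue: [], push: function (name, props) { this._queue.push([name, props]); } };

// Example 1: anonymous UserProfile of the default compartment.
mics.push("$set_user_profile_properties", { gender: "F" });

// Example 2: anonymous UserProfile of a given compartment.
mics.push("$set_user_profile_properties", {
  gender: "F",
  $set_user_profile_comp_token: "my-compartment-token",
});

// Example 3: UserProfile identified by a user account id in a given compartment.
mics.push("$set_user_profile_properties", {
  gender: "F",
  $set_user_profile_comp_token: "my-compartment-token",
  $set_user_profile_user_account_id: "user-42",
});
```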
Some behaviors for the JavaScript can be set server-side, in the same way as tag managers. These configurations can be found on the related channel in Navigator settings.
Select the identifiers that will automatically be used by the JavaScript tag to identify devices. The possible options include:
mediarithmics first party cookies
mediarithmics third party cookies (vector ID)
In addition, mediarithmics tag can automatically retrieve a user or device identifier from the following partners if you have previously integrated with them:
ID5
First ID
Utiq martechpass (mobile)
Enable this setting if you want the javascript tag to automatically consider any TCF-compliant CMP on the website before third-party cookie creation and cookie matchings.
Select if you want the javascript tag to trigger cookie matchings with Google and Xandr. For other partners, please check with your support.
By default, the mediarithmics third-party cookie (vector ID) is not generated if the browser does not support matchings with Google and Xandr. Use this setting to force its generation even if these matchings do not succeed.
You can check the complete reference of the JS Tag tool here.
A One Tag approach in web tracking refers to a method of tracking website visitors and their behavior using a single tracking code. The advantage of a One Tag approach is that it is relatively simple to implement and maintain, as it only requires a single tracking code on the website.
The mediarithmics javascript tag can easily be used in a "one tag" approach by referencing the relevant variables in the browser Document Object Model.
Here are several examples of a One Tag implementation with different tag management services.
By default, the mediarithmics tag already collects the url, the referrer, the user agent and the time. It is not necessary to provide these properties:
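For instance, a One Tag push only needs the business properties. The sketch below stubs mics and sends an $item_view without any url, referrer, user agent, or time property, since the tag collects those on its own.

```javascript
// Stubbed `mics` global; on a real page the tag provides it.
var mics = { _queue: [], push: function (name, props) { this._queue.push([name, props]); } };

// No url / referrer / user agent / time: the tag collects them automatically.
mics.push("$item_view", {
  $items: [{ $id: "SKU-123" }], // placeholder product id
});
```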
For Google Tag Manager, it is possible to collect all the relevant properties in the data layer:
For Commanders Act, all the properties are stored in the tc_vars variable:
You should consider using this feature to get ad view and ad click events directly within your datamart when using DSPs.
There are two predefined user events that should be tracked when a user is exposed to an ad:
$ad_view
Pixel
The 'view/impression' of the Display Ad to the User
$ad_click
Click Tracking URL
The 'click' of the Ad by the User
Use the following URL in your tracking pixel to send an $ad_view event to the platform.
$ev
String
The event name. $ad_view for Display Ad impression tracking
$dat_token
String
The token (not the ID) of the datamart in the mediarithmics platform.
$catn
String
Campaign technical name
Use the following click-tracking URL to send an $ad_click event to the platform.
$ev
String
The event name. $ad_click for Display Ad click tracking
$dat_token
String
The token (not the ID) of the datamart in the mediarithmics platform.
$redirect
String
The redirect URL. This string should be URL-encoded (RFC 3986).
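As a sketch of the required encoding (the landing URL below is illustrative), `encodeURIComponent` covers most of RFC 3986, and the few characters it leaves unescaped can be escaped manually:

```javascript
// Encode a redirect URL for use in the $redirect parameter.
// encodeURIComponent leaves !'()* unescaped; escape them too
// for strict RFC 3986 compliance.
function rfc3986Encode(value) {
  return encodeURIComponent(value).replace(/[!'()*]/g, function (c) {
    return "%" + c.charCodeAt(0).toString(16).toUpperCase();
  });
}

// Illustrative landing page
var landing = "https://www.example.com/offer?id=42&src=display";
var clickUrl =
  "https://events.mediarithmics.com/v1/touches/click?" +
  "$ev=$ad_click&$dat_token=<DATAMART_TOKEN>&" +
  "$redirect=" + rfc3986Encode(landing);
```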
You can use the following macros as a minimum configuration for tracking on DV 360 (ex-Doubleclick Bid Manager):
You can use the following macros as a minimum configuration for tracking on Campaign Manager 360 (ex-Doubleclick Campaign Manager):
You can use the following macros as a minimum configuration for tracking on Ad Manager:
You can use the following macros as a minimum configuration for tracking on Xandr:
You can use the following macros as a minimum configuration for tracking on The Trade Desk:
You can track additional properties by using custom properties such as:
Basic video ad tracking can be achieved by integrating the display ad pixel (with $ad_view events) and the click-tracking URL (with $ad_click events) into your video ad format.
For more advanced capabilities, a specific integration can be setup based on the visit pixel (with custom completion events) and an Activity Analyzer. Please advise with your Account Representatives during the design phase.
The following applies to all types of pixel-based tracking (events, ads, emails, conversions, ...) that use the events.mediarithmics.com/v1/touches/pixel API endpoint.
You can pass one or more user identifiers when using the $uids field.
Don't forget to correctly encode the URL
$tpe
Constant String
AC
$ctok
String
The token of the compartment
$acid
String
The user account id of the user
$tpe
Constant String
EM
$eh
String
The email hash
$e
String (Optional)
The "raw" email
$tpe
Constant String
AG
$agid
String
The user agent id
Use the registry token, not the id, when formatting the user agent id.
Eg: dev:<registry_token>:<value>
Device technical identifiers are related to a registry. There are 6 types of registries:
INSTALLATION_ID
For each site on which the installation ID feature is activated, a registry of type INSTALLATION_ID is created.
MOBILE_ADVERTISING_ID
Registries representing mobile advertising ids that can be shared across publishers are related to this type. It includes for instance Android Advertising IDs (AAID) and Apple Identifiers for Advertisers (IDFA).
MOBILE_VENDOR_ID
Registries representing mobile ids that can only be shared among apps of the same developer account are related to this type. This includes for instance the Apple Identifier for Vendors (IDFV).
TV_ADVERTISING_ID
Includes all registries that refer to Smart TVs and TV boxes: AAID on Android TV and Android boxes, IDFA on Apple TV, Amazon Advertising ID on Fire TV, Tizen Advertising ID on Samsung Smart TV etc.
NETWORK_DEVICE_ID
For registries designating IDs that are device-related & managed by third-party actors, for instance ID5, First-ID etc. ⚠️Some network identifiers are directly user-related and not device-related (such as the ones generated from the email)
CUSTOM_DEVICE_ID
For registries designating custom device identifiers that you define for your organisation.
Registries are related to organisations. You can manage them by going to Navigator > Settings > Organisation > Device registries.
You can create your own registries under the types MOBILE_VENDOR_ID and CUSTOM_DEVICE_ID, or subscribe to existing registries under other types.
Once created, you can activate them on the organisation datamarts.
Registries of type INSTALLATION_ID are automatically created and removed by the platform.
Do not forget to create or subscribe to the required registries before using them to identify user data. If user data is received under unknown registries, identifiers will be removed and data may not be ingested properly.
Some device registry identifiers require Channel configuration updates to properly trigger tracking with them on your websites.
When two device technical ids are associated, the related device points are merged. This can happen:
When capturing user activities with multiple identifiers
When using the identifier association feature
Below is a description of the documents and their properties, as they can be fetched from the APIs.
id
String
The document identifier. Formatted as udp:-1234
type
Enum UserIdentifierType
In the case of device points, this property takes the value "USER_DEVICE_POINT"
creation_ts
Timestamp
Timestamp at which the device point was created
brand
String
The brand of the device (ex.: "Apple")
model
String
The model of the device (ex.: "iPhone 14")
agent_type
Enum UserAgentType
Indicates the type of device. Possible values: WEB_BROWSER or MOBILE_APP
user_agent_id
String
The value of the identifier. See format
registry_id
String
The registry to which the technical identifier is attached
type
Enum RegistryType
The type of the registry to which it is attached.
See the registry types above for possible values
The user_agent_id property allows the use of device identifiers in a single property by concatenating several pieces of information such as the registry type, the registry id and the id value.
The installation id can be seen in two formats:
ins:<registry_id>:<value> for all authenticated endpoints
ins:<registry_token>:<version_prefix><base64(value)> for storage in the browser cookie and for non-authenticated endpoints (pixel routes), with version_prefix currently defined as "a"
Example:
ins:1001:0d4a58ca-14e5-11ee-be56-0242ac120002
ins:my_registry_token:aMGQ0YTU4Y2EtMTRlNS0xMWVlLWJlNTYtMDI0MmFjMTIwMDAy
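The conversion between the two formats can be sketched as follows (using the illustrative registry token and value from the example above, and the version prefix "a" as currently defined):

```javascript
// Derive the cookie/pixel format of an installation id from its raw value.
// Assumes version_prefix "a" as currently defined.
function toCookieFormat(registryToken, rawValue) {
  // btoa performs base64 encoding (available in browsers and Node >= 16)
  return "ins:" + registryToken + ":" + "a" + btoa(rawValue);
}
```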
Until their planned deprecation, mediarithmics offers to use its third-party cookie, the vector ID. More information can be found on the cookie documentation.
mediarithmics third-party cookie can be used as a user identifier with two different formats:
vec:<value>
mum:<value>
Both formats are equivalent: the first one is the exact format stored in the cookie, the second one is the format stored as a technical identifier in the datamart.
Example: vec:89998434 / mum:89998434
For a given device point, the vector ID can be retrieved:
In the technical_identifiers list as a device technical id of type MUM_ID ; in which case the identifier format will be mum:89998434
In the mappings list related to the device point
The formatting of advertising cookie values from Google and Xandr is a bit specific due to the historical activity of mediarithmics as a DSP provider:
tech:goo:<value> for Google advertising cookies
tech:apx:<value> for Xandr advertising cookies
For other partners, their third-party cookie value is attached to a web domain that is defined by mediarithmics. The identifier format is as follows:
web:<web_domain_id>:<value>
The web_domain_id designates partner-domain.com and value the identifier value inside the cookie on partner-domain.com.
The generic format for mobile advertising ids is:
mob:<os>:<encoding>:<value>.
The os field designates the OS of the device: and for Android, ios for iOS.
The encoding field describes how the value is encoded. Available values are: raw for no encoding, sha1 for SHA1, md5 for MD5.
The value field contains the mobile advertising id value, encoded according to the previous field. The non-encoded ID should be in lower case for Android and in uppercase for iOS.
Example:
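A sketch for building unhashed (raw) identifiers, using the and / ios os codes described above (identifier values are illustrative):

```javascript
// Build a mob:<os>:raw:<value> identifier for an unhashed mobile
// advertising id. Android values are lower-cased and iOS values
// upper-cased, as described above. For sha1/md5 encodings, hash the
// normalized value first instead of sending it as-is.
function rawMobileAdId(os, value) {
  var normalized = os === "ios" ? value.toUpperCase() : value.toLowerCase();
  return "mob:" + os + ":raw:" + normalized;
}
```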
The compressed format for mobile vendor ids is:
mov:<os>:<registry_id>:<value>
The os field designates the OS of the device: and for Android, ios for iOS.
The registry_id field designates the registry to which this device id should be linked.
value refers to the identifier value as generated within the mobile application.
net:<registry_id>:<value> for device technical ids of type NETWORK_ID
dev:<registry_id>:<value> for type CUSTOM_DEVICE_ID
tv:<registry_id>:<value> for type TV_ADVERTISING_ID
udp:<value> if you want to use directly the device point identifier instead of a custom device identifier.
User agents are a legacy format for storing device identifiers; they are being replaced by user device points and user device technical identifiers. Several user agents can be attached to a UserPoint, and each user agent has a device info object.
All user agents are identified by a vector ID.
User agents have the following properties:
vector_id
String
Unique ID generated by mediarithmics and associated with each agent
device
Object
Device information such as operating system and browser
creation_ts
Timestamp
When the user agent was registered on the platform
Two web browsers on the same desktop PC are considered as two distinct agents. For example, your Chrome browser on your Windows laptop is a different device than your Firefox browser on the same laptop. On smartphones, the web browser and the phone itself are considered as two distinct agents.
Agent-based operations like visiting a website or seeing an ad generally leverage the user agent automatically to identify the user. You can use an agent to identify a user, usually with a user_agent_id field in the requests.
The value field contains the string value in lower case, after the optional encoding.
datamartId
integer
The ID of the datamart
datamartId
integer
The ID of the datamart
schemaId
integer
The ID of the schema
datamartId
string
The ID of the datamart
schemaId
integer
The ID of the schema
schemaId
integer
The ID of the schema to clone. Usually the current LIVE version.
datamartId
integer
The ID of the datamart
datamartId
integer
The ID of the datamart
datamartId
integer
The ID of the datamart
schemaId
integer
The ID of the schema
body
string
Raw schema to upload
datamartId
integer
The ID of the datamart
schemaId
integer
The ID of the schema
datamartId
integer
The ID of the datamart
schemaId
integer
The ID of the schema

Audience features you create are available in the navigator, in Audience > Builders > Standard.
Let's take the following query as an example:
It is technically an OTQL query based on your schema, in which you registered parameters. In this case, it represents a business feature that may be used regularly by your users: selecting UserPoints that performed a transaction in a particular date range and for specific products.
Users will see this selector in the builder instead of an OTQL query.
This audience feature will automatically be combined with other audience features, which the user can select to create segments.
Users will have access to the standard segment builder if at least one audience feature is set up. But you need multiple ones to create value for users.
Good knowledge of the schema and the queries that are usually created in your datamart is important for the success of this feature.
Here is a sample process to follow to enable audience features and create value for your users:
Know your schema. What is it optimized for? Which queries are regularly created? What are your actual segments and what are their queries? You should be able to create a list of useful audience features with these pieces of information.
Set up audience features, in the UI and/or by script.
Monitor usage and update audience features regularly!
You can store audience features in folders. Any audience feature without a folder will be assigned to the root folder. Here are some examples.
GET https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_feature_folders
GET https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_feature_folders/{audience_feature_folders_id}
POST https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_feature_folders
PUT https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_feature_folders/{audience_feature_folders_id}
GET https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_features
GET https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_features/{audience_feature_id}
POST https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_features
PUT https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_features/{audience_feature_id}
Create query parameters using the $parameter_name syntax. Example:
The field in the audience feature will have the name you enter after the $. Spaces in field names are not allowed. The type of selector in the audience feature is automatically chosen based on the field type.
If you want to be able to select several values in a field, use keyword in instead of the classic ==.
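For instance, a parameterized query using in could look like the following sketch (the traversal and field names depend entirely on your schema and are purely illustrative):

```
SELECT @count FROM UserPoint WHERE activities {
  events {
    name == $event_name
    AND items { product_id in $product_ids }
  }
}
```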
You can create parameters for frequency requests with the @ScoreSum directive:
A computed field is a dynamic field defined in the schema and linked to a script that is triggered for each new user activity or profile update, computing a function's result on a regular basis. It is used when the desired outcome cannot be achieved with existing directives, or to simplify an OTQL query by replacing multiple directives with a single field.
These fields can be used in segment definitions to improve analysis and sharing, or in the Experiment feature to enhance control group creation.
These examples can be handled by computed fields:
Retrieve the most recent event (e.g. last visit, last transaction, etc.)
Get the total amount for a specific type of event (e.g. sum of transactions in the last 30 days, total expenditure on a particular category of product, etc.).
Weighted sum (Affinity total, etc.)
Compare multiple channels (e.g. to find the best channel for transactions).
This example should be handled by a standard directive:
Get all users with at least a specific amount for one category
This example can't be handled by a computed field:
Identify the top X% of a specific user group (e.g. the top 10% of buyers).
Once live, a computed field behaves like a regular field, making it transparent to the user. Here's how you can use it in practice:
Use Case: Identify Users Who Have Spent More Than 100€ on IT Products in the Last 3 Months
With Standard Directives:
This query uses standard directives to sum the basket amounts for IT products over the last 3 months.
With a Computed Field:
In this version, the computed field IT_amount_3months is used directly in the query. The field is calculated on a regular basis, meaning no computation is performed during query execution. This gives faster query response times, as the value is precomputed and readily available.
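Conceptually, the script behind such a computed field maintains a running value per user. The following is a simplified sketch of the computation only, not the actual computed-field plugin interface; the event and item fields follow the $transaction_confirmed shape shown elsewhere in this documentation:

```javascript
// Simplified sketch of a hypothetical IT_amount_3months computation.
// NOT the actual mediarithmics computed-field plugin interface.
function updateItAmount(state, activity) {
  var cutoff = Date.now() - 90 * 24 * 60 * 60 * 1000; // ~3 months
  var events = (activity.$events || []).filter(function (e) {
    return e.$event_name === "$transaction_confirmed" && e.$ts >= cutoff;
  });
  events.forEach(function (e) {
    var items = (e.$properties && e.$properties.$items) || [];
    items.forEach(function (item) {
      if (item.$category1 === "IT") {
        // Accumulate the basket amount for IT products
        state.total += item.$price * item.$qty;
      }
    });
  });
  return state;
}
```

A query can then simply filter on the precomputed value instead of traversing all activities at query time.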
{
"$ts": 3489009384393,
"$event_name": "$transaction_confirmed", // Conversion detected
"$properties": {
"$items": [
{
"$id": "product_ID", // Used to filter in funnel analytics
"$qty": 20, // Used for conversion amounts
"$price": 102.8, // Used or conversion amounts
"$brand": "Apple" // Used to filter in funnel analytics
"$category1": "Category 1", // Used to filter in funnel analytics
"$category2": "Category 2", // Used to filter in funnel analytics
"$category3": "Category 3", // Used to filter in funnel analytics
"$category4": "Category 4" // Used to filter in funnel analytics
},
{
"$id": "product_ID2",
"$qty": 12,
"$price": 3.4,
"$brand": "Microsoft"
}
],
"$currency": "EUR"
}
}

{
"id": 15, // Read only
"datamart_id": "3333", // Read only
"created_ts": <>, // Read only
"created_by": <>, // Read only
"last_modified_by": <>, // Read only
"last_modified_ts": <>, // Read only
"channel_id": "8888",
"source_event_name": "order",
"target_event_name": "$transaction_confirmed",
"mapping_id": 15
}

// Comments are here to help you understand,
// remove them before uploading mappings as they are not accepted in JSON
{
"id": 17, // Read only
"created_by": <>, // Read only
"created_ts": <>, // Read only
"mapping": {
"values": [
{
// Maps $properties.order.order_products[] to $items[]
"target_attribute_name": "$items",
"source_attribute_path": {
"attribute_name": "$properties",
"sub_path": {
"attribute_name": "order",
"sub_path": {
"attribute_name": "order_products"
}
}
},
// Alternative: mapping $properties directly if there are not
// multiple elements in the event. It will be treated as an array of only one item
"target_attribute_name": "$items",
"source_attribute_path": {
"attribute_name": "$properties"
},
// Maps $properties.order.order_products.xxx to $items.xxx
// You can't use other target attributes than the ones
// in this example.
// But they are not all mandatory: an unused target
// attribute will use the default value
"children": [
// $properties.order.order_products.product.id
// becomes $items.$id
// to be used by analytics database
{
"target_attribute_name": "$id",
"source_attribute_path": {
"attribute_name": "product",
"sub_path": {
"attribute_name": "id"
}
}
},
// $properties.order.order_products.qty
// becomes $items.$qty
// to be used by analytics database
{
"target_attribute_name": "$qty",
"source_attribute_path": {
"attribute_name": "qty"
}
},
{
"target_attribute_name": "$price",
"source_attribute_path": {
"attribute_name": "price"
}
},
{
"target_attribute_name": "$ean",
"source_attribute_path": {
"attribute_name": "ean",
}
},
{
"target_attribute_name": "$brand",
"source_attribute_path": {
"attribute_name": "brand",
}
},
{
"target_attribute_name": "$category1",
"source_attribute_path": {
"attribute_name": "cat1"
}
},
{
// In this example $category2 will be
// event.$event_name instead of
// $properties.order.order_products[].$event_name
// thanks to absolute_path
"target_attribute_name": "$category2",
"source_attribute_path": {
"absolute_path": true,
"attribute_name": "$event_name",
}
},
{
"target_attribute_name": "$category3",
"source_attribute_path": {
"attribute_name": "cat3"
}
},
{
"target_attribute_name": "$category4",
"source_attribute_path": {
"attribute_name": "cat4"
}
}
]
}
]
}
}

{
"status": "ok",
"data": [
{
"datamart_id": ":datamartId",
"channel_id": "8888",
"source_event_name": "order",
"target_event_name": "$transaction_confirmed",
"mapping_id": xxx
},
{
...
}
]
}

{
"status": "ok",
"data": {
id: 17,
created_by: <>
...
}
}

{
"status": "ok"
}

{
"status": "error",
"error": "Deleting a property mapping used in a transformation is forbidden",
"error_code": "BAD_REQUEST_DATA",
"error_id": "9460be23-0f80-4acb-9a33-dd8a1c5d08a9"
}"dataset": {
"type": "OTQL",
"query_id": 666 // SELECT @count FROM UserPoint
}"dataset": {
"type": "activities_analytics",
"query_json": { // This query returns the number of active users
"dimensions": [],
"metrics": [
{
"expression": "users"
}
]
}
}

// key-value dataset built with an OTQL query
"dataset": {
"type": "OTQL",
"query_id": 666 // SELECT {gender @map} FROM UserProfile
}
// key-value dataset built with an activities analytics query
"dataset": {
"type": "activities_analytics",
"query_json": { // This query returns the number of active users per channel
"dimensions": [
{"name": "channel_id"}
],
"metrics": [
{
"expression": "users"
}
]
}
}"dataset": {
"type": "to-list",
"sources": [
{
"type": "OTQL",
"query_id": "666",
"series_title": "Female"
},
{
"type": "OTQL",
"query_id": "777",
"series_title": "Male"
}
]
}

// key-value dataset built with an OTQL query
"dataset": {
"type": "OTQL",
"query_id": 666 // SELECT {cat1 @map{cat2 @map{cat3 @map}}} FROM UserProfile
}
// key-value dataset built with an activities analytics query
"dataset": {
"type": "activities_analytics",
"query_json": { // Number of active users per day per channel
"dimensions": [
{"name": "date_yyyymmdd"}
{"name": "channel_id"}
],
"metrics": [
{
"expression": "users"
}
]
}
}"dataset": {
"type": "join",
"sources": [
{
"type": "OTQL",
"query_id": 777, // Select {interests @map} FROM UserPoint WHERE ...
"series_title": "Group 1"
},
{
"type": "OTQL",
"query_id": 666, // Select {interests @map} FROM UserPoint WHERE...
"series_title": "Group 2"
}
]
}"dataset": {
"type": "transformation-name",
"sources": [
{
"type": "transformation-name",
"sources": [
{
// OTQL data source
"type": "OTQL",
// ID of the OTQL query to call
"query_id": Int,
// Optional. Title of the series for tooltips and legends
"series_title": String,
// Optional. Datamart on which to run the query.
// Defaults to current datamart
"datamart_id": Int,
// Optional. To adapt the query to the current scope
// for example by adding current segment's query
// when dashboard is executed on a segment
// Defaults to TRUE
// COMING SOON
"adapt_to_scope": Boolean
// Optional. To run the query in a specific precision
// To be used when charts take too long to load and
// a lower precision is accepted
// Defaults to FULL_PRECISION
"precision": "FULL_PRECISION" | "LOWER_PRECISION" | "MEDIUM_PRECISION"
}
]
},
{
"type": "activities_analytics",
// JSON representation of the activities analytics query
"query_json": Object,
// Optional. Title of the series for tooltips and legends
"series_title": String,
// Optional. Datamart on which to run the query.
// Defaults to current datamart
"datamart_id": Int,
// Optional. To adapt the query to the current scope
// for example by only selecting activities of users
// that were in the segment while doing it
// when dashboard is executed on a segment
// Defaults to TRUE
// COMING SOON
"adapt_to_scope": Boolean
},
{
"type": "collection_volumes",
// JSON representation of the activities analytics query
"query_json": Object,
// Optional. Title of the series for tooltips and legends
"series_title": String
},
{
"type": "resources_usage",
// JSON representation of the activities analytics query
"query_json": Object,
// Optional. Title of the series for tooltips and legends
"series_title": String
},
{
"type": "data_ingestion",
// JSON representation of the activities analytics query
"query_json": Object,
// Optional. Title of the series for tooltips and legends
"series_title": String
},
{
"type": "data_file",
// URI of the JSON data file containing data
// Format "mics://data_file/tenants/1426/dashboard-1.json"
"uri": String,
// Path of the property in the JSON that should be used as dataset
// This allows you to have multiple datasets in the same JSON file
// Should use the JSONPath syntax. See https://jsonpath.com/
// For example, "$[0].components[1].component.data"
"JSON_path": String,
// Optional. Title of the series for tooltips and legends
"series_title": String
}
]
}"dataset": {
"type": "join",
"sources": [
{
"type": "OTQL",
"query_id": 777, // Select {interests @map} FROM UserPoint WHERE ...
"series_title": "Segment"
},
{
"type": "OTQL",
"query_id": 666, // Select {interests @map} FROM UserPoint WHERE...
"series_title": "Datamart",
"adapt_to_scope": false
}
]
}

mics.addProperty("$user_account_id", "<USER_ACCOUNT_ID>");
mics.addProperty("$comp_token", "<COMPARTMENT_TOKEN>" );<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"mics");
mics.init("<SITE_TOKEN>");
// Enables client-side feeds
mics.call("syncFeeds");
/* CUSTOMIZE THE TAG CALL BELOW */
// remove next line to customize what you track
mics.pushDefault();
</script>

<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"umbrella_corp");
umbrella_corp.init("<SITE_TOKEN>");
// Enables client-side feeds
umbrella_corp.call("syncFeeds");
/* CUSTOMIZE THE TAG CALL BELOW */
// remove next line to customize what you track
umbrella_corp.pushDefault();
</script>

<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"umbrella_corp");
umbrella_corp.init({ mode: "VISIT", site_token: "<SITE_TOKEN>", domain_name: "<YOUR_DOMAIN_NAME>" });
// Enables client-side feeds
umbrella_corp.call("syncFeeds");
/* CUSTOMIZE THE TAG CALL BELOW */
// remove next line to customize what you track
umbrella_corp.pushDefault();
</script>

<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"mics");
mics.init({mode: "VISIT", site_token: "<SITE_TOKEN>"});
mics.call("syncFeeds");
$(".your_button").on("click", function () {
mics.push("click button", {
button_type: "button1"
// ...
});
});
</script>

mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"account 1",
$compartment_token:"token1"
}
);
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"account 2",
$compartment_token:"token2"
}
);
mics.addIdentifier(
"USER_EMAIL",
{
$email_hash:"email hash",
$email:"email address"
}
);
mics.addIdentifier(
"USER_AGENT",
{
$user_agent_id:"user agent id"
}
);

mics.pushDefault();

mics.push("view my form", {
"form subject": "auto trial"
});

// Add the user account id if available
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"<USER_ACCOUNT_ID>",
$compartment_token:"<COMPARTMENT_TOKEN>"
}
);
// Push the event
mics.push("$home_view");// Add the user account id if available
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"<USER_ACCOUNT_ID>",
$compartment_token:"<COMPARTMENT_TOKEN>"
}
);
// Push the event
mics.push("$item_list_view", {
"$items": [
{"$id": "78798978"},
{"$id": "444444"},
{"$id": "78808900"}
]
});

// Add the user account id if available
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"<USER_ACCOUNT_ID>",
$compartment_token:"<COMPARTMENT_TOKEN>"
}
);
// Push the event
mics.push("$item_view", {
"$items": [
{ "$id": "89999999" }
]
});

// Add the user account id if available
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"<USER_ACCOUNT_ID>",
$compartment_token:"<COMPARTMENT_TOKEN>"
}
);
// Push the event
mics.push("$basket_view", {
"$items" : [
{"$id" : "78794", "$price" : 10.8, "$qty" : 1 },
{"$id" : "78677", "$price" : 56.99, "$qty" : 1 }
],
"$currency" : "EUR"
});

// Add the user account id if available
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"<USER_ACCOUNT_ID>",
$compartment_token:"<COMPARTMENT_TOKEN>"
}
);
// Push the event
mics.push("$transaction_confirmed", {
"$items" : [
{"$id" : "78794", "$price" : 10.8, "$qty" : 1 },
{"$id" : "78677", "$price" : 56.99, "$qty" : 1 }
],
"$transaction_id" : "transact-XYZ",
"$currency" : "EUR"
});

// Add the user account id if available
mics.addIdentifier(
"USER_ACCOUNT",
{
$user_account_id:"<USER_ACCOUNT_ID>",
$compartment_token:"<COMPARTMENT_TOKEN>"
}
);
// Push the profile
mics.push("$set_user_profile_properties", {
gender: "male",
// add any other properties you wish
});

umbrella_corp.push(
'$set_user_profile_properties',
{
'gender': 'female'
}
);

umbrella_corp.push(
'$set_user_profile_properties',
{
'$set_user_profile_comp_token': 'my_compartment_token',
'gender': 'female'
}
);

umbrella_corp.push(
'$set_user_profile_properties',
{
'$set_user_profile_comp_token': 'my_compartment_token',
'$set_user_profile_user_account_id': '456',
'gender': 'female'
}
);

<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"mics");
mics.init("<SITE_TOKEN>")
mics.push("hit", {})
</script>

<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"mics");
function micsGetAllProperties() {
var data={};
if (Object.keys(dataLayer).length > 0) {
dataLayer.forEach((item, index) => {
Object.keys(item).forEach( key => {
if (isNaN(key) && key!="event" && !key.startsWith("gtm")) {
data[key] = item[key];
}
})
})
}
return data;
};
mics.init("<SITE_TOKEN>")
mics.push("hit", { "data": micsGetAllProperties()})
</script>
<script type="text/javascript">
/* YOU SHOULD NOT EDIT THIS PART */
!function(t,e,a){"use strict";var i=t.scimhtiraidem||{};function s(t){var e=i[a]||{};i[a]=e,e[t]||(e[t]=function(){i._queue[a].push({method:t,args:Array.prototype.slice.apply(arguments)})})}t.googletag=t.googletag||{},t.googletag.cmd=t.googletag.cmd||[],t.googletag.cmd.push(function(){var e=t.localStorage.getItem("mics_sgmts"),a=JSON.parse(e),i=a||{};Object.keys(i).forEach(function(e){t.googletag.pubads().setTargeting("mics_"+e,i[e].map(String))})});var r="init call config push pushDefault addIdentifier addProperties addProperty onFinish onStart _reset".split(" ");i._queue=i._queue||{},i._names=i._names||[],i._names.push(a),i._queue[a]=i._queue[a]||[],i._startTime=(new Date).getTime(),i._snippetVersion="2.0";for(var o=0;o<r.length;o++)s(r[o]);t.scimhtiraidem=i,t[a]=i[a];var n=e.createElement("script");n.setAttribute("type","text/javascript"),n.setAttribute("src","https://static.mediarithmics.com/tag/2/tag.min.js"),n.setAttribute("async","true"),e.getElementsByTagName("script")[0].parentNode.appendChild(n)}(window,document,"mics");
mics.init("<SITE_TOKEN>")
mics.push("hit", { "data": tc_vars})
</script>
https://events.mediarithmics.com/v1/touches/pixel? \
$ev=$ad_view& \
$dat_token=<DATAMART_TOKEN>&
$catn=<CAMPAIGN_TECHNICAL_NAME>&
$scatn=<AD_GROUP_TECHNICAL_NAME>&
$crtn=<CREATIVE_TECHNICAL_NAME>&
$cb=<CACHEBUSTER>&
gdpr=<GDPR>&
gdpr_consent=<GDPR_CONSENT_184>
... any custom property
https://events.mediarithmics.com/v1/touches/click?
$ev=$ad_click&
$dat_token=<DATAMART_TOKEN>&
$catn=<CAMPAIGN_TECHNICAL_NAME>&
$scatn=<AD_GROUP_TECHNICAL_NAME>&
$crtn=<CREATIVE_TECHNICAL_NAME>&
$cb=<CACHEBUSTER>&
$redirect=<CLICK_URL>&
gdpr=<GDPR>&
gdpr_consent=<GDPR_CONSENT_184>
... any custom property
https://events.mediarithmics.com/v1/touches/pixel?
$ev=$ad_view&
$dat_token=<DATAMART_TOKEN>&
$catn=${CAMPAIGN_ID}&
$scatn=${INSERTION_ORDER_ID}&
$crtn=${CREATIVE_ID}&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=${CACHEBUSTER}
https://events.mediarithmics.com/v1/touches/click?
$ev=$ad_click&
$dat_token=<DATAMART_TOKEN>&
$catn=${CAMPAIGN_ID}&
$scatn=${INSERTION_ORDER_ID}&
$crtn=${CREATIVE_ID}&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=${CACHEBUSTER}&
$redirect=${CLICK_URL_ENC}
https://events.mediarithmics.com/v1/touches/pixel?
$ev=$ad_view&
$dat_token=<DATAMART_TOKEN>&
$catn=%ebuy!&
$scatn=%eaid!&
$crtn=%ecid!&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=%n
https://events.mediarithmics.com/v1/touches/click?
$ev=$ad_click&
$dat_token=<DATAMART_TOKEN>&
$catn=%ebuy!&
$scatn=%eaid!&
$crtn=%ecid!&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=%n&
$redirect=<CLICK_URL>
https://events.mediarithmics.com/v1/touches/pixel?
$ev=$ad_view&
$dat_token=<DATAMART_TOKEN>&
$catn=%ebuy!&
$scatn=%eaid!&
$crtn=%ecid!&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=%%CACHEBUSTER%%
https://events.mediarithmics.com/v1/touches/click?
$ev=$ad_click&
$dat_token=<DATAMART_TOKEN>&
$catn=%ebuy!&
$scatn=%eaid!&
$crtn=%ecid!&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=%%CACHEBUSTER%%&
$redirect=<CLICK_URL>
https://events.mediarithmics.com/v1/touches/pixel?
$ev=$ad_view&
$dat_token=<DATAMART_TOKEN>&
$catn=${CP_CODE}&
$scatn=${CPG_CODE}&
$crtn=${CREATIVE_CODE}&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$cb=${CACHEBUSTER}
https://events.mediarithmics.com/v1/touches/click?
$ev=$ad_click&
$dat_token=<DATAMART_TOKEN>&
$catn=${CP_CODE}&
$scatn=${CPG_CODE}&
$crtn=${CREATIVE_CODE}&
$cb=${CACHEBUSTER}&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$redirect=${CLICK_URL_ENC}
https://events.mediarithmics.com/v1/touches/pixel?
$ev=$ad_view&
$dat_token=<DATAMART_TOKEN>&
$catn=%%TTD_CAMPAIGNID%%&
$scatn=%%TTD_ADGROUPID%%&
$crtn=%%TTD_CREATIVEID%%&
$cb=%%TTD_CACHEBUSTER%%&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}
https://events.mediarithmics.com/v1/touches/click?
$ev=$ad_click&
$dat_token=<DATAMART_TOKEN>&
$catn=%%TTD_CAMPAIGNID%%&
$scatn=%%TTD_ADGROUPID%%&
$crtn=%%TTD_CREATIVEID%%&
$cb=%%TTD_CACHEBUSTER%%&
gdpr=${GDPR}&
gdpr_consent=${GDPR_CONSENT_184}&
$redirect=%%TTD_CLK_ESC%%...
domain=%%TTD_SITE%%&
device=%%TTD_DEVICETYPE%%&
...
https://events.mediarithmics.com/v1/touches/pixel? \
$ev=$ad_view& \
$dat_token=<DATAMART_TOKEN>&
$catn=<CAMPAIGN_TECHNICAL_NAME>&
$scatn=<AD_GROUP_TECHNICAL_NAME>&
$crtn=<CREATIVE_TECHNICAL_NAME>&
gdpr=<GDPR>&
gdpr_consent=<GDPR_CONSENT>&
$cb=<CACHEBUSTER>&
$uids=jso-[{"$tpe":"AG","$agid":"vec:1234"}]&
... any custom property
UserPoint
|
|---------UserDevicePoint--------------------------DeviceInfo: PC - CHROME BROWSER
| |---------UserDeviceTechnicalId: 1P cookie
| |---------UserDeviceTechnicalId: Network ID
| |---------UserDeviceTechnicalId: 3P cookie
|
|---------UserDevicePoint--------------------------DeviceInfo: ANDROID TABLET
| |---------UserDeviceTechnicalId: Mobile advertising id
| |---------UserDeviceTechnicalId: Mobile vendor id
mob:ios:raw:12345654-ABCD-1234-A1B2-123456789876
mob:and:raw:12345678-abcd-1234-a1b2-123456789876
UserPoint
|
|---------UserAgent: vec:1111--------------------------DeviceInfo: PC - CHROME BROWSER
|
|---------UserAgent: vec:2222--------------------------DeviceInfo: ANDROID TABLET
// Browser based agent
{
"vector_id": "vec:12345654321",
"device": {
"form_factor": "SMARTPHONE",
"os_family": "IOS",
"browser_family": "SAFARI",
"browser_version": null,
"brand": null,
"model": null,
"os_version": null,
"carrier": null,
"raw_value": null,
"agent_type": "WEB_BROWSER"
},
"creation_ts": 1591712194234,
"mappings": [
{
"user_agent_id": "tech:goo:Cazrazrkazeeza-azeree9-azezrze",
"realm_name": "GOOGLE_OPERATOR"
},
{
"user_agent_id": "tech:apx:12345654321654321",
"realm_name": "APP_NEXUS_OPERATOR"
}
]
}
// Mobile application agent
{
"vector_id": "vec:12345654321",
"device": {
"form_factor": "OTHER",
"os_family": "OTHER",
"browser_family": null,
"browser_version": null,
"brand": null,
"model": null,
"os_version": null,
"carrier": null,
"raw_value": null,
"agent_type": null
},
"creation_ts": 1573405364473,
"mappings": [
{
// IDFA user agent id with raw encoding: mob:ios:raw:6d92078a-8246-4ba4-ae5b-76104861e7dc
// IDFA user agent id with SHA1 encoding: mob:ios:sha1:d520a80c026be39edeb9c6e3f37c01f2da5f5e97
// AAID user agent id with raw encoding: mob:and:raw:97987bca-ae59-4c7d-94ba-ee4f19ab8c21
// AAID user agent id with MD5 encoding: mob:and:md5:ba06c008973b8a1bff6e087c6149227f
"user_agent_id": "mob:ios:raw:12345654-8246-1234-ae5b-123456454654",
}
]
}
USAGE
$ mics-cli schema:fetch DATAMARTID [SCHEMAID]
ARGUMENTS
DATAMARTID the ID of the datamart
SCHEMAID [default: LIVE] the ID of the schema, or LIVE to get the live schema
OPTIONS
--stdout output the content of the file instead of saving it in a file
{
"status": "ok",
"data": [
{
"id": "1266",
"datamart_id": "1509",
"status": "ARCHIVED",
"creation_date": 1602860201358,
"last_modification_date": 1602860221562,
"publication_date": 1602860275189,
"suspension_date": 1602860362588
},
{
"id": "1281",
"datamart_id": "1509",
"status": "LIVE",
"creation_date": 1603198414771,
"last_modification_date": 1603198415117,
"publication_date": 1603198415733,
"suspension_date": null
},
{
"id": "1385",
"datamart_id": "1509",
"status": "DRAFT",
"creation_date": 1609888013947,
"last_modification_date": 1610443926552,
"publication_date": null,
"suspension_date": null
}
],
"count": 3,
"total": 3,
"first_result": 0,
"max_result": 2147483647,
"max_results": 2147483647
}
{
"status": "ok",
"data": {
"id": "1266",
"datamart_id": "1509",
"status": "ARCHIVED",
"creation_date": 1602860201358,
"last_modification_date": 1602860221562,
"publication_date": 1602860275189,
"suspension_date": 1602860362588
}
}
##
type UserPoint @TreeIndexRoot(index:"USER_INDEX") {
events:[ActivityEvent!]!
creation_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
id:ID!
}
##
type ActivityEvent {
referrer:String @TreeIndex(index:"USER_INDEX") @Property(path:"$properties.$referrer")
url:String @TreeIndex(index:"USER_INDEX") @Property(path:"$properties.$url")
date:Date! @TreeIndex(index:"USER_INDEX") @Function(params:["ts"], name:"ISODate")
nature:String @Property(path:"$event_name") @TreeIndex(index:"USER_INDEX")
ts:Timestamp!
id:ID!
}
{
"status": "ok",
"data": {
"id": "1395",
"datamart_id": "1509",
"status": "DRAFT",
"creation_date": 1610449867207,
"last_modification_date": 1610449867207,
"publication_date": null,
"suspension_date": null
}
}
{
"status": "error",
"error": "Impossible to create a new schema, there is already one draft schema",
"error_code": "CONSTRAINT_VIOLATION_EXCEPTION",
"error_id": "690596a9-b0e0-43b3-88e8-b90d08b98029"
}
{
"status": "ok",
"data": {
"id": "1395",
"datamart_id": "1509",
"status": "DRAFT",
"creation_date": 1610449867207,
"last_modification_date": 1610449867207,
"publication_date": null,
"suspension_date": null
}
}
{
"status": "error",
"error": "Impossible to create a new schema, there is already one draft schema",
"error_code": "CONSTRAINT_VIOLATION_EXCEPTION",
"error_id": "690596a9-b0e0-43b3-88e8-b90d08b98029"
}
{
"status": "ok"
}
{
"status": "ok",
"data": {
"datamart_id": "1509",
"schema_id": "1266",
"tree_index_operations": [
{
"datamart_id": "1509",
"index_selection_id": "2121",
"index_name": "USER_INDEX",
"init_strategy": "FORCE_NO_BUILD",
"driver_version_major_number": 1,
"driver_version_minor_number": 2,
"current_index_id": "574",
"current_index_size": "SMALL",
"new_index": false,
"new_index_size": "SMALL",
"init_job": null,
"error_code": null,
"error_message": null
}
],
"schema_errors": []
}
}
{
"status": "error",
"error": "2 error(s) found when validating schema : Type 'UserPoint' which is root of tree index 'USER_INDEX' requires a scalar field named 'creation_ts' to be annotated with '@TreeIndex(index:\"USER_INDEX\")' and to be typed 'Timestamp!', Type 'UserAccount' which is root of tree index 'USER_INDEX' requires a scalar field named 'compartment_id' to be annotated with '@TreeIndex(index:\"USER_INDEX\")' and to be typed 'String!'",
"error_code": "BAD_REQUEST_DATA",
"error_id": "6a52cea9-6de4-40f5-972e-a8480c268d19"
}
{
"status": "ok",
"data": {
"datamart_id": "1509",
"schema_id": "1395",
"tree_indices": [
{
"index_name": "USER_INDEX",
"new_index": false,
"index_id": "574",
"index_size": "SMALL",
"init_strategy": "FORCE_NO_BUILD",
"driver_version_major_number": 1,
"driver_version_minor_number": 2,
"init_job": null
}
]
}
}
{
"status": "error",
"error": "Impossible to publish schema, validate the schema before a new publication",
"error_code": "BAD_REQUEST_DATA",
"error_id": "76e70020-02cb-40ec-8109-adf204aa0a7b"
}
SELECT @count{} FROM UserPoint WHERE
events {
nature = "$transaction_confirmed" and
date >= $date
and products {brand in $brand and name in $name}
}
$list_item_view
To learn about predefined events, see Predefined event names
$scatn
String
Sub-campaign technical name
$crtn
String
Creative technical name
$cb
String
The cache buster parameter. It should contain a random string. optional
$cuid
String
User account identifier of the user
$uaid
String
Mobile identifier of the user to identify the User agent
$email_hash
String
Email Hash identifier of the user
$comp_token
String
Compartment token (not the ID)
$uids
JSON as string (Optional)
The list of user identifiers of the user.
gdpr
Number
TCF v2.2 parameter indicating whether GDPR applies (values: 1 or 0)
gdpr_consent
String
TCF v2.2 parameter containing the encoded consent string
any custom property name
Any Type
Any custom property. optional
$catn
String
Campaign technical name
$scatn
String
Sub-campaign technical name
$crtn
String
Creative technical name
$cb
String
The cache buster parameter. It should contain a random string. optional
$cuid
String
User account ID identifier of the user
$email_hash
String
Email Hash identifier of the user
$comp_token
String
Compartment token (not the ID)
$uids
JSON as string (Optional)
The list of user identifiers of the user.
gdpr
Number
TCF v2.2 parameter indicating whether GDPR applies (values: 1 or 0)
gdpr_consent
String
TCF v2.2 parameter containing the encoded consent string
any custom property name
Any Type
Any custom property. optional
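To illustrate how these parameters fit together, here is a small JavaScript sketch that assembles a click touch URL. The endpoint and parameter names come from this page; the datamart token, technical names, and landing page URL are placeholder values to replace with your own.

```javascript
// Sketch: building a click touch URL from the documented parameters.
// <DATAMART_TOKEN> and <GDPR_CONSENT_STRING> are placeholders; the
// campaign / ad group / creative technical names are illustrative.
const params = {
  "$ev": "$ad_click",
  "$dat_token": "<DATAMART_TOKEN>",
  "$catn": "my_campaign",      // campaign technical name
  "$scatn": "my_ad_group",     // sub-campaign technical name
  "$crtn": "my_creative",      // creative technical name
  "$cb": String(Math.floor(Math.random() * 1e12)), // cache buster: a random string
  "gdpr": "1",                 // 1 if GDPR applies, 0 otherwise
  "gdpr_consent": "<GDPR_CONSENT_STRING>",
  "$redirect": encodeURIComponent("https://www.example.com/landing")
};

const query = Object.entries(params)
  .map(([key, value]) => key + "=" + value)
  .join("&");

const url = "https://events.mediarithmics.com/v1/touches/click?" + query;
console.log(url);
```

The same approach works for the pixel endpoint by swapping /touches/click for /touches/pixel, using $ev=$ad_view, and dropping $redirect.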
CUSTOM_DEVICE_ID
If you want to use your own device identifier (for instance: a 1P cookie that you generate), you can create it under this type.
MUM_ID
"mediarithmics User Mapping Identifier"
This type and the single registry it contains are dedicated to hosting the mediarithmics 3P cookie (vector_id) and references to partners' 3P cookies, until their deprecation.
last_activity_ts
Timestamp
Last time an event was ingested from this device. ⚠️This property is not updated at the moment.
device
Object of type DeviceInfo
Description of the device through a set of normalized properties
technical_identifiers
List of objects
List of available identifiers for the device
browser_family
Enum BrowserFamily
If the device is a browser.
Possible values: OTHER, CHROME, IE, FIREFOX, SAFARI, OPERA, STOCK_ANDROID, BOT, EMAIL_CLIENT, MICROSOFT_EDGE.
browser_version
String
The version of the browser.
form_factor
Enum FormFactor
Indicates the format of the device. Possible values: PERSONAL_COMPUTER, SMART_TV, GAME_CONSOLE, SMARTPHONE, TABLET, WEARABLE_COMPUTER, OTHER.
os_family
Enum OperatingSystemFamily
The operating system family of the device. Possible values: WINDOWS, MAC_OS, LINUX, ANDROID, IOS, OTHER.
os_version
String
The version of the operating system (ex.: "macOS 10.15 Catalina")
carrier
String
The service provider that ensures connectivity of the device.
creation_ts
Timestamp
Timestamp at which the document was created
last_activity_ts
Timestamp
Last time an event was ingested with this identifier. ⚠️This property is not updated at the moment.
expiration_ts
Timestamp
Timestamp at which this document will expire. ℹ️For the moment, technical IDs are stored for a period of 1 year after their creation
mappings
Array
Additional identifiers called the device mappings. They are cookie-based identifiers or mobile application identifiers associated with the agent.

// With standard directives
SELECT { id } FROM UserPoint
WHERE activity_events @ScoreSum(min : 100) {
basket {
items @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT" AND date >= "now-3M/M"
}
}
}
// With a computed field "IT_amount_3months"
SELECT { id } FROM UserPoint WHERE IT_amount_3months >= 100
name*
string
The name of the folder
name*
string
The name of the folder
name
string
The name of the audience feature
folder_id
string
The ID of the folder where the audience feature is stored
name
string
The name of the audience feature
folder_id
string
The ID of the folder where the audience feature is stored
datamart_id
integer
The ID of the datamart
audience_feature_folders_id
integer
The ID of the folder
datamart_id
integer
The ID of the datamart
datamart_id*
integer
The ID of the datamart
children_ids
array
IDs of the folder's children
audience_features_ids
array
IDs of audience features attached to the folder
parent_id
string
The ID of the folder's parent
datamart_id
string
The ID of the datamart
datamart_id*
integer
The ID of the datamart
audience_feature_folders_id*
integer
The ID of the audience feature folder
children_ids
array
IDs of the folder's children
audience_features_ids
array
IDs of audience features attached to the folder
parent_id
string
The ID of the folder's parent
datamart_id
string
The ID of the datamart
datamart_id
integer
The ID of the datamart
audience_feature_id
integer
The ID of the audience feature
datamart_id
integer
The ID of the datamart
datamart_id
integer
The ID of the datamart
object_tree_expression
string
The WHERE statement of the query associated with the audience feature
description
string
The description of your audience feature
datamart_id
string
The ID of the datamart
addressable_object
string
The SELECT statement of the query associated with the audience feature. It must always be set to UserPoint.
audience_feature_id
integer
The ID of the audience feature to edit
datamart_id
integer
The ID of the datamart
object_tree_expression
string
The WHERE statement of the query associated with the audience feature
description
string
The description of your audience feature
datamart_id
string
The ID of the datamart
addressable_object
string
The SELECT statement of the query associated with the audience feature. It must always be set to UserPoint.





{
"first_result": 0,
"count": 0,
"max_results": 0,
"status": "ok",
"data": [
{
"id": "string",
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
],
"total": 0
}
{
"status": "ok",
"data": {
"id": "string",
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
}
// Create a folder payload
{
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
{
"status": "ok",
"data": {
"id": "string",
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
}
// Editing a folder payload
{
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
{
"first_result": 0,
"total": 0,
"count": 0,
"data": [
{
"object_tree_expression": "string",
"description": "string",
"id": "string",
"variables": [
{
"field_name": "string",
"data_type": "string",
"reference_model_type": "string",
"type": "string",
"parameter_name": "string",
"path": [
"string"
],
"reference_type": "string",
"directive": "string",
"container_type": "string"
}
],
"token": "string",
"datamart_id": "string",
"addressable_object": "string",
"name": "string",
"folder_id": "string",
"creation_date": 0
}
],
"max_results": 0,
"status": "ok"
}
{
"status": "ok",
"data": {
"object_tree_expression": "string",
"description": "string",
"id": "string",
"variables": [
{
"field_name": "string",
"data_type": "string",
"reference_model_type": "string",
"type": "string",
"parameter_name": "string",
"path": [
"string"
],
"reference_type": "string",
"directive": "string",
"container_type": "string"
}
],
"token": "string",
"datamart_id": "string",
"addressable_object": "string",
"name": "string",
"folder_id": "string",
"creation_date": 0
}
}
{
"status": "ok",
"data": {
"object_tree_expression": "string",
"description": "string",
"id": "string",
"variables": [
{
"field_name": "string",
"data_type": "string",
"reference_model_type": "string",
"type": "string",
"parameter_name": "string",
"path": [
"string"
],
"reference_type": "string",
"directive": "string",
"container_type": "string"
}
],
"token": "string",
"datamart_id": "string",
"addressable_object": "string",
"name": "string",
"folder_id": "string",
"creation_date": 0
}
}
// Creating an audience feature payload
{
"object_tree_expression": "string",
"description": "string",
"datamart_id": "string",
"addressable_object": "string",
"name": "string",
"folder_id": "string"
}
{
"status": "ok",
"data": {
"object_tree_expression": "string",
"description": "string",
"id": "string",
"variables": [
{
"field_name": "string",
"data_type": "string",
"reference_model_type": "string",
"type": "string",
"parameter_name": "string",
"path": [
"string"
],
"reference_type": "string",
"directive": "string",
"container_type": "string"
}
],
"token": "string",
"datamart_id": "string",
"addressable_object": "string",
"name": "string",
"folder_id": "string",
"creation_date": 0
}
}
// Editing an audience feature payload
{
"object_tree_expression": "string",
"description": "string",
"datamart_id": "string",
"addressable_object": "string",
"name": "string",
"folder_id": "string"
}
// The gender will be selectable by the user
SELECT @count{} FROM UserPoint
where profiles {gender in $gender}
// The date will be selectable by the user,
// but the nature will always be $transaction_confirmed
SELECT @count{} FROM UserPoint
where events { nature = "$transaction_confirmed" and date >= $date }
// User will only be able to select one gender
SELECT @count{} FROM UserPoint
where profiles {gender == $gender}
// User will be able to select multiple gender values
SELECT @count{} FROM UserPoint
where profiles {gender in $gender}
// User will be able to select the minimum number of transactions
SELECT @count{} FROM UserPoint
WHERE events@ScoreSum(min: $frequency) {
purchase in $user_purchase
}



You have access to three tools to segment your audience using queries:
Leverage audience features to build your queries in the standard segment builder (Audience > Builders > Standard). Once set up, this is the preferred solution for building queries quickly and visualising the segment in a dashboard before saving it.
Drag and drop fields from your schema into a visual OTQL query builder with the advanced segment builder (Audience > Builders > Advanced). It doesn't require any setup, but it does require knowledge of the schema, so it may not be the best option for casual users.
Build OTQL queries directly in Data Studio > Query tool. This requires a solid knowledge of your schema and OTQL.
You enable this feature when you set up at least:
One segment builder
One audience feature.
You can set up multiple segment builders to create templates once you identify common segment queries you use often.
Each segment builder has a list of default audience features that are automatically used in it.
You can create and edit segment builders through the UI by going to Settings > Datamart > Your datamart > Segment builders. You can also manage them by API.
GET https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_builders
GET https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_builders/{audience_builders_id}
POST https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_builders
You cannot create more than 20 standard segment builder instances per datamart.
PUT https://api.mediarithmics.com/v1/datamarts/{datamart_id}/audience_builders/{audience_builders_id}
This API helps you upload dashboards.
GET https://api.mediarithmics.com/v1/data_file/data?uri=mics://data_file/tenants/{organisation_id}/dashboards/{datamart_id}/AUDIENCE_BUILDER-{audience_builder_id}.json
PUT https://api.mediarithmics.com/v1/data_file/data?uri=mics://data_file/tenants/{organisation_id}/dashboards/{datamart_id}/AUDIENCE_BUILDER-{audience_builder_id}.json
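As a sketch, a dashboard upload with curl could look as follows. The URI follows the endpoint above; the Authorization header, content type, and local file name are assumptions to adapt to your setup.

```shell
# Sketch only: <API_TOKEN> is a placeholder, and the content type and
# local file name are assumptions, not confirmed by this page.
curl --request PUT 'https://api.mediarithmics.com/v1/data_file/data?uri=mics://data_file/tenants/{organisation_id}/dashboards/{datamart_id}/AUDIENCE_BUILDER-{audience_builder_id}.json' \
  --header 'Authorization: <API_TOKEN>' \
  --header 'Content-Type: application/octet-stream' \
  --data-binary '@./AUDIENCE_BUILDER-dashboard.json'
```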
To select audience features using final values, you should first import your final values with a CSV file. For more information about the search by final value feature, please read the search by final value feature guide.
Your CSV file should have:
The following format: 1 level minimum and 8 levels maximum, followed by final_value,
Example
A maximum of 100,000 lines, where each line matches your schema,
Final value fields of type String or [String].
POST https://api.mediarithmics.com/v1/datamarts/{datamart_id}/reference_table_job_executions
Example

datamart_id
integer
The ID of the datamart
datamart_id
integer
The ID of the datamart
audience_builders_id
integer
The ID of standard segment builder you want to get
datamart_id
integer
The ID of the datamart
datamart_id
string
The ID of the datamart
demographics_features_ids
array
Array of string: the IDs of audience features you want to link to your standard segment builder. These audience features will always be selected in the builder.
name
string
Name of the standard segment builder
audience_builders_id
integer
The ID of the standard segment builder to edit
datamart_id
integer
The ID of the datamart
datamart_id
string
The ID of the datamart
demographics_features_ids
string
Array of string: The IDs of audience features you want to link to your standard segment builder. These audience features will always be selected in the builder.
name
string
The name of the standard segment builder
organisation_id
integer
The ID of the organisation
datamart_id
integer
The ID of the datamart
audience_builder_id
integer
The ID of the standard segment builder on which you want to upload dashboards
organisation_id
integer
The ID of the organization
datamart_id
integer
The ID of the datamart
audience_builder_id
integer
The ID of the standard segment builder on which you want to upload dashboards
datamart_id
integer
The ID of the datamart
file
string
The name of the file you want to import. Ex: "@final_value_file.csv"
{
"first_result": 0,
"count": 0,
"max_results": 0,
"status": "ok",
"data": [
{
"id": "string",
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
],
"total": 0
}
{
"first_result": 0,
"count": 0,
"max_results": 0,
"status": "ok",
"data": [
{
"id": "string",
"children_ids": [
"string"
],
"audience_features_ids": [
"string"
],
"parent_id": "string",
"datamart_id": "string",
"name": "string"
}
],
"total": 0
}
// Create a standard segment builder payload
{
"datamart_id": "string",
"demographics_features_ids": [
"string"
],
"name": "string"
}
// Edit a standard segment builder payload
{
"datamart_id": "string",
"demographics_features_ids": [
"string"
],
"name": "string"
}
// Create a dashboard payload example
[
{
"id": "1",
"name": "Standard segment builder",
"type": "AUDIENCE_BUILDER",
"datamart_id": "xxxx",
"components": [
{
"layout": {
"h": 1,
"static": false,
"w": 6,
"x": 0,
"y": 0
},
"component": {
"id": 2,
"component_type": "COUNT",
"title": "User Profiles",
"query_id": "22252"
}
},
{
"layout": {
"h": 1,
"static": false,
"w": 6,
"x": 6,
"y": 0
},
"component": {
"id": 2,
"component_type": "COUNT",
"title": "User Cookies",
"query_id": "22264"
}
},
{
"layout": {
"h": 3,
"static": false,
"w": 12,
"x": 0,
"y": 1
},
"component": {
"id": 5,
"component_type": "MAP_BAR_CHART",
"title": "Genre",
"show_legend": true,
"query_id": "47031",
"sortKey": "A-Z",
"percentage": true,
"labels": {
"enable": true,
"filterValue": "",
"format": "{point.y}%"
},
"tooltip": {
"formatter": "{point.y}% ({point.count})"
}
}
},
{
"layout": {
"h": 3,
"static": false,
"w": 12,
"x": 0,
"y": 4
},
"component": {
"id": 5,
"component_type": "COUNT_BAR_CHART",
"labels_enabled": true,
"plot_labels": [
"Email",
"Print",
"Sms",
"Tel",
"Web",
"App"
],
"title": "Contactabilité",
"show_legend": false,
"query_ids": [
"47033",
"47034",
"47035",
"47036",
"47037",
"47038"
]
}
}
]
}
]
level1,level2, ... ,final_value
level1,level2,level3,level4,final_value
activities,channel_id,,,my channel id1
segments,creation_ts,,,123
...
{
"status": "ok",
"data": {
"parameters": null,
"result": null,
"error": null,
"id": "xxxxxx",
"status": "PENDING",
"creation_date": 1634134417792,
"start_date": null,
"duration": null,
"organisation_id": "xxxx",
"user_id": "xxxx",
"cancel_status": null,
"debug": null,
"is_retryable": false,
"num_tasks": null,
"completed_tasks": null,
"erroneous_tasks": null,
"retry_count": 0,
"permalink_uri": null,
"job_type": "REFERENCE_TABLE",
"import_mode": "MANUAL_FILE",
"import_type": null
}
}
curl -k --location --request POST 'https://api.mediarithmics.com/v1/datamarts/{datamart_id}/reference_table_job_executions' \
-H 'Content-Type: text/csv' \
-H 'Authorization: TOKEN' \
--data-binary '@./final_value_file.csv'





Bulk import gives you the ability to import data into the mediarithmics platform in bulk.
You can import:
Offline activities such as offline purchases and store visits
User segments such as email lists, cookie lists, user account lists, etc.
User profiles such as CRM data and scoring
User association such as CRM Onboarding
User dissociation
User suppression requests such as GDPR Suppression requests, and Opt-Out Management
You upload files associated with a document import definition:
Files represent the data.
Document imports represent what mediarithmics should do with the data.
If you need to track users in real-time, you should read
The two steps for bulk import are:
Create the document import definition to tell mediarithmics what you are importing
Upload files associated with the document import definition. Each uploaded file creates a new document import execution.
How do you choose between creating a new document import and adding a new file to an existing one? Our recommendation is to create a new document import each time you have a new set of files to upload. For example, if you upload CRM profiles every night, you should create a new "User profiles from CRM - " document import every night instead of just uploading new files to a unique "User profiles from CRM" document import.
Each line in the uploaded file is a command to execute. Depending on the document import type, you have different commands available.
When importing data, you need to properly add user identifiers. This will ensure your data is associated with the proper user points.
Only one identifier is allowed per line. For example, you shouldn't specify the user agent ID if the Email Hash is already used in a line.
However, you don't have to always use the same type of identifier in your document. For example, one line could use the user account ID while another uses the email hash.
Document imports define what you are about to upload in one or multiple files.
A document import object has the following properties:
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports
Response:
Here is a sample request using curl:
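A minimal sketch of such a request, with illustrative field values (replace the datamart ID and API token with your own):

```shell
# Sketch only: <API_TOKEN> is a placeholder and the field values
# (document type, name, priority) are illustrative.
curl --request POST 'https://api.mediarithmics.com/v1/datamarts/{datamart_id}/document_imports' \
  --header 'Authorization: <API_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "document_type": "USER_PROFILE",
    "mime_type": "APPLICATION_X_NDJSON",
    "name": "User profiles from CRM",
    "priority": "MEDIUM"
  }'
```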
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports
You can list all document imports for a datamart or search them with filters.
The query is paginated as described in .
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId
PUT https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId
DELETE https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId
Removes a document import you don't want to see anymore in the system.
A file upload creates an execution.
After creation, the execution is at the PENDING status. It goes into the RUNNING status when the import starts and into the SUCCEEDED status once the platform has correctly imported the file.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId/executions
You create an execution and upload a file with this endpoint.
See an example:
You retrieve metadata about the created execution, notably an id property you can use to track the execution.
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId/executions
You can list all executions for a document import and retrieve useful data like their status, execution time and error messages.
GET https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId/executions/:executionId
Gets a specific execution and retrieves useful data like its status, execution time and error messages.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId/executions/:executionId/action
Cancel a specific execution
The cancellation of an execution will only work if the status of this execution is "PENDING".
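A small guard sketch around the documented cancel action ({"action":"CANCEL"}); the post callable is a placeholder for the real HTTP request:

```python
import json

# Only PENDING executions can be cancelled, so check the status first.
def cancel_execution(execution, post):
    if execution["status"] != "PENDING":
        raise ValueError("only PENDING executions can be cancelled")
    # POST .../executions/:executionId/action with the documented body
    return post(json.dumps({"action": "CANCEL"}))

sent = []
cancel_execution({"status": "PENDING"}, sent.append)
print(sent[0])  # {"action": "CANCEL"}
```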
If you need to import files larger than 100 MB, you can split them before using the upload API and call it multiple times.
You can split massive files using the split shell command.
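If you prefer to split on byte size rather than line count, here is a sketch that cuts only on line boundaries so each chunk stays valid ndjson (the 100 MB limit is the one stated above; the sample lines are illustrative):

```python
# Split ndjson lines into byte-bounded chunks without breaking any line.
def split_ndjson(lines, max_bytes):
    chunks, current, size = [], [], 0
    for line in lines:
        encoded = line.encode("utf-8") + b"\n"
        if current and size + len(encoded) > max_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(line)
        size += len(encoded)
    if current:
        chunks.append(current)
    return chunks

lines = ['{"operation": "UPSERT", "user_account_id": "%d"}' % i
         for i in range(1000)]
chunks = split_ndjson(lines, max_bytes=10_000)
print(sum(len(c) for c in chunks))  # 1000: no line lost
```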
A schema is applied to a datamart and defines what mediarithmics should index and make available through queries.
Each schema is a GraphQL-like schema defining an object tree index that will allow you to run fast queries to search users.
This schema defines all available mediarithmics objects with the standard properties. When creating your own schema, you can start from this schema and add/remove properties based on your needs and the data you ingest into the platform.
The name of your import.
priority
Enum
LOW, MEDIUM or HIGH
use_processing_pipeline
Boolean
Use this parameter if the import should go through activity analyzers or session aggregation, for instance. Values are true or false. Default is false.
shuffle_lines
Boolean
Will shuffle the lines of the file for better performance. Values are true or false. Default is true.
field
type
description
document_type
Enum
The type of data you want to import. Should be USER_ACTIVITY, USER_SEGMENT, USER_PROFILE, USER_CHOICE, USER_IDENTIFIERS_DELETION, or USER_IDENTIFIERS_ASSOCIATION_DECLARATIONS.
mime_type
Enum
The format of the imported data: APPLICATION_X_NDJSON or TEXT_CSV. It should match the file format of the uploaded file, e.g. .ndjson or .csv. The csv format can be chosen only for USER_SEGMENT imports.
encoding
String
Encoding of the data that will be imported. Usually utf-8.
name
datamartId
integer
The ID of the datamart in which your data will be imported
data
object
The document import object you wish to create
datamartId
integer
The ID of the datamart
keywords
string
The keywords to match with document import names. It is case sensitive.
mime_type
string
Filter on a specific mime type. Supported values are APPLICATION_X_NDJSON or TEXT_CSV.
document_types
string
Filter on specific document types. Supported values are USER_PROFILE, USER_ACTIVITY or USER_SEGMENT. Multiple filters can be separated with commas. Examples: &document_types=USER_PROFILE or &document_types=USER_PROFILE,USER_ACTIVITY
order_by
string
Results are sorted by ID by default; you can specify &order_by=name to sort them by name.
datamartId
integer
The ID of the datamart
importId
integer
The ID of the document import
datamartId
integer
The ID of the datamart
importId
integer
The ID of the document import
data
object
The document import object to put
datamartId
integer
The ID of the datamart
importId
integer
The ID of the document import
datamartId*
string
The ID of the datamart
importId*
string
The ID of the document import
Content-Type*
string
Your upload configuration.
datamartId*
integer
The ID of the datamart
importId*
integer
The ID of the document import
datamartId*
integer
The ID of the datamart
importId*
integer
The ID of the document import
executionId*
integer
The ID of the execution (usually retrieved from "create execution" or "list executions" requests)
datamartId*
string
The ID of the datamart
importId*
string
The ID of the document import
executionId*
string
The ID of the execution (usually retrieved from "create execution" or "list executions" requests)
body*
json
Must be: {"action":"CANCEL"}
String
// Sample document import object
{
"document_type": "USER_ACTIVITY",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}

{
"status": "ok",
"data": {
"id": "36271",
"datafarm_key": "DF_KEY",
"datamart_id": "DATAMART_ID",
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "YOUR_DOCUMENT_IMPORT_NAME",
"priority": "MEDIUM",
"shuffle_lines" : true,
"use_processing_pipeline" : false
}
}

curl -X POST \
"https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports" \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_ACTIVITY",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'

{
"status": "ok",
"data": [
{
"id": "19538",
"datafarm_key": "DF_KEY",
"datamart_id": "DATAMART_ID",
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "December 2020 user profiles",
"priority": "MEDIUM",
"shuffle_lines" : true,
"use_processing_pipeline" : false
},
{
"id": "19552",
"datafarm_key": "DF_KEY",
"datamart_id": "DATAMART_ID",
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "January 2021 user profiles",
"priority": "MEDIUM",
"shuffle_lines" : true,
"use_processing_pipeline" : false
},
{
"id": "19553",
"datafarm_key": "DF_EU_2020_02",
"datamart_id": "1509",
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "February 2021 user profiles",
"priority": "MEDIUM",
"shuffle_lines" : true,
"use_processing_pipeline" : false
}
],
"count": 3,
"total": 3,
"first_result": 0,
"max_result": 50,
"max_results": 50
}

{
"status": "ok",
"data": {
"id": "36271",
"datafarm_key": "DF_KEY",
"datamart_id": "DATAMART_ID",
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "December 2020 user profiles",
"priority": "MEDIUM",
"shuffle_lines" : true,
"use_processing_pipeline" : false
}
}

{
"status": "ok",
"data": {
"id": "36271",
"datafarm_key": "DF_KEY",
"datamart_id": "DATAMART_ID",
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "YOUR_DOCUMENT_IMPORT_NAME",
"priority": "MEDIUM",
"shuffle_lines" : true,
"use_processing_pipeline" : false
}
}

{
"status": "ok",
"data": {
"parameters": null,
"result": null,
"error": null,
"id": "11597785",
"status": "PENDING",
"creation_date": 1609410143659,
"start_date": null,
"duration": null,
"organisation_id": "1426",
"user_id": null,
"cancel_status": null,
"debug": null,
"is_retryable": false,
"permalink_uri": "MTowOjA6NDI1MzAxMg==",
"num_tasks": null,
"completed_tasks": null,
"erroneous_tasks": null,
"retry_count": 0,
"job_type": "DOCUMENT_IMPORT",
"import_mode": "MANUAL_FILE",
"import_type": null
}
}

curl --location --request POST 'https://api.mediarithmics.com/v1/datamarts/:datamartId/document_imports/:importId/executions' \
--header 'Content-Type: application/x-ndjson' \
--header 'Authorization: api:TOKEN' \
--data-binary '@/Users/username/path/to/the/file.ndjson'

{
"status": "ok",
"data": [
{
"parameters": {
"datamart_id": 1609,
"document_import_id": 19718,
"mime_type": "APPLICATION_X_NDJSON",
"document_type": "USER_PROFILE",
"input_file_name": "requestBody9664967795462448677asRaw",
"file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody9664967795462448677asRaw-2020-12-31_10.22.23-KzgivDim3y.json",
"number_of_lines": 4,
"segment_id": null
},
"result": {
"total_success": 4,
"total_failure": 0,
"input_file_name": "requestBody9664967795462448677asRaw",
"input_file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody9664967795462448677asRaw-2020-12-31_10.22.23-KzgivDim3y.json",
"error_file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody9664967795462448677asRaw-2020-12-31_10.22.23-KzgivDim3y_errors.csv",
"possible_issue_on_identifiers": false,
"top_identifiers": {}
},
"error": null,
"id": "11597785",
"status": "SUCCEEDED",
"creation_date": 1609410143659,
"start_date": 1609410150976,
"duration": 3059,
"organisation_id": "1426",
"user_id": null,
"cancel_status": null,
"debug": null,
"is_retryable": false,
"permalink_uri": "MTowOjA6NDI1MzAxMg==",
"num_tasks": 4,
"completed_tasks": 4,
"erroneous_tasks": 0,
"retry_count": 0,
"job_type": "DOCUMENT_IMPORT",
"import_mode": "MANUAL_FILE",
"import_type": null,
"end_date": 1609410154035
},
{
"parameters": {
"datamart_id": 1609,
"document_import_id": 19718,
"mime_type": "APPLICATION_X_NDJSON",
"document_type": "USER_PROFILE",
"input_file_name": "requestBody17471990940413569967asRaw",
"file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody17471990940413569967asRaw-2020-10-19_09.54.45-JvP1ssxKSu.json",
"number_of_lines": 4,
"segment_id": null
},
"result": {
"total_success": 0,
"total_failure": 4,
"input_file_name": "requestBody17471990940413569967asRaw",
"input_file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody17471990940413569967asRaw-2020-10-19_09.54.45-JvP1ssxKSu.json",
"error_file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody17471990940413569967asRaw-2020-10-19_09.54.45-JvP1ssxKSu_errors.csv",
"possible_issue_on_identifiers": false,
"top_identifiers": {}
},
"error": {
"message": "0 success, 4 failures\nSaved errors:\nNo profile id found while upserting a user profile Error id = 9d5016ea-6b7b-4c64-bc74-60ba207e3bed.\nNo profile id found while upserting a user profile Error id = 99f8d9bb-4c94-49ea-8bb2-934bc6056cac.\nNo profile id found while upserting a user profile Error id = d1216b0e-619c-4d92-9098-cc5ae4ac8e16.\nNo profile id found while upserting a user profile Error id = a92d3258-163c-4b9d-949e-94f9006cd77d.\n"
},
"id": "11170897",
"status": "SUCCEEDED",
"creation_date": 1603101286198,
"start_date": 1603101317674,
"duration": 1062,
"organisation_id": "1426",
"user_id": null,
"cancel_status": null,
"debug": null,
"is_retryable": false,
"permalink_uri": "MTowOjA6MzgyNjEyNA==",
"num_tasks": 4,
"completed_tasks": 0,
"erroneous_tasks": 4,
"retry_count": 0,
"job_type": "DOCUMENT_IMPORT",
"import_mode": "MANUAL_FILE",
"import_type": null,
"end_date": 1603101318736
}
],
"count": 2,
"total": 2,
"first_result": 0,
"max_result": 50,
"max_results": 50
}

{
"status": "ok",
"data": {
"parameters": {
"datamart_id": 1609,
"document_import_id": 19718,
"mime_type": "APPLICATION_X_NDJSON",
"document_type": "USER_PROFILE",
"input_file_name": "requestBody9664967795462448677asRaw",
"file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody9664967795462448677asRaw-2020-12-31_10.22.23-KzgivDim3y.json",
"number_of_lines": 4,
"segment_id": null
},
"result": {
"total_success": 4,
"total_failure": 0,
"input_file_name": "requestBody9664967795462448677asRaw",
"input_file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody9664967795462448677asRaw-2020-12-31_10.22.23-KzgivDim3y.json",
"error_file_uri": "mics://data_file/tenants/1426/datamarts/1509/document_imports/19518/requestBody9664967795462448677asRaw-2020-12-31_10.22.23-KzgivDim3y_errors.csv",
"possible_issue_on_identifiers": false,
"top_identifiers": {}
},
"error": null,
"id": "11597785",
"status": "SUCCEEDED",
"creation_date": 1609410143659,
"start_date": 1609410150976,
"duration": 3059,
"organisation_id": "1426",
"user_id": null,
"cancel_status": null,
"debug": null,
"is_retryable": false,
"permalink_uri": "MTowOjA6NDI1MzAxMg==",
"num_tasks": 4,
"completed_tasks": 4,
"erroneous_tasks": 0,
"retry_count": 0,
"job_type": "DOCUMENT_IMPORT",
"import_mode": "MANUAL_FILE",
"import_type": null,
"end_date": 1609410154035
}
}

{
"status": "ok",
"data": {
"parameters": null,
"result": null,
"error": null,
"id": "22747195",
"status": "CANCELED",
"creation_date": 1646060596034,
"start_date": null,
"duration": null,
"organisation_id": "1581",
"user_id": null,
"cancel_status": "REQUESTED",
"debug": null,
"is_retryable": false,
"community_id": "1581",
"num_tasks": null,
"completed_tasks": null,
"erroneous_tasks": null,
"retry_count": 0,
"permalink_uri": null,
"job_type": "DOCUMENT_IMPORT",
"import_mode": "MANUAL_FILE",
"import_type": null
}
}

split -l <LINE_NUMBER> ./your/file/path

You can, of course, upload multiple user profiles at once. Note the uploaded data is in ndjson and not json: the different profiles are not separated by commas, but by a line separator \n.
Then, create an execution with your user profile import commands formatted in ndjson.
Each line in the uploaded file can have the following properties:
operation
Enum
Either UPSERT or DELETE
compartment_id
String (Optional)
The Compartment ID, acting as a user identifier in correlation with user_account_id
user_account_id
String (Optional)
The User Account ID, acting as an identifier in correlation with compartment_id
When importing profiles with identifiers, only one identifier is allowed per line. For example, you shouldn't specify the user agent ID if the Email Hash is already used in a line.
First, create the document import using the call below. You can also reuse a document import that was previously created.
Each user profile import you do will be a new execution of the created document import. Here is an example:
When doing an UPSERT, if you want to update existing profiles in your datamart, you should use the update_strategy property.
If you wish to perform targeted updates on existing profiles without overwriting the whole existing user profile object, you should use the PARTIAL_UPDATE or PARTIAL_DELETE values of the update_strategy property.
There are two main usages for these strategies:
Dealing with arrays of objects
If you're dealing with arrays of objects, these strategies work together with two directives that should be defined on the schema of the datamart you are working on. In this case, you should first update the schema in order to include the directives.
If you want to make targeted updates on an object that has an "id-like" field that can be used to identify the object, use @UpdateStrategyKey.
Mark the field with the directive inside the given object you would like to make updates on. The field that has the directive will serve to identify the given object based on the value of this field in the update request.
For examples of where the directive is needed, see use cases #1 to #5 below.
If you want to make targeted updates on an object that does not have a field that can be used to identify the object, you should use @UpdateValueObject.
Mark the object with the directive: send the payload with the new value of the object and it will override the previous value.
For examples of where the directive is needed, see use cases #6 and #7 below.
You should use only one of the directives in a given object.
Dealing with objects
When there are no arrays of objects involved, you can still use PARTIAL_UPDATE, but no directive is necessary. For examples, check use cases #8 to #10.
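As an illustration of PARTIAL_UPDATE on an array of objects keyed by an @UpdateStrategyKey field, here is a toy model (not the platform implementation), reusing the card_id key from the schema examples on this page:

```python
# Objects with a matching key are updated field-by-field; unknown keys
# are appended; a null value clears the field (see use cases #1 to #3).
def partial_update_array(existing, updates, key="card_id"):
    by_key = {item[key]: dict(item) for item in existing}
    for update in updates:
        target = by_key.setdefault(update[key], {})
        for field, value in update.items():
            if value is None:
                target.pop(field, None)  # null clears the field
            else:
                target[field] = value
    return list(by_key.values())

cards = [{"card_id": "abc", "benefits": 500, "last_visit_date": "2024-01-01"}]
updated = partial_update_array(cards, [{"card_id": "abc", "benefits": 100}])
print(updated[0]["benefits"])         # 100
print(updated[0]["last_visit_date"])  # 2024-01-01: untouched fields survive
```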
For a non-mandatory value, you can also set a given field to null (but not an object directly). For instance, in the previous example you could have done the following:
If you wish to delete a specific property value inside an object, you should use PARTIAL_UPDATE. Please refer to use case #3.
Schema-related constraints:
The update request must respect the datamart schema :
If a property is in the payload of the request but not declared in the schema, the whole request will fail. However, if a property is already present in the profile but not declared in the schema, it will not be overwritten by a partial update or delete. In fact, it will not be possible to update such a property unless using FORCE_REPLACE.
If the types of the properties in the payload of the request do not respect the schema, the whole request will fail.
Limitations :
If a property in the schema has a directive such as @Property(path: "[parent].property") (referencing [parent] in the path), this property cannot be updated with a partial update.
Updatable properties cannot be computed values such as computed fields, ML functions, or results of functions.
@UpdateValueObject cannot be used with @UpdateStrategyKey in the same object
When importing user profiles using UPSERT, if you wish to update existing profiles by completely overwriting them, you should use the FORCE_REPLACE value of the update_strategy property: the user profile will be completely replaced by the object passed in the user_profile property.
force_replace
Boolean (Optional)
Mandatory when the operation is UPSERT.
If true, then the User Profile will be completely replaced by the object passed in the user_profile field.
If false, the object passed in the user_profile field will be merged with the existing User Profile of the UserPoint.
merge_objects
Boolean (Optional)
Only considered if force_replace is false.
Manages the behavior when two objects share the same property.
If false (default value), the new object overrides the existing one.
If true, the new object is deeply merged into the existing one.
More details on merge_objects behavior :
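A toy model of the merge_objects flag (not the platform implementation): with merge_objects false the new object replaces the old one wholesale; with true it is merged deeply.

```python
# Deep-merge sketch: nested dicts are merged recursively when
# merge_objects is true, otherwise the new value simply wins.
def upsert(existing, new, merge_objects=False):
    result = dict(existing)
    for key, value in new.items():
        if (merge_objects and isinstance(value, dict)
                and isinstance(result.get(key), dict)):
            result[key] = upsert(result[key], value, merge_objects=True)
        else:
            result[key] = value
    return result

old = {"loyalty": {"points": 500, "tier": "gold"}}
new = {"loyalty": {"points": 600}}
print(upsert(old, new, merge_objects=False))  # tier is lost
print(upsert(old, new, merge_objects=True))   # tier is kept
```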
The number of referenced properties has an impact on query performance. It would be best only to have the properties you need to use when defining your schema. Don't just copy the default ones.
UserPoint is the root element of any mediarithmics schema, and only one index can be created. This may change in future releases to allow you to build different indexes.
The ! operator marks elements as mandatory. That means the element is expected not to be null.
If you add the ! operator to a field that happens to have null values, the entire object won't be indexed.
It is hard to ensure a field will always have a value in all the data you'll put into the platform, whatever the ingestion method. Therefore, we recommend not using this operator in your schema for fields other than the predefined ones.
This type is treated as a keyword string, but marks data that is not understandable for a user, as it is an identifier.
There are multiple native types you can use in your schema.
A best practice is to import objects with dates as Timestamp
To display the value as date and time when running queries or in exports, you can use the Date type.
You usually get data as Timestamp and generate the Date type from the Timestamp with the ISODate function. If not, then ensure you get data in the correct format. There is no implicit conversion between timestamps and dates.
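For instance, a timestamp in epoch milliseconds maps to the date/time view that ISODate derives from it; this Python sketch only illustrates the relationship (the platform function itself is not shown here):

```python
from datetime import datetime, timezone

# Timestamps are epoch milliseconds; a Date is an explicit, derived view.
def iso_date(timestamp_ms):
    return datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc) \
        .strftime("%Y-%m-%dT%H:%M:%S")

# The creation_date from the sample execution response on this page:
print(iso_date(1609410143659))  # 2020-12-31T10:22:23
```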
This directive marks the root element of an Object Tree Index. The index property marks the name of the Object Tree Index
It should always be USER_INDEX as multiple indexes are not currently supported.
This directive makes a field available in the WHERE clause and in Aggregation operations of your OTQL queries. Fields that don't have this directive can't be used in the WHERE clause but can still be retrieved in the SELECT clause.
Don't mark every field with this directive. Some fields, like first name, last name ... will never be used in WHERE clauses and would only make your index larger.
The @TreeIndex directive is mandatory for some default properties. They already have that directive in the default schema, and you shouldn't remove it, or your schema won't be validated.
The value of the index in @TreeIndex should always be USER_INDEX.
When registering a String in a Tree Index with the directive @TreeIndex, you can specify how the field should be indexed, depending on how you want to use it later.
Two modes are available, text and keyword.
This mode considers your value as a set of words (e.g. a text). For example, the value 'The QUICK brown fox JuMpS, over the Lazy doG.' will be considered as the list of:
the
quick
brown
fox
jumps
over
lazy
dog.
As you can see, some transformations were done before storing the data:
all the words were put in lowercase -> all string operators will be case insensitive on a field indexed with data_type: text
the original string was split, and the splitting characters were removed (here, the space and the comma)
The method used to split words is described in great detail here. The most common characters that trigger a split are (non-exhaustive list):
(space)
-
"
‘
,
;
?
!
/
Note that the following characters do not trigger a split (non-exhaustive list):
.
_
'
’
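A rough approximation of this text analysis in Python (the real analyzer is more sophisticated; the character class below is only the non-exhaustive list above):

```python
import re

# Lowercase, then split on space, -, ", ,, ;, ?, !, / and the left
# single quote; keep non-splitting characters such as . _ ' intact.
SPLIT_CHARS = r'[ \-",;?!/\u2018]+'

def analyze_text(value):
    return {token for token in re.split(SPLIT_CHARS, value.lower()) if token}

print(sorted(analyze_text("The QUICK brown fox JuMpS, over the Lazy doG.")))
# ['brown', 'dog.', 'fox', 'jumps', 'lazy', 'over', 'quick', 'the']
```

Note that 'dog.' keeps its trailing period, since the period is not a splitting character.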
The data_type: "text" mode should be used when you're working with:
Full sentences (ex. a Page Title)
URLs
List of keywords (separated by a splitting character as listed above)
similar text
Generally, this mode is used when you don't have great control over the value being collected in this field, and you want to do "broad" queries based on it.
This mode is used to consider your value as a single word. No transformation is done with the provided value. The data_type: "keyword" mode should be used when you're working with:
Single values
Ids passed as text (ex: UUIDs, productId, categoryId, etc.)
Every time that you already know the values that are passed in the field (e.g. when the field data is linked to a taxonomy)
etc.
Generally, this mode is used when you have great control over the value being collected in this field, and you want to do exact queries on it later by doing exact equality in queries.
By default, the path associated with each of your properties is the name of these properties. You can change this behavior with the @Property directive.
All the properties in the default schema already redefine their path. For example, the creation_ts property in the UserPoint object points to the $creation_ts property in the stored data. This mapping would theoretically require the @Property directive, but the default schema already does that work for you.
You can define multiple paths to get the data from. If the first path is empty, the second one will be used, and so on.
In this example, the user activity's channel ID is either the site ID or the app ID, depending on the user activity's context.
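A toy model of this multi-path fallback (the resolve helper and field names are illustrative): try each declared path in order and take the first non-empty value.

```python
# Resolve a property against an ordered list of candidate paths.
def resolve(obj, paths):
    for path in paths:
        value = obj
        for key in path.split("."):
            value = value.get(key) if isinstance(value, dict) else None
            if value is None:
                break
        if value is not None:
            return value
    return None

activity = {"$site_id": "42"}  # a web activity: site_id set, app_id absent
print(resolve(activity, ["$site_id", "$app_id"]))  # 42
```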
You can use the [parent] token to go up in the object tree when defining a path
This directive allows you to create custom types based on predefined types.
The @Function directive is used to declare a calculated field with a set of predefined functions.
This function creates a date from a timestamp.
In order to retrieve third party cookie mappings for a given device point, the ThirdPartyCookieMapping function can be used:
The function works on device points that have a device technical id of type MUM_ID attached to them. It translates the MUM_ID into a vector_id (mum:-1234 -> vec:1234) and retrieves attached partners' 3P cookies.
This function is only used on datamarts referencing the legacy type UserAgent.
For datamarts referencing the new type UserDevicePoint, we suggest using the previous function, ThirdPartyCookieMappings.
This function extracts device information for an agent identifier.
The UserAgentInfo class has the following properties:
When users create their queries using your schema, they usually remember some elements they search for but don't know their identifiers.
You can add the @ReferenceTable directive to fields storing channel, compartment and segment identifiers. That way, users will have an autocomplete with the element's name instead of its identifier when creating their queries.
This directive marks properties as usable in queries when creating Edge segments.
This directive marks properties as calculated from a Computed Field Function.
This directive is used in order to make targeted updates on objects with an ID property in a UserProfile
This directive is used in order to make targeted updates on objects that do not have ID properties in a UserProfile.
Do not index the output of the ISODate Function. You should index the timestamp value only.
In some scenarios, you could have events directly in the UserPoint and in user activities. For example, to use frequency OTQL directives on UserEvents and build queries on several events that occurred on a single activity.
In any other case, do not duplicate the UserEvent. Either use it in the UserPoint or the user activity.

This page provides examples of OTQL queries, based on simplified schemas. The objective is to be less technical and illustrate how our language works through different use cases.
To begin, we'll cover the fundamentals of the query syntax: how it works and what the expected result is for each query. It will give you additional information about OTQL.
For the following examples, we consider the runtime schema below:
If we want to represent a userpoint, it will be an object tree. To illustrate it, we could represent it like this:
Even if the syntax of an OTQL query is close to a SQL one, the execution isn't the same at all. We talk about object trees and not columns. This main difference has many consequences, and one of them is how the query is executed.
Query resolution is a two-phase process:
1. Narrow the queried object mentioned in FROM by applying a WHERE clause on it and/or on any sub-object's fields
When the WHERE clause is applied on a sub-object's field, if at least one sub-object validates the condition, then all parent objects validate it as well.
2. Return only desired objects & fields by listing them in SELECT clause
Finally, the query returns:
As you can see, despite the WHERE clause on $transaction_confirmed events, the query returns $page_view events since SELECT is applied from UserPoint.
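The two-phase resolution can be modeled with a toy example (the data and helper below are illustrative, not platform code): the WHERE clause selects whole user points, then SELECT projects fields from every selected point.

```python
# Phase 1 keeps user points having at least one matching event;
# phase 2 returns the requested fields from the whole user point,
# so non-matching events come back too.
user_points = [
    {"id": 1, "events": ["$transaction_confirmed", "$page_view"]},
    {"id": 2, "events": ["$page_view"]},
]

def run_query(points, where_event):
    selected = [p for p in points if where_event in p["events"]]
    return [p["events"] for p in selected]

print(run_query(user_points, "$transaction_confirmed"))
# [['$transaction_confirmed', '$page_view']]
```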
1. Narrow the queried object mentioned in FROM by applying a WHERE clause on it and/or on any sub-object's fields
2. Return only desired objects & fields by listing them in SELECT clause
Here, the query returns:
In this example, the UserPoint id=3 was not selected by the WHERE clause since it doesn't have any $transaction_confirmed event attached to it.
As you can see, the WHERE clause can only filter the object defined by the FROM. It gives you the ability to start the query where you need: it defines the scope of the query resolution and where the other operators are executed.
If you want to pick only a specific event, you will have to change the context. The FROM allows you to do so.
Here, the query returns:
However, specifying a sub-object also limits the scope of the predicate. For example, using FROM UserEvent doesn't give you access to UserPoint fields (like profiles).
Another way to get specific field values: use @filter in the SELECT part to only retrieve the elements you need.
You can also use @filter if you don't want to change the scope of your query but you need to get only specific elements. It is completely independent of the WHERE clause, and the filter is applied after the WHERE clause execution.
The first step will be the same as the one mentioned previously (narrow the queried object mentioned in FROM using the WHERE clause).
But during the second phase, the @filter will be applied on selected objects.
Therefore, only $page_view events are returned by the query:
You can add multiple conditions in the WHERE clause using boolean operators.
As previously demonstrated, it's quite easy to add conditions in the sub-objects' scope (FROM). However, if you need to execute your query in a specific scope and apply a condition from another object, you will have to use JOIN.
Note: the join is automatically resolved by UserPoint; there is no need to provide a join constraint.
It is also possible to directly make the JOIN in UserProfile to get the same result:
This is the runtime schema for examples below
This is the runtime schema for examples below
Use case: Select all UserPoint who are celebrating their birthday today and are between 18 and 28 years old (rolling years).
Be aware that now is evaluated at the start of the segment calculation, which may result in a discrepancy between the expected and actual values.
Use case: Select all UserPoint who are celebrating their birthday in the next 7 days and are between 18 and 28 years old (rolling years).
@filter can be quite hard to understand, so let's see some examples to clarify its usage.
This is the runtime schema for examples below
Use case: I want to retrieve, for each user, all URLs of type "newsarticle" and category "actu" that were browsed yesterday.
If I use the following query :
This query returns many empty events, making the result unusable. Although the events are filtered, I haven't excluded UserPoints that don't contain any events matching the clause.
To fix this, I need to add a WHERE clause to ensure each UserPoint has at least one event which matches the clause.
Here, the result is an improvement over the previous one, displaying only the events with a url of a page with the type "newsarticle" and the category "actu".
Now, if we want to retrieve information from activities while still filtering the events, we need to refine the query further.
As you can see, some events may be empty because the filter is not applied at the activities level. This means the returned activities contain at least one event which matches the clause.
If you try applying the filter at a higher level, specifically at the activity level:
By removing the event filter, the query now returns all events from activities that contain at least one page that matches the clause, rather than only pages that match.
To fix this, the first solution is to add another @filter to exclude unnecessary elements from both activities and events.
This query filters out unnecessary elements from both events and activities, but it is quite lengthy and difficult to read.
The second solution is to apply an empty @filter at the activities level in addition to the events level one:
The empty filter removes any events that are directly empty from the activities.
The empty filter only removes empty sub-objects. If you add another field that contains data, the filter will not remove the events.
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports \
-H 'Authorization: <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
"document_type": "USER_PROFILE",
"mime_type": "APPLICATION_X_NDJSON",
"encoding": "utf-8",
"name": "<YOUR_DOCUMENT_IMPORT_NAME>"
}'

# Here we use the combination of compartment_id and user_account_id acting as identifier
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '{
"operation": "UPSERT",
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"user_profile": {
"this": "is",
"a":"test"
}
}'

# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
loyalty: [Loyalty]
}
type Loyalty {
cards : [Cards]
}
type Cards {
card_id: String! @UpdateStrategyKey
benefits: String
last_visit_date: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 500,
"last_visit_date": "2024-01-01"
}
]
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [ //adding a new cards object
{
"card_id": "def",
"benefits": 100,
"last_visit_date": "2025-01-01"
}
]
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [ // the cards object contains both cards
{
"card_id": "abc",
"benefits": 500,
"last_visit_date": "2024-01-01"
},
{
"card_id": "def",
"benefits": 100,
"last_visit_date": "2025-01-01"
}
]
}
}

# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
loyalty: [Loyalty]
}
type Loyalty {
cards : [Cards]
}
type Cards {
card_id: String! @UpdateStrategyKey
benefits: String
last_visit_date: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 500,
"last_visit_date": "2024-01-01"
}
]
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [ //request changing the benefits value of the existing card
{
"card_id": "abc", // value of the field marked with @UpdateStrategyKey of the inner object to update
"benefits": 100
}
]
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 100, // value has been updated
"last_visit_date": "2024-01-01"
}
]
}
}

// Extract of existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"my_array_of_scalars": [1,2,3],
...
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"my_array_of_scalars": [4,5,6],
...
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"my_array_of_scalars": [4,5,6], // the value of the property has been replaced by the new value
...
}
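Taken together, the PARTIAL_UPDATE rules shown so far can be approximated with a small merge function. This is an illustrative Python sketch of the semantics, not mediarithmics code; the `key_fields` parameter stands in for the fields marked with @UpdateStrategyKey in the schema:

```python
def partial_update(existing, patch, key_fields=("card_id",)):
    """Illustrative PARTIAL_UPDATE merge: scalars and arrays of scalars are
    replaced, arrays of objects are merged on their @UpdateStrategyKey field,
    and an explicit null clears a property."""
    result = dict(existing)
    for prop, value in patch.items():
        if value is None:
            result.pop(prop, None)  # null clears the property
        elif isinstance(value, list) and value and isinstance(value[0], dict):
            key = next(k for k in key_fields if k in value[0])
            merged = {item[key]: dict(item) for item in result.get(prop, [])}
            for item in value:  # update existing entries, append new ones
                merged[item[key]] = partial_update(merged.get(item[key], {}),
                                                   item, key_fields)
            result[prop] = list(merged.values())
        elif isinstance(value, dict):  # nested objects are merged recursively
            result[prop] = partial_update(result.get(prop, {}), value, key_fields)
        else:  # scalars and arrays of scalars are replaced wholesale
            result[prop] = value
    return result

existing = {"loyalty": {"cards": [{"card_id": "abc", "benefits": 500}]},
            "my_array_of_scalars": [1, 2, 3]}
patch = {"loyalty": {"cards": [{"card_id": "def", "benefits": 100}]},
         "my_array_of_scalars": [4, 5, 6]}
updated = partial_update(existing, patch)
# updated keeps card "abc", adds card "def", and replaces the scalar array
```
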
# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
loyalty: [Loyalty]
}
type Loyalty {
cards : [Cards]
}
type Cards {
card_id: String! @UpdateStrategyKey
benefits: Int
last_visit_date: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 500,
"last_visit_date": "2024-01-01"
}
]
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": null // set the value to null
}
]
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [ // benefits value has been cleared
{
"card_id": "abc",
"last_visit_date": "2024-01-01"
}
]
}
}

# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
loyalty: [Loyalty]
}
type Loyalty {
cards : [Cards]
}
type Cards {
card_id: String! @UpdateStrategyKey
benefits: Int
last_visit_date: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 500,
"last_visit_date": "2024-01-01"
}
]
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 100 // updating existing object
},
{
"card_id": "def", //adding a new object
"benefits": 600,
"last_visit_date": "2025-01-01"
}
]
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 100, // value has been updated
"last_visit_date": "2024-01-01"
},
{
"card_id": "def", // object has been added
"benefits": 600,
"last_visit_date": "2025-01-01"
}
]
}
}

# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
loyalty: [Loyalty]
}
type Loyalty {
cards : [Cards]
}
type Cards {
card_id: String! @UpdateStrategyKey
benefits: Int
last_visit_date: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc",
"benefits": 100,
"last_visit_date": "2024-01-01"
},
{
"card_id": "def",
"benefits": 600,
"last_visit_date": "2025-01-01"
}
]
}
}
//Profile in update payload with update_strategy == PARTIAL_DELETE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"card_id": "abc" // value of the field marked with @UpdateStrategyKey of the inner object to delete
}
]
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [ // the card with card_id = "abc" has been deleted
{
"card_id": "def",
"benefits": 600,
"last_visit_date": "2025-01-01"
}
]
}
}
# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
segmentations: [Segmentation]
}
type Segmentation @UpdateValueObject {
type: String
label: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"segmentations": [
{ "type": "segRFM", "label": "seg1" },
{ "type": "segRFM", "label": "seg2" }
]
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"segmentations": [ // contains the new value of the object
{ "type": "segOther", "label": "seg3" },
{ "type": "segOther", "label": "seg4" }
]
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"segmentations": [ //full array of objects was replaced
{ "type": "segOther", "label": "seg3" },
{ "type": "segOther", "label": "seg4" }
]
}
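Because Segmentation is annotated with @UpdateValueObject, no per-element merge takes place; in plain data-manipulation terms the behaviour is a simple assignment (illustrative Python, not platform code):

```python
# Stored profile and incoming PARTIAL_UPDATE payload for the example above.
profile = {"segmentations": [{"type": "segRFM", "label": "seg1"},
                             {"type": "segRFM", "label": "seg2"}]}
patch = {"segmentations": [{"type": "segOther", "label": "seg3"},
                           {"type": "segOther", "label": "seg4"}]}

# @UpdateValueObject: the whole array is treated as one value and replaced.
profile["segmentations"] = patch["segmentations"]
```
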
# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
geolocation: Geolocation
}
type Geolocation @UpdateValueObject {
address: String
city: String
postal_code : String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geolocation":
{ "address": "1 first avenue",
"city": "new york",
"postal_code" :"0101"
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geolocation":
{ "address": "1 first avenue",
"city": "new york"
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geolocation":
{ "address": "1 first avenue",
"city": "new york"
}
}
# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
first_name: String
last_name : String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"first_name" : "john",
"last_name" : "doe"
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"first_name" : null,
"last_name" : "smith"
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"last_name" : "smith"
}
# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
geo_location: GeoLocation
}
type GeoLocation {
address: String
postal_code: String
city: String
country: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geo_location": {
"address": "1 main street",
"city": "paris",
"postal_code" : "0001",
"country" : "France"
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geo_location": {
"address": null,
"city": "new york"
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geo_location": {
"city": "new york",
"postal_code" : "0001",
"country" : "France"
}
}
# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
geo_location: GeoLocation
}
type GeoLocation {
address: String
postal_code: String
city: String
country: String
}

// Existing profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geo_location": {
"address": "1 main street",
"city": "paris",
"postal_code" : "0001",
"country" : "France"
}
}
//Profile in update payload with update_strategy == PARTIAL_UPDATE
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geo_location": {
"address": "52 fifth avenue",
"city": "New York City",
"postal_code" : "12345",
"country" : "USA"
}
}
//New profile
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"geo_location": {
"address": "52 fifth avenue",
"city": "New York City",
"postal_code" : "12345",
"country" : "USA"
}
}
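The contrast between a regular nested object (fields merged, absent fields preserved) and an @UpdateValueObject (the whole object replaced) boils down to the following (illustrative Python, not platform code):

```python
stored = {"address": "1 main street", "city": "paris",
          "postal_code": "0001", "country": "France"}
patch = {"address": "1 first avenue", "city": "new york"}

# Regular nested object: fields are merged, so absent fields survive.
merged = {**stored, **patch}

# @UpdateValueObject: the incoming object replaces the stored one entirely.
replaced = dict(patch)

# merged keeps "postal_code" and "country"; replaced drops them.
```
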
# This is allowed
type LoyaltyCard {
card_id:String @UpdateStrategyKey
benefits: Int
last_visit_date: String
other_information : [OtherInformation]
}
type OtherInformation {
info_id: String @UpdateStrategyKey
}
################################
# This is not allowed
type LoyaltyCard {
card_id:String @UpdateStrategyKey
benefits: Int @UpdateStrategyKey
last_visit_date: Date
}
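A quick way to see why the second schema is rejected is to count @UpdateStrategyKey annotations per type; at most one field per type may carry the directive (hypothetical helper for illustration, not a platform API):

```python
import re

def update_strategy_keys(schema: str) -> dict:
    """Count @UpdateStrategyKey annotations per declared type."""
    counts, current = {}, None
    for line in schema.splitlines():
        match = re.match(r"\s*type\s+(\w+)", line)
        if match:
            current = match.group(1)
            counts[current] = 0
        elif "@UpdateStrategyKey" in line and current:
            counts[current] += 1
    return counts

schema = """
type LoyaltyCard {
  card_id: String @UpdateStrategyKey
  benefits: Int @UpdateStrategyKey
}
"""
invalid = [t for t, n in update_strategy_keys(schema).items() if n > 1]
# invalid lists LoyaltyCard: two keys on one type is not allowed
```
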
# Stored profile:
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"benefits": 200,
"card_id": "abc",
"last_visit_date": "2024-01-01"
}
]
}
}
# New profile in request payload
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '{
"operation": "UPSERT",
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"update_strategy": "FORCE_REPLACE",
"user_profile": {
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"benefits": 500,
"card_id": "xyz",
"last_visit_date": "2025-01-01"
}
]
}
}
}'
# New saved profile:
{
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"loyalty": {
"cards": [
{
"benefits": 500,
"card_id": "xyz",
"last_visit_date": "2025-01-01"
}
]
}
}

# Stored profile:
{
"my_property_1": "value1",
"my_property_2": "value1",
"my_array_property": ["value1"],
"my_array_object_property": [
{
"my_sub_array_object_property_1": "value1",
"my_sub_array_object_property_2": "value1"
}
],
"my_object_property": {
"my_sub_object_property_1": "value1",
"my_sub_object_property_2": "value1"
}
}
# New profile in request payload
curl -X POST \
https://api.mediarithmics.com/v1/datamarts/<DATAMART_ID>/document_imports/<DOCUMENT_IMPORT_ID>/executions \
-H 'Authorization: <API_TOKEN>' \
-H 'Content-Type: application/x-ndjson' \
-d '{
"operation": "UPSERT",
"compartment_id": "<COMPARTMENT_ID>",
"user_account_id": "<USER_ACCOUNT_ID>",
"force_replace": false,
"merge_objects": true,
"user_profile": {
"my_property_2": "value2",
"my_property_3": "value3",
"my_array_property": ["value2"],
"my_array_object_property": [
{
"my_sub_array_object_property_2": "value2",
"my_sub_array_object_property_3": "value3"
}
],
"my_object_property": {
"my_sub_object_property_2": "value2",
"my_sub_object_property_3": "value3"
}
}
}'
# New saved profile:
{
"my_property_1": "value1",
"my_property_2": "value2", # override scalar property
"my_property_3": "value3",
"my_array_property": ["value1","value2"], # merge arrays
"my_array_object_property": [ # merge arrays
{
"my_sub_array_object_property_1": "value1",
"my_sub_array_object_property_2": "value1"
},
{
"my_sub_array_object_property_2": "value2",
"my_sub_array_object_property_3": "value3"
}
],
"my_object_property": { # merge objects
"my_sub_object_property_1": "value1",
"my_sub_object_property_2": "value2", # override scalar property within object
"my_sub_object_property_3": "value3"
}
}

type UserPoint @TreeIndexRoot(index:"USER_INDEX"){
id: ID!
creation_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
creation_date:Date! @Function(name:"ISODate", params:["creation_ts"])
# User identifiers
accounts: [UserAccount!]!
emails: [UserEmail!]!
devices: [UserDevicePoint!]!
# User content
activities: [UserActivity!]!
events: [UserEvent!]!
profiles: [UserProfile!]!
choices: [UserChoice!]!
# Technical objects
scenarios: [UserScenario!]!
segments: [UserSegment!]!
# Deprecated identifiers
# agents: [UserAgent!]!
}
### User identifiers
type UserAccount {
id:ID! @TreeIndex(index:"USER_INDEX")
creation_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
creation_date:Date! @TreeIndex(index:"USER_INDEX") @Function(params:["creation_ts"], name:"ISODate")
compartment_id: String! @TreeIndex(index:"USER_INDEX") @ReferenceTable(model_type:"COMPARTMENTS", type:"CORE_OBJECT")
user_account_id: String! @TreeIndex(index:"USER_INDEX")
}
type UserEmail {
id:ID! @TreeIndex(index:"USER_INDEX")
creation_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
last_activity_ts: Timestamp @TreeIndex(index:"USER_INDEX")
email: String @TreeIndex(index:"USER_INDEX")
}
type UserDevicePoint {
id:ID! @TreeIndex(index:"USER_INDEX")
creation_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
creation_date:Date! @Function(name:"ISODate", params:["creation_ts"])
device_info:DeviceInfo
technical_identifiers:[UserDeviceTechnicalId!]!
mappings:[UserAgentMapping!]! @Function(name:"ThirdPartyCookieMappings", params:["id"])
}
type DeviceInfo {
brand:String @TreeIndex(index:"USER_INDEX")
browser_version:String @TreeIndex(index:"USER_INDEX")
carrier:String @TreeIndex(index:"USER_INDEX")
model:String @TreeIndex(index:"USER_INDEX")
os_version:String @TreeIndex(index:"USER_INDEX")
agent_type:UserAgentType @TreeIndex(index:"USER_INDEX")
browser_family:BrowserFamily @TreeIndex(index:"USER_INDEX")
form_factor:FormFactor @TreeIndex(index:"USER_INDEX")
os_family:OperatingSystemFamily @TreeIndex(index:"USER_INDEX")
}
type UserDeviceTechnicalId {
id:ID! @TreeIndex(index:"USER_INDEX")
creation_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
expiration_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
last_seen_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
registry_id:String! @TreeIndex(index:"USER_INDEX")
type:String! @TreeIndex(index:"USER_INDEX")
}
type UserAgentMapping {
last_seen:Timestamp
user_agent_id:String @TreeIndex(index:"USER_INDEX")
vector_id:String
}
### User content
type UserActivity {
id: ID!
type: UserActivityType!
channel_id:String @TreeIndex(index:"USER_INDEX") @ReferenceTable(type:"CORE_OBJECT", model_type:"CHANNELS") @Property(paths:["$site_id", "$app_id"])
source: UserActivitySource!
ts: Timestamp! @TreeIndex(index:"USER_INDEX")
duration: Int @TreeIndex(index:"USER_INDEX")
events: [UserEvent!]!
}
type UserEvent @Mirror(object_type:"UserEvent") {
id: ID!
ts: Timestamp! @TreeIndex(index:"USER_INDEX")
date:Date! @Function(params:["ts"], name:"ISODate")
name:String! @TreeIndex(index:"USER_INDEX")
channel_id:String @TreeIndex(index:"USER_INDEX") @ReferenceTable(model_type:"CHANNELS", type:"CORE_OBJECT") @Property(paths:["[parent].$site_id", "[parent].$app_id"])
url: String @TreeIndex(index:"USER_INDEX")
referrer:String @TreeIndex(index:"USER_INDEX")
}
type UserProfile {
id: ID!
creation_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
last_modified_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
compartment_id: String! @TreeIndex(index:"USER_INDEX") @ReferenceTable(model_type:"COMPARTMENTS", type:"CORE_OBJECT")
user_account_id: String @TreeIndex(index:"USER_INDEX")
}
type UserChoice {
id: ID!
creation_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
choice_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
processing_id: String! @TreeIndex(index:"USER_INDEX")
choice_acceptance_value: Boolean! @TreeIndex(index:"USER_INDEX")
user_account_id: String
compartment_id: String
email_hash: String
user_agent_id: String
channel_id: String
}
### Technical objects
type UserSegment {
id: ID! @TreeIndex(index:"USER_INDEX") @ReferenceTable(model_type:"SEGMENTS", type:"CORE_OBJECT")
creation_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
last_modified_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
expiration_ts: Timestamp @TreeIndex(index:"USER_INDEX")
}
type UserScenario {
id: ID! @TreeIndex(index:"USER_INDEX")
scenario_id: String! @TreeIndex(index:"USER_INDEX")
execution_id: String! @TreeIndex(index:"USER_INDEX")
node_id: String! @TreeIndex(index:"USER_INDEX")
callback_ts: Timestamp @TreeIndex(index:"USER_INDEX")
start_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
node_start_ts: Timestamp! @TreeIndex(index:"USER_INDEX")
active: Boolean @TreeIndex(index:"USER_INDEX")
}
### Deprecated identifiers
# type UserAgent {
# id:ID!
# creation_ts: Timestamp!
# last_activity_ts: Timestamp
# user_agent_info:UserAgentInfo @Function(name:"DeviceInfo", params:["id"])
# }
# type UserAgentInfo {
# form_factor:FormFactor
# brand:String
# browser_family:BrowserFamily
# browser_version:String
# carrier:String
# model:String
# os_family:OperatingSystemFamily
# os_version:String
# agent_type:UserAgentType
# }

type MyType {
user_account_id: String # doesn't necessarily have a user account
user_account_id: String! # has a user account
events: [UserEvent!]! # has a list of events, in which each event can't be null
events: [UserEvent!] # doesn't necessarily have a list of events, but lists can't have null elements
}

type UserChoice {
id: ID!
}

type UserProfile {
id: ID!
creation_ts: Timestamp
email: String
age: Int
active: Boolean
}

// Origin activity
{
...
"$ts": 1632753811859,
"other_date": "2021-09-27T14:43:31.859Z",
"other_ts": 1632753811859
...
}

type UserActivity {
...
ts: Timestamp @TreeIndex(index:"USER_INDEX")
other_date: Date
other_ts: Timestamp
date: Date @Function(name:"ISODate", params:["ts"])
...
}
## Doing SELECT { ts other_date other_ts date } ...
## returns
## "ts": 1632753811859,
## "other_date": "2021-09-27T14:43:31.859Z",
## "other_ts": 1632753811859,
## "date": "2021-09-27T14:43:31.859Z"

// Origin activity
{
...
"other_date": 1632753811859,
...
}
type UserActivity {
# This won't work as received data is a timestamp.
other_date: Date
}
## SELECT { other_date } ...
## throws an error

type UserPoint @TreeIndexRoot(index:"USER_INDEX"){
}

type UserEvent {
id:ID!
ts:Timestamp!
# url and referrer properties are now available in WHERE clauses
url:String @TreeIndex(index:"USER_INDEX")
referrer:String @TreeIndex(index:"USER_INDEX")
}

type MyType {
mystring:String @TreeIndex(index:"USER_INDEX", data_type: "text")
secondstring:String @TreeIndex(index:"USER_INDEX", data_type: "keyword")
}

type UserEvent {
id:ID!
ts:Timestamp!
name:String!
# We are creating shortcuts to the $url, $referrer and $items properties
# that are normally in a $properties object in the user event.
# This will make them easier to query
url:String @Property(path:"$properties.$url")
referrer:String @Property(path:"$properties.$referrer")
products:[Product] @Property(path:"$properties.$items")
}
type Product {
# Here we simply rename the properties to id and name instead of $id and $name
id: String @TreeIndex(index:"USER_INDEX") @Property(path:"$id")
name: String @TreeIndex(index:"USER_INDEX") @Property(path:"$name")
}

type UserPoint {
# What should have been declared
creation_ts: Timestamp! @Property(path:"$creation_ts")
# What is declared as a shortcut
creation_ts: Timestamp!
}
type Product {
# We do have to use the @Property directive as those properties
# don't exist in the default schema for a Product object type
id: String @Property(path:"$id")
name: String @Property(path:"$name")
}

type MyType {
channel_id: String @Property(paths:["$site_id", "$app_id"])
}

type MyType {
creative_id:String @Property(path:"[parent].[parent].$origin.$creative_id")
}

# UserEvent type has been renamed ArticleView
# Not really interesting and should be avoided
type ArticleView @Mirror(object_type:"UserEvent"){}
# More advanced usage : ArticleView object are UserEvents
# with a name of "navigation.article"
type ArticleView @Mirror(object_type:"UserEvent", filter:"name == \"navigation.article\""){}

type UserPoint @TreeIndexRoot(index:"USER_INDEX"){
###
basketviews: [BasketView]
productviews: [ProductView]
}
type BasketView @Mirror(object_type:"UserEvent", filter:"name == \"$basket_view\""){}
type ProductView @Mirror(object_type:"UserEvent", filter:"name == \"$page_view\""){}

type MyType {
# creation_date is a Date created from the timestamp creation_ts
creation_date:Date! @Function(name:"ISODate", params:["creation_ts"])
}

type UserDevicePoint {
id:ID! @TreeIndex(index:"USER_INDEX")
...
mappings:[UserAgentMapping!]! @Function(name:"ThirdPartyCookieMappings", params:["id"])
}
type UserAgentMapping {
last_seen:Timestamp
user_agent_id:String
vector_id:String
}

type UserAgent {
id:ID! @TreeIndex(index:"USER_INDEX")
user_agent_info:UserAgentInfo @Function(name:"DeviceInfo", params:["id"])
}

type UserAgentInfo {
form_factor:FormFactor
brand:String
browser_family:BrowserFamily
browser_version:String
carrier:String
model:String
os_family:OperatingSystemFamily
os_version:String
agent_type:UserAgentType
}
### The following enums are predefined.
### It is not necessary to define them
enum FormFactor {
WEARABLE_COMPUTER
TABLET
SMARTPHONE
GAME_CONSOLE
SMART_TV
PERSONAL_COMPUTER
OTHER
}
enum BrowserFamily {
OTHER
CHROME
IE
FIREFOX
SAFARI
OPERA
STOCK_ANDROID
BOT
EMAIL_CLIENT
MICROSOFT_EDGE
}
enum OperatingSystemFamily {
OTHER
WINDOWS
MAC_OS
LINUX
ANDROID
IOS
}
enum UserAgentType {
WEB_BROWSER
MOBILE_APP
}

type UserSegment {
id:ID! @ReferenceTable(type:"CORE_OBJECT", model_type:"SEGMENTS") @TreeIndex(index:"USER_INDEX")
}
type UserActivity {
channel_id:String @ReferenceTable(model_type:"CHANNELS", type:"CORE_OBJECT") @TreeIndex(index:"USER_INDEX") @Property(paths:["$site_id", "$app_id"])
}
type UserProfile {
compartment_id:String! @ReferenceTable(model_type:"COMPARTMENTS", type:"CORE_OBJECT") @TreeIndex(index:"USER_INDEX")
}
type UserEvent {
channel_id:String @ReferenceTable(model_type:"CHANNELS", type:"CORE_OBJECT") @Property(paths:["[parent].$site_id", "[parent].$app_id"]) @TreeIndex(index:"USER_INDEX")
}

type UserAccount {
id:ID!
# This property won't be usable in Edge segment queries
compartment_id:String!
# This property will be usable in Edge segment queries
user_account_id:String! @TreeIndex(index:"USER_INDEX") @EdgeAvailability
}

type UserPoint {
id: ID!
accounts: [UserAccount]
…
rfm_score: RfmScore @ComputedField(technical_name:"RfmScore") @TreeIndex(index:"USER_INDEX")
}
type RfmScore {
…
}

type UserProfile {
…
loyalty: Loyalty
}
type Loyalty {
cards:[LoyaltyCards]
}
type LoyaltyCards {
card_id : String! @UpdateStrategyKey
benefits : Int
last_visit_date : String
}

# Schema extract
type UserProfile {
compartment_id : String!
user_account_id : String
segmentations: [Segmentation]
}
type Segmentation @UpdateValueObject {
type: String
label: String
}

# DO
type UserAgent {
creation_ts:Timestamp! @TreeIndex(index:"USER_INDEX")
creation_date:Date! @Function(name:"ISODate", params:["creation_ts"])
user_agent_info:UserAgentInfo @Function(params:["id"], name:"DeviceInfo")
id:ID!
last_activity_ts:Timestamp
}
# DON'T
type UserAgent {
creation_ts:Timestamp!
creation_date:Date! @Function(name:"ISODate", params:["creation_ts"]) @TreeIndex(index:"USER_INDEX")
user_agent_info:UserAgentInfo @Function(params:["id"], name:"DeviceInfo")
id:ID!
last_activity_ts:Timestamp
}

# Only do this if a specific scenario requires it
type UserPoint @TreeIndexRoot(index:"USER_INDEX"){
###
activities: [UserActivity!]!
events:[UserEvent!]!
}
type UserActivity {
###
events: [UserEvent!]!
}
type UserEvent @Mirror(object_type:"UserEvent") {
name:String! @TreeIndex(index:"USER_INDEX")
id:ID!
ts:Timestamp!
}

email_hash
String (Optional)
The Email Hash, acting as an identifier
user_agent_id
String (Optional)
The User Agent ID, acting as an identifier
update_strategy
Enum (Optional)
Only considered when operation == UPSERT
Values are PARTIAL_UPDATE, PARTIAL_DELETE, FORCE_REPLACE
user_profile
JSON Object (Optional)
Mandatory when operation == UPSERT.
JSON Object representing the User Profile. Please refer to the user profile object for more information.
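Putting those parameters together, each line of the imported document is one standalone JSON record; a helper to build the NDJSON body could look like this (illustrative Python; the IDs and profile values are placeholders):

```python
import json

def upsert_line(compartment_id, user_account_id, profile,
                update_strategy="PARTIAL_UPDATE"):
    """Build one NDJSON line for a document import execution.
    update_strategy is only considered when operation == UPSERT."""
    return json.dumps({
        "operation": "UPSERT",
        "compartment_id": compartment_id,
        "user_account_id": user_account_id,
        "update_strategy": update_strategy,
        "user_profile": profile,
    })

# One record per line, separated by newlines (application/x-ndjson).
body = "\n".join([
    upsert_line("1234", "user-1", {"first_name": "john"}),
    upsert_line("1234", "user-2", {"last_name": "smith"}, "FORCE_REPLACE"),
])
```
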






The SELECT clause applies only to the user points still in the list after the WHERE filter.
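WHERE first narrows the list of user points, and SELECT then projects fields only from the ones that remain; a rough Python analogy of this evaluation order (not OTQL itself):

```python
user_points = [
    {"id": "u1", "events": [{"name": "$transaction_confirmed"}]},
    {"id": "u2", "events": [{"name": "$page_view"}]},
]

# WHERE: keep only the user points with a matching event.
kept = [up for up in user_points
        if any(e["name"] == "$transaction_confirmed" for e in up["events"])]

# SELECT: project fields from the kept user points only.
selected = [[e["name"] for e in up["events"]] for up in kept]
# selected contains the event names of u1 only
```
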




WHERE clause
type UserPoint {
id : ID!
activities : [UserActivity!]!
profiles : [UserProfile!]!
}
type UserActivity {
id : ID!
events : [UserEvent!]!
}
type UserEvent {
id : ID!
name : String
}
type UserProfile {
id : ID!
age : String
}

// SELECT <object fields or aggregates> # fields returned
// FROM <object collection> # where the query will be executed
// WHERE <object tree expression> # filter applied
SELECT { activities { events { name } } }
FROM UserPoint
WHERE activities { events { name = "$transaction_confirmed" } }

[
{
activities : [ { events : [ { name : "$page_view" } ] },
{ events : [ { name : "$page_view" },
{ name : "$transaction_confirmed" } ] } ]
}
]

# Example : Get all event names for users with at least one confirmed transaction
SELECT { activities { events { name } } }
FROM UserPoint
WHERE activities { events { name = "$transaction_confirmed" } }

[
{
activities : [ { events : [ { name : "$page_view" },
{ name : "$transaction_confirmed" } ] } ]
},
{
activities : [ { events : [ { name : "$page_view" },
{ name : "$transaction_confirmed" } ] },
{ events : [ { name : "$page_view" } ] } ]
}
]

# Example : Get the name of each event where the name is "$transaction_confirmed"
# Here, we just want to be sure this query returns only the events we want
SELECT { name }
FROM UserEvent
WHERE name = "$transaction_confirmed"

[
{
name : "$transaction_confirmed"
}
]

# Example : Get events named "$page_view" for users with at least one confirmed transaction
SELECT { activities { events @filter(clause:"name = \"$page_view\"") { name } } }
FROM UserPoint
WHERE activities { events { name = "$transaction_confirmed" } }

[
{
activities : [ { events : [ { name : "$page_view" } ] } ]
},
{
activities : [ { events : [ { name : "$page_view" } ] },
{ events : [ { name : "$page_view" } ] } ]
}
]

# Example : Get event names for users with at least one confirmed transaction
# and an age between 20 and 30 years old
SELECT { activities { events { name } } }
FROM UserPoint WHERE profiles { age = "20-30" }
AND activities { events { name = "$transaction_confirmed" } }

[
{
activities : [ { events : [ { name : "$page_view" },
{ name : "$transaction_confirmed" } ] },
{ events : [ { name : "$page_view" } ] } ]
}
]

# Example : Get the names of events named "$transaction_confirmed"
# where the user is between 20 and 30 years old
SELECT { name }
FROM UserEvent WHERE name = "$transaction_confirmed"
JOIN UserPoint WHERE profiles { age == "20-30" }

[
{
name : "$transaction_confirmed"
}
]

# Example : Get the names of events named "$transaction_confirmed"
# where the user is between 20 and 30 years old
SELECT { name }
FROM UserEvent WHERE name = "$transaction_confirmed"
JOIN UserProfile WHERE age == "20-30"

type UserPoint @TreeIndexRoot(index:"USER_INDEX") {
id:ID!
activity_events:[ActivityEvent!]!
}
type ActivityEvent @Mirror(object_type:"UserEvent") {
order:Order @Property(path:"$properties.order")
}
type Order {
order_products:[OrderProduct]!
date: Timestamp!
}
type OrderProduct {
id:String @TreeIndex(index:"USER_INDEX")
price: Float @TreeIndex(index:"USER_INDEX") # in €
category:String @TreeIndex(index:"USER_INDEX") # possible value : "IT" or "Book"
}

# More than 1000€ in one order :
SELECT @count{} FROM UserPoint
WHERE activity_events {
order {
order_products @ScoreField(name: "price") @ScoreSum(min: 1000) {
category="IT"
}
}
}
# More than 1000€ in cross orders (explicit):
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000, result:"boolean_value") {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
}
}
}
# More than 1000€ in cross orders (implicit):
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") {
category="IT"
}
}
}
# More than 1000€ in cross orders in the last 10 days:
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") {
category="IT"
}
AND date > "now-10d"
}
}
# More than 1000€ across orders, counting only orders with at least 10€ of IT products:
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name: "price") @ScoreSum(min: 10, result:"score_value") {
category="IT"
}
}
}
# More than 1000€ in cross orders with at least 10 products :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name: "price") @ScoreSum(result:"score_value") {
category="IT"
}
AND order_products @ScoreSum(min: 10) {
category="IT"
}
}
}
# More than 1000€ in cross orders with at least one product which costs more than 10€ :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name: "price") @ScoreSum(result:"score_value") {
category="IT"
}
AND order_products @ScoreField(name: "price") @ScoreMax(min: 10) {
category="IT"
}
}
}
# WARNING : DOES NOT WORK
# More than 1000€ in cross orders with at least 10 orders more than 100€ :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order @ScoreSum(min: 10, result: "score_value") {
@ScoreSum(min: 100, result: "boolean_value") {
order_products @ScoreField(name: "price") @ScoreSum(result:"score_value") {
category="IT"
}
}
}
}

# More than 1000€ in one order :
SELECT @count {} FROM UserPoint
WHERE activity_events {
order {
order_products @ScoreField(name: "price") @ScoreAvg(min: 1000) {
category="IT"
}
}
}
# More than 1000€ in cross orders :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreAvg(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
}
}
}

# More than 1000€ in cross orders in IT or Book category :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
OR category="BOOK"
}
}
}
# Same result :
# Doesn't work yet
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
},
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="BOOK"
}
}
}
# More than 1000€ in cross orders in the IT category alone or the Book category alone :
# (the maximum of the separate IT and Book product sums exceeds 1000€)
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
}
OR order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="BOOK"
}
}
}
# More than 1000€ in cross orders of products in the IT and Book categories combined :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
OR category="BOOK"
}
}
}
# More than 1000€ in cross orders in IT and Book category :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="IT"
}
}
AND activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name:"price") @ScoreSum(result:"score_value") {
category="BOOK"
}
}
}
# More than 1000€ in cross orders with at least 10 IT products and 10 BOOK products :
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name: "price") @ScoreSum() {
category="IT"
OR category="BOOK"
}
AND order_products @ScoreSum(min: 10) {
category="IT"
}
AND order_products @ScoreSum(min: 10) {
category="BOOK"
}
}
}
# More than 1000€ in cross orders in IT or Book category with at least 10€ of each in each order:
SELECT @count {} FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name: "price") @ScoreSum() {
category="IT"
OR category="BOOK"
}
AND order_products @ScoreField(name: "price") @ScoreSum(min: 10) {
category="IT"
}
AND order_products @ScoreField(name: "price") @ScoreSum(min: 10) {
category="BOOK"
}
}
}

type UserPoint @TreeIndexRoot(index:"USER_INDEX") {
id:ID!
profiles:[UserProfile!]!
}
type UserProfile {
id:ID!
birth_date:Date @TreeIndex(index:"USER_INDEX")
}

SELECT { id }
FROM UserPoint
WHERE profiles { birth_date IN ["now-18y/d", "now-19y/d", "now-20y/d", "now-21y/d",
"now-22y/d", "now-23y/d", "now-24y/d", "now-25y/d", "now-26y/d", "now-27y/d", "now-28y/d"]
}

SELECT { id }
FROM UserPoint
WHERE profiles { ( birth_date >= "now-18y/d" AND birth_date < "now+7d-18y" ) OR
( birth_date >= "now-19y/d" AND birth_date < "now+7d-19y" ) OR
( birth_date >= "now-20y/d" AND birth_date < "now+7d-20y" ) OR
( birth_date >= "now-21y/d" AND birth_date < "now+7d-21y" ) OR
( birth_date >= "now-22y/d" AND birth_date < "now+7d-22y" ) OR
( birth_date >= "now-23y/d" AND birth_date < "now+7d-23y" ) OR
( birth_date >= "now-24y/d" AND birth_date < "now+7d-24y" ) OR
( birth_date >= "now-25y/d" AND birth_date < "now+7d-25y" ) OR
( birth_date >= "now-26y/d" AND birth_date < "now+7d-26y" ) OR
( birth_date >= "now-27y/d" AND birth_date < "now+7d-27y" ) OR
( birth_date >= "now-28y/d" AND birth_date < "now+7d-28y" ) }

type UserPoint @TreeIndexRoot(index:"USER_INDEX") {
id:ID!
events:[UserEvent!]!
activities:[UserActivity!]!
}
type UserActivity {
id:ID!
events:[UserEvent!]!
}
type UserEvent @Mirror(object_type:"UserEvent") {
id:ID!
event_name:String @Property(path:"$event_name") @TreeIndex(index:"USER_INDEX")
}

SELECT { events @filter(clause: "page_type == \"newsarticle\" AND page_category == \"actu\""){ url } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" }

// This query returns
[
[
{
"events": [
{
"url": "xxx"
}
]
},
{
"events": []
},
{
"events": [
{
"url": "xxx"
},
{
"url": "xxx"
},
{
"url": "xxx"
}
]
},
{
"events": []
},
//...
]
]

SELECT { events @filter(clause: "page_type == \"newsarticle\" AND page_category == \"actu\""){ page_type page_category url } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" } AND events { page_type == "newsarticle" AND page_category == "actu" }

// This query returns
[
[
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
},
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
}
]
]

SELECT { activities { events @filter(clause: "page_type == \"newsarticle\" AND page_category == \"actu\""){ page_type page_category url } } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" } AND events { page_type == "newsarticle" AND page_category == "actu" }

[
[
{
"activities": [
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
},
{
"events": []
},
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
}
]
}
]
]

SELECT { activities @filter(clause: "events { page_type == \"newsarticle\" AND page_category == \"actu\"}"){ events { page_type page_category url } } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" } AND events { page_type == "newsarticle" AND page_category == "actu" }[
[
{
"activities": [
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
},
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "video",
"page_category": "sport",
"url": "xxx"
},
{
"page_type": "video",
"page_category": "sport",
"url": "xxx"
}
]
}
]
]

SELECT { activities @filter(clause: "events { page_type == \"newsarticle\" AND page_category == \"actu\"}"){
events @filter(clause: "page_type == \"newsarticle\" AND page_category == \"actu\""){ page_type page_category url } } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" } AND events { page_type == "newsarticle" AND page_category == "actu" }

[
[
{
"activities": [
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
},
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
}
]
]

SELECT @filter{ activities { events @filter(clause: "page_type == \"newsarticle\" AND page_category == \"actu\""){ page_type page_category url } } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" } AND events { page_type == "newsarticle" AND page_category == "actu" }

[
[
{
"activities": [
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
},
{
"events": [
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
},
{
"page_type": "newsarticle",
"page_category": "actu",
"url": "xxx"
}
]
}
]
]

SELECT @filter{ activities { id events @filter(clause: "page_type == \"newsarticle\" AND page_category == \"actu\""){ url } } }
FROM UserPoint
WHERE activities { ts >= "now-1d/d" } AND events { page_type == "newsarticle" AND page_category == "actu" }

[
[
{
"activities": [
{
"events": [
{
"url": "xxx"
}
],
"id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
},
{
"events": [],
"id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
]
}
]
]
This page provides a formal description of OTQL capabilities.
The Object Tree Query Language (OTQL) has been designed to help to search and calculate aggregates on a large collection of object trees. The object tree is defined in the schema using the @TreeIndexRoot and the @TreeIndex directives.
OTQL queries help you :
Build segments
Explore data
Monitor data integration
An OTQL query looks like an SQL query.
It is composed of three parts:
A SELECT Operation: It gives indications on what needs to be done: extracting field values or calculating aggregates
A FROM starting Object Type: It defines the starting object type in the evaluation process
A WHERE Object Tree Expression:
There are two kinds of operations:
Selection Operations are similar to a GraphQL operation and return a list of objects containing the required fields.
Aggregation Operations return aggregated values (count, stats, histograms, ...) calculated on the selected objects.
Here are some examples of requests you can do with OTQL :
Imagining the following Object Tree:
You could build queries starting from all UserPoint, all UserActivity, UserEvent, UserEmail or UserAccount
The expression contained in the WHERE clause is composed of a list of predicates, separated by logical operators AND, OR, and NOT . Parenthesis can be used to group together two predicates with a logical operator.
Examples :
Each predicate does not directly return a boolean but a score: 1 if the condition is met, 0 otherwise. In the end, the score is compared to 0, and the predicate is true if the score is greater than 0, false otherwise. These operators keep the same priority as their boolean counterparts. The logical operators work as below:
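This score-based semantics can be sketched in Python, following the operator table in this section (an illustration only, not the engine's implementation):

```python
# Illustrative sketch of OTQL's score-based logical operators
# (NOT / AND / OR as described in the operator table; not the real engine).
def score_not(score_a: float) -> float:
    # NOT: 1 if the operand's score is 0, else 0
    return 0.0 if score_a > 0 else 1.0

def score_and(score_a: float, score_b: float) -> float:
    # AND: product of the two scores
    return score_a * score_b

def score_or(score_a: float, score_b: float) -> float:
    # OR: maximum of the two scores
    return max(score_a, score_b)

def to_boolean(score: float) -> bool:
    # The final score is compared to 0 to produce the boolean result
    return score > 0
```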
As we are querying an Object Tree, and as predicates are only possible on a leaf (e.g. fields that only contain a scalar value), it is natural to have a way of going from the root to each of the leaves by traversing the tree.
Braces symbols {} are used to traverse the tree through link fields. The sub-query written in the braces will be evaluated on each item in the linked list.
Let's see it in action.
Let's say we built a schema corresponding to the following Object Tree.
The following query will return all the UserPoint that have at least one UserEvent whose name is $transaction_confirmed .
As the root of the Object Tree is the UserPoint in this example, we'll need to start from there. And then follow the activities link to the associated UserActivity and then the events link to the associated UserEvent.
The latest query returns UserPoints where at least one of the events has a $transaction_confirmed name: every user that made at least one purchase in at least one visit.
If we instead want the users that bought things through at least 3 different visits (frequent buyers), we will use a scoring operator.
Each time there is a pair of braces { } and a sub-query written in the braces, there is implicitly a score calculated for the sub-query. By default, the score will be the number of items matching the sub-query.
Only returns the score if the nested sub-query has a score greater than or equal to min.
Calculates the average of the sub-query matching scores and returns true if it is greater than or equal to min.
As said above, by default, score values are equal to the number of items matching a sub-query when following a link.
However, your Object Tree leaves may have number-typed fields (Int or Float). It is possible to use those values as the score of a sub-query.
Select a specific field in which the numeric value used as the score is stored.
The information of which field is selected bubbles up the tree until it is caught by a @ScoreSum, @ScoreAvg or @ScoreMax.
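The two scoring modes can be modeled as follows (a hedged sketch with hypothetical event dicts, not the engine's implementation):

```python
# Sketch: by default a sub-query's score is the number of matching items;
# with @ScoreField, the named numeric field's values are summed instead.
# The events below are hypothetical sample data.
def sub_query_score(items, predicate, score_field=None):
    matching = [item for item in items if predicate(item)]
    if score_field is None:
        return float(len(matching))  # default: count of matching items
    # @ScoreField + @ScoreSum: sum the field's values over matching items
    return sum(float(item[score_field]) for item in matching)

events = [
    {"name": "$transaction_confirmed", "amount": 600},
    {"name": "$transaction_confirmed", "amount": 500},
    {"name": "$page_view", "amount": 0},
]
is_purchase = lambda e: e["name"] == "$transaction_confirmed"

count_score = sub_query_score(events, is_purchase)             # 2.0
amount_score = sub_query_score(events, is_purchase, "amount")  # 1100.0
```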
Since a field can be used as the score, you may want to return the calculated score of a @ScoreSum, and not only whether the sub-query validates the condition. This is why a new parameter was added to @ScoreSum: result.
Only returns the score if the nested sub-query has a score greater than or equal to min.
Calculates the average of the sub-query matching scores and returns it if it is greater than or equal to min.
It is possible to combine both approaches in the same query, but be careful: it is currently not possible to bubble up a @ScoreField past a @ScoreSum(result: "boolean_value").
Example of possible use case :
However the following use case can't be written :
Combining conditional and scoring approaches in the same query is useful for many use cases that won't be detailed here. A dedicated page regroups examples of them.
The following operators are available to work with dates :
>= Greater or equal
> Greater
<= Lower or equal
Dates can be formatted either
in ISO8601 format (time part is optional) 2012-09-27, 2012-09-27T12:42:00
in a timestamp in milliseconds 1549365498507
in a Date Math format, defining a relative date
The idea of the date math syntax is to define a relative date compared to an anchor date. The anchor date is either now or a date (ISO8601 or timestamp format) followed by ||.
The expression begins with the anchor date and is followed by one or more math expressions :
+1h adds one hour
-1d subtracts one day
/d rounds down to the nearest day
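These rules can be sketched for the hour/day units (a simplified model; the platform also supports the other units listed below and "||"-anchored dates):

```python
import re
from datetime import datetime, timedelta

# Simplified Date Math resolver for the "now" anchor, handling only
# +Nh/-Nh, +Nd/-Nd offsets and /d rounding (a sketch, not the platform's parser).
def resolve_date_math(expr: str, now: datetime) -> datetime:
    if not expr.startswith("now"):
        raise ValueError("this sketch only handles the 'now' anchor")
    result, rest = now, expr[3:]
    for sign, amount, unit in re.findall(r"([+-])(\d+)([hd])", rest):
        delta = timedelta(hours=int(amount)) if unit == "h" else timedelta(days=int(amount))
        result = result + delta if sign == "+" else result - delta
    if rest.endswith("/d"):
        # /d rounds down to the nearest day
        result = result.replace(hour=0, minute=0, second=0, microsecond=0)
    return result

now = datetime(2001, 1, 1, 12, 0, 0)
# resolve_date_math("now+1h", now)   -> 2001-01-01 13:00:00
# resolve_date_math("now-1h/d", now) -> 2001-01-01 00:00:00
```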
Example, assuming now is 2001-01-01 12:00:00 :
The supported units are the following :
Only indexed fields of type String are eligible. Depending on the data_type specified in the schema, the String operators behave differently.
data_type: text : All operators are case-insensitive.
The transformation applied to the text data before storage is also applied to the comparison value.
Diacritical marks (e.g. é, è, à, ç), numbers/digits, and words or expressions containing an apostrophe are usually stored as-is, which means that you will need to provide the same value in the match function.
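A rough model of this case-insensitive word matching (the platform's actual analyzer is more elaborate, notably around diacritics and apostrophes; this sketch only lowercases and splits on non-word characters):

```python
import re

# Rough model of case-insensitive word matching on a 'text' field.
# Not the platform's analyzer: it simply lowercases and tokenizes,
# keeping apostrophes inside words.
def words(value: str) -> set:
    return set(re.findall(r"[\w']+", value.lower()))

def match(field_value: str, comparison_value: str) -> bool:
    # True if any word of the field matches a word of the comparison value
    return bool(words(field_value) & words(comparison_value))

# match("https://www.hello.com/world/", "Hello World!") -> True
```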
Below are some examples comparing the ingested and stored values, along with a demonstration of the match function:
data_type: keyword : All operators are case-sensitive.
You can use the IN operator as a shortcut to filter on multiple values of the same field.
This is used to check whether the value of a field is defined or not. The predicate can be applied to any indexed field in the schema and returns a boolean.
You may want to add another list of predicates FROM various objects. To do so, use the JOIN clause to mention another object right after the FROM/WHERE clauses. It is possible to apply multiple JOINs in the same query.
In the Query Tools, we return 10 elements by default but you can easily override this by using the LIMIT clause followed by the number of elements required:
It's not possible to return more than 10 000 elements with LIMIT. The query will fail if you try it.
Note that the LIMIT clause will be ignored when using @count or @cardinality directives.
LIMIT operator isn't applied during the segment calculation.
If you create a User Query Segment with a LIMIT, it will be ignored and return all UserPoint who respect the WHERE clause.
They are simply selecting fields. Every field present can be selected.
As seen above, the WHERE expression gives you the ability to filter a sublist of objects at the FROM level. You also have the ability to filter in or out the data returned by the query, using the @filter directive in SELECT :
Here are some tips to properly use the @filter directive in your queries:
Filter on multiple fields (note that you can use OR or AND between fields, based on the required filter logic):
Filter in multiple values for a given field:
Filter out multiple values for a given field:
Combine AND & OR filters:
Filter by a subfield:
When filter_empty:true option is provided, the following elements will be filtered out:
An optional array which is empty
An object which is empty and either optional or inside an array
A mandatory array where the following conditions are met:
The array is empty
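This pruning can be approximated as follows (a simplified model: it drops every empty array or object and ignores the optional/mandatory distinction described above):

```python
# Approximate model of filter_empty pruning: drop empty arrays and empty
# objects from a result tree. The real rules also depend on whether a
# field is optional or mandatory, which this sketch ignores.
def prune_empty(node):
    if isinstance(node, dict):
        cleaned = {key: prune_empty(value) for key, value in node.items()}
        return {key: value for key, value in cleaned.items() if value not in ([], {})}
    if isinstance(node, list):
        return [value for value in (prune_empty(item) for item in node)
                if value not in ([], {})]
    return node

result = prune_empty({"activities": [{"events": []}, {"events": [{"score": 123}]}]})
# result == {"activities": [{"events": [{"score": 123}]}]}
```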
Please note that the @filter directive cannot be used at the same time as aggregation operations such as @map, @date_histogram, ...
The following query retrieves userpoints, activities, events and some of their fields for each UserPoint that has an event named "display".
Let's assume it gives the following result :
The user might be surprised to find "click" events in this result. However, remember that the WHERE clause only filters the roots (i.e. the UserPoints). To retrieve only "display" events, the user needs to add an @filter clause as follows:
Assuming the same data, this query would produce the following result :
By default, the @filter predicate also filters out empty results in its scope. To illustrate this, we reduce our query to retrieve only the score fields:
Still the same data, the result is the following:
You'll notice that two display events have disappeared. Since they don't have a score, they would be empty objects. This "filter empty" behavior can be controlled with a second optional parameter of the @filter directive. If we set it to false, we obtain the following result:
One might think that the previous result still contains a lot of noise (4 events retrieved for only one score). You can add an extra @filter before the object name to lighten the result:
You will get the following result :
@filter can be used to filter a field by a condition on a sub-field.
The aggregation operations are initiated by a directive in the SELECT clause. They take into account the filter defined in the WHERE clause, however they are not compatible with the @filter directive that you can use in the SELECT clause.
This directive is used to count the number of objects verifying the query
Fields used with metric directives should have the @TreeIndex directive in your schema.
Those directives calculate a value per bucket created in the bucket directive, or with only one bucket containing all elements if you don't use bucket directives.
@avg: average value for a specific field (only applies to numeric values)
@min: minimum value for a specific field (only applies to numeric values)
@max: maximum value for a specific field (only applies to numeric values)
@sum: sum of value for a specific field (only applies to numeric values)
@cardinality: count of distinct values
Fields used with bucket directives should have the @TreeIndex directive in your schema.
Those directives separate values into buckets
@map one bucket per field value
@histogram aggregated count on a specific field. The interval can be modified regarding the business needs
@date_histogram aggregated count by period for an object associated with a date. Allowed intervals are 1M for one month and XXD for XX days.
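What the bucket directives compute can be approximated like this (hypothetical sample events; @map counts per distinct value, @histogram per numeric interval):

```python
from collections import Counter

# Hypothetical sample events to illustrate bucket directives.
events = [
    {"name": "$page_view", "amount": 12},
    {"name": "$page_view", "amount": 47},
    {"name": "$transaction_confirmed", "amount": 105},
]

# @map: one bucket (with a count) per distinct field value
map_buckets = Counter(e["name"] for e in events)

# @histogram with an interval of 50: bucket key is the interval's lower bound
histogram_buckets = Counter((e["amount"] // 50) * 50 for e in events)

# map_buckets       == {"$page_view": 2, "$transaction_confirmed": 1}
# histogram_buckets == {0: 2, 100: 1}
```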
It is possible to add an alias to the field expression. This alias is then used in the output to identify the field result.
You usually enter OTQL queries directly in tools like the navigator. However, they can be saved and managed by code as objects. Some features will require you to link an object to an OTQL query, instead of just saving the query as text.
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/queries
Create an OTQL query in the platform before creating an export based on this query
POST https://api.mediarithmics.com/v1/datamarts/:datamartId/query_check/otql
Checks the validity of an OTQL query
You can execute queries in the different tools that mediarithmics offers, or using our API.
POST https://api.mediarithmics.com/v1/datamarts/:datamart_id/query_executions/otql?use_cache=true
Executes an OTQL query on the specified datamart
When setting the use_cache query parameter to TRUE, the system returns the query from the cache if available.
To know if the returned value is from the cache or a new query execution, look at the cache_hit property from the response. Its value is TRUE if the response comes from the cache and FALSE otherwise.
When not setting the use_cache query parameter or setting its value to FALSE, the cache system is skipped. The query will be executed and its value won't be stored. You can't use this to force a cache update .
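This contract can be modeled with a small client-side sketch (hypothetical code, not the platform's implementation; `execute` stands in for the actual HTTP call):

```python
# Hypothetical sketch of the use_cache / cache_hit contract.
class OtqlRunner:
    def __init__(self, execute):
        self._execute = execute  # stands in for the real HTTP call
        self._cache = {}

    def run(self, query: str, use_cache: bool = False) -> dict:
        if use_cache and query in self._cache:
            # served from the cache: no new execution
            return {"result": self._cache[query], "cache_hit": True}
        result = self._execute(query)
        if use_cache:
            # only cache-enabled executions store their value; running with
            # use_cache=False cannot be used to force a cache update
            self._cache[query] = result
        return {"result": result, "cache_hit": False}

executions = []
runner = OtqlRunner(lambda q: executions.append(q) or len(executions))
first = runner.run("SELECT @count{} FROM UserPoint", use_cache=True)
second = runner.run("SELECT @count{} FROM UserPoint", use_cache=True)
# first["cache_hit"] is False, second["cache_hit"] is True
```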
Running the query SELECT ... FROM ... WHERE ts >= "now-1h" (using the Date Math format) will return the same result now, in five minutes and during the next hour if using the cache.
Our engine tries to automatically optimize queries before running them. For example, a query with multiple OR operators can use the IN operator instead if it is better.
Build data exports
min : Float
minimum required score for the nested sub-query to be returned
By default, the score is the number of items matching the sub-query, so using @ScoreMax or @ScoreAvg as-is is useless: each score from the sub-query will be 1. This is why you should apply a modification to the score calculation.
Takes the maximum of the sub-query matching scores and returns true if it is greater than or equal to min.
args
description
min : Float
minimum required score for the nested sub-query to be returned
By default, the score is the number of items matching the sub-query, so using @ScoreMax or @ScoreAvg as-is is useless: each score from the sub-query will be 1. This is why you should apply a modification to the score calculation.
Make sure the field selected in @ScoreField exists on every matched item. Add an is_defined condition if the query returns an error.
Multiplies the score by the factor. Can be used to boost a sub-query over another one.
args
description
factor: Float
Constant float which multiply the score
args
description
min : Float
minimum required score for the nested sub-query to be returned
result : String
Two possible values:
"boolean_value" (default): reduces the returned score value to 0 or 1.
"score_value": returns the real score value.
Takes the maximum of the sub-query matching scores and returns it if it is greater than or equal to min.
args
description
min : Float
minimum required score for the nested sub-query to be returned
result : String
Two possible values:
"boolean_value" (default): reduces the returned score value to 0 or 1.
"score_value": returns the real score value.
< Lower
= or == Equal
!= Not equal
m
Minutes
s
Seconds
aujourd hui aujourd hui
s'inscrire
s'inscrire
s'inscrire
s inscrire inscrire s
100
100
100
1000 10
100 900 km
100 900 km
100 900 km 100 900 km
100900 100,900
km/h
km h
km/h km h
kmh
100km/h
100km h
100km/h 100km h km/h
100 km
H&M
h m
H&M h m
hm
£1,000 1,000 1,000+
1,000
£1,000 1,000 1,000+
1000 000 1.000
1.000
1.000
1.000
1000 000 1,000
chou-fleur chou- fleur chou fleur
chou fleur
chou-fleur chou- fleur chou fleur chou fleur
chou_fleur choufleur
chou_fleur
chou_fleur
chou_fleur
chou fleur choufleur chou-fleur
recherche.aspx
recherche.aspx
recherche.aspx
recette aspx
False
myField = undefined
False
(NoField)
False
The other mandatory selections of the parent object are only empty arrays
This parent object can be filtered
| Logical operator | Real operation | Priority |
| --- | --- | --- |
| NOT PredicateA | if ( ScoreA > 0 ) then 0 else 1 | high |
| PredicateA AND PredicateB | ScoreA x ScoreB | middle |
| PredicateA OR PredicateB | max(ScoreA, ScoreB) | low |
args
description
min : Float
minimum required score for the nested sub-query to be returned
args
description
args
description
name: String
The name of the field selected
args
description
min : Float
minimum required score for the nested sub-query to be returned
result : String
Two possible values:
"boolean_value" (default): reduces the returned score value to 0 or 1. It behaves the same as explained for conditional predicates.
"score_value": returns the real score value.
y
Years
M
Months
w
Weeks
d
Days
h or H
Hours
match(fieldName, comparisonValue)
Returns true if a word of the text contained in fieldName matches a word contained in comparisonValue.
starts_with(fieldName, comparisonValue)
Returns true if a word of the text contained in fieldName starts with one of the words contained in comparisonValue.
Métamorphosé
métamorphosé
Métamorphosé métamorphosé
metamorphose métamorphos métamorphosés
hameçon, ligne et bouchon
hameçon ligne et bouchon
hameçon hameÇon ligne et bouchon
hamecon
aujourd'hui
aujourd'hui
aujourd'hui
starts_with(fieldName, comparisonValue)
Returns true if the exact value contained in fieldName starts with the exact value passed in comparisonValue.
= or ==
Returns true if the exact value contained in fieldName is equal with the exact value passed in comparisonValue.
myField = “example”
True
myField = [“example”]
True
myField = [null]
True
myField = ""
True
myField = [ ]
True
clause
Yes
Used to list fields names & values that you want to filter in/out
filter_empty
No
Used to filter out empty object. Set to true by default.
datamartId
integer
The ID of the datamart
Body
object
Payload
datamartId
integer
The ID of the datamart
Body
object
Payload
datamart_id
integer
The ID of the datamart
use_cache
boolean
Optimize the response time using the cache.
Body
string
OTQL query to execute
myField = null
SELECT (...) FROM UserPoint
WHERE activity @ScoreSum(min : 1000) {
events @ScoreField(name:"amount") {
name = "$transaction_confirmed"
}
}
# To be sure the field "amount" exists
SELECT (...) FROM UserPoint
WHERE activity @ScoreSum(min : 1000) {
events @ScoreField(name:"amount") {
is_defined(amount) AND
name = "$transaction_confirmed"
}
}

# Select UserPoints having spent at least 1000 in orders, where IT products count twice
SELECT (...) FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreBoost(factor: 2.0) @ScoreSum(result: "score_value") {
category="IT"
},
order_products @ScoreSum() {
category!="IT"
}
}
}

# Select user points with at least 1 activity that contains at least 1 $transaction_confirmed event with amount >= 1000 during the past year
SELECT (...) FROM UserPoint
WHERE activities {
events @ScoreField(name: "amount") @ScoreMax(min: 1000) {
name = "$transaction_confirmed" AND date >= "now-1y/y"
}
}

SELECT { id name } FROM Product WHERE price > 50.0

SELECT [Operation] FROM [Object Type] WHERE [Object Tree Expression]

# Counts number of new users in the past 7 days
SELECT @count{} FROM UserPoint WHERE creation_date >= "now-7d/d"
# Counts the number of transactions on a specific site (channel) 7 days ago
SELECT @count{} FROM UserEvent
WHERE name = "$transaction_confirmed"
AND date = "now-7d/d"
AND channel_id = 2419
# Counts the number of profiles with female gender
SELECT @count{} FROM UserProfile WHERE gender = "W"
# Lists all categories from universes in events done on a specific channel
SELECT { universe { category @map }} FROM UserEvent WHERE channel_id = 2417
# Lists all event names collected in the platform
SELECT {name @map} FROM UserEvent
# Number of users having at least 3 events related to laptops in the past 15 days
SELECT @count{} FROM UserPoint
WHERE activities { events @ScoreSum(min:3) {
category = "Laptop" AND date >= "now-15d/d"
}}
# Number of transactions per site and per day
SELECT { channel_id @map { date @date_histogram } } FROM UserEvent WHERE name = "$transaction_confirmed"
# Number of users having an account but no emails
SELECT @count{} FROM UserPoint WHERE accounts{} AND NOT emails{}

UserPoint
├── UserActivity
│ └── UserEvent
├── UserEmail
└── UserAccount

# Selects all names from all UserPoint
SELECT {name} FROM UserPoint
# Selects all names from all UserActivity
SELECT {name} FROM UserActivity
# Equivalent of
SELECT {activities { name }} FROM UserPoint
# Selects all names from all UserEvent
SELECT {name} FROM UserEvent
# Equivalent of
SELECT { activities { events { name }}} FROM UserPoint

SELECT (...) FROM (...) WHERE (PredicateA AND PredicateB) OR PredicateC
SELECT (...) FROM (...) WHERE PredicateA AND (PredicateB OR PredicateC)

SELECT (...) FROM (...) WHERE price > 50.0

SELECT (...) FROM (...) WHERE price > 50.0 AND last_modified_date > "now-10d"

SELECT (...) FROM (...)
WHERE price > 50.0 AND last_modified_date > "now-10d"
# ( price > 50.0 ) x ( last_modified_date > "now-10d" )
SELECT (...) FROM (...)
WHERE price > 50.0 AND last_modified_date > "now-10d" OR price > 100.0 AND last_modified_date > "now-20d"
# (( price > 50.0 ) x ( last_modified_date > "now-10d" )) + (( price > 100.0 ) x ( last_modified_date > "now-20d" ))

# Functional tree
UserPoint
└─ UserActivity
└─ UserEvent
# Associated schema
type UserPoint @TreeIndexRoot(index:"USER_INDEX") {
# activities is a link field to UserActivity objects
activities: [UserActivity]
}
type UserActivity {
# events is a link field to UserEvent objects
events: [UserEvent]
}
type UserEvent {
name: String @TreeIndex(index:"USER_INDEX")
amount: Int @TreeIndex(index:"USER_INDEX")
date: Timestamp!
}
# A UserPoint can have 0..n User Activity
# Each UserActivity can have 0..m UserEvent
# UserEvent has a "name" String field, an "amount" Int field and a "date" Timestamp field

SELECT (...) FROM UserPoint WHERE activities { events { name = "$transaction_confirmed" } }

# Select UserPoint that bought things through at least 3 different visits
SELECT (...) FROM UserPoint
WHERE activities @ScoreSum(min: 3.0){ events { name = "$transaction_confirmed" } }
# Select UserPoint that have at least 1 activity that contains at least 3 $transaction_confirmed events
SELECT (...) FROM UserPoint
WHERE activities { events @ScoreSum(min: 3.0) { name = "$transaction_confirmed" } }

# Using @ScoreField alone is useless because it could be replaced by a logical operator
SELECT (...) FROM UserPoint
WHERE activity {
events @ScoreField(name:"amount") {
name = "$transaction_confirmed"
}
}
# Can be written
SELECT (...) FROM UserPoint
WHERE activity {
events {
name = "$transaction_confirmed"
}
}

# Select UserPoint having spent at least 1000 in one event
SELECT (...) FROM UserPoint
WHERE activity {
events @ScoreField(name:"amount") @ScoreSum(min : 1000) {
name = "$transaction_confirmed"
}
}
# Select UserPoint having spent at least 1000 in one activity
SELECT (...) FROM UserPoint
WHERE activity @ScoreSum(min : 1000) {
events @ScoreField(name:"amount") {
name = "$transaction_confirmed"
}
}

# Select UserPoint having spent more than 1000€ through $transaction_confirmed events during the past year
SELECT (...) FROM UserPoint
WHERE activities {
events @ScoreField(name: "amount") @ScoreSum(min: 1000) {
name = "$transaction_confirmed" AND date >= "now-1y/y"
}
}
# Select UserPoint having spent more than 1000€ in cross orders, counting only orders with at least 10€ of IT products:
SELECT (...) FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order {
order_products @ScoreField(name: "amount") @ScoreSum(min: 10, result:"score_value") {
category="IT"
}
}
}

# Select UserPoint having spent on average more than 1000 per activity, for activities with an amount of at least 10
SELECT (...) FROM UserPoint
WHERE activity @ScoreAvg(min : 1000) { events @ScoreField(name:"amount") @ScoreSum(min: 10, result:"score_value") {
name = "$transaction_confirmed" } }
# Select UserPoint having on average, per activity, more than 1000 events with an amount greater than 10
SELECT (...) FROM UserPoint
WHERE activity @ScoreAvg(min : 1000) { events @ScoreField(name:"amount") @ScoreSum(min: 10, result:"boolean_value") {
name = "$transaction_confirmed" } }

# We wanted:
# Count UserPoint having spent more than 1000€ in cross orders with at least 10 orders more than 100€ :
SELECT (...) FROM UserPoint
WHERE activity_events @ScoreSum(min : 1000) {
order @ScoreSum(min: 10, result: "score_value") {
@ScoreSum(min: 100, result: "boolean_value") {
order_products @ScoreField(name: "price") @ScoreSum(result:"score_value") {
category="IT"
}
}
}
}
# But the query return :
# Count UserPoint having spent more than 1000 in orders, with at least 10 orders of more than 100€

SELECT (...) FROM UserPoint WHERE activities {creation_ts <= "2012-09-27"}
SELECT (...) FROM UserPoint WHERE activities {creation_ts > "1549365498507"}
SELECT (...) FROM UserPoint WHERE activities {creation_ts > "now-7d"}

now+1h // Resolves to: 2001-01-01 13:00:00
now-1h // Resolves to: 2001-01-01 11:00:00
now-1h/d // Resolves to: 2001-01-01 00:00:00
2001.02.01||+1M/d // Resolves to: 2001-03-01 00:00:00

# Doing
(...) WHERE match(url_as_text, "Hello World!")
# Will search in the text values for words matching 'hello' or 'world'
https://www.hello.com/
https://www.world.com/
https://www.hello.com/world/
(...)

(...) WHERE starts_with(mykeyword, "Hello World!")
(...) WHERE mykeyword == "Hello World!"

# Total sold in events for channel IDs 2456, 5489, 1426
SELECT {events {basket { amount @sum}}} FROM UserActivity
WHERE channel_id IN ["2456","5489","1426"]
# Equivalent of WHERE channel_id = "2456" OR channel_id = "5489" OR channel_id = "1426"

# Return all UserPoint with a profile
SELECT { id } FROM UserPoint WHERE is_defined(profiles)
# Return all UserPoint with an email in their profile
SELECT { id } FROM UserPoint WHERE profiles{is_defined(email)}

# Get the "$transaction_confirmed" activities of UserPoints in the segment with id "1234"
SELECT { id }
FROM ActivityEvent WHERE name=="$transaction_confirmed"
JOIN UserPoint WHERE segments { id="1234" }

# Get only 5 or fewer activities named "$transaction_confirmed"
SELECT { id }
FROM ActivityEvent WHERE name=="$transaction_confirmed"
LIMIT 5

SELECT { id }
FROM ActivityEvent WHERE name=="$transaction_confirmed"
LIMIT 100
# This query returns 100 (or fewer) activities named "$transaction_confirmed"

SELECT @count{ }
FROM ActivityEvent WHERE name=="$transaction_confirmed"
LIMIT 5
# Return the count of all activities named "$transaction_confirmed"
# Example: Return - 21,866,076

# Select id and name in the root level
SELECT { id name } FROM UserPoint
# Select name in UserPoint and creation_ts and id in emails linked to the UserPoint
SELECT { name emails { creation_ts id } } FROM UserPoint

<OBJECT> @filter(clause: "<FIELD_NAME> == \"<FIELD_VALUE>\"",
filter_empty: <BOOLEAN>) {<FIELD_NAME_1> <FIELD_NAME_2>}

@filter(clause: "category == \"CAT_1\" OR referrer == \"REF\"")

@filter(clause: "category == \"CAT_1\" OR category == \"CAT_2\"")

@filter(clause: "category != \"CAT_1\" AND category != \"CAT_2\"")

@filter(clause: "(category == \"CAT_1\" OR category == \"CAT_2\") AND
referrer == \"REF\"")

@filter(clause: "events { category == \"CAT_1\" } ")

select { id activities { id events { name score } } }
from UserPoint
where { activities { events { name == "display" } } }

[
{
"id": "up1",
"activities": [
{
"id": "a1",
"events": [
{ "name": "display", "score": 123 },
{ "name": "click"}
]
},
{
"id": "a2",
"events": [ { "name": "display" } ]
}
]
},
{
"id": "up2",
"activities": [
{
"id": "a3",
"events": [ { "name": "click" } ]
},
{
"id": "a4",
"events": [ { "name": "display" } ]
}
]
}
]

select { id activities { id events @filter(clause: "name == \"display\"") { name score } } }
from UserPoint
where { activities { events { name == "display" } } }

[
{
"id": "up1",
"activities": [
{
"id": "a1",
"events": [
{ "name": "display", "score": 123 }
]
},
{
"id": "a2",
"events": [ { "name": "display" } ]
}
]
},
{
"id": "up2",
"activities": [
{
"id": "a3",
"events": []
},
{
"id": "a4",
"events": [ { "name": "display" } ]
}
]
}
]

select { activities { events @filter(clause: "name == \"display\"") { score } } }
from UserPoint
where { activities { events { name == "display" } } }

[
{
"activities": [
{ "events": [ { "score": 123 } ] },
{ "events": [ ] }
]
},
{
"activities": [
{ "events": [] },
{ "events": [] }
]
}
]

select { { activities {
    events @filter(clause: "name == \"display\"", filter_empty: false) { score } } } }
from UserPoint
where { activities { events { name == "display" } } }
//result
[
{
"activities": [
{ "events": [ { "score": 123 } ] },
{ "events": [ {} ] }
]
},
{
"activities": [
{ "events": [] },
{ "events": [ {} ] }
]
}
]

select @filter { { activities {
    events @filter(clause: "name == \"display\"") { score } } } }
from UserPoint
where { activities { events { name == "display" } } }

// result
[
{
"activities": [ { "events": [ { "score": 123 } ] } ]
}
]

SELECT { activities @filter(clause: "events { is_defined(event_name) AND event_name == \"display\" }")
    { events { event_name } } }
FROM UserPoint
WHERE { activities { events { name == "display" } } }

# Counts the number of UserPoints
SELECT @count {} FROM UserPoint

# Counts the number of new users in the past 7 days
SELECT @count{} FROM UserPoint WHERE creation_date >= "now-7d/d"

# Average basket amount between two specific dates
SELECT {basket {amount @avg}} FROM UserEvent
WHERE { date >= "2020-12-01" AND date <= "2020-12-31" }

# Minimum basket amount between two specific dates
SELECT {basket {amount @min}} FROM UserEvent
WHERE { date >= "2020-06-20" AND date <= "2020-06-25" }

# Maximum basket amount between two specific dates
SELECT {basket {amount @max}} FROM UserEvent
WHERE { date >= "2020-06-20" AND date <= "2020-06-25" }

# Sum of basket amounts between two specific dates
SELECT {order{amount @sum }} FROM ActivityEvent
WHERE { date >= "2020-06-20" AND date <= "2020-06-25" }

# Number of channels in a datamart
SELECT {channel_id @cardinality} FROM ActivityEvent

# Number of cookies associated with UserPoints in a specific segment
SELECT {agents{id @cardinality}} FROM UserPoint
WHERE segments { id = "XXXX" }

SELECT { channel_id @map { # map the values of channel id in several buckets
session_duration @avg # The average duration
}
} FROM UserActivity
# Data
# channel ID : 1234, count : 654987987987, session_duration: 100
# channel ID : 1235, count : 987987965465, session_duration: 1500

SELECT { order { amount @histogram(interval:50)}}
FROM UserEvent WHERE date >= "now-7d"
# Data
# Key: 0, count: 97681
# Key: 50, count: 50324
# Key: 100, count: 33164
# Key: 150, count: 36528

# Mere use of @date_histogram directive: selecting all page_view events
# in the last 30 days
SELECT { date @date_histogram(interval:"1d") }
FROM UserEvent
WHERE name = "page_view" and date >= "now-30d/d"
# @date_histogram used together with @map directive with default interval (days)
SELECT { channel_id @map {date @date_histogram }}
FROM UserEvent
WHERE name = "$transaction_confirmed"
# Data
# Key: 2416, count: 27563351
# 2018-01-16T00:00:00.000Z 330
# 2018-01-17T00:00:00.000Z 331
# 2018-01-18T00:00:00.000Z 3332
# ...
# Key: 2417, count: 65498798
# ...
# Force an interval of one month
SELECT { channel_id @map {date @date_histogram(interval: "1M") } }
FROM UserEvent
WHERE name = "$transaction_confirmed"

SELECT {
    numberOfChannels: channel_id @cardinality # The approximate number of distinct values
    averageDuration: duration @avg # The average duration
    minimumDuration: duration @min # The minimum duration
} FROM UserEvent

{
"status": "ok",
"data":
{
"id": "50409", // ID of the query to retrieve for the next steps
"datamart_id": "1509",
"query_language": "OTQL",
"minor_version": null,
"major_version": null,
"query_text": "SELECT {id} FROM UserPoint",
"favorite": false
}
}

{
"status": "error",
"error": "cannot save invalid query, cause: Syntax error while parsing document \"nawak\". Invalid input 'n', expected Comments or select (line 1, column 1):\nnawak\n^",
"error_code": "BAD_REQUEST_DATA",
"error_id": "ef292c4e-1eab-4a7f-8fb2-77d797139be9"
}

// Creating a query payload
{
"query_text": "SELECT {id} FROM UserPoint", // Your query
"datamart_id": "<ASSOCIATED_DATAMART_ID>",
"query_language": "OTQL"
}

// If valid
{
"status": "ok",
"data": {
"type": "VALID",
"validation": {
"parameters": []
},
"status": "ok"
}
}
// If invalid
{
"status": "ok",
"data": {
"type": "PARSING_ERROR",
"error": {
"message": "Syntax error while parsing document \"nawak\". Invalid input 'n', expected Comments or select (line 1, column 1):\nnawak\n^",
"position": {
"row": 1,
"col": 1
},
"error_type": "PARSING"
},
"status": "error"
}
}

// Checking a query payload
{
"query": "SELECT {id} FROM UserPoint" // Your query
}

{
"status": "ok",
"data": {
"took": 112015,
"timed_out": false,
"offset": null,
"limit": null,
"result_type": "COUNT",
"precision": "FULL_PRECISION",
"sampling_ratio": null,
"rows": [
{
"count": 80975924
}
],
"cache_hit": true
}
}

{
"status": "error",
"error": "Service Unavailable",
"error_code": "SERVICE_UNAVAILABLE",
"error_id": "482416e1-2d93-484a-948b-615b639b5e4f"
}

# Select user points having spent on average at least 1000 through $transaction_confirmed events during the past year
SELECT (...) FROM UserPoint
WHERE activities {
events @ScoreField(name: "amount") @ScoreAvg(min: 1000) {
name = "$transaction_confirmed" AND date >= "now-1y/y"
}
}
# Select user points having spent on average more than 1000€ per order, on orders containing "IT" products whose amounts sum to at least 10€:
SELECT (...) FROM UserPoint
WHERE activity_events @ScoreAvg(min : 1000) {
order {
order_products @ScoreField(name: "amount") @ScoreSum(min: 10, result:"score_value") {
category="IT"
}
}
}

User activity management is at the core of audience management services.
A UserActivity is a collection of events (UserEvent) performed by a single user in a given period of time. For instance, mediarithmics considers a website activity to be a session of no more than 30 minutes on a given website. A UserActivity is also linked to its referrer (like Google, Yahoo!, ...); if a user switches referrer and comes back to the same website, it is considered a new activity. Events can be given any name and any properties, but some predefined names and properties can be used to trigger specific processing on the data.
You send UserEvents to mediarithmics, where they are aggregated and encapsulated into UserActivities. Some UserActivities contain only one UserEvent, while others contain several.
User activities and events can be indexed and queried using APIs and/or the query engine, if their properties fit the object tree schema.
When an activity is ingested, it triggers dedicated processes.
A minimal user activity is an object with the following properties.
Always prefer predefined properties to custom properties when they exist, as the platform automates many actions based on those properties. Using custom properties instead could make you miss critical automated steps in keeping your data well organized.
Activities are limited to 8000 characters.
For user identification you can:
- either use $user_identifiers;
- or use $user_agent_id, $user_account_id, $compartment_id, and $email_hash
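To make the structure concrete, here is a minimal sketch of a site-visit activity payload using the $user_identifiers identification style; all values are illustrative, not real data:

```json
{
  "$ts": 1609459200000,
  "$type": "SITE_VISIT",
  "$site_id": "1234",
  "$user_identifiers": [
    { "$type": "USER_AGENT", "$user_agent_id": "vec:89090939434" }
  ],
  "$events": [
    { "$ts": 1609459200000, "$event_name": "$page_view", "$properties": {} }
  ]
}
```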
Their type is SITE_VISIT. They have those additional base properties :
Their type is APP_VISIT. They have those additional base properties :
If you have multiple sites and/or apps, you can create a channel for each one of them. Each channel will have an ID, representing either a site ID or an app ID.
When you attach site IDs or app IDs to your user activities, you link them to the corresponding channel.
This is useful for attributing activities to a site or an app, or when you want to create queries and segments that select only users having activities on a specific site or app.
A user identifier is either a user account, a user email, or a user agent.
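Illustrative $user_identifiers entries for the three identifier types; the placeholder values in angle brackets are to be replaced with your own data:

```json
[
  { "$type": "USER_ACCOUNT", "$user_account_id": "<ACCOUNT_ID>", "$compartment_id": "<COMPARTMENT_ID>" },
  { "$type": "USER_EMAIL", "$hash": "<EMAIL_HASH>" },
  { "$type": "USER_AGENT", "$user_agent_id": "vec:89090939434" }
]
```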
In the mediarithmics vocabulary, the activity origin refers to the last digital channel leading to user interaction. It is key information to be used to analyze the results and the performance of marketing activities.
This interaction can be a Touch (the user views a banner or an email) or a Visit (the user visits a web site or an app). In both cases, the $origin object of the is used to capture the information related to the originating channel.
The $origin object is a customizable object with predefined properties as follows:
The origin of user activity is calculated in different ways depending on the activity type (Touch/Visit) and source (Tag/API):
for a visit on a site or an app: from the analysis of the referrer and/or the query parameters provided in the destination URL (like the UTM parameters used by Google Analytics)
for touch events generated by campaigns delivered by the mediarithmics platform, the origin is automatically calculated from campaign information
for touch events (pixel events in emails or banners) the origin is calculated from the properties provided in the event (predefined properties)
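For example, a visit originating from a paid search click could carry an $origin object like the following (a sketch with illustrative values; the $gclid is a placeholder):

```json
"$origin": {
  "$ts": 1609459200000,
  "$channel": "cpc",
  "$source": "google.com",
  "$campaign_name": "back to school",
  "$keywords": "sport+shoes",
  "$gclid": "<GOOGLE_CLICK_ID>"
}
```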
Here is the table of correspondence between mediarithmics origin fields and Google Analytics parameters:
Here is the table of correspondence between mediarithmics origin fields and AT Internet parameters.
The structure of the xtor parameter is as follows:
The xtor parameter is automatically analyzed to fill the following origin fields:
Depending on the AT Internet configuration, the source may be related to a marketing channel. Here is the table of correspondence between the AT Internet source and the mediarithmics channel origin field:
When the user event is declared through a tag it is possible to add predefined event properties to declare an activity origin.
Here is the list of the predefined properties :
In mediarithmics, each unique activity is identified by 4 parameters, in this order:
datamart_id, user_point_id, ts and unique_key.
This means that when those 4 parameters are identical between 2 activities, the new one cancels and replaces the old one. If any of the parameters differs, a new activity is created.
The unique key is in UUID format. It can be generated by the platform or by the user.
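The replacement rule above can be sketched as follows. This is an illustrative model, not mediarithmics code; the `ActivityStore` class and field names are hypothetical:

```python
# An activity is identified by the 4-tuple
# (datamart_id, user_point_id, ts, unique_key).
def dedup_key(activity):
    return (activity["datamart_id"], activity["user_point_id"],
            activity["ts"], activity["unique_key"])

class ActivityStore:
    """Hypothetical store illustrating the replace-on-identical-key rule."""
    def __init__(self):
        self._activities = {}

    def ingest(self, activity):
        # Same 4-tuple: the new activity cancels and replaces the old one.
        # Any differing parameter: a new activity is created.
        self._activities[dedup_key(activity)] = activity

    def count(self):
        return len(self._activities)
```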
The best practice is to use UUID v1 when generating it yourself, as it is not a random value but is calculated from a timestamp and a unique string.
mediarithmics tag through user-event-front: the unique_key is generated by the platform.
mediarithmics APIs through datamart-front: the unique_key can be provided by the user. If empty, it is generated by the platform.
mediarithmics document import: the unique_key can be provided by the user. If empty, it is generated by the platform.
When generating it manually, the user has to select a key property (such as an order_id or a unique id generated on the client side), or a concatenation of properties, so that the string is unique.
UUID v1 generation also needs a timestamp, and it is highly recommended to use the activity's timestamp.
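One way to follow this recommendation is to build the UUID v1 deterministically from the activity timestamp and a unique string such as an order_id. This is a minimal sketch, not an official mediarithmics helper; the function name and the use of a name-based hash for the clock_seq/node fields are assumptions:

```python
import uuid

# 100-ns intervals between 1582-10-15 (UUID v1 epoch) and the Unix epoch.
GREGORIAN_OFFSET = 0x01B21DD213814000

def activity_unique_key(activity_ts_ms, seed):
    # Place the activity timestamp in the UUID v1 time fields.
    ts_100ns = activity_ts_ms * 10_000 + GREGORIAN_OFFSET
    time_low = ts_100ns & 0xFFFFFFFF
    time_mid = (ts_100ns >> 32) & 0xFFFF
    time_hi = (ts_100ns >> 48) & 0x0FFF
    # Derive stable clock_seq and node values from the seed string so the
    # same (timestamp, seed) pair always yields the same unique_key.
    h = uuid.uuid5(uuid.NAMESPACE_URL, seed).int
    clock_seq = (h >> 48) & 0x3FFF
    node = h & 0xFFFFFFFFFFFF
    return str(uuid.UUID(fields=(time_low, time_mid, time_hi,
                                 (clock_seq >> 8) & 0xFF, clock_seq & 0xFF,
                                 node), version=1))
```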
See additional documentation
A User Event is an object composed of an event name and a list of properties. Each property is composed of a name and a value.
Using some predefined event names will allow you to have useful adapted automatic processing on some of your events.
Always prefer predefined event names to custom events when they exist, as the platform automates many actions based on those names. Using custom names instead could make you miss critical automated steps in keeping your data well organized.
Using some predefined event properties will allow you to have useful adapted automatic processing on some of your events.
Always prefer predefined event properties to custom event properties when they exist, as the platform automates many actions based on those properties. Using custom properties instead could make you miss critical automated steps in keeping your data well organized.
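For instance, an event combining the predefined $item_view name with the predefined $items property could look like this (a sketch; all values are illustrative):

```json
{
  "$ts": 1609459200000,
  "$event_name": "$item_view",
  "$properties": {
    "$items": [
      { "$id": "ProductID1", "$price": 19.9, "$qty": 1, "$brand": "BrandName" }
    ]
  }
}
```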
Below is the JSON schema for a single activity, according to the rules enacted in this documentation.
The Time To Live in minutes for the storage of this activity. 0 means no expiration
$user_agent_id
String
The identifier of the user device as provided by the user id mapping service. ex: vec:89090939434
⚠️ Legacy, please use $user_identifiers instead
$user_account_id
(in addition to $compartment_id)
String
The user account id of the user
⚠️ Legacy, please use $user_identifiers instead
$compartment_id
(in addition to
$user_account_id)
Integer
The ID of the compartment associated with this activity
⚠️ Legacy, please use $user_identifiers instead
$email_hash
Email Hash Object
The email hash object { "$hash": …, "$email": … }
⚠️ Legacy, please use $user_identifiers instead
$user_identifiers
List of
A list of all identifiers relative to the activity. You can have many of each type. ⚠️ Use this list rather than the legacy properties ($user_agent_id, $user_account_id & $compartment_id, $email_hash)
$origin
The activity origin
$location
The activity location
$events
List of
A list of user events attached to this activity
$unique_key
String
The unique_key of the activity, formatted as a UUID v1. If empty, the platform generates one automatically
[any custom property]
Any
The value of a custom property
The account's expiration timestamp
The email's expiration timestamp
The region iso (ISO 3166-2)
$city
String (Optional)
The city’s name
$iso_city
String (Optional)
The city iso (UN/LOCODE)
$zip_code
String (Optional)
The zip code
$latlon
Array[Double, Double]
The latitude and longitude where the first element is the latitude and the second the longitude
the campaign id
$sub_campaign_technical_name
the sub campaign (Ad Group) technical name
$sub_campaign_id
the sub campaign (Ad Group) id
$message_id
$message_technical_name
$keywords
the keywords used in the search ex:sport+shoes
$creative_name
the creative name
$creative_technical_name
the creative technical name
$creative_id
the creative id
$engagement_content_id
$social_network
the social network
$referral_path
the URL of the referral
$log_id
the custom unique identifier for the activity origin
$gclid
the unique identifier for a Google AdWords click
for an activity inserted through the API, the origin fields can be directly filled with the relevant data.
utm_content = template 1
$keywords
utm_term
utm_term = sport+shoes or $keywords = sport+shoes
$gclid
gclid
Google Click Identifier
$caid=8989
$sub_campaign_technical_name
$scatn
$scatn=STRATEGY-1, the sub campaign technical name is equivalent to the ad group technical name
$sub_campaign_id
$scaid
$scaid=8782, sub campaign id is equivalent to the ad group id
$creative_name
$creative_name
$creative_technical_name
$crtn
$crtn=special-banner
$keywords
$keywords
$keywords = sport+shoes
$social network
$social_network
$referral path
$referral_path
$log_id
$log_id
unique custom identifier
$gclid
$gclid
Google Click Identifier
The user has completed a transaction
$conversion
The user has completed a conversion. This event is either registered by the integrator or automatically created by the platform when a goal is met.
$app_install
The user has installed a mobile app
$app_update
The user has updated a mobile app
$app_open
The user has opened an app or resumed it
$ad_view
The user has been exposed to a display ad
$ad_click
The user has clicked on an ad
$email_view
The user has opened an email
$email_click
The user has opened a link in an email
$set_user_choice
The user has given consent or objected to a processing activity
Acceptance value of the user choice when the event name is $set_user_choice.
field
type
description
$ts
Long
The timestamp of the activity. For a session, it should correspond to the start date (Unix Epoch Time in milliseconds)
$type
String enum
The activity type :
- SITE_VISIT for activities happening on a channel of type site
- APP_VISIT for activities happening on a channel of type app
- DISPLAY_AD for activities involving ad view and ad click events. See Ads exposure tracking for more information.
- EMAIL when users read an email or click on a link inside it. For more information, see Email views and clicks.
- TOUCH for other activities
There are also activity types specific to automations:
- USER_SCENARIO_START when a scenario starts
- USER_SCENARIO_STOP when a scenario stops
- USER_SCENARIO_NODE_ENTER when a new scenario node is entered
- USER_SCENARIO_NODE_EXIT when a scenario node is exited
$session_status
String enum
The session status, automatically updated by mediarithmics: NO_SESSION, IN_SESSION, CLOSED_SESSION
$ttl
field
type
description
$site_id
String
The site ID (channel)
$session_duration
Integer
The session duration in seconds.
field
type
description
$app_id
String
The mobile app ID (channel)
$session_duration
Integer (Optional)
The session duration in seconds.
field
type
description
$type
Constant String
USER_ACCOUNT
$compartment_id
Integer
The ID of the compartment associated with this activity
$user_account_id
String
The user account id of the user
$expiration_ts
field
type
description
$type
Constant String
USER_EMAIL
$hash
String
The email hash
String (Optional)
The "raw" email
$expiration_ts
field
type
description
$type
Constant String
USER_AGENT
$user_agent_id
String
The user agent id
Currently supported agent type:
- vector id: vec:vector_id ; e.g.: vec:89090939434
$expiration_ts
Timestamp (Optional)
The agent's expiration timestamp
field
type
description
$source
String (Optional)
The location source (IP, GPS, OTHER)
$country
String (Optional)
The country’s name
$region
String (Optional)
The region’s name
$iso_region
origin field
description
$ts
$channel
the communication channel. ex: cpc, newsletter, banner, video, ...
$source
the source of the traffic. ex: google.com, news-foo.com, ...
$campaign_name
the campaign name
$campaign_technical_name
the campaign technical name
origin field
url parameters
example / description
$channel
utm_medium
utm_medium = email or $channel = email
$source
utm_source
utm_source = base loyalty program
$campaign_name
utm_campaign
utm_campaign = back to school
$creative_name
origin field
xtor field
example / description
$source
A
EPR when xtor=EPR-14234
$campaign_name
B
7880 when xtor=AD-7880
$creative_name
C
ad_version7 when xtor=AD-3030-ad_version7
source
channel
EPR
EREC
ES
AD
$rtb_display
AL
$affiliation
origin field
predefined event properties
example / description
$source
$source
$source=crm_database
$campaign_name
$campaign_name
$campaign_name=back to school
$campaign_technical_name
$ctn
$ctn=DISPLAY-BACK-TO-SCHOOL
$campaign_id
event name
description
$page_view
The user has viewed a page.
Events with this event name only serve in session aggregation to have correct information on the user activity. They will be dropped when the session closes, and you won't see them on the platform anymore. If you wish to keep a record of the pages a user viewed in your site and create queries based on that data, you should name your event differently. You can also use the $item_view event name if your pages show products.
$home_view
The user has viewed the home page
$item_view
The user has viewed an item
$item_list_view
The user has viewed a list of items
$basket_view
The user has viewed the basket
property name
description
$items
An array of products associated with the event. Mainly used in products tracking for e-commerce sites.
Example value : [{"$id":"ProductID1"},{"$id":"ProductID2"}]
Each item has the $id, $ean, $qty, $price, $brand, $name, $category1, $category2, $category3 and $category4 predefined properties.
$campaign_technical_name
Technical name of a campaign associated to the event when tracking ads exposure.
$sub_campaign_technical_name
Technical name of a sub campaign associated to the event when tracking ads exposure.
$creative_technical_name
Technical name of a creative associated to the event
$processing_token
Token of the associated processing activity when event name is $set_user_choice.
Integer
Timestamp (Optional)
Timestamp (Optional)
String (Optional)
$campaign_id
utm_content
$caid
$transaction_confirmed
$choice_acceptance_value
xtor=A-B-C-D-E-F-G-H

{
"$ts": 3489009384393,
"$event_name": "$transaction_confirmed", // Conversion detected
"$properties": {
"$items": [
{
"$id": "product_ID", // Used to filter in funnel analytics
"$qty": 20, // Used for conversion amounts
"$price": 102.8, // Used for conversion amounts
"$brand": "Apple", // Used to filter in funnel analytics
"$category1": "Category 1", // Used to filter in funnel analytics
"$category2": "Category 2", // Used to filter in funnel analytics
"$category3": "Category 3", // Used to filter in funnel analytics
"$category4": "Category 4" // Used to filter in funnel analytics
},
{
"$id": "product_ID2",
"$qty": 12,
"$price": 3.4,
"$brand": "Microsoft"
}
],
"$currency": "EUR"
}
}

{
"$schema": "http://json-schema.org/draft-07/schema#",
"$ref": "#/definitions/UserActivity",
"definitions": {
"UserActivity": {
"oneOf": [
{
"$ref": "#/definitions/GenericUserActivity"
},
{
"$ref": "#/definitions/SiteVisitUserActivity"
},
{
"$ref": "#/definitions/AppVisitUserActivity"
}
]
},
"GenericUserActivity": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$session_status": {
"$ref": "#/definitions/UserActivitySessionStatus"
},
"$ttl": {
"type": "number"
},
"$user_agent_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$user_account_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$compartment_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$email_hash": {
"$ref": "#/definitions/Nullable%3Calias-1071211137-70767-70920-1071211137-0-212510%3Cdef-interface-792792747-896-1013-792792747-0-7494%2C%22%24type%22%3E%3E"
},
"$user_identifiers": {
"$ref": "#/definitions/Nullable%3Cdef-alias-792792747-1226-1323-792792747-0-7494%5B%5D%3E"
},
"$origin": {
"$ref": "#/definitions/Nullable%3CUserActivityOrigin%3E"
},
"$location": {
"$ref": "#/definitions/Nullable%3CUserActivityLocation%3E"
},
"$unique_key": {
"$ref": "#/definitions/UUID"
},
"$type": {
"type": "string",
"enum": [
"DISPLAY_AD",
"EMAIL",
"TOUCH",
"USER_SCENARIO_START",
"USER_SCENARIO_STOP",
"USER_SCENARIO_NODE_ENTER",
"USER_SCENARIO_NODE_EXIT"
]
},
"$events": {
"type": "array",
"items": {
"$ref": "#/definitions/UserActivityEvent"
}
}
},
"required": [
"$email_hash",
"$events",
"$location",
"$origin",
"$session_status",
"$ts",
"$ttl",
"$type"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"Timestamp": {
"type": "number"
},
"UserActivitySessionStatus": {
"type": "string",
"enum": [
"NO_SESSION",
"IN_SESSION",
"CLOSED_SESSION"
]
},
"Nullable<ID>": {
"anyOf": [
{
"$ref": "#/definitions/ID"
},
{
"type": "null"
}
]
},
"ID": {
"type": "string"
},
"Nullable<alias-1071211137-70767-70920-1071211137-0-212510<def-interface-792792747-896-1013-792792747-0-7494,\"$type\">>": {
"anyOf": [
{
"type": "object",
"properties": {
"$hash": {
"type": "string"
},
"$email": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
}
},
"required": [
"$hash"
],
"additionalProperties": false
},
{
"type": "null"
}
]
},
"Nullable<string>": {
"type": [
"string",
"null"
]
},
"Nullable<def-alias-792792747-1226-1323-792792747-0-7494[]>": {
"anyOf": [
{
"type": "array",
"items": {
"$ref": "#/definitions/UserIdentifier"
}
},
{
"type": "null"
}
]
},
"UserIdentifier": {
"anyOf": [
{
"$ref": "#/definitions/UserEmailIdentifier"
},
{
"$ref": "#/definitions/UserAccountIdentifier"
},
{
"$ref": "#/definitions/UserAgentIdentifier"
}
]
},
"UserEmailIdentifier": {
"type": "object",
"properties": {
"$type": {
"type": "string",
"const": "USER_EMAIL"
},
"$hash": {
"type": "string"
},
"$email": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
}
},
"required": [
"$type",
"$hash"
],
"additionalProperties": false
},
"UserAccountIdentifier": {
"type": "object",
"properties": {
"$type": {
"type": "string",
"const": "USER_ACCOUNT"
},
"$user_account_id": {
"$ref": "#/definitions/ID"
},
"$compartment_id": {
"$ref": "#/definitions/ID"
}
},
"required": [
"$type",
"$user_account_id",
"$compartment_id"
],
"additionalProperties": false
},
"UserAgentIdentifier": {
"type": "object",
"properties": {
"$type": {
"type": "string",
"const": "USER_AGENT"
},
"$user_agent_id": {
"$ref": "#/definitions/ID"
}
},
"required": [
"$type",
"$user_agent_id"
],
"additionalProperties": false
},
"Nullable<UserActivityOrigin>": {
"anyOf": [
{
"$ref": "#/definitions/UserActivityOrigin"
},
{
"type": "null"
}
]
},
"UserActivityOrigin": {
"type": "object",
"properties": {
"$campaign_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$campaign_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$channel": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$creative_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$creative_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$engagement_content_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$gclid": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$keywords": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$log_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$message_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$message_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$referral_path": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$social_network": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$source": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$sub_campaign_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$sub_campaign_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$ts": {
"type": "number"
}
},
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"Nullable<number>": {
"type": [
"number",
"null"
]
},
"JsonType": {
"anyOf": [
{
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
{
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
{
"type": "boolean"
},
{
"type": "object"
},
{
"type": "array",
"items": {}
},
{
"not": {}
}
]
},
"Nullable<UserActivityLocation>": {
"anyOf": [
{
"$ref": "#/definitions/UserActivityLocation"
},
{
"type": "null"
}
]
},
"UserActivityLocation": {
"type": "object",
"properties": {
"$source": {
"$ref": "#/definitions/LocationSource"
},
"$country": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$region": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$iso_region": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$city": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$iso_city": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$zip_code": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$latlon": {
"$ref": "#/definitions/Nullable%3Cnumber%5B%5D%3E"
}
},
"required": [
"$latlon"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"LocationSource": {
"type": "string",
"enum": [
"GPS",
"IP",
"OTHER"
]
},
"Nullable<number[]>": {
"anyOf": [
{
"type": "array",
"items": {
"type": "number"
}
},
{
"type": "null"
}
]
},
"UUID": {
"type": "string",
"pattern": "^[0-9a-fA-F]{8}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{12}$"
},
"UserActivityEvent": {
"anyOf": [
{
"$ref": "#/definitions/GenericUserActivityEvent"
},
{
"$ref": "#/definitions/AdTrackingEvent"
},
{
"$ref": "#/definitions/SetUserChoiceEvent"
},
{
"$ref": "#/definitions/SetUserProfilePropertiesEvent"
},
{
"$ref": "#/definitions/RetailEvent"
},
{
"$ref": "#/definitions/ConversionEvent"
}
]
},
"GenericUserActivityEvent": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$expiration_ts": {
"$ref": "#/definitions/Nullable%3CTimestamp%3E"
},
"$event_name": {
"anyOf": [
{
"$ref": "#/definitions/EventName"
},
{
"type": "string"
}
]
},
"$properties": {
"$ref": "#/definitions/Customizable"
}
},
"required": [
"$event_name",
"$properties",
"$ts"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"Nullable<Timestamp>": {
"anyOf": [
{
"$ref": "#/definitions/Timestamp"
},
{
"type": "null"
}
]
},
"EventName": {
"anyOf": [
{
"$ref": "#/definitions/DefaultEventName"
},
{
"type": "string"
}
]
},
"DefaultEventName": {
"type": "string",
"enum": [
"$page_view",
"$home_view",
"$category_view",
"$email_view",
"$email_click",
"$email_sent",
"$email_delivered",
"$email_soft_bounce",
"$email_hard_bounce",
"$email_unsubscribe",
"$email_complaint",
"$content_corrections"
]
},
"Customizable": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"AdTrackingEvent": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$expiration_ts": {
"$ref": "#/definitions/Nullable%3CTimestamp%3E"
},
"$event_name": {
"type": "string",
"enum": [
"$ad_view",
"$ad_click"
]
},
"$properties": {
"$ref": "#/definitions/AdTrackingEventProperties"
}
},
"required": [
"$event_name",
"$properties",
"$ts"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"AdTrackingEventProperties": {
"type": "object",
"properties": {
"$url": {
"type": "string"
},
"$referrer": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$campaign_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$sub_campaign_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$creative_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$message_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$campaign_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$sub_campaign_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$message_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$creative_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
}
},
"additionalProperties": {
"$ref": "#/definitions/JsonType"
},
"required": [
"$url"
]
},
"SetUserChoiceEvent": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$expiration_ts": {
"$ref": "#/definitions/Nullable%3CTimestamp%3E"
},
"$event_name": {
"type": "string",
"const": "$set_user_choice"
},
"$properties": {
"$ref": "#/definitions/SetUserChoiceEventProperties"
}
},
"required": [
"$event_name",
"$properties",
"$ts"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"SetUserChoiceEventProperties": {
"type": "object",
"properties": {
"$url": {
"type": "string"
},
"$referrer": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$processing_token": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$processing_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$choice_acceptance_value": {
"type": "boolean"
},
"$choice_source_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
}
},
"required": [
"$choice_acceptance_value",
"$url"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"SetUserProfilePropertiesEvent": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$expiration_ts": {
"$ref": "#/definitions/Nullable%3CTimestamp%3E"
},
"$event_name": {
"type": "string",
"const": "$set_user_profile_properties"
},
"$properties": {
"$ref": "#/definitions/Customizable"
}
},
"required": [
"$event_name",
"$properties",
"$ts"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"RetailEvent": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$expiration_ts": {
"$ref": "#/definitions/Nullable%3CTimestamp%3E"
},
"$event_name": {
"$ref": "#/definitions/RetailEventName"
},
"$properties": {
"$ref": "#/definitions/RetailEventProperties"
}
},
"required": [
"$event_name",
"$properties",
"$ts"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"RetailEventName": {
"type": "string",
"enum": [
"$item_view",
"$item_list_view",
"$product_view",
"$product_list_view",
"$basket_view",
"$transaction_confirmed"
]
},
"RetailEventProperties": {
"type": "object",
"properties": {
"$url": {
"type": "string"
},
"$referrer": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$items": {
"$ref": "#/definitions/Nullable%3Cdef-interface-792792747-3521-3897-792792747-0-7494%5B%5D%3E"
}
},
"additionalProperties": {
"$ref": "#/definitions/JsonType"
},
"required": [
"$url"
]
},
"Nullable<def-interface-792792747-3521-3897-792792747-0-7494[]>": {
"anyOf": [
{
"type": "array",
"items": {
"$ref": "#/definitions/RetailEventPropertiesItem"
}
},
{
"type": "null"
}
]
},
"RetailEventPropertiesItem": {
"type": "object",
"properties": {
"$id": {
"type": "string"
},
"$ean": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$qty": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$price": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$brand": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$category1": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$category2": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$category3": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$category4": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
}
},
"required": [
"$id"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"ConversionEvent": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$expiration_ts": {
"$ref": "#/definitions/Nullable%3CTimestamp%3E"
},
"$event_name": {
"type": "string",
"const": "$conversion"
},
"$properties": {
"$ref": "#/definitions/ConversionEventProperties"
}
},
"required": [
"$event_name",
"$properties",
"$ts"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"ConversionEventProperties": {
"type": "object",
"properties": {
"$conversion_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$goal_id": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$conversion_technical_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$goal_technical_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$conversion_value": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$log_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$conversion_external_id": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
},
"$goal_technical_name": {
"$ref": "#/definitions/Nullable%3Cstring%3E"
}
},
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"SiteVisitUserActivity": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$session_status": {
"$ref": "#/definitions/UserActivitySessionStatus"
},
"$ttl": {
"type": "number"
},
"$user_agent_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$user_account_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$compartment_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$email_hash": {
"$ref": "#/definitions/Nullable%3Calias-1071211137-70767-70920-1071211137-0-212510%3Cdef-interface-792792747-896-1013-792792747-0-7494%2C%22%24type%22%3E%3E"
},
"$user_identifiers": {
"$ref": "#/definitions/Nullable%3Cdef-alias-792792747-1226-1323-792792747-0-7494%5B%5D%3E"
},
"$origin": {
"$ref": "#/definitions/Nullable%3CUserActivityOrigin%3E"
},
"$location": {
"$ref": "#/definitions/Nullable%3CUserActivityLocation%3E"
},
"$unique_key": {
"$ref": "#/definitions/UUID"
},
"$session_duration": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$error_analyzer_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$analyzer_errors": {
"type": "array",
"items": {
"type": "object"
}
},
"$topics": {
"$ref": "#/definitions/Nullable%3Calias-1071211137-70404-70537-1071211137-0-212510%3Cstring%2Calias-1071211137-70404-70537-1071211137-0-212510%3Cstring%2Cnumber%3E%3E%3E"
},
"$type": {
"type": "string",
"const": "SITE_VISIT"
},
"$events": {
"type": "array",
"items": {
"$ref": "#/definitions/UserActivityEvent"
}
},
"$site_id": {
"$ref": "#/definitions/ID"
}
},
"required": [
"$email_hash",
"$events",
"$location",
"$origin",
"$session_status",
"$site_id",
"$ts",
"$ttl",
"$type"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"Nullable<alias-1071211137-70404-70537-1071211137-0-212510<string,alias-1071211137-70404-70537-1071211137-0-212510<string,number>>>": {
"anyOf": [
{
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "number"
}
}
},
{
"type": "null"
}
]
},
"AppVisitUserActivity": {
"type": "object",
"properties": {
"$ts": {
"$ref": "#/definitions/Timestamp"
},
"$session_status": {
"$ref": "#/definitions/UserActivitySessionStatus"
},
"$ttl": {
"type": "number"
},
"$user_agent_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$user_account_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$compartment_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$email_hash": {
"$ref": "#/definitions/Nullable%3Calias-1071211137-70767-70920-1071211137-0-212510%3Cdef-interface-792792747-896-1013-792792747-0-7494%2C%22%24type%22%3E%3E"
},
"$user_identifiers": {
"$ref": "#/definitions/Nullable%3Cdef-alias-792792747-1226-1323-792792747-0-7494%5B%5D%3E"
},
"$origin": {
"$ref": "#/definitions/Nullable%3CUserActivityOrigin%3E"
},
"$location": {
"$ref": "#/definitions/Nullable%3CUserActivityLocation%3E"
},
"$unique_key": {
"$ref": "#/definitions/UUID"
},
"$session_duration": {
"$ref": "#/definitions/Nullable%3Cnumber%3E"
},
"$error_analyzer_id": {
"$ref": "#/definitions/Nullable%3CID%3E"
},
"$analyzer_errors": {
"type": "array",
"items": {
"type": "object"
}
},
"$topics": {
"$ref": "#/definitions/Nullable%3Calias-1071211137-70404-70537-1071211137-0-212510%3Cstring%2Calias-1071211137-70404-70537-1071211137-0-212510%3Cstring%2Cnumber%3E%3E%3E"
},
"$type": {
"type": "string",
"const": "APP_VISIT"
},
"$events": {
"type": "array",
"items": {
"anyOf": [
{
"$ref": "#/definitions/UserActivityEvent"
},
{
"$ref": "#/definitions/AppActivityEvent"
}
]
}
},
"$app_id": {
"$ref": "#/definitions/ID"
}
},
"required": [
"$app_id",
"$email_hash",
"$events",
"$location",
"$origin",
"$session_status",
"$ts",
"$ttl",
"$type"
],
"additionalProperties": {
"$ref": "#/definitions/JsonType"
}
},
"AppActivityEvent": {
"type": "object",
"properties": {
"$event_name": {
"type": "string",
"enum": [
"$app_open",
"$app_update",
"$app_install"
]
},
"$properties": {
"$ref": "#/definitions/Customizable"
}
},
"required": [
"$event_name",
"$properties"
],
"additionalProperties": false
}
}
}