Datasets and data sources


A dataset is built from at least one data source, optionally combined with transformations, and processed for visualisation in charts.

You can retrieve data from the following data sources:

  • OTQL queries

  • Activities analytics data cube

  • Collection volumes data cube

  • Resources usage data cube

  • Events ingestion monitoring data cube

Depending on the query you run and the transformations you apply, you can build different types of datasets. Here is a recap of which data sources and transformations create each dataset type, and the visualisations available for it.

Dataset | Created from | Compatible with
Single number | Queries: OTQL @count and metrics directives, activities analytics queries without dimensions, collection volumes queries without dimensions. Transformations: none. | Charts: Metric. Transformations: ratio, to-list.
Key / value | Queries: OTQL bucket directives, activities analytics queries with dimensions, collection volumes queries with dimensions. Transformations: to-list, to-percentages, index. | Charts: Bars, Pie, Radar. Transformations: to-percentages, join, index, reduce.
Key / value / buckets | Queries: OTQL multi-level bucket directives, activities analytics queries with multiple dimensions, collection volumes queries with multiple dimensions. Transformations: none. | Charts: Bars, Pie. Transformations: to-percentages, reduce.
Key / values | Queries: none. Transformations: join. | Charts: Bars, Radar. Transformations: reduce.

Single number datasets

Here is an example of a dataset with only one data source that returns a number, first with an OTQL query, then with an activities analytics query:

"dataset": {
    "type": "OTQL",
    "query_id": 666 // SELECT @count FROM UserPoint
}

"dataset": {
    "type": "activities_analytics",
    "query_json":  { // This query returns the number of active users
        "dimensions": [],
        "metrics": [
            {
                "expression": "users"
            }
        ]
    }
}

Key / value datasets

// key-value dataset built with an OTQL query
"dataset": {
    "type": "OTQL",
    "query_id": 666 // SELECT {gender @map} FROM UserProfile
}

// key-value dataset built with an activities analytics query
"dataset": {
    "type": "activities_analytics",
    "query_json":  { // This query returns the number of active users per channel
        "dimensions": [
            {"name": "channel_id"}
        ],
        "metrics": [
            {
                "expression": "users"
            }
        ]
    }
}

Key / value datasets can also be produced by transformations such as to-list, which builds a list from multiple numbers. Note the series_title property, which controls the title displayed in tooltips and legends.

"dataset": {
    "type": "to-list",
    "sources": [
        {
            "type": "OTQL",
            "query_id": "666",
            "series_title": "Female"
        },
        {
            "type": "OTQL",
            "query_id": "777",
            "series_title": "Male"
        }
    ]
}
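To make the to-list behaviour concrete, here is an illustrative Python sketch. It is not part of the mediarithmics API (datasets are evaluated server-side): it assumes each single-number source can be modelled as a (series_title, value) pair and that key / value entries look like the {key, value} objects shown elsewhere on this page.

```python
def to_list(sources):
    """Combine several single-number sources into one key / value dataset.

    Each source is modelled as a (series_title, value) pair; the series
    title becomes the key of the resulting entry.
    """
    return [{"key": title, "value": value} for title, value in sources]

# Two hypothetical query results, titled as in the JSON example above.
result = to_list([("Female", 5200), ("Male", 4800)])
print(result)
# [{'key': 'Female', 'value': 5200}, {'key': 'Male', 'value': 4800}]
```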

Key / value / buckets datasets

// key / value / buckets dataset built with an OTQL query
"dataset": {
    "type": "OTQL",
    "query_id": 666 // SELECT {cat1 @map{cat2 @map{cat3 @map}}} FROM UserProfile
}

// key / value / buckets dataset built with an activities analytics query
"dataset": {
    "type": "activities_analytics",
    "query_json":  { // Number of active users per day per channel
        "dimensions": [
            {"name": "date_yyyymmdd"},
            {"name": "channel_id"}
        ],
        "metrics": [
            {
                "expression": "users"
            }
        ]
    }
}
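To picture the nested structure such queries produce, here is a hedged Python sketch. The {key, value, buckets} shape is an assumption extrapolated from the key / value examples above, not a documented wire format; the flatten helper is hypothetical.

```python
# Assumed in-memory shape of one key / value / buckets entry:
# a key, its value, and optional nested buckets (up to three levels).
day = {
    "key": "20240101",
    "value": 500,  # e.g. active users that day
    "buckets": [
        {"key": "channel_1", "value": 300},
        {"key": "channel_2", "value": 200},
    ],
}

def flatten(entries, path=()):
    """Yield (key_path, value) for every bucket, depth-first."""
    for entry in entries:
        key_path = path + (entry["key"],)
        yield key_path, entry["value"]
        yield from flatten(entry.get("buckets", []), key_path)

print(list(flatten([day])))
# [(('20240101',), 500),
#  (('20240101', 'channel_1'), 300),
#  (('20240101', 'channel_2'), 200)]
```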

Key / values datasets

Applying the join transformation to multiple key / value datasets that share common keys creates a single dataset with multiple values associated with each key.

"dataset": {
    "type": "join",
    "sources": [
        {
            "type": "OTQL",
            "query_id": 777, // Select {interests @map} FROM UserPoint WHERE ...
            "series_title": "Group 1" 
        },
        {
            "type": "OTQL",
            "query_id": 666, // Select {interests @map} FROM UserPoint WHERE...
            "series_title": "Group 2"
        }
    ]
}
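A rough sketch of the join semantics described above, assuming key / value datasets are lists of {key, value} objects as in the earlier examples. This is an illustration only; the join helper and the resulting mapping shape are hypothetical, not the platform's actual implementation.

```python
def join(named_datasets):
    """Merge key / value datasets sharing keys into one key -> values mapping.

    named_datasets is a list of (series_title, dataset) pairs; each key ends
    up associated with one value per series.
    """
    joined = {}
    for title, dataset in named_datasets:
        for entry in dataset:
            joined.setdefault(entry["key"], {})[title] = entry["value"]
    return joined

# Two hypothetical results of the queries 777 and 666 above.
group1 = [{"key": "sports", "value": 120}, {"key": "news", "value": 80}]
group2 = [{"key": "sports", "value": 95}, {"key": "news", "value": 130}]
result = join([("Group 1", group1), ("Group 2", group2)])
print(result)
# {'sports': {'Group 1': 120, 'Group 2': 95},
#  'news': {'Group 1': 80, 'Group 2': 130}}
```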

Dataset JSON declaration

A dataset is declared as a tree of chained data sources and transformations.

"dataset": {
    "type": "transformation-name",
    "sources": [
        { 
            "type": "transformation-name",
            "sources": [
                {
                    // OTQL data source
                    "type": "OTQL", 
                    // ID of the OTQL query to call
                    "query_id": Int, 
                    // Optional. Title of the series for tooltips and legends
                    "series_title": String, 
                    // Optional. Datamart on which to run the query.
                    // Defaults to current datamart
                    "datamart_id": Int,
                    // Optional. To adapt the query to the current scope
                    // for example by adding current segment's query
                    // when dashboard is executed on a segment
                    // Defaults to TRUE
                    // COMING SOON
                    "adapt_to_scope": Boolean,
                    // Optional. To run the query in a specific precision
                    // To be used when charts take too long to load and 
                    // a lower precision is accepted
                    // Defaults to FULL_PRECISION
                    "precision": "FULL_PRECISION" | "LOWER_PRECISION" | "MEDIUM_PRECISION"
                }
            ]
        },
        {
            "type": "activities_analytics",
             // JSON representation of the activities analytics query
            "query_json": Object, 
            // Optional. Title of the series for tooltips and legends
            "series_title": String, 
            // Optional. Datamart on which to run the query.
            // Defaults to current datamart
            "datamart_id": Int,
            // Optional. To adapt the query to the current scope
            // for example by only selecting activities of users 
            // that were in the segment while doing it
            // when dashboard is executed on a segment
            // Defaults to TRUE
            // COMING SOON
            "adapt_to_scope": Boolean
        },
        {
            "type": "collection_volumes",
            // JSON representation of the collection volumes query
            "query_json": Object, 
            // Optional. Title of the series for tooltips and legends
            "series_title": String 
        },
        {
            "type": "resources_usage",
            // JSON representation of the resources usage query
            "query_json": Object, 
            // Optional. Title of the series for tooltips and legends
            "series_title": String 
        },
        {
            "type": "data_ingestion",
            // JSON representation of the data ingestion query
            "query_json": Object, 
            // Optional. Title of the series for tooltips and legends
            "series_title": String 
        },
        {
            "type": "data_file",
            // URI of the JSON data file containing data
            // Format "mics://data_file/tenants/1426/dashboard-1.json"
            "uri": String,
            // Path of the property in the JSON that should be used as dataset
            // This allows you to have multiple datasets in the same JSON file
            // Should use the JSONPath syntax. See https://jsonpath.com/
            // For example, "$[0].components[1].component.data"
            "JSON_path": String,
            // Optional. Title of the series for tooltips and legends
            "series_title": String
      }
    ]
}
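Since a declaration is a tree, it can be processed recursively: transformation nodes carry a sources array, while leaves are data sources. The sketch below (not a platform API; the helper and the type list are assumptions drawn from the declaration above) collects the leaf data source types of a dataset:

```python
# Leaf data source types, per the declaration reference above.
DATA_SOURCE_TYPES = {"OTQL", "activities_analytics", "collection_volumes",
                     "resources_usage", "data_ingestion", "data_file"}

def leaf_sources(node, found=None):
    """Recursively collect the leaf data source types of a dataset tree."""
    if found is None:
        found = []
    if node.get("type") in DATA_SOURCE_TYPES:
        found.append(node["type"])
    for child in node.get("sources", []):
        leaf_sources(child, found)
    return found

# A hypothetical declaration: a to-list transformation over two sources.
dataset = {"type": "to-list", "sources": [
    {"type": "OTQL", "query_id": 666},
    {"type": "activities_analytics", "query_json": {}},
]}
print(leaf_sources(dataset))
# ['OTQL', 'activities_analytics']
```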

series_title property

All data sources have a series_title property. It is useful when combining multiple sources, as it sets the title associated with each source in tooltips and legends. Here is an example combining a segment-scoped data source with a datamart-wide one.

"dataset": {
    "type": "join",
    "sources": [
        {
            "type": "OTQL",
            "query_id": 777, // Select {interests @map} FROM UserPoint WHERE ...
            "series_title": "Segment" 
        },
        {
            "type": "OTQL",
            "query_id": 666, // Select {interests @map} FROM UserPoint WHERE...
            "series_title": "Datamart",
            "adapt_to_scope": false
        }
    ]
}

datamart_id property

All data sources have a datamart_id property that specifies the datamart on which to run the query. It defaults to the current datamart. This lets you bring in data from another datamart, or create a dashboard at the community level that aggregates data from sub-organisations.

The user loading the dashboard must have permission to query the specified datamart, otherwise the chart will show an error for that user.

adapt_to_scope property

By default, all data sources try to adapt to the page on which they are executed, as the adapt_to_scope property defaults to TRUE.

The goal is to:

  • Filter data for the current segment when a dashboard is displayed on a segment page

  • Filter data based on the current query when a dashboard is displayed on a builder

For OTQL data sources:

  • On home scopes, nothing is changed and the query runs as is.

  • On segment scopes, the current segment's query is added at the end of the OTQL query. This means only OTQL queries FROM UserPoint adapt to the scope.

  • On builder scopes, the query currently selected in the builder is added at the end of the OTQL query. Again, only OTQL queries FROM UserPoint adapt to the scope.

For activities analytics data sources:

  • On home and builder scopes, nothing changes and the query runs as is.

  • On segment scopes, activities are filtered so that only those of users who were in the segment at the time of the activity are kept.

If the dashboard is meant to be displayed on segments, build only OTQL queries FROM UserPoint and activities analytics queries, unless you want to retrieve data for the whole datamart.

If the dashboard is meant to be displayed on builders, build only OTQL queries FROM UserPoint, unless you want to retrieve data for the whole datamart.
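The scope-adaptation rule for OTQL can be pictured with a small sketch. The exact clause the platform appends is internal and not documented here; the placeholder segment_clause and the adapt_query helper below are purely hypothetical, showing only why queries that are not FROM UserPoint run unchanged.

```python
def adapt_query(otql_query, segment_clause):
    """Append a (hypothetical) scope clause to FROM UserPoint queries only.

    Queries over other objects are returned untouched, mirroring the rule
    that only OTQL queries FROM UserPoint adapt to the scope.
    """
    if "FROM UserPoint" not in otql_query:
        return otql_query
    return f"{otql_query} {segment_clause}"

# A UserPoint query gets the (placeholder) clause appended...
print(adapt_query("SELECT {interests @map} FROM UserPoint",
                  "<current segment's query>"))
# ...while any other query runs as is.
print(adapt_query("SELECT @count FROM UserProfile",
                  "<current segment's query>"))
```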

A single number dataset can also be built with a different data source, such as an activities analytics or collection volumes query without dimensions. Use this type of dataset in Metric charts to display a single number.

Key / value datasets can be built with more complex queries, like OTQL bucket directives and activities analytics queries with dimensions. Pass this kind of dataset to Bars, Pie and Radar charts to visualize its content.

You can go further by adding up to three levels of buckets in your dataset with OTQL multi-level bucket directives and activities analytics queries with multiple dimensions. These can then be displayed with Pie and Bars charts, with drill down or multiple / stacked bars.

The two groups of a join can be displayed together in Bars and Radar charts to efficiently compare their data.

To learn about OTQL queries, see the OTQL queries section. To learn about activities analytics queries, see the Activities analytics queries section. To learn about collection volumes queries, see the Collection volumes section. For the list of available transformations, see the Transformations section.
