Marketing Research Details


When someone downloads an app and opens it for the first time, the odds that they return to the app in the next three months aren’t great – only 55%.

Brands whose new customers come back for a second week see their retention rate rise to 82%; for those whose customers don’t, it falls to 37%. This difference in retention can have a significant impact on an app’s long-term success and can drive up (or drive down) the total cost of acquiring a loyal customer, which is already as high as $14-27 each.

Brands that welcome new customers to their apps with robust, well-structured onboarding experiences are better positioned to communicate the value their apps provide, making continued use more appealing to their audience. And making sure that those customers finish onboarding is key – brands that encourage customers to complete onboarding using both a push notification and a message in a second channel (like email or in-app messages) increase their app’s two-month retention by 130%, compared to brands that send no messages.


User Engagement Scoring: enabling self-balancing, personalized notification limits

Imagine an app that could dial up or down the frequency of notifications based on your personal engagement with them. Ignore your notifications, and it sends fewer, less often. Click on a bunch of them and it starts sending you more of the things you’re interested in.

By employing a machine learning approach, or a simpler algorithmic method that adds and subtracts points based on the user’s interactions (or lack thereof) with past notifications, the system dynamically adjusts the volume of notifications each user receives (i.e. adjusts their individual rate limit).
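
The point-based version of this idea can be sketched in a few lines. All point values, the scaling rule, and the limit bounds below are illustrative assumptions, not figures from this research:

```python
from dataclasses import dataclass

# Hypothetical point values per interaction type (assumed, not from the research).
POINTS = {"opened": 3, "converted": 5, "dismissed": -1, "ignored": -2}
MIN_DAILY_LIMIT, MAX_DAILY_LIMIT = 1, 10  # assumed floor and ceiling

@dataclass
class UserEngagement:
    score: int = 0  # running engagement score for one user

    def record(self, interaction: str) -> None:
        """Add or subtract points based on how the user treated a notification."""
        self.score += POINTS.get(interaction, 0)

    def daily_limit(self) -> int:
        """Translate the score into this user's personal notification rate limit."""
        base = 3  # assumed default limit for a user with no history
        limit = base + self.score // 5  # every 5 points shifts the limit by 1
        return max(MIN_DAILY_LIMIT, min(MAX_DAILY_LIMIT, limit))

user = UserEngagement()
for event in ["opened", "opened", "converted", "ignored"]:
    user.record(event)
print(user.score, user.daily_limit())  # engaged user: limit drifts upward
```

An engaged user drifts toward the ceiling, while a user who consistently ignores notifications is dialed down toward the floor rather than unsubscribed outright.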

Why bother with Engagement Scores if the app already includes granular notification preferences?

There are a few reasons why maintaining an engagement score might be a good idea, even if a comprehensive set of preferences is implemented in the app settings menu:

  1. Engagement Score is automatic and based on inference rather than explicit user action: users do not need to explicitly find this feature and make manual changes. Thus, all users benefit from it, not just the users who find and use the manual settings.
  2. Even fine-grained preferences typically only allow toggling specific notification categories on or off. An Engagement Score replaces that binary choice with a sliding frequency: categories the user engages with can be dialed up, and it reduces the likelihood that anything gets turned off entirely, since an unengaging stream is dialed down to a lower frequency by the system instead.
  3. While, in an ideal world, the engagement score system would work so well that it obviates the need for explicit preferences entirely, some users will still enjoy the feeling of control that a full set of preferences provides, much as some people would prefer a steering wheel and pedals in their self-driving car, just in case they feel the need to take over.

The Prioritization Engine: exceptions to the rate limit for top-priority messages

Think of a Prioritization Engine as a sorting machine that tags message-user pairs with a level of importance, and thus influences the decision on whether to notify them with this specific message, or leave them undisturbed.

The Prioritization Engine’s job is to estimate, as best it can, the relevance of a specific notification to a specific user. A really smart system would employ machine learning to become progressively smarter, using a feedback loop fed by open rates and conversion data from previous notifications. The aim is to discard notifications that are unlikely to resonate with the user before they are sent, while flagging those with a strong chance of being engaging, so they can be prioritized and, where warranted, even override the standard rate limit established for that user.

While it sounds fancy and complex, an MVP prioritization engine could employ simple heuristics (basic rules of thumb) to identify the ‘best’ or ‘worst’ notifications for a specific user. As long as it improves, on average, the ratio of opened to unopened notifications the system delivers (i.e. increases the average relevance of notifications), it’s adding value.
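
A minimal heuristic scorer might look like the sketch below. The rules, weights, and field names are illustrative assumptions, not a specification from the text:

```python
# Hypothetical MVP Prioritization Engine: score a message-user pair with
# simple rules of thumb. Higher score = more likely to resonate.

def priority(notification: dict, user: dict) -> float:
    score = 0.0
    # Rule 1: topics the user has opened notifications about before score higher.
    if notification["topic"] in user["opened_topics"]:
        score += 2.0
    # Rule 2: penalize topics the user has previously dismissed.
    if notification["topic"] in user["dismissed_topics"]:
        score -= 2.0
    # Rule 3: assumed boost for time-sensitive messages.
    if notification.get("time_sensitive"):
        score += 1.0
    return score

user = {"opened_topics": {"jazz"}, "dismissed_topics": {"promo"}}
print(priority({"topic": "jazz"}, user))   # prior interest boosts the score
print(priority({"topic": "promo"}, user))  # dismissed topic is penalized
```

Even rules this crude can raise the average relevance of delivered notifications, which is the only bar the MVP needs to clear.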

Improving on simple logic rules, the Prioritization Engine could leverage analytics data about the user’s historical engagement. For example, it could use pre-calculated ‘favorites’ lists (favorite content, favorite creators or publishers of content) built from each user’s usage history, prioritizing notifications about content or creators the user has expressed a clear predilection for.

The logic goes that notifications about content, creators, or product categories that the user has interacted with a lot in the past will be welcome at a higher frequency than notifications about those that have received little prior attention from the user.
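
Pre-calculating such a favorites list can be as simple as counting interactions per creator. The event shape and the top-3 cutoff here are illustrative assumptions:

```python
from collections import Counter

def favorite_creators(events: list[dict], top_n: int = 3) -> list[str]:
    """Return the creators this user has interacted with most often."""
    counts = Counter(e["creator"] for e in events)
    return [creator for creator, _ in counts.most_common(top_n)]

# Hypothetical usage history for one user.
history = [{"creator": c} for c in ["ana", "ana", "bo", "ana", "cy", "bo", "di"]]
favs = favorite_creators(history)

def is_prioritized(notification: dict) -> bool:
    """Prioritize notifications about creators on the user's favorites list."""
    return notification["creator"] in favs
```

The list would be recomputed periodically from analytics data, so prioritization stays a cheap set-membership check at send time.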

The danger of this kind of prioritization logic is that it can create an echo-chamber effect, where new content is not surfaced due to a bias toward the already-discovered. This effect could be reduced by applying prioritization only once the rate limit has been reached, using it to decide whether a notification predicted to be sufficiently relevant should be permitted to bypass the limit.
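
That mitigation amounts to a small gate in the send path: under the limit, everything flows without prioritization bias; at the limit, only messages above a threshold get through. The function name and threshold value are assumptions for illustration:

```python
BYPASS_THRESHOLD = 2.0  # assumed minimum priority score to exceed the limit

def should_send(sent_today: int, daily_limit: int, priority_score: float) -> bool:
    """Decide whether to deliver a notification under the bypass scheme."""
    if sent_today < daily_limit:
        return True  # under the limit: no prioritization bias is applied
    # Limit reached: only messages predicted to be highly relevant may bypass it.
    return priority_score >= BYPASS_THRESHOLD

print(should_send(1, 3, 0.0))   # under the limit
print(should_send(3, 3, 0.5))   # limit reached, low priority
print(should_send(3, 3, 2.5))   # limit reached, but highly relevant
```

Because prioritization never suppresses messages below the limit, new or undiscovered content still reaches the user at the normal rate.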

ULCT Survey–Full Results

ULCT Survey Analysis 

Government Services Survey–Full Results

Government Services Analysis

Technology Preferences Survey–Full Results

Technology Preference Analysis