
Your Guide to NeuroTrackerX Data for Organizations

March 5, 2024

Over 100 independently published scientific studies have revealed that NeuroTracker data provides a unique and meaningful window into brain functions. As such, the NeuroTrackerX platform has been designed with analytical tools to help provide insights that can be discovered both microscopically through individual user breakdowns, and macroscopically via trends across groups of users. Here we will cover the essentials on how to navigate and interpret the valuable cognitive data collected by your organization.

Reviewing Single User Data

Each user’s training data can be viewed in two focused ways – training progress over time through NeuroTracker scores, and individual session breakdowns. Let’s take a look at both.

NeuroTracker Scores

Each NeuroTracker session provides one main score that represents a user’s ‘Speed Threshold’ - the level at which someone can successfully track all of their targets around 50% of the time. This score is calculated as the average of the lowest failed trial speed and the highest successful trial speed from a selection of key trials in a complete session.

For example, a NeuroTracker session score could be 1.5. This means that when the targets reach that speed, the individual’s ability to keep track of them will typically start to break down. In effect, it represents an upper limit for 3D multiple object tracking speed.
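For illustration, this calculation can be sketched in a few lines of Python. This is a minimal sketch, assuming trials are recorded as (speed, success) pairs and that the score is the average of the fastest successful and slowest failed key trials; the platform’s actual key-trial selection is internal:

```python
def speed_threshold(key_trials):
    """Estimate the Speed Threshold from (speed, success) key trials.

    Hypothetical sketch: the score is taken as the average of the fastest
    successful trial speed and the slowest failed trial speed, rounded to
    two decimals for display.
    """
    successes = [speed for speed, ok in key_trials if ok]
    fails = [speed for speed, ok in key_trials if not ok]
    return round((max(successes) + min(fails)) / 2, 2)

# Fastest success at 1.6, slowest fail at 1.4 -> threshold of 1.5
trials = [(1.2, True), (1.6, True), (1.4, False), (1.8, False)]
print(speed_threshold(trials))  # 1.5
```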

As NeuroTracker is a simulation-based task, the score also represents a real-world measure of how fast a person can track moving objects, with a score of 1.0 equating to an object velocity of 0.68 cm per second. Note that users need to sit the correct distance from the display to maintain an accurate representation of real-world tracking speed.

The Core session is used to gain a scientifically validated measure of NeuroTracker performance. Therefore, Core scores are also the best reference for measuring progress with training over time.

Accordingly, Core baselines are used as the reference for performance progress. A scientific NeuroTracker baseline is derived from the average of 3 consecutive Core sessions. This is the recommended starting point for all users, and the first 3 Core sessions yield an 'Initial Baseline'.

The NeuroTrackerX software compares this to 'Current Baseline' (the most recent 3 Core sessions) to calculate overall improvement.

The quick guide for this is the 'Improvement %' displayed on each user's dashboard. In the example above, the Initial Baseline (shown bottom left) of 1.01 is compared to the Current Baseline (bottom right) to yield an overall improvement of 66% (top left). However, any 2 Core baselines can be meaningfully compared, for example to investigate the effects of any type of training intervention on high-level cognitive functions.
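As a rough sketch of the arithmetic involved: the individual Core scores below are hypothetical, chosen only to reproduce a 1.01 Initial Baseline and a 66% improvement, and the rounding to a whole percentage is an assumption:

```python
def baseline(core_scores):
    """A baseline is the average of 3 consecutive Core session scores."""
    assert len(core_scores) == 3
    return sum(core_scores) / 3

def improvement_pct(initial, current):
    """Percentage improvement of the Current Baseline over the Initial Baseline."""
    return (current - initial) / initial * 100

# Hypothetical Core scores (not real data), averaging to 1.01 and 1.68
initial = baseline([0.95, 1.00, 1.08])
current = baseline([1.62, 1.70, 1.72])
print(round(improvement_pct(initial, current)))  # 66
```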

The standardized measure for a NeuroTracker score is based on 4 target tracking, where tracking speed and score are the same. If a user performs a session at 1, 2, or 3 targets, the NeuroTracker score will be normalized to an approximation of what the result would be if performed at 4 targets.

For comparison purposes, tracking speed is then presented in addition to score. For example, achieving a tracking speed of 1.5 at 3 targets will yield a session score of 1.0 (the approximate tracking speed at 4 targets).
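The platform’s exact normalization formula is not published, so the scaling factors below are purely hypothetical, chosen only so that the documented example (a tracking speed of 1.5 at 3 targets yielding a score of 1.0) holds:

```python
# Hypothetical per-target scaling factors (NOT the platform's actual values):
# calibrated only so that speed 1.5 at 3 targets maps to a score of 1.0,
# and so that score equals tracking speed at the standard 4 targets.
SCALE = {1: 0.40, 2: 0.55, 3: 1.0 / 1.5, 4: 1.0}

def normalized_score(tracking_speed, num_targets):
    """Approximate the 4-target-equivalent score for a session."""
    return round(tracking_speed * SCALE[num_targets], 2)

print(normalized_score(1.5, 3))  # 1.0
print(normalized_score(1.5, 4))  # 1.5 (score equals speed at 4 targets)
```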

On the user’s dashboard these two measures are differentiated by a solid line (score) and a dashed line (tracking speed).

Single Session Data Analysis

In addition to a session score, more granular performance metrics can be viewed for any completed session. Highlight metrics include:

Consistency Score: a measure of how variable tracking speed performance was over the session. A low score here means that over the 20 trials of the session the user succeeded at relatively high speeds yet also failed at relatively low speeds, suggesting susceptibility to attention lapses. This score tends to increase with training over time (via gains in sustained attention).

Fastest Trial Score Success: the single highest successful trial speed of the session.

Lowest Trial Score Miss: the single lowest trial speed fail of the session.
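The latter two highlight metrics fall directly out of the per-trial records. A minimal sketch, again assuming trials are stored as (speed, success) pairs:

```python
def fastest_success(trials):
    """Single highest successful trial speed of the session."""
    return max(speed for speed, ok in trials if ok)

def lowest_fail(trials):
    """Single lowest failed trial speed of the session."""
    return min(speed for speed, ok in trials if not ok)

# Illustrative session data: a wide gap between a high-speed success (1.7)
# and a low-speed fail (0.9) would suggest a low Consistency Score.
session = [(1.1, True), (1.7, True), (0.9, False), (1.5, False)]
print(fastest_success(session))  # 1.7
print(lowest_fail(session))      # 0.9
```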

Other highlights include a user’s personal milestone achievements specific to their training history, such as reaching a relatively high level of consistency.

Now let’s cover two micro analyses of session data.

Trial Success Breakdown

The results of each trial in a NeuroTracker session are categorized into three groups:

Perfect Trials: correct identification of all targets.

Near Misses: correct identification of all targets except one.

Significant Misses: incorrect identification of 2 or more targets.

The types of misses help give insights on whether a user was close to a trial success, or mostly lost tracking overall.
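This categorization maps directly onto the number of targets missed per trial, and can be sketched as:

```python
def categorize_trial(num_targets, num_correct):
    """Bucket a trial result into one of the three categories above."""
    missed = num_targets - num_correct
    if missed == 0:
        return "Perfect Trial"
    if missed == 1:
        return "Near Miss"
    return "Significant Miss"  # 2 or more targets missed

print(categorize_trial(4, 4))  # Perfect Trial
print(categorize_trial(4, 3))  # Near Miss
print(categorize_trial(4, 1))  # Significant Miss
```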

The dashboard shows the distribution of these results on the right side. However, it’s also important to qualify these results by the speed they were attempted at. For example, a Perfect Trial at a high speed is much more significant than at a low speed.

In the above screenshot the spider chart displays the three categories of trial results, denoted by the three correspondingly colored sections.

Results at high speeds are represented towards the outer boundaries of the chart, and low speeds are shown near the inner boundaries.

Overall, this gives a snapshot of the distribution of trial results relative to the tracking speeds at which they were performed.

This data gives a more complete picture of the Consistency Score, and can be useful for closely monitoring how NeuroTracker performance changes with training over time as session scores improve.

Average Response Time Per Trial

This metric measures how long it took a user to input answers on each session trial. Although answering quickly is not part of the NeuroTracker task and does not influence the session score, it can be useful as a passive indicator in two ways.

Firstly, selecting targets during the answer phase in NeuroTracker trials involves processing speed and working memory, which have both been shown to be improved with NeuroTracker training. As such, response times will likely become incrementally faster the more training sessions a user has completed.

Response times can also be influenced by a user’s general cognitive state. For instance, younger people will typically have faster response times than older people from the outset.

Secondly, these measures can give hints into a person’s confidence level. To give an example, if a person attempted a trial at a relatively low speed, had a fast response time, but got a Significant Miss, it suggests they believed they were successful, yet were unaware they had lost tracking (overconfidence).

Now let’s review the chart data. On the top right a single Average Response Time score is displayed. This represents the average time taken to input target answers across all the trials in the session.
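This headline figure is simply the mean of the per-trial input times; the values below are illustrative:

```python
# Illustrative per-trial input times in seconds (not real data)
response_times = [2.1, 1.8, 2.4, 1.9, 2.3]

average_response_time = sum(response_times) / len(response_times)
print(round(average_response_time, 2))  # 2.1
```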

The graph displays response time per trial in seconds on the vertical Y axis, and the number of the trial on the horizontal X axis. It gives a quick picture of overall response times as well as how variable they were over the course of the session.

For a more detailed breakdown, the filters for Perfect Trials, Near Misses and Significant Misses can be selected to compare response times to trial results along with the precise time taken to input answers.

As we’ve just covered, this breakdown can be helpful for revealing more psychological aspects of NeuroTracker performance. Changes in response time will also likely correlate with NeuroTracker scores - typically the higher an individual's tracking speed, the faster their response time will be.

Lastly, if a user's NeuroTracker scores drop lower than expected, it is a good idea to check whether this correlates with slower response times, as it may provide an extra indicator that cognitive functioning has become impaired for some reason (e.g. from poor sleep, fatigue, diet changes and so on).

Aggregated Data

Collated group data can be viewed in the ‘Stats’ section of the NeuroTrackerX Organization software. There are three analysis tools which can be used to compare macro data trends for up to 10 users within the organization.

1. Progression

This tool is similar to each user's main training chart, showing all NeuroTracker session scores for each user, but with each user represented as a different line.

At a glance it shows how much training has been completed from one user to another, as well as how scores compare. The chart can be scrolled to the right to view 20 or more sessions, revealing how improvements vary with training over time.

The shaded bands of light blue on the chart background allow session scores to be compared with normative population data. In the example above, the users would rank in the top 25% of users in the global NeuroTracker database.

The thick light blue line displays the average score performance from session-to-session for the selected group of users.

The average score performance for all users within the organization can also be compared, displayed as the dotted line.

Filters allow flexible selection or deselection of any of normative population, organization or group data, as well as individuals within the selected group. This means it is easy to explore and focus in on any interesting trends that might emerge.

2. Baselines

This tool provides a visualization of a group of users' Initial Baselines (Y axis) compared to Current Baselines (X axis), i.e. where they started off and where they are now.

The average for the selected group is represented by the light blue line, and the average for all users in the organization can also be selected for broader comparison. Both of these references are displayed numerically by default on the right side of the chart.

Additionally, crosshair highlights can be shown for any individual by selecting them, which also reveals their numerical scores.

Generally speaking, the further a user's circle icon is to the top right of the chart, the better their overall NeuroTracker performance. However, a position toward the bottom right (a low Initial Baseline but a high Current Baseline) highlights how much relative progress a user has made in their training (i.e. their Improvement %), which is a better reference for the expected benefits transferred from training effects.

3. Improvement

Last but not least, a clear overview of Improvement % can be compared for a group of users. Again, group and organization norms can be displayed for comparison, shown numerically by default on the right side of the chart.

This can be a nice way to congratulate and motivate successful users making significant progress compared to their peers, particularly so if their NeuroTracker scores are not so high. From a neuroplasticity standpoint improvement is the ultimate goal!

Finally, note that all of the charts for single users and aggregated data can be customized to individual preferences, as well as downloaded as data reports to print out or share digitally with users.

We hope you found this guide useful and that these tools help you glean more of the insights that NeuroTracker data can uncover through a unique window into cognitive functions and brain health. If you would like to delve a little deeper into NeuroTracker scores and learning rates, or to enlighten your organization's users, then also check out this guide.

Your Guide to NeuroTracker Scores

Witness the benefits of NeuroTrackerX. Start Today!