Science & Tech

We developed the world's first fully automated, panel-based consumer neuroscience platform

Taking neuroscience-based research out of the lab

MindProber is a fully automated platform. Our automated pipeline gives you as much autonomy as you want and lets you run studies with hundreds of participants in a few days.

Our large-scale data collection capability, combined with our work with clients, is key to developing ever more precise measures.

We know that just asking doesn’t work.

Our strong academic background, continued involvement in basic R&D and partnerships with academic institutions allow us to stay at the forefront of applied cognitive neuroscience, pushing the field to the next level.

We built a complete, innovative system end to end, from the sensors to the methods and metrics.

Meet James One

The slickest biometric sensor on the market!

James One feeds our media testing platform with medical-grade physiological signals so you know the emotional impact of your content.


Physiological signals

Heart Rate (HR)
Galvanic Skin Response (GSR)

James One is

Small

The smallest, least intrusive high-quality biometric sensor available. It is portable and comfortable for panelists.

Research grade

James One captures high-quality cardiac and electrodermal indices that are combined to produce our second-by-second and overall activation metrics (one illustrative way to combine such signals is sketched below).

Out-of-the-lab proof

It has the autonomy required for hours of live monitoring. It is durable and synchronizes flawlessly with the content.
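As a concrete illustration of the signal combination mentioned under "Research grade", here is a minimal sketch of how per-second heart rate and skin conductance readings could be turned into a single activation index. The z-scoring approach, function name, and sample values are assumptions made for this example, not a description of MindProber's actual metrics.

```python
import numpy as np

def activation_index(hr_bpm: np.ndarray, gsr_microsiemens: np.ndarray) -> np.ndarray:
    """Hypothetical per-second activation index.

    Z-scores heart rate and skin conductance within the session and averages
    them, so both signals contribute on a comparable scale. Illustrative
    sketch only; not MindProber's actual metric.
    """
    hr_z = (hr_bpm - hr_bpm.mean()) / hr_bpm.std()
    gsr_z = (gsr_microsiemens - gsr_microsiemens.mean()) / gsr_microsiemens.std()
    return (hr_z + gsr_z) / 2.0

# Example: one reading per second over a 10-second clip.
hr = np.array([72, 74, 75, 78, 80, 79, 77, 76, 75, 74], dtype=float)
gsr = np.array([2.1, 2.2, 2.4, 2.9, 3.3, 3.1, 2.8, 2.6, 2.5, 2.4])
print(activation_index(hr, gsr))
```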

Why James One

It is our tribute to the father of modern psychology, William James.

Our sensor is the best window into emotions, which, as William James explained, are revealed by and depend on bodily states.

MindProber's app

MindProber's Mobile App works in parallel with our James One sensor to:

Manage the interaction between panelists and the platform

The app enables panel members to participate in a media testing study after accepting an invitation.

Collect declarative answers and merge them with biometrics

Once the study starts, each sensor is independently paired with the mobile app, which collects declarative data. This data is merged with the biometric input and sent to our data backend on a second-by-second basis.
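For illustration only, the sketch below shows what merging per-second biometric samples with declarative answers before upload could look like. The field names, payload structure, and the idea of one record per second are assumptions for the example, not MindProber's actual data format or API.

```python
import json
import time

def merge_samples(biometric_samples, declarative_events):
    """Hypothetical merge of per-second biometrics with declarative answers.

    biometric_samples: dicts like {"t": unix_second, "hr": ..., "gsr": ...}
    declarative_events: dicts like {"t": unix_second, "answer": ...}
    Returns one record per second, carrying any answer logged in that second.
    """
    answers_by_second = {event["t"]: event["answer"] for event in declarative_events}
    return [
        {**sample, "answer": answers_by_second.get(sample["t"])}
        for sample in biometric_samples
    ]

# Example payload a client app might assemble once per second (assumed format).
now = int(time.time())
biometrics = [{"t": now + i, "hr": 72 + i, "gsr": 2.0 + 0.1 * i} for i in range(3)]
declaratives = [{"t": now + 1, "answer": "like"}]
print(json.dumps(merge_samples(biometrics, declaratives), indent=2))
```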

Synchronize to content

Through audio content recognition, our data is synchronized to the millisecond with any content the panelist may be exposed to, whether an advertisement or long-form content.
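To make the idea concrete, the sketch below shows one common way audio-based alignment can work: cross-correlating a short captured snippet against the reference soundtrack to estimate the playback offset. Real audio content recognition usually operates on compact fingerprints rather than raw samples, and MindProber's actual pipeline is not described here; the sample rate and signals are assumptions.

```python
import numpy as np

def estimate_offset(reference: np.ndarray, capture: np.ndarray, sample_rate: int) -> float:
    """Estimate how far into the reference audio the captured snippet starts.

    Uses plain cross-correlation to find the best-matching position; returns
    the offset in seconds. Illustrative sketch only.
    """
    correlation = np.correlate(reference, capture, mode="valid")
    best_index = int(np.argmax(correlation))
    return best_index / sample_rate

# Toy example: a 1-second "capture" taken 2.5 seconds into a 10-second reference.
rate = 8000
rng = np.random.default_rng(0)
reference = rng.standard_normal(10 * rate)
capture = reference[int(2.5 * rate): int(3.5 * rate)]
print(f"Estimated offset: {estimate_offset(reference, capture, rate):.3f} s")
```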

Merging different sources of data

Combining second-by-second analyses with post-viewing questionnaires and implicit tasks

We are continually working on our platform to deliver ever more powerful features.

Combine the value of second-by-second biometrics with traditional and custom questionnaires, extracting insights through sentiment analysis and concept modeling.
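As one illustration of the sentiment-analysis step, the sketch below scores open-ended questionnaire answers with NLTK's VADER analyzer so they can be read alongside the per-second biometric metrics. The choice of library and the example responses are assumptions, not the tooling MindProber necessarily uses.

```python
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical open-ended answers from a post-viewing questionnaire.
responses = [
    "Loved the pacing, the ending really stuck with me.",
    "The ad felt too long and a bit confusing.",
]

for text in responses:
    # "compound" ranges from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{score:+.2f}  {text}")
```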

Going further with Machine Learning

Predictive models

Our system collects large volumes of biometric and behavioral data, which can be (and is) used to efficiently predict relevant KPIs such as audience ratings.
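As a schematic of what such predictive modeling can look like (not MindProber's actual models, features, or data), the sketch below fits a ridge regression from hypothetical per-minute aggregate features to an audience-rating figure using scikit-learn, on synthetic data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: each row is one aired minute of content, with
# hypothetical aggregates (mean activation, share of panel engaged,
# declarative like-rate) and an audience-rating target.
rng = np.random.default_rng(42)
X = rng.random((500, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"Held-out R^2: {r2_score(y_test, model.predict(X_test)):.2f}")
```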

Automated Segmentation

Through computer vision algorithms, we seamlessly segment your media content, quickly identifying its distinct moments and elements and extracting the associated emotional impact data. What’s the best angle for your show? When did your logo appear? Which player is most effective at driving the audience’s engagement?
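One simple building block behind this kind of segmentation is shot-boundary detection. The sketch below flags likely cuts by comparing grayscale histograms of consecutive frames with OpenCV; the threshold, histogram settings, and file path are assumptions for the example, and MindProber's production pipeline is not described here.

```python
import cv2

def detect_cuts(video_path: str, threshold: float = 0.5):
    """Return frame indices where a hard cut likely occurs.

    Compares grayscale histograms of consecutive frames; a low correlation
    between them suggests a shot boundary. Illustrative sketch only.
    """
    capture = cv2.VideoCapture(video_path)
    cuts, prev_hist, index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                cuts.append(index)
        prev_hist, index = hist, index + 1
    capture.release()
    return cuts

# Hypothetical usage with a local clip:
# print(detect_cuts("episode_clip.mp4"))
```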