Emotion sensing, eye tracking, behavioural and auditory analysis. These are examples of rapidly evolving technologies with a single purpose: to find out what you’re not telling.

Automated monitoring of human communication

Machine-learning research at MIT is trying to develop an algorithm that can predict depression from raw text and audio data. In one study, participants answered questions whose responses were recorded as both text and audio. Using a neural-network model to discover speech patterns indicative of depression, the researchers were able to predict (not merely detect) depression after about seven questions. Its average accuracy, according to the researchers, was 77 percent.
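The real system is a neural network trained on raw text and audio; as a rough illustration of the underlying idea of accumulating evidence question by question, here is a toy Python sketch. Every feature name, weight and threshold below is invented for demonstration and is not taken from the MIT study.

```python
# Illustrative sketch only -- not the MIT model. The speech features,
# weights, and threshold are all hypothetical.

def answer_score(features):
    """Score one recorded answer from hypothetical speech cues."""
    # Invented cues: flatter pitch, longer pauses, and more negative
    # words are weighted toward a higher depression-risk score.
    weights = {"pitch_flatness": 0.4, "pause_ratio": 0.35, "negative_words": 0.25}
    return sum(weights[k] * features[k] for k in weights)

def predict_after_questions(answers, threshold=0.5):
    """Accumulate evidence answer by answer, echoing the ~7-question finding.

    Returns (prediction, n_questions_used): commits to a prediction once
    seven answers have been scored, or after all answers if fewer.
    """
    scores = []
    for i, features in enumerate(answers, start=1):
        scores.append(answer_score(features))
        avg = sum(scores) / len(scores)
        if i >= 7:  # the study reported predictions after about seven questions
            return avg >= threshold, i
    return (sum(scores) / len(scores)) >= threshold, len(answers)

# Example: eight answers with mildly elevated risk cues throughout.
answers = [{"pitch_flatness": 0.7, "pause_ratio": 0.6, "negative_words": 0.5}] * 8
prediction, n = predict_after_questions(answers)
print(prediction, n)  # → True 7
```

The point of the sketch is only the sequential structure: the model does not wait for the whole interview but commits once enough answers have been seen.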

What’s not to like? Once the technology matures, we could predict depression at an early stage on a societal scale. Those in high-risk groups could be advised to see a health professional. However, this is also the beginning of passive, automated monitoring of human communication.

“Imagine losing out on a job because a company used a ‘depression detector’ AI to determine you weren’t mentally stable enough during your interview, or having an algorithm’s interpretation of your responses to a lawyer’s questions being admissible in your child custody case.” – Tristan Greene

Evaluating performance in the future

Will suspects in legal cases, applicants in job interviews and employees in general become targets of non-consensual mental-health evaluations in the future? A report by MIT Sloan says that emotion-sensing technology (EST) will become a common instrument for evaluating employee performance. By 2022, EST is estimated to be a worldwide market worth US$38 billion.

Through eye movements, facial expressions and skin responses, EST can give employers information about the emotional state of their employees. They will know when you’re emotionally distressed. Is that a horrible violation of our privacy, or is it a helpful tool to manage workload, prevent burnout, and so on?

Hardcore surveillance?

In Barcelona, a company called Telefónica I+D has developed an algorithm that detects boredom and distraction by analysing smartphone activity. It monitors the time you spend on Instagram and Facebook and how often you check your email. According to the MIT report, it can detect when a user is bored 80 percent of the time. Imagine that an automated system, as part of the corporate setup, notifies your manager when you trigger the algorithm because you were bored for a couple of hours.
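Telefónica I+D’s actual algorithm is not described in detail here, so the following Python sketch is purely hypothetical: the features and cut-offs are invented to show how smartphone-usage signals could feed a simple boredom score.

```python
# Toy sketch -- not Telefónica I+D's algorithm. All features, scaling
# factors, and the threshold are invented for illustration.

def boredom_score(session):
    """Map one usage session to a 0..1 boredom score from invented cues."""
    score = 0.0
    # Long, aimless social-media scrolling is one hypothetical signal.
    score += min(session["social_media_minutes"] / 60, 1.0) * 0.5
    # Compulsive email checking is another.
    score += min(session["email_checks_per_hour"] / 10, 1.0) * 0.3
    # Rapid switching between apps is a third.
    score += min(session["app_switches_per_hour"] / 30, 1.0) * 0.2
    return score

def is_bored(session, threshold=0.6):
    return boredom_score(session) >= threshold

focused = {"social_media_minutes": 5, "email_checks_per_hour": 1,
           "app_switches_per_hour": 4}
restless = {"social_media_minutes": 45, "email_checks_per_hour": 9,
            "app_switches_per_hour": 25}
print(is_bored(focused), is_bored(restless))  # → False True
```

Even a crude score like this makes the privacy concern concrete: the inputs are ordinary usage metadata that a corporate device already collects.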

“Surely knowing that you are being monitored to such a degree could make you feel like you’re in a fishbowl and increase anxiety about performance” – Chloe Hava

There are many examples of these monitoring technologies. MIT’s Affective Computing Lab has designed a piece of hardware that can allegedly identify a user’s stress level from how hard they press on the keyboard and how they hold their mouse. Eye tracking is currently being used to identify personality traits by tracing eye movements during everyday tasks. The point is that in the future we will see many of these technologies revealing things we might not want to disclose.
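The Affective Computing Lab hardware is not a public API, so the pressure readings and personal-baseline comparison in this Python sketch are assumptions; it only illustrates the general idea of flagging stress when typing pressure drifts above an individual’s baseline.

```python
# Illustrative only: hypothetical keyboard-pressure readings compared
# against an invented per-user baseline via a z-score.
from statistics import mean

def stress_level(pressures, baseline_mean, baseline_sd):
    """Z-score of a session's mean key pressure against a personal baseline."""
    if baseline_sd == 0:
        return 0.0
    return (mean(pressures) - baseline_mean) / baseline_sd

calm_session = [0.48, 0.52, 0.50, 0.49, 0.51]   # arbitrary pressure units
tense_session = [0.80, 0.85, 0.78, 0.83, 0.81]
z_calm = stress_level(calm_session, baseline_mean=0.50, baseline_sd=0.05)
z_tense = stress_level(tense_session, baseline_mean=0.50, baseline_sd=0.05)
print(z_calm < 2 <= z_tense)  # → True
```

A per-user baseline matters here: people type with very different force, so an absolute threshold would misclassify heavy typists as permanently stressed.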

What will the future look like?

How can we defend our right to privacy? Do we really need these technologies? Aren’t we underestimating how knowledgeable, intelligent and sincere human-to-human interaction can be? We have to remember that we can learn a great deal from caring for each other’s wellbeing and asking honest questions.

What do you think? Let us know in the comments.

Want to know more?

Digi-Talks is hosting events about AI, Blockchain, Digital Consulting and Ethics; view the details here.

You can read more articles here.

Sources: Emotion-sensing technology – friend or foe? – AI can determine your personality through eye movements – Eye Movements During Everyday Behavior Predict Personality Traits – MIT’s depression-detecting AI might be its scariest creation yet