The Dev is in the Details

Digital Therapy: Apps and AI reshaping mental health care | Mark Goering | The Dev is in the Details #6

Lukasz Lazewski

► Should we use AI to advance mental health care?

In this episode, we explore how technology is transforming the landscape of mental health care, from diagnosis to treatment. Mark Goering, entrepreneur and psychotherapist, discusses the potential of AI-driven apps and data-driven approaches to reshape how we understand and address mental health challenges.


► Our guest 🌟

Mark Goering 👉 https://www.linkedin.com/in/mpgoering/ 
Entrepreneur, Product Lead and Psychotherapist


► In today’s episode:

  • The current state of mental health care, highlighting issues of accessibility.
  • Opportunities for AI to enhance mental health services and improve overall wellness outcomes.
  • Mental health apps revolutionizing accessibility and personalized treatment options for individuals.
  • Significance of movements that are paving the way for data-driven mental health care, challenging traditional methods of diagnosis.
  • Nuances and complexities of mental health diagnostics.
  • The fluid nature of our understanding and qualification of mental disorders.

► Decoding the timeline:

00:00 – Transforming mental health with technology

10:13 – Next level of innovation within mental health care

23:57 – Data-driven approach to mental health

34:37 – Advancements in mental health technology 

45:15 – Navigating regulations in health tech

#AI #mentalhealth #healthtech


► Materials and information mentioned in the episode: 

https://minddoc.com/us/en/science 
https://web.njit.edu/~ronkowit/eliza.html  

***

The Dev is in the Details is a podcast where we talk about technology, business and their impacts on the world around us.

Łukasz Łażewski 👉 https://www.linkedin.com/in/lukasz-lazewski/
Write to us 👉 podcast@llinformatics.com

Speaker 1:

So I think actually the distance and kind of the anonymity that the digital space can make you feel you have can sometimes even ease the process of finding help or opening up about things. I really do believe that generative AI will be able to take that role and help people go through and basically be kind of a healing mechanism. If it makes sense to use new technology, you always have to look at the status quo and what the actual quality of treatment is currently, and it's very, very low. And at the same time, probably in 20 years ADHD as such will not be a thing anymore.

Speaker 2:

Today I am pleased to welcome Mark Goering, a seasoned entrepreneur and expert in mental health technologies. Mark co-founded MoodPath, a widely acclaimed mental health application with over 4 million downloads. Following the acquisition of MoodPath by the Schön Clinic Group in 2019, Mark played a key role in integrating digital mental health solutions into clinical settings. Recently, Mark transitioned from the role of Chief Product Officer at MindDoc to refocus on new ventures. We're super happy to have him as a guest on our show. Mark, welcome.

Speaker 1:

Good to be here.

Speaker 2:

Awesome. So, given your background in psychotherapy, I'm super curious how someone like yourself, you know, who is a psychotherapist, certified and everything, becomes a product person for so many startups in their career. Can you tell us a little bit more about this?

Speaker 1:

Sure, both fields were always present in my life and my ambitions. So I had a double focus during university, both on economic and organizational psychology and on the classic clinical psychology part, and I actually felt too junior, too young, to jump right into clinical psychology as a 24-year-old. So for the first couple of years of my career I focused completely on the startup world. I worked on recruiting and HR topics for a fast-growing company builder, then I focused for a couple of years on the clinical part, and then I kind of brought both pathways together by founding a mental health startup. So then I was able to work in product and in the whole startup world while still being, you know, focused on the mental health part.

Speaker 2:

Awesome, and through your journey you built so many different digital tools. I'd be curious to find out how you feel they transform access to mental health and support people, particularly in terms of scalability and personalization.

Speaker 1:

They still do it far too little. By far.

Speaker 1:

The potential isn't being used or harnessed the way it should be or could be.

Speaker 1:

But I think what the technology, and at least the first product I built, MoodPath, has made a small contribution to, is giving access to the whole question of: what is my current state labeled, or, you know, what would a psychologist say about my current state?

Speaker 1:

This is a question that affects so many people, because mental health problems of any sort are a huge problem for a massive number of people; anywhere between, let's say, 20 and even up to 50% of people go through some kind of mental health condition throughout their life. And some form of pre-diagnosis and information, just understanding what is considered, based on academic research and literature, an illness and what is considered normal human experience, answering a bunch of questions and letting those answers run through what academia and science have developed as what is normal and what is beyond normal, I think that's a huge potential.
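
For readers curious what it looks like to run a user's answers through clinically established cutoffs, here is a minimal sketch in Python. It uses the published PHQ-9 depression scale bands purely as an illustration; the function is hypothetical and not MoodPath's actual scoring code.

```python
# Minimal sketch (not MoodPath's actual code): scoring a standardized
# self-report questionnaire against published clinical cutoffs.
# PHQ-9: nine items, each answered 0-3, total score 0-27.

PHQ9_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(answers: list[int]) -> tuple[int, str]:
    """Sum the nine item scores and map the total to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each 0-3")
    total = sum(answers)
    band = next(label for lo, hi, label in PHQ9_BANDS if lo <= total <= hi)
    return total, band

# Example: a user's answers become a severity indication, which is
# information and pre-diagnosis support, not a medical diagnosis.
print(score_phq9([1, 2, 1, 2, 0, 1, 1, 0, 0]))  # -> (8, 'mild')
```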

Speaker 1:

And beyond that, and I don't think it's being used yet, a huge potential is to then actually use what is produced from that and basically feed it into the standardized processes of health care.

Speaker 2:

Got it, got it. But I don't feel like I see a massive adoption of technologies like this right now. What would you say are the main concerns around massive adoption currently, you know, in the EU or worldwide?

Speaker 1:

I would say on the user side there is at least proven interest in massive adoption. There's a bunch of tools, there's a bunch of self-help, and if you just look at the keyword search volume online, on Google and so on, there is massive adoption of using digital tools to gain information about mental health. And a lot of times people then land in, you know, one of the massive number of apps out there, be it actual clinically focused apps, which are probably still underrepresented, or the more, let's say, in-between apps, more meditation or other self-help areas. So I think there is a lot of adoption in terms of people that are affected.

Speaker 1:

But what I think has almost close to zero adoption, kind of what I was saying in the last part as well, is the intertwining, actually using the information gathered for real-life health care. So let's say you use an app or you find some kind of website that gives you information, and you can actually enter information, and then that information is run through and gives you some kind of result, some kind of indication. I basically don't know of any system yet that will actually use that information, so generate a profile and then actually tell you some kind of pathway, certified and recognized by public health care and insurance systems: okay, based on your answers, now you can go to this doctor or this clinic, they'll be able to see your results, and then, based on that, you will receive some kind of treatment. This kind of journey is what is actually the missing part, from my perspective.

Speaker 2:

Yeah, this sounds amazing on the one hand, but a little bit scary on the privacy end. Is this a concern for scalability or legalization of this and widespread adoption, or is it just my concern?

Speaker 1:

I don't really think it is an objective reason that should limit scalability. I mean, it's obviously a risk and it needs to be addressed.

Speaker 1:

There need to be, you know, state-of-the-art, best possible IT security and data privacy measures in place, as with almost any field that handles sensitive data. But I don't see any objective reason why it should, at least per se, limit the scalability of it, because there are solutions to it. With modern encryption we have, from my perspective at least, all the tools there, and with GDPR also the legal basis to actually enforce high standards in IT security and data privacy.
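
As a side note on what "modern encryption" can mean at the application level, here is a minimal sketch, assuming the Python cryptography package, of encrypting a sensitive journal entry before it is stored. It illustrates the principle only and is not a description of any particular product's security architecture.

```python
# Minimal sketch: field-level encryption of a sensitive journal entry at rest,
# using the symmetric Fernet scheme from the "cryptography" package.
# Illustrative only; real systems also need key management, access control,
# transport security, and the organisational measures GDPR expects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: kept in a key management service
cipher = Fernet(key)

entry = "Felt anxious before the team meeting, slept badly."
token = cipher.encrypt(entry.encode("utf-8"))   # what gets written to the database
print(cipher.decrypt(token).decode("utf-8"))    # readable only with the key
```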

Speaker 2:

But I mean even from the perspective of how it feels for an end customer or user or a patient. Because we're banking online today, that's pretty normal, I'd say, in the Western world, and we goof around on social media. But this is different, right? The narrative of opening up and confessing and really deep-diving into your deepest thoughts and past and everything with someone online, or sharing with some sort of anonymous system where it's just a bunch of questions and answers. Is that not a natural barrier for interacting and adopting?

Speaker 1:

Yeah, I mean, it's interesting you say so. I'm sure it is some form of barrier, but in my experience in building systems I have not seen it being a significant barrier for people that actually need help. You know, as soon as your level of pressure, in German we say something like a pressure of pain, so to say, how much you're actually suffering under your symptoms, gets high enough, I think you get to that point pretty quickly. So I think with any doctor there is that barrier, that you don't generally like it.

Speaker 1:

It's not a pleasurable experience to open up and share information, and there's, especially in the digital world, a bit of anxiety around it. But in my experience at least, as soon as the suffering, the pressure of suffering, gets to a certain point, and that's actually quite quickly, and you're actually in the state of mind to seek help, I don't think the aspect of it being digital or non-digital is actually much of a game changer. Actually, quite the opposite. I once read a study about how people affected by post-traumatic stress symptoms and disorder were more likely to open up and not hold back information in a digital setting compared to a face-to-face setting. So I think actually the distance, and kind of the anonymity that the digital space can at least make you feel you have, can sometimes even ease the process of finding help or opening up about things.

Speaker 2:

Yeah, I find it fascinating that some very famous people openly speak up and present themselves as vulnerable to the wider public, speaking of their experience, you know, trauma and other past experiences, and also describe their healing process. So I fully believe that this can be done. I'm just wondering if this is for everyone, you know, given the current state of technology and, let's say, public awareness of how to use that technology, or even external motivational factors such as pain, as you described. Where do you see the next level of innovation coming in this field in the coming years?

Speaker 1:

Yeah, I mean, like probably any tech-savvy person on this planet, I think the level of quality that large language models, and specifically ChatGPT as kind of the trailblazer there, opened up excites me a lot, and I see massive potential in it. I think it is a true game changer, a leap of innovation, and I think anyone who deals with these things sees it the same way. And I do also see that oftentimes there are kind of predecessors, almost like prophets of new innovations, where people have the same idea and the first runs fail miserably, but the idea makes sense and is there; it's just that the implementation wasn't good enough, and then, with some cycles of innovation, you eventually reach that certain level. And I think AI, and specifically generative, language-model-like AI using semantics that you can actually talk to in a chat situation, has that same potential. And this goes back to the 60s.

Speaker 1:

I think there was a research project called ELIZA that's always cited in the scientific literature, and there are many, many cases of this idea of: hey, couldn't we mimic what happens in psychotherapy, where there's a person affected, a person that has problems and talks, and then there's another person that listens, asks questions and tries to lead by softly helping a person realize things themselves, just giving a forum and allowing certain emotions to be there. Couldn't we automate that process? And we know, or we're pretty sure, at least based on science, that a lot of the things that contribute to psychotherapy being effective, so being more helpful than not doing anything, or also more helpful than taking certain drugs, are common factors, more general factors. It's never one very specific technique.
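
For context, ELIZA's famous "DOCTOR" script worked by simple pattern matching and pronoun reflection. The sketch below shows that mechanism in a few lines of Python; it is illustrative and not Weizenbaum's original implementation.

```python
# Minimal sketch of ELIZA's core trick: pattern matching plus pronoun
# reflection, so statements come back as open questions. Not the original code.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "your": "my", "you": "I"}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the statement can be mirrored back."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance: str) -> str:
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I feel anxious about my work"))
# -> "Why do you feel anxious about your work?"
```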

Speaker 1:

As you see, there are many different schools of psychotherapy, and we can be sure that when you abstract to what is actually healing, what the mechanism is, what happens in the brain that actually leads to someone being better than before, it's not one very narrow therapy, systemic or psychodynamic or cognitive or whatever.

Speaker 1:

It's more general factors. And one of the general factors, I believe for sure, is confrontation. Confrontation and getting your brain used to certain thoughts and emotions and states of mind that you usually find aversive and want to push away. And if you can, through that process of psychotherapy, get your brain used to it and kind of reframe it, see it in a different way, or just at least be able to deal with some things, then most likely those aversive thoughts will have less power over you. And I really do believe that generative AI will be able to take that role and help people go through that, and basically be kind of a healing mechanism in and of itself.

Speaker 2:

So, a specifically trained language model, just to reframe how I understood it. You're saying there's a specific model you train for the purpose of psychotherapy. And would that be supervised session by session? Or, I guess, I'm trying to ask if there are edge cases in which it could worsen the state of the patient. You know, the hallucination problem or something.

Speaker 1:

Well, I mean, to be honest, I went through the entire training to be a psychotherapist, and some of the supervisors I had there I definitely would not have gone to as a patient. And it's also clear from research that, in a relatively large number of cases, psychotherapy can also have negative effects and symptoms can get worse. Oftentimes, as I was describing with the mechanism of confrontation, for a patient it's almost like a dramatic arc, where at first in psychotherapy things actually get worse for you. It feels worse because you're confronting. It's work, it's hard, it's not pleasant by any means. But, as often in life, things need to get a little worse to get better.

Speaker 1:

And I do think, obviously, there are risks of that process stopping too early or not working well, and, as with any medical treatment, there's always some risk of things getting worse, adverse effects. These risks are there, and that's why obviously it needs research, it needs certification, and the regulatory bodies that certify medical software need to find solutions to be able to address and assess risks based on AI. You know, it's completely different assessing risks from an ever-changing, updating AI model, which isn't even really foreseeable for the makers of the model, compared to a traditional software system. But these things are being addressed, they're working on them, and I think the FDA in the States is much further along than many other states and already has some frameworks to assess things based on AI. So yeah, it's complex, but in the overall picture I am most hopeful about the new technological developments in AI.

Speaker 2:

Would you agree it's comparable to the following metaphor: it's scary that the car can drive itself, but in practice we know that, statistically, the chance of an accident caused by a human driver is ten times higher than of an accident caused by the computer, right?

Speaker 1:

Absolutely, I see it the same way. I think a lot of times people think that new solutions should somehow be perfect. They got used to the fact that there are outstanding risks in the solutions they're using right now; they're just not aware of them, because they're rare, obviously, and they're managed. But if you look at reality, there are never perfect solutions out there, and the assessment of value is always: what new advancements and effectiveness can it bring, but also what are the risks, and then you always have to weigh those against each other. So yeah, I think it's comparable with self-driving car technologies, and almost any new technology has that same dynamic.

Speaker 2:

I consider doctors people of trust and authority. I just always assume it's going to be a 100% success rate for whatever I'm feeling; whatever, you know, skin issue I have, I go to the dermatologist and they always resolve it for me, right, 100% success rate. But as a reminder, it's not like that, right? You said yourself that not every practitioner in any given area might be the best, not even for the area, but for the specific patient type or specific condition type.

Speaker 1:

So I mean by far?

Speaker 1:

I mean, just look at psychotherapy, or the most recent studies on the effectiveness of psychopharmacology, specifically antidepressants. If you look at antidepressants, the fact of the matter is that the scientific community is pretty unsure, at least they would say they are not sure, if not skeptical, whether SSRIs, the main class of antidepressants, have any effect beyond placebo.

Speaker 1:

And it is the main tool of psychiatrists and clinical treatment; something like 80% of treatments are purely based on doctors that don't engage in any form of psychotherapy but only prescribe antidepressants. And at the same time, science shows us that we have no idea if there's even a causal relation between the serotonin system and depression. If you just look at that one example, and there are many other examples from other medical areas as well, medicine does the best it can, but oftentimes it's not much better than placebo. So if it makes sense to use new technology, you always have to look at the status quo and what the actual quality of treatment currently is, and it's very, very low.

Speaker 2:

Yeah, it's surprising how much of that is still research and learning. Just a couple of months ago, maybe last year sometime, I read that there is new research that shows a connection, and don't quote me on that, I'm just rephrasing how I remember it, a correlation between gut bacteria and depression.

Speaker 1:

Yeah, yeah, I remember seeing some research in that field, and from my clinical, let's say, experience and my own personal experience, I believe that there is a huge connection between diet, your gut health and your mental health.

Speaker 1:

You know, just from personal experience, I know the situation where you're lying in bed and you start feeling uneasy and kind of anxious, and then you start getting into these thoughts of, you know, what did I not get finished, and what is not going to work out.

Speaker 1:

I think all of us know that state of mind, and I figured out over the last years that oftentimes when I am in that state, especially when I want to fall asleep, it's often connected to something that I shouldn't have eaten. Let's say, I can't eat milk and dairy products that well, I'm a bit lactose intolerant, and I have allergies against some pollen. And there are these connections at a certain time of the year, and I've personally noticed very strongly that these are strongly linked to each other, and that oftentimes your brain kind of starts cognitively trying to make sense of things based on the state of mind you're in.

Speaker 1:

And that state may often be founded in something completely different than you think. So yeah, I don't have much more than anecdotal experience on that, but I do think, the point being, mental health is such a complex topic, and I think it's just fair to at least acknowledge that it is very, very poorly understood so far. It's more of a phenomenon that we're trying to grasp at than something that we fully understand and just need to do the right things about.

Speaker 2:

Given that input, given the idea that this is still an area of rapid development and R&D by the scientific and medical communities, do you think we could actually deploy ready-trained models? Because they would be wrong if we have wrong assumptions now, right? You mentioned this placebo effect of some drugs. What is stopping us from deploying a model, or anyone, for whatever reason, that would just say to everyone, take this pill and you're going to feel better, maybe even with some probability of being right, if it's winter and it's just, you know, prescribing vitamin D?

Speaker 1:

Well, first of all, I'm not sure if we're really in a state or phase of rapid development or knowledge improvement. In terms of the biology and understanding the brain, for sure, there is massive and quick progress in understanding some basic molecular and biological processes and being able to understand things that are happening. But in terms of how much that actually contributes to understanding mental health phenomena, I don't think we're getting much closer there, and reducing the entire syndrome of, let's say, a depression down to biological processes is a massive gap. It's so far away that even if you're inching your way forward by understanding basic processes in the brain, you might still be miles away from being able to put that together to actually understand and reproduce the phenomenon of human experience.

Speaker 1:

So, looking at that, I view psychotherapy and mental health less from a strictly scientific perspective and more from a human perspective. And I do believe we are pretty sure, and there's good empirical evidence, that these general factors of psychotherapy are actually helpful: connecting to a person, giving them the space to bring out and manifest emotional states and thoughts into words and atmosphere, externalizing them, confronting them, practicing social situations within a conversation. All these things are effective; they're general factors of psychotherapy that make people feel better and cope better. So if you can put that into technology, and if you can run it through tests and scientific studies and show that people using your product show improvements, then yeah, then I'm all for that going out there. So I see it, I think, quite pragmatically.

Speaker 2:

Totally get it. Yeah, Mark, assuming that there is some condition, right, that someone has, is it always worthwhile to know? I mean, if someone is well integrated and participating in society today, you know, they have a family, they're happy at their work, they have hobbies and everything that society would normally consider some sort of norm, but then they get diagnosed with ASD or some other form of being out of the spectrum, or, well, you know, psychonormative or whatever the term is, the label, I'm sorry. Would you think that there is always value, or are there cases where there's no value in knowing this for the individual, right?

Speaker 1:

Yeah, I would clearly say that oftentimes the labels given don't bring any value and, quite the opposite, even bring just negative effects a lot of times. I'm quite critical of diagnoses that are in the personality disorder spectrum. I personally am a follower, or am convinced, of a kind of scientific movement right now that is quite critical of the current classification system of mental health and has quite a data-driven approach to diagnosing mental health disorders, and from a data perspective they basically reduce away a lot of these personality disorders, like borderline, for example, or impulsive or narcissistic. I've seen that oftentimes exactly these labeling effects that we discussed have come up with people based on very, how should I call it, sluggish or carelessly given diagnoses, especially by medical doctors that have barely had the time to actually understand the psychology of the person they're dealing with, but, based on a couple of interactions in a hospital, they then label an entire person with a certain personality disorder. But I guess your question was going a bit in a different direction.

Speaker 1:

So I think misdiagnoses, or even diagnoses that are true, so to say, oftentimes don't bring any value unless you can actually connect them with goals and then some kind of process to actually change something within the personality of the person. But even on the other side, within the medical field, oftentimes, say, cancer diagnoses in elderly people just don't bring any value anymore, because there's no treatment and it just has negative effects. I think there might be similar things in psychology, where there's no real treatment, there's no medication that will actually address the phenomenon you're going through, and without a fitting therapy probably a lot of people would be better off not being diagnosed at all. At the same time, I've seen it many, many times, both with people affected and also the loved ones of people, that they feel relieved by a label. Labeling, you know, has two sides. It can be functional, it can be helpful. In German you say, to give the child a name.

Speaker 1:

That means, you can kind of feel it already from the sentence, making it definitive. And there is a connection to what we were discussing in the beginning, I think: the assumption that the doctor, my healer, you know, that this kind of archetype knows what he or she is doing, is an expert and can tell me exactly what it is, and it has a name. It is an illusion that can be helpful. And I've actually been in a weird conflict where, even though I know that a lot of these things are much more unclear, much more complex and not as well understood, noticing or being in a situation with a patient who needs or feels the need for clarity, I've found myself in situations where I thought I was exaggerating the level of certainty that I have in terms of a diagnosis and the prognosis of someone getting better and the chances of therapy, to give the patient that good feeling of: I am in professional hands, they know what they're doing.

Speaker 1:

Now I have finally gone through it, I've left this phase of being in the dark and not knowing anything, and now I'm kind of going into enlightenment. And that is a good feeling, and I think it can be used, you know, to generate energy and channel it into the right things and motivation. So yeah, I think all these things are positive. And at the same time, probably in 20 years ADHD as such will not be a thing anymore. It'll probably be divided into four or five different things. It's always moving. So I would say, take it with a grain of salt; if it's helpful, then yeah, by all means, use it.

Speaker 2:

Could we get back to the topic where you were describing some of the movements and models for data-driven approaches? I'd be really curious to learn more about how they enhance mental health outcomes for patients and doctors.

Speaker 1:

Yeah, totally. There's actually an interesting connection to technology and what I was doing with my first company, MoodPath. There's one approach or model called the HiTOP model, H-I-T-O-P, which stands for Hierarchical Taxonomy of Psychopathology. They basically build a factor-analysis-like model where you take a bunch of data on the occurrence of symptoms within patients. So from all the symptoms we know people can have, going from not being able to concentrate to not having any appetite or not being able to sleep, how do they, on average, occur together? It's basically correlations, and over masses of data they then generate factors of psychopathology. And if you look at what comes out of that, you can see that across cultures and across different data sets the same models appear. So we're finding a robust structure there.
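
The idea Mark describes, reducing symptom co-occurrence data to a small number of underlying factors, is essentially exploratory factor analysis. Below is a minimal sketch on simulated toy data, assuming NumPy and scikit-learn; it illustrates the statistical technique, not the HiTOP consortium's actual pipeline.

```python
# Minimal sketch of the data-driven idea behind HiTOP-style models:
# take per-patient symptom ratings and reduce their correlation structure
# to a small number of latent factors. Toy data, not the real pipeline.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_patients, n_symptoms = 500, 12   # e.g. sleep problems, low appetite, worry, ...

# Simulate two latent dimensions (say, "internalizing" and "somatic")
# that each drive half of the observed symptoms.
latent = rng.normal(size=(n_patients, 2))
loadings = np.zeros((2, n_symptoms))
loadings[0, :6] = 0.8
loadings[1, 6:] = 0.8
symptoms = latent @ loadings + rng.normal(scale=0.5, size=(n_patients, n_symptoms))

fa = FactorAnalysis(n_components=2, random_state=0).fit(symptoms)
print(np.round(fa.components_, 2))   # recovered loadings: which symptoms cluster together
```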

Speaker 1:

And if you compare that to what we use in reality, the International Classification of Diseases, the ICD, and the American equivalent of it, the DSM, the way those work is that it's generally old white men who have researched a certain field for a very long time and somehow connected their ego and identity to a certain disease or something like that.

Speaker 1:

They have their own theories and ideas and conferences and books and so on, and they kind of stay focused on one certain type of phenomenon.

Speaker 1:

Then that's what is actually leading and driving the diagnostics and treatments, and it doesn't fit well together, if you're honest. A lot of things that have a hyper-focus there are, in the data-driven model, kind of clumped together into one big phenomenon, and a lot of anxiety and melancholy, depression, mood disorder things are actually so intertwined that, based on the data-driven approach, you can't really divide them up very well. So what we did was, we had contact with some of the leading scientists within the consortium leading this development, and with our MoodPath app, which was basically a tracking app with a long questionnaire, we changed our entire system to reflect the questioning structure from this HiTOP model, so that the user could then get a profile of their own symptom occurrence based on this, let's say, data-driven approach. So it was actually very fun to be part of a real-world implementation of a more scientific approach to diagnostics.

Speaker 2:

Yeah, I had the pleasure to play with this, and I must say I really enjoyed the entire gamification aspect of it as well. It's interesting to see there are a lot of different vendors and ideas around this now. I believe even iOS now natively has, I don't remember what it's called, but in the Health app in iOS I believe you can fill in a diary of how you felt.

Speaker 1:

Yeah, this came new with, I think, iOS 16 or so.

Speaker 2:

Exactly, it's just brand new, maybe a year old, maybe two, I don't know. But it's not as engaging; I'm completely not motivated to fill it in, and it's not reminding me to do it. You guys did a far better job at this. So how far do you feel we can push the boundary from a product perspective now, you know, user-experience-wise or communication-wise, or maybe even with the AR/VR that is coming in? I don't know if that makes sense for psychology and psychiatry.

Speaker 1:

AR/VR I see more potential for in enhancing the whole confrontational part, so more in the actual therapy, but I'll get to that point afterwards. To your original question about tracking and self-diagnosis: you could generally divide between diagnosis, self-diagnosis, recognition of disorders, and then the treatment of disorders. And within diagnostics, I think supporting these diagnostic systems with sensor data from your smartphone has a lot of potential from a technological perspective, to personalize and ask the right questions. If we can see, based on the movement pattern, how often someone's actually on their phone, and what we can infer about their sleeping pattern, we can ask the right questions at the right time, at the right moment, in the right spot, even geographically. Basically, asking you: what is your situation right now? We know that, say, you're at work and you're staying longer than usual; what is your state of mind right now? If you feel that the diagnostic system is actually asking you questions based on prior information, maybe even based on a theory, say something came up with my girlfriend or my wife or whatever, and then the therapist might ask, how often did that happen in the last three months, or did that happen in your previous relationships as well? From that question I can understand the therapist is thinking about something and wants to get somewhere.
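
To make the "right question at the right moment" idea concrete, here is a minimal rule-based sketch of a context-aware check-in trigger. The sensor features, thresholds and prompts are hypothetical, purely for illustration.

```python
# Minimal sketch of context-aware prompting from passive sensor data.
# Feature names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DayContext:
    hours_at_work: float        # derived from geofenced location data
    usual_hours_at_work: float  # the user's own baseline
    sleep_hours_last_night: float
    screen_unlocks_today: int

def choose_prompt(ctx: DayContext) -> Optional[str]:
    """Pick at most one check-in question that fits the current context."""
    if ctx.hours_at_work > ctx.usual_hours_at_work + 2:
        return "You're at work longer than usual today. How are you feeling right now?"
    if ctx.sleep_hours_last_night < 5:
        return "Looks like a short night. How is your energy today?"
    if ctx.screen_unlocks_today > 150:
        return "Busy day on the phone. Anything on your mind?"
    return None  # nothing unusual detected; don't interrupt the user

print(choose_prompt(DayContext(10.5, 8.0, 7.0, 60)))
```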

Speaker 1:

If the diagnostic systems that run on software were also more intelligent like that, then I think that would improve the UX a lot, and the motivation to actually use it over a longer period of time. Yeah, with interfaces, just having state-of-the-art good design, giving feedback, the classic little elements of gamification, those are good, but that's kind of the basics I would expect any professional app to have. I think the sensor side can really be an edge. And in terms of the VR thing, I think for diagnostics it's not necessarily the right use case.

Speaker 1:

But, as I was saying, if you can combine that with AI, I mean the generative AI part, the LLM, and really describe to your digital therapist the situation that makes you anxious, let's say a social situation, speaking in front of people, people then looking at you in a certain way and you being nervous about that. Then the AI says: okay, let me draw that up, put on your Apple Vision Pro right now and let's go through that situation. And then you feel it, you see it, you can confront yourself with it, and you can talk to the AI while doing that. That is technologically possible right now, and it's only been possible since the LLMs, from my perspective. So I think there's massive potential there as well.

Speaker 2:

Pretty cool example. Or arachnophobia, right, where you can just be challenged in a virtual environment of a jungle with all the spiders, where you can experience them one by one.

Speaker 1:

I mean, that's already on the market. There's even a digital health application that is covered by the health insurance companies that helps you work through, you know, specific phobias. But that's really a one-size-fits-all thing. You can maybe have a hundred pieces of content, you know, heights and spiders and things like that, and if you by any chance have that specific phobia, then those kinds of VR content sets can already help you.

Speaker 1:

You know, confront yourself with that. But it's very rare that it's so crystal clear, just one thing. If you look at how many people have intense anxiety about something and experience it as something they suffer from in their daily life, it's a very high number of people, but much more often it's way more complex. It's being rejected by, you know, that specific person, because that's exactly what triggers the feeling that you had when your parents did that to you, and it's very individual. So if you want to be able to confront, it has to be able to produce content on that level. And looking at Sora coming out, and probably it's going to be just as mind-blowing as ChatGPT or DALL-E, if you think another five years ahead, being able to produce that kind of VR content based on language input is surely something that's going to be very interesting.

Speaker 2:

Indeed, yeah, I'm smiling, you can see, because I can relate to so many of these cases that you just mentioned.

Speaker 2:

But I also had a thought that it could be used in reverse, right? Imagine you're using your VR headset and then someone hacks it to just, you know, launch a bunch of virtual spiders, knowing that this is a weakness, running right in front of your eyes. On a more serious note, my last question. You mentioned insurance companies, and I know every country and maybe even continent is doing it differently. I can probably speak more to the US market, which I know a little bit, and the Polish market, which I know a little bit. But how open do you think the institutions that support us, like insurance companies, are to experimenting with these kinds of new solutions? Or do they consider it, you know, not proven enough to reimburse users for their spend?

Speaker 1:

Well, I think at the status quo they would consider it not mature enough, and I would agree. But to your general question, I've experienced insurance companies, professionally I've dealt primarily with German companies, but I've also had contact with US and UK insurance companies in that context, as very open and keen to adopt. And that only makes sense, it's very plausible, because obviously, as payers and reimbursers, they're looking to save a buck. They want to make sure that they can provide services at low cost, and that's obviously what technology is great at, if it can reach that level of effectiveness and adoption by patients. So I think all of these scenarios that we were talking about, using AI, improving diagnostics, opening it up for self-diagnosis based on the devices you have around you, I think insurance companies are generally very open to these solutions.

Speaker 2:

And to follow up on this, do you feel like the public sector, or governments in general, should get involved and remove some roadblocks for these technologies? I mean, to implement and deploy these kinds of solutions in the wild, for the general public?

Speaker 1:

Technologies or regulations.

Speaker 2:

They should remove regulations, they should make it easier, right, to experiment, because right now I feel like there's too much regulation for startups to grow in the health tech industry in particular. At least I feel that way. Do you agree? Okay, tell me.

Speaker 1:

Yeah, I feel like I don't necessarily agree with that. I think, you know, having built a mental health startup, actually two, I never felt somehow limited or even disproportionately burdened by regulation, if I'm honest. I think with the two big frameworks in Europe, first of all, it's already pretty impressive that the entire European Union runs on the same frameworks, right? You don't have to go into different countries; if you follow the Medical Device Regulation, the MDR, and GDPR, you're safe for the entire European Union market. Obviously, the UK is now a bit of a different game, but they're very similar. So that by itself is already pretty impressive. And then also, I find them both very comprehensive, and they just make sense.

Speaker 1:

So I find all the details on risk management, quality control and documentation that are part of the Medical Device Regulation actually almost helpful. Or I would say they are clearly helpful in developing, because if you sit down and you just want to create software for the medical space, obviously most of these questions you'll be worried about yourself. You want some kind of quality control, you want some kind of risk assessment and management. But how would you do that? Everyone could just come up with their own things.

Speaker 1:

But obviously lots of people, and mostly smart people, sat down and put it into big frameworks, and it's almost like them giving you a checklist that you can just tick off and use as a guideline, and then be pretty sure that you're going to be on the safe side if you get it audited. And even looking at it from a financial perspective, I think roughly maybe 50K for regulation when you bring a new software product, a medical device of risk class IIa, to market. I've gone through this once or twice. As an overhead, as additional costs for personnel and consultants and so on, I would say maybe 50K, and I don't think that's crazy in terms of financial planning.

Speaker 2:

I agree. I just wonder where the line is between deployment and implementation of an actual healing software, let's say one whose intention is already to offer some benefit to customers and patients, versus some space for research where we don't know where we're going with this yet.

Speaker 2:

And my personal experience has been that, yes, maybe some price tag is fixed, you know, for implementing the certification and the paperwork. But then, because of the slowness of local authorities, so to say, and depending on the market deployment, it may happen multiple times, because even with one standard, I still had to get approval from a local governing authority, so to say. And because of the delay, the actual implementation cost is extended by the operational cost of your business during the time when it hasn't happened yet, right? You can't deploy and operate and basically earn during that audit process. Maybe we don't need so much documentation up front, you know, maybe we're more in an R&D mode and we're trying to move the boundary. Would you agree that sometimes these frameworks are also outdated for what we need to prepare, and they don't allow us to go beyond what's already established in terms of R&D and product?

Speaker 1:

I have experienced that phenomenon, but not so much in the context of creating medical software. I think other fields, like tax law or general business law, are often more behind than what I've seen, at least, in the medical device regulation. Also, looking at the frameworks, the GDPR, I don't know, it was 2018 or something like that, so it's relatively new, and the MDR is basically completely fresh.

Speaker 1:

It just came out last year or something, I mean, it's been in the making for a longer time, so I feel both are actually quite fresh and also kind of adequate for the time. On what you were just mentioning in terms of the operational overhead: what I've seen as much more difficult is working with academia and getting research done, because obviously when you're doing a medical device, medical software, you need clinical evaluation, you need independent evaluations, and that I have experienced to be a lot more challenging than the pure regulatory side of actually getting something audited and through the certification process to be able to bring it onto the market.

Speaker 1:

That actually was always, in my personal experience, quite smooth, whereas in the university context I have experienced a lot more of the, how should I call it, I'll just say it, laziness of: I have a safe job, I'm paid by the state, I'm independent, no one can put any pressure on me. And that results in professors taking years and years, at least in the social sciences, to just get anything done. So I have a much higher frustration level with that, and it is intertwined, because you need some clinical studies and sometimes even classic RCTs to get through the certification. So in that case, yes. But to be honest, I don't have much of a solution to change that or to make it easier for startups, just because obviously you still need scientific research, and it's not centralized; it's nothing that a state could just offer as a service for startups or something.

Speaker 2:

Awesome, Mark. Thank you so much for a really, really fruitful conversation. I learned so much. Really great to have you here.

Speaker 1:

Sure my pleasure. Thanks for having me.

Speaker 2:

Thanks, Mark, for your valuable insights into the intersection of mental health and technology. Your expertise in developing digital products in this field is truly inspiring and offers hope for the future. To our listeners: stay tuned for upcoming episodes where we will explore new perspectives on the world of technology. Don't forget to subscribe so you don't miss out. Thanks for listening.
