Forget frequently asked questions. Common sense, common knowledge, or Google. How about advice from a real genius? 95% of people in any profession are good enough to be qualified and licensed. 5% go above and beyond. They become very good at what they do. But only 0.1% are real geniuses.
Richard Jacobs has made it his life's mission to find them for you. He hunts down and interviews geniuses in every field. Sleep science, cancer, stem cells, ketogenic diets, and more. Here come the geniuses. This is the Finding Genius Podcast with Richard Jacobs.
Hello, this is Richard Jacobs with the Finding Genius podcast, now part of the Finding Genius Foundation. My guest today is Maria Greischer. We're going to talk about machine vision AI applications. Maria is a seasoned executive with over 18 years of experience in AI-driven technology and applications. So welcome, Maria. Thanks for coming. Thank you. Pleasure to be here. Yeah. Tell me a bit about
your background with machine vision AI. What are some really interesting things you've seen develop over the past 18 years? Okay, well, of course. So let me start a little bit with my background. I've been working in startups all my life; that's the only thing I've been doing so far, nothing but startups. So it has really been very interesting to see the development of
the technology, and what we were busy with 15 years ago, 10 years ago, five years ago, versus what we're busy with right now. There is really so much advancement going on, and the pace of development, the pace of innovation, is accelerating. The longer we live, the faster things change.
That has been true so far, at least from what I have seen. Please keep going with your background. What are some of the really interesting things you worked on? And then we'll get to what you're working on today. So there are multiple medical advancements. The medical field has been an important field in our lives, and there are many, many different verticals being developed in medical AI and medical technology specifically.
My favorite applications, I would say, and also what I specialize in, are visual AI applications. And what is being developed right now in the medical domain is absolutely amazing. I would like to share it in a little bit more detail. But before we
dive into this, I just want to make a little comparison between how things have been done so far in our modern world and how they are starting to be done, and what the difference is that medical AI brings: what makes it so unique, so important and incredible. Let's step in. What are you working on right now?
Okay, so right now there are multiple medical AI applications. For example, look at the CT scans being taken in the hospital, or X-ray or MRI scans, everything that's visual. So far, those visual images have needed an expert, a radiologist, a cardiologist, or whoever the specialist is. They need an expert to look at them, to understand what's going on, and basically determine the pathology, determine whether there's something wrong or everything is okay on this image. Right now, all of this is being replaced by AI.
And that's basically where what we do comes in. At Keymaker, we are a building block in creating those types of applications. We provide training data; we train those models; we create custom training data for those models. But what I would like to elaborate on, to talk a little bit more about, is
how different what's going on right now is from what we knew and have been doing so far. So, for example, take traditional medicine, how it has worked so far. We have a doctor, any kind of specialist, and even if it's the best specialist in the world, it's one person. This person has seen a certain number of patients so far. He has a
certain number of years of education and things he has experienced in his life. So it's good: when we go and see the specialist, we get the pretty good expertise of one person. Now, if the same diagnosis, the same expertise, is given by AI on the same person,
for the same image, then instead of one person, even the best person in the world, we tap into the intelligence, knowledge, and expertise of hundreds, thousands, hundreds of thousands of different experts all put together. So it takes the whole medical examination and understanding of the condition to a completely different level. You're not just working with the best expert in the world. How many images have you sent to an AI company
as training data, let's say, to identify lung cancer? So, yes, I understand what you mean. It really depends on the model. Usually there are different models that try to do the same thing, different companies that develop those models.
It takes thousands of annotated images and multiple specialists to train a model properly. Now, this is not a process you do one time and that's it. No, it's an ongoing process; it's constantly developing. Just as a doctor has to go to courses, attend seminars, and read papers to constantly keep his knowledge up to date, the AI has to keep learning. Except here we're learning not from one person, but from thousands of experts.
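As a rough illustration of the supervised training loop Maria is describing, here is a minimal sketch in Python with PyTorch. The folder layout, class names, and model choice are hypothetical stand-ins, not Keymaker's actual pipeline: annotated scans go in, a classifier comes out, and the same loop is rerun whenever new annotated data arrives.

```python
# Minimal sketch of supervised fine-tuning on annotated medical scans.
# Hypothetical dataset layout: scans/train/{normal,pathology}/*.png.
# This is an illustration of the general approach, not any real pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # scans are often single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("scans/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # normal vs. pathology

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```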
So what kind of things are you seeing? What are some examples? So, for example, brain tumors: identification of brain tumors, early identification of brain tumors. That is done immediately, with very, very high precision. Another example, actually one of my favorites, is ultrasound, ultrasound in emergency rooms, where the speed
of the response is very important. Identifying whether there's free fluid from any injury on the ultrasound, and identifying it fast, is super important for saving lives. When we have a person doing it, there's a delay in the response, and it also depends on the availability of the person. But when we have AI doing it, and right now this is starting to be implemented in multiple emergency rooms, it's immediate and it's way, way more precise. So here we're actually seeing a significant,
measurable impact from AI systems. So what is the AI looking for? What is its diagnosis rate compared to doctors? Well, when you're looking at an ultrasound, for example, there's a lot going on. The AI would recognize the same free fluid, the same internal bleeding, on, let's say, that
part of the ultrasound; it would know the area right away. It would recognize that this is internal bleeding with way higher precision and accuracy, and way faster, than a person. Well, what is way higher and way faster? What is the number? Well, I cannot tell you the exact numbers.
The AI is immediate, essentially: it looks at the ultrasound and it's there. For a person, it might take a few minutes until they see it, and they might also miss it, because there's always... Well, what's the background efficacy rate? What is the rate of detection with the AI, the false positives or false negatives? I cannot tell you that exactly, because we develop the system, we don't implement it. But it's significant enough that it's proven the AI works better. Okay.
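The rates Richard is asking about are normally reported as sensitivity and specificity, computed from expert-confirmed outcomes. A minimal sketch of that arithmetic follows; the counts are invented purely for illustration, since no real figures were given in the conversation.

```python
# Sketch: computing the detection rates Richard asks about from a
# confusion matrix. The counts below are invented for illustration only.
def detection_rates(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)     # true positive rate: real bleeds caught
    specificity = tn / (tn + fp)     # true negative rate: clear scans cleared
    false_pos_rate = fp / (fp + tn)
    false_neg_rate = fn / (fn + tp)
    return sensitivity, specificity, false_pos_rate, false_neg_rate

# Hypothetical: 1,000 ultrasounds reviewed, 120 with confirmed free fluid.
sens, spec, fpr, fnr = detection_rates(tp=114, fp=22, tn=858, fn=6)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} "
      f"false positives={fpr:.1%} false negatives={fnr:.1%}")
```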
Okay. So why, all of a sudden over the past couple of years, does AI seem to be a lot more advanced? What happened in the field where now you get things like ChatGPT and reasoning models and all that? Why has AI suddenly jumped up? So ChatGPT, I would say, is not part of my domain, so I won't be able to answer that properly. But in terms of visual AI, as with every technology, we hit a point, a tipping point,
where we know how to train models, we have enough data to train them, and it just works faster. It takes more training data and better-performing models, and it just works. It's technology. Are you seeing that in your machine learning, or what are you seeing? Yeah, well,
the projects that we work on are machine vision AI only, so it's only visual projects, and we see it a lot with visual projects. Also, when you develop a model, regardless of what the model is, you develop it once and then you retrain it. So the older the model is, the more training cycles it has gone through, the better it gets. And we already have a few years of training
and creating models, so the models just get smarter. It's like a human brain that learns more and more with time and doesn't forget what happened before. It just gets better. Is there any drift in the machine vision outputs that it's training on? Can you tell if there is such a thing? What do you mean? A drift in the data. I don't know, maybe for some reason, you know, internal bleeding around the stomach now shows up
differently than it did years ago, maybe because people's health has changed on average. What is the drift in the data? So this is a very, very interesting question. I would call it bias in the data. It happens; there's always a bias. And that's why
we always need human input, and we always need to retrain those models and validate that they're working right. That, I would say, is why we still have human input, why it's not just 100% automated, and why we haven't simply stopped training.
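For the drift Richard is describing, a common safeguard is to watch the distribution of the model's outputs over time and flag windows that stray from the historical baseline. A minimal sketch, with made-up rates and thresholds:

```python
# Sketch: a simple drift check on model outputs over time. If the share of
# positive findings in a recent window moves far from the historical
# baseline, flag the model for expert review and possible retraining.
# Thresholds and data here are hypothetical.
from statistics import mean

def drift_alert(baseline_rate, recent_predictions, tolerance=0.05):
    """recent_predictions: list of 0/1 model outputs from the latest window."""
    recent_rate = mean(recent_predictions)
    drifted = abs(recent_rate - baseline_rate) > tolerance
    return recent_rate, drifted

baseline = 0.12  # e.g. 12% of scans historically flagged as positive
recent = [1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1]
rate, drifted = drift_alert(baseline, recent)
print(f"recent positive rate {rate:.0%}, drift: {drifted}")
```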
How do you identify bias? What's an example of some bias that you've seen in your models? We have to figure out where the bias comes from. Let's take a simple example: blood cancer. You have the cancerous cells and the regular cells, and the model can recognize regular cells as cancerous cells by mistake. This can happen. So here it's very important that we still have expert input, and we validate the model and make sure those false positives are marked
as such, and then we retrain the model to eliminate those false positives.
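The correction loop Maria describes, where experts relabel the model's false positives and the corrected examples go back into the training set, might look roughly like this. The record format and labels are hypothetical:

```python
# Sketch of the correction loop: experts review flagged cells, false
# positives get relabeled, and the corrected examples are folded back
# into the training set. Structures here are hypothetical.
def fold_in_corrections(training_set, model_outputs, expert_labels):
    """model_outputs: list of (image_id, predicted_label).
    expert_labels: image_id -> ground truth confirmed by a specialist."""
    corrections = []
    for image_id, predicted in model_outputs:
        truth = expert_labels[image_id]
        if predicted == "cancerous" and truth == "regular":
            corrections.append((image_id, truth))  # a confirmed false positive
    return training_set + corrections  # retrain on the augmented set

outputs = [("cell_001", "cancerous"), ("cell_002", "cancerous")]
truth = {"cell_001": "regular", "cell_002": "cancerous"}
print(fold_in_corrections([], outputs, truth))  # [('cell_001', 'regular')]
```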
Before we continue, I've been personally funding the Finding Genius podcast for four and a half years now, which has led to 2,700 plus interviews of clinicians, researchers, scientists, CEOs, and other amazing people who are working to advance science and improve our lives and our world. Even though this podcast gets 100,000 plus downloads a month, we need your help to reach hundreds of thousands more worldwide. Please visit findinggeniuspodcast.com and click on support us.
We have three levels of membership from $10 to $49 a month, including perks such as the ability to see ahead in our interview calendar and ask questions of upcoming guests, transcripts of podcasts you're interested in, the ability to request specific topics or guests, and more. Visit FindingGeniusPodcast.com and click support us today. Now back to the show.
Okay. I mean, how many of the scans do you look at manually to see if there's a false positive? Is it one in two, or is it only... No, no. Usually there is an ongoing process of validation going on. The experts, the data science team, always look at the outputs of the models, or at certain subsets of the outputs, and validate whether they are correct.
What percentage of scans, for instance for internal bleeding, are looked at manually? Statistically, what would be enough? 1%, 10%? This depends on the model and on the specific use case, but it can be anything from 10% down to, if the model is performing well, less than 1%. Okay. So again, you've gotten it to a very high efficacy. Do you see it getting better and better still, or...
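A sketch of that sampling policy: route a random slice of the model's outputs to expert review, with the slice shrinking as measured performance improves. The thresholds and fractions below are illustrative, loosely matching the 10% down to under 1% range Maria mentions:

```python
# Sketch: drawing the manual-review sample. The review fraction shrinks
# as measured model accuracy improves; all numbers are illustrative.
import random

def review_sample(scan_ids, model_accuracy):
    if model_accuracy < 0.95:
        fraction = 0.10   # younger or weaker model: review around 10%
    else:
        fraction = 0.01   # well-performing model: around 1% or less
    k = max(1, int(len(scan_ids) * fraction))
    return random.sample(scan_ids, k)

scans = [f"scan_{i:04d}" for i in range(5000)]
for_review = review_sample(scans, model_accuracy=0.97)
print(len(for_review), "scans routed to expert validation")
```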
It's always going to get better and better. It is. It's always going to get better? I think so, I believe so, at least so far, because think of it: there are always edge cases, there are always new things coming up. There's also overtraining. I mean, there's getting stuck in a local minimum or maximum, where one feature tends to dominate what's seen in the visual field. I mean, again, I know there's overtraining. So how do you make sure the model doesn't wander into bad territory? How do you reset the weights, or...
How do you zero it out, or make sure it's reset and retrained? So when we start training a model, we feed it a certain number of scans. After some time, it needs less and less training, but we do need training for each new case or new thing that comes up.
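The standard guard against the overtraining Richard raises is to hold out a validation set and stop when validation loss stops improving, keeping the best checkpoint rather than resetting the weights. A minimal sketch, with simulated loss values:

```python
# Sketch: early stopping on a held-out validation set, the usual defense
# against overtraining. The loss values below are simulated.
def early_stop(val_losses, patience=3):
    """Stop once validation loss hasn't improved for `patience` epochs."""
    best, stale = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch  # stop here; keep the best checkpoint
    return len(val_losses) - 1

losses = [0.61, 0.48, 0.41, 0.39, 0.40, 0.42, 0.43]
print("stop at epoch", early_stop(losses))  # validation loss turned upward
```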
What about an outlier? What do you do if there are outliers? So that's where we need human input, and then we teach the model the way an expert in this field would: they determine what it is, and we basically create the training data that explains this edge case to the model. What's an example of that? For example, new types of tumors, or, going back to the same ultrasound, the organs we're looking at look completely different, or it's a person with results that
haven't been seen before in any type of data. Edge cases happen. And that's the thing about edge cases: we can't predict them. They just happen, and when they happen, we deal with them. You don't just delete them, I would think; you would learn from them. Do you not include them in the training data? Or what do you do with them?
No, we have to include them. In this case, we identify it as an edge case when the model fails; the model basically says, okay, I don't know what to do with this, this looks different. Then we have experts, let's say radiologists, or a number of experts, who look at those edge cases, decide what it is, and mark it as what they decided it is. And we use that as training data for the model.
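One common way to implement the "I don't know what to do with this" signal is a confidence threshold: low-confidence predictions are routed to an expert queue instead of being auto-labeled. A minimal sketch, with invented class names and threshold:

```python
# Sketch of edge-case routing: when the model's top confidence is low,
# the scan goes to experts instead of being auto-labeled, and their
# decision becomes new training data. Names and threshold are invented.
def route(scan_id, class_probs, threshold=0.80):
    label, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return ("expert_queue", scan_id)   # "I don't know what this is"
    return ("auto_label", scan_id, label)

print(route("scan_0042", {"bleed": 0.46, "no_bleed": 0.54}))
# -> ('expert_queue', 'scan_0042'): ambiguous, goes to the radiologists
```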
Okay, are edge cases particularly useful, or are they dangerous? Oh, very useful.
How are they useful? Well, if you encounter something new, you want the models to learn that this new thing exists, and if they encounter it again, they will know what to do with it. So what kind of edge cases would you get with internal bleeding, for instance? Well, it's kind of hard to answer that; it's really specific. But let's say it doesn't look like a bleed, but it is, or vice versa: a false negative or a false positive. It can go both ways.
Okay. So do edge cases tend to produce false positives or false negatives, or is it random? It really depends. There is no rule; it can be anything. That's the whole idea of edge cases: you never know what they are. But when you encounter one and the model fails, then we have to deal with it.
We have to train the model on those edge cases as well. But maybe there's a skew in the edge cases, and they're not symmetrical around the main data; maybe that would tell you something. Well, it's a long tail, yes. But again, it's important for model training.
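Checking for the skew Richard suggests is a simple tally over the collected edge cases: how many turned out to be false positives versus false negatives. A minimal sketch with hypothetical labels:

```python
# Sketch: checking whether collected edge cases skew toward false
# positives or false negatives, per the long-tail point above.
# The labels and records are hypothetical.
from collections import Counter

def edge_case_skew(edge_cases):
    """edge_cases: list of (predicted, actual) pairs from failed cases."""
    kinds = Counter()
    for predicted, actual in edge_cases:
        if predicted == "bleed" and actual == "no_bleed":
            kinds["false_positive"] += 1
        elif predicted == "no_bleed" and actual == "bleed":
            kinds["false_negative"] += 1
    return kinds

cases = [("bleed", "no_bleed"), ("no_bleed", "bleed"), ("no_bleed", "bleed")]
print(edge_case_skew(cases))
# Counter({'false_negative': 2, 'false_positive': 1}): a skewed tail
```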
Okay, so you use AI to detect, let's say, internal bleeding when someone comes into the hospital. What else is it being used for, in the vision sense? There are actually so many applications. It goes from recognizing pathology on scans, any type of scan, to cameras in hospitals or in
elderly homes that ensure people are well. If it's an elderly home, the smart camera, and of course there's full privacy built into it, would be able to see if the elderly person in the house is having an issue: they're having a heart attack, or they fell down, so basically motion recognition as well, or they need help. So instead of the person having to press a button and call for help, the camera would recognize it right away and
call the ambulance, call for support. So it's everything. Looking specifically at elderly care, it's really helping to improve and save the lives of elderly people, because instead of... How do you know? Is it in use, or is it still being developed? Yeah, it's in use. It's been in use for quite a while, actually, and it's getting better and better, especially for motion recognition and
action recognition. Usually it's autonomous cameras deployed at people's houses, and there are multiple companies, multiple service providers, that do that. But the idea is a fully autonomous camera that can recognize if the person is having a heart attack or a stroke, or
anything is wrong with them, and then it calls for help right away. Where are these, only in hospitals? You have them in hospitals, and you have them in private homes. This is a service, like having a security camera in the home; it's people's choice whether to use the services or not. Some people just have it in their home? Yeah, some people do. This technology has been around for a few years already. Of course, it's getting better and better, and
more and more people choose to use it in their homes. It's like having a smart camera facing outside to detect motion, but more sophisticated, and fully private, inside. What companies produce this? Where can people get it, for instance? That's a good question; I won't be able to answer that. But I'm sure if you search, even on ChatGPT or Google, for smart cameras for elderly care, you'll find multiple companies that do that.
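Real products in this space use trained pose and action-recognition models, but the basic loop Maria sketches, watch frames, detect an anomalous event, call for help, can be caricatured in a few lines. Everything here, the event labels and the fall heuristic, is a hypothetical stand-in:

```python
# Extremely simplified sketch of the camera loop described above: real
# products use trained pose/action-recognition models, but the shape is
# the same: watch frames, detect an anomalous event, call for help.
def looks_like_fall(events):
    """Hypothetical heuristic: a burst of fast motion, then stillness."""
    return events[-2:] == ["fast_motion", "stillness"]

def monitor(frame_events, alert_fn):
    """frame_events stands in for a per-frame analysis stream."""
    seen = []
    for event in frame_events:
        seen.append(event)
        if looks_like_fall(seen):
            alert_fn("possible fall detected, requesting assistance")

monitor(["walking", "fast_motion", "stillness"], alert_fn=print)
```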
Now, we at Keymaker are a service provider for creating the training data, so we would be on the other end of those cameras. We help develop and train those models, but we don't sell them. So, going
up the development chain, we would be the building block creating this AI, and then multiple camera companies or healthcare companies would purchase this AI as a service, together with the camera if needed, and deploy it under their own brand.
Any other interesting examples of work that you've done, where they use AI to diagnose new things? Any other examples? There are so many. Let me think for a second about what else is really interesting. I would say one of the examples I personally really like goes back to what I mentioned before: X-ray or MRI recognition, whether there's any issue, let's say a broken bone or internal bleeding or tumors, etc.
But the way it's used is in remote locations. So you have, let's say, an X-ray scanner in a remote location, in places that are just far away. There are no hospitals there, no doctors there, but there's a little medic station with those sensors,
with a scanning device. And instead of waiting for a doctor or transporting yourself to a hospital, the diagnosis can be done right away, on the spot. So it really enables fast and cheap healthcare in remote locations. Now, so far the most important part of helping somebody has been understanding what's wrong.
And here we immediately understand what's wrong and can choose, even remotely, the appropriate treatment for that person. So it really helps people get proper healthcare in remote locations where there's nothing nearby, no hospitals, no anything.
Okay. What's the best way for people to keep tabs on your work? Where should they go? Well, again, Keymaker is not the end-product creator; we're just a building block in creating those models. So nothing specific, but there's always news, and new things keep coming up in healthcare. So just, you know, the general sources. Okay. Well, very good. Thank you for coming on the podcast and explaining; I really appreciate it, Maria. Thank you.
If you like this podcast, please click the link in the description to subscribe and review us on iTunes. You've been listening to the Finding Genius Podcast with Richard Jacobs. If you like what you hear, be sure to review and subscribe to the Finding Genius Podcast on iTunes or wherever you listen to podcasts. And want to be smarter than everybody else? Become a premium member at FindingGeniusPodcast.com.
This podcast is for information only. No advice of any kind is being given. Any action you take or don't take as a result of listening is your sole responsibility. Consult professionals when advice is needed.