Corporate innovator: Akshay Sharma, Executive Vice President of AI at Sharecare

Steven Loeb · July 27, 2021 · Short URL: https://vator.tv/n/52b4

Sharecare went public via a SPAC merger in early July


While most entrepreneurs want to be the ones to discover the next Amazon or Twitter, major technological shifts often come from the big companies, the players that have been on the scene for years, if not decades. Those companies have survived because they know how to pivot. They're the ones who either seed new ideas or acquire them and distribute them. 

In this column, we talk to those companies and their innovators who are preparing them for what's coming.

In our latest interview, we spoke to Akshay Sharma, Executive Vice President of AI at Sharecare, a health and wellness engagement platform providing consumers with information, programs and resources to improve their health. Sharma joined Sharecare in 2021 after it acquired doc.ai, where he had been CTO. Doc.ai built technologies that collect data about a person through their mobile phone, ingesting data from devices such as Fitbit as well as from aggregators like Human API and Validic.

Sharecare was founded in 2010 by WebMD founder Jeff Arnold and Mehmet Oz, in partnership with Harpo Productions, HSW International, Sony Pictures Television, and Discovery Communications. It had raised $425 million in funding before going public via a SPAC deal in early July. It is now trading at $6.74 a share.

VatorNews: A good way to kick this off would be to give me the high level about Sharecare for people who aren't really familiar with you, who don't know what you do. Just give me that high level about what the company's mission is.

Akshay Sharma: Sharecare is a digital health platform that helps people manage all their health data in one place, which is usually on the app. So, if you're familiar, healthcare is extremely fragmented, which means that your data could be in many different places and getting it to a single point of view is very difficult. And it's not just the data that is siloed; even the care is siloed. So, what Sharecare is doing is taking this fragmented version of the healthcare ecosystem and trying to unify all of this into a single point of view by working with all the relevant stakeholders. By unifying and aggregating this data from multiple sources, we can create a personalized health and wellbeing experience for the person. That's really what Sharecare is all about: it's a digital health and wellness platform.

VN: It seems like your goal is to be a one stop shop for healthcare, helping people manage their care from a single platform. Just so we can visualize that as consumers, how do your members typically engage with your product?

AS: We have what we call a B2B2P model. Basically, that means we work business to business: we work with employers, we work with health plans, we work with providers, we work with pretty much all the stakeholders that encompass what we call healthcare, and then we also work directly with the people who work for those companies. So, if you're an employee of a company, we would love to work with you as part of the employee onboarding. 

One way we work with people is through employers offering certain wellbeing and wellness platforms from Sharecare; employers join Sharecare and then we can onboard all their employees. So, that's a primary means of how we work with people, but then there's also a B2C component. Anyone can actually download the Sharecare app and start using certain free services that we already offer. What's important, though, is that we want to continue to help with your journey past either the employer, or the insurance company, that you're with. Think of Sharecare as your continuity of health across the different companies and insurance companies that you may be using, transitioning over time.

VN: You mentioned data in your previous answer, so where do you get your healthcare data?

AS: The primary source of data is, obviously, working with payers to get claims data. When you think about our primary offering, it's trying to understand what has happened to you in the past, a lot of which comes from understanding what's in the claims data. Then we're also able to ingest benefits and eligibility data that's coming from different employers. As part of your onboarding from an employer perspective, we are integrating with different insurance companies to get claims data, we are integrating with employers directly to get benefits and eligibility data, and then we make sense of it. So, that's one side of all the different integrations. Then there's also the marketplace integrations: there could be multiple point solutions offering care for different services, so there could be a wellness or mental health service that we can integrate with. We are able to integrate with third party partners and vendors, and also integrate data that comes from those services.

For context, I joined Sharecare through the acquisition of a company called Doc.ai, where I was the CTO, and we built a lot of interesting AI technologies that could collect data directly from the person using their mobile phone. We are also now able to ingest interesting types of health signals or data coming directly from mobile phones, which gives us the holistic picture to build a longitudinal view of health. So, there are multiple data integrations, and we are trying to bring all of that together and make sense of it, obviously keeping the person in the loop and getting the right opt-ins to do that.

VN: When you say you're getting health signals from mobile phones, are those apps that people have? Is it a Fitbit or something like that, where the data is going to their phone? What kind of data are you actually getting from mobile phones?

AS: Some background on this: I've been in this space for about 10 years, trying to build different technologies and companies, and one of the things that I saw about eight years ago was a rising desire to aggregate health data from different places into one single point of view, from the consumer perspective. That includes being able to ingest across different devices. Obviously, there are a lot of devices, like Fitbit, which you mentioned, but then there are also aggregators, like Human API or Validic, which allow you to integrate a whole range of other wellness devices. Being able to do that, and mediate all of it through the consumer on the mobile phone, is the first part. But then we're also seeing medical records being another important aspect of facilitating that data ingestion for consumers, for example Apple Health. I was part of another company called Human API where we built some of these data exchange networks, again, putting the consumer at the central point of view to facilitate this data exchange. Those are typically what I call 'API-based' or traditional integration data.

What I've noticed in my journey in healthcare is that those are discrete data points. What happened to you at a place of care is pretty much revealed as the sum of things that happened to you before that. But in order to really understand what your health as a continuous function looks like, we need to bring in new types of health signals that we can capture through mobile phones. What I mean by that is, we're now able to understand your health by looking at your phenotypic data. How do we extract information by trying to understand how you feel that day? What do your facial expressions tell us about certain health signals? We are running some clinical trials where we can look at your video data over a period of time and understand the risks for certain diseases. But we can also use your location information to understand your socioeconomic risks for health, and so on and so forth. So, there are new, interesting, continuous signals that we can capture, again, using the right opt-in, including building privacy-preserving technology so we can give confidence to the consumer that we are, in fact, keeping the data safe.

So, those are the additional types of data that we can actually capture on the phone, and we are trying to use that to understand and build a comprehensive profile of health so that we can understand your journey from the past to the future.

VN: You mentioned privacy when you were talking about facial expressions and all that kind of stuff; how do you make sure that the data is private? I would think that this is something that people would have to opt into, so how willing are people to give up that data?

AS: Let me answer your second question first: absolutely, all of this is through an opt-in. None of this happens without the right set of disclosures, the right set of onboarding for the user and, on a per usage basis, being able to clearly articulate what is being captured, what is being inferred, and then actually what is being sent back to a cloud service. 

Now going back to your first question: how do we think about building this in a privacy-preserving way? I'm using that phrase because it means something technically, but I can break it up for you. First of all, back in the day, it was pretty difficult for us to do something on the phone while leaving the data on the phone. So, in order for us to understand your health, we needed to collect the data on the phone, which is in itself a pretty difficult task, if you ask me. Now, with phones becoming better, and with 80% of the country having a smartphone, we can opt people in and bring them to healthcare, which is fantastic. We are able to collect this type of rich data that I talked about before, get the right opt-in, and send it to a cloud environment, where we can infer what's in the data. For example, we can take your selfie, and from the selfie we can infer certain phenotypic characteristics, and if we collect enough of those, we can actually build a profile for you where we can help you monitor certain conditions better. That's the traditional way of looking at how AI can help, and it requires you to send the data to a cloud environment.

Where privacy kicks in is in two ways: one is, obviously, getting the right opt-in and giving users all the tools to control what's on a cloud server, and to request for it to be deleted. Those are, I would say, table stakes. The second way is something in which we have invested a lot of time coming from Doc.ai, and that's building what we call edge computing technology. Edge computing technology is a set of tools and technologies that run on the phone, where the data that's being collected, such as your selfie or a video, stays on the device, so it's only with you on the phone. Then we run a machine learning model, trained on the cloud, that does the prediction on the data captured on the device. What comes out of the prediction is a set of scores or predictions that we then ask the user to agree or disagree with, because it's very important that AI should not be completely seamless. There has to be some opt-in, and the user has to participate in understanding what the outcome is.

At the end of the day, the privacy-preserving edge computing technology leaves the most critical and protected health information on the device, but sends back the outcomes, which could be, "Hey, we think that this was a pretty good score, do you agree or do you disagree?" If you disagree you're allowed to correct it, and the corrected values are sent back to the server. So, by not even bringing the critical data to the server, just the inference, we're able to go one step further in making sure that sensitive data stays with the user.
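
To make that flow concrete, here is a minimal sketch of the pattern Sharma describes. The function names, score fields, and payload shape are invented for illustration; neither Doc.ai nor Sharecare has published its implementation. The key property is that the raw image never leaves the device, while only the inference and any user correction are transmitted:

```python
# Hypothetical sketch of the edge-computing flow described above; not
# Sharecare's or Doc.ai's actual code. Raw data stays on the device and
# only the inference (plus any user correction) is sent back.

import json
from typing import Optional

def run_on_device_model(image_bytes: bytes) -> dict:
    """Stand-in for a cloud-trained model executed locally (in practice
    via a mobile runtime such as TensorFlow Lite or Core ML)."""
    # ... inference happens here, entirely on the phone ...
    return {"condition_risk_score": 0.82}

def submit_inference(scores: dict, user_agrees: bool,
                     corrections: Optional[dict] = None) -> None:
    """Send only the inference outcome back; the image stays local.
    A real app would POST this payload to its API."""
    payload = {
        "scores": corrections if (corrections and not user_agrees) else scores,
        "user_confirmed": user_agrees,
    }
    print("would send to server:", json.dumps(payload))

selfie = b"\x89PNG..."                  # raw image bytes, never uploaded
scores = run_on_device_model(selfie)    # prediction happens on-device
# The app shows the scores and asks the user: agree or disagree?
submit_inference(scores, user_agrees=True)
```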

VN: Is this commercialized at this point, because I have to say I haven't actually seen anybody ask me to opt-in to something like this yet. Is this actually something that is being done or is this something that's going to be done in the future?

AS: It is already live: if you download the Doc.ai app, we are doing something like this where we can capture your food information, for example. You can take a video log of what you eat, and the actual pictures stay on the device; only the inferences are sent back. We already have technologies that can capture your prescription bottle information; if you have multiple prescriptions, we can look at an image of your prescription and infer what's in it. So, these are all commercialized products.

We are running some clinical trials as part of Doc.ai's Omix platform, where we are studying a group of patients who have myasthenia gravis, which is a neuromuscular disorder that leads to weakness of the facial muscles. We are capturing video information on a daily basis from a set of patients and, in this case, they're actually opting in to share the video journal back with Doc.ai. The goal there is to understand the risks and symptoms and see if we can build an AI model to predict the flares. If we can do that, then we can bring the model back into a clinical setting, back to the edge. So, all of what I've mentioned is already commercially available and running; some of it is in a hybrid state, where we have to take the data to the cloud, especially for the clinical trials, but eventually what comes out of that is an edge model. 

Another example: we studied about 2,000 participants in an IRB-approved study for allergies. The goal was to see, based on how active you are, where you are, and certain other information about the geography you live in, whether we can predict your risks for certain allergies over the next few days. This was a study conducted over a year, and the whole study was done on the phone. Participants opted in to share this data and, once that was done, we were able to build a model with a very high level of accuracy in predicting the risks; we've actually now published a paper on it. Now we've been able to convert that model to an edge model, so we are working to figure out how to bring it back to actual users, but all of what I'm talking about here is pretty much ready to go and possible today.

VN: You had mentioned that you came over from Doc.ai, where you were CTO. Doc.ai, for people that don't know, was a company that built medical dialogue systems and conversational AI for personalized healthcare. So, talk to me about how your role and your priorities have changed since coming over to Sharecare. 

AS: My priorities, actually, are the same as what we were doing. As part of the acquisition, we become part of a bigger company, with the bigger ambitions and goals that Sharecare has. So, on one level, as the CTO of Doc.ai, my goal was to build teams, technologies, and products, and to serve our end users by building AI and privacy-preserving technologies, so we could build many models that help users with either engagement or clinical validation. Now, with Sharecare, we will continue to invest in those tools and technologies; in fact, our platform on which we do all of this, called Omix, is an integral part of what Sharecare is going to offer.

Then, we also look at what Sharecare has: an enormous number of customers and a huge amount of data, and we are trying to bring the concepts of AI that we've proven at Doc.ai into the bigger picture of what Sharecare can offer. So, imagine going from a small company in Palo Alto to a big company with more than 2,500 people and many, many customers. It's about taking the ideas that we executed well at Doc.ai and scaling them to all our customers and their users, because what we need right now with health is to go one step beyond just data aggregation and data understanding. We need to get to the point where you understand the data and then you're able to bring what AI can offer: how do you nudge people in the right direction? How do you map their journey so that you can actually plan their future better? At Sharecare, we have the mass and the data to do that. We are pretty excited to do that right now. 

VN: This acquisition and this transition from Doc.ai to Sharecare happened, I believe, in January of this year, which was at the height of the pandemic. What was it like making that change at that moment, considering what was going on with the healthcare system?

AS: It's funny you ask about the pandemic because, while it's unfortunate that it happened, it was a moment of truth for digital health companies. What we saw was a rise in the need for, and adoption of, digital health companies; if you look at about a year's worth of data, first of all, care became virtual. With hospitals pretty much swamped dealing with COVID patients, they did not have the capacity to take on seriously or critically ill patients with other conditions, and so care had to move to a telehealth offering. That was one of the best things that probably happened for the country, because the fact that we can provide care from anywhere in the country is an enormous benefit.

What that meant, both for Doc.ai and Sharecare, is that it accelerated the growth of several of our technologies. It was not just care that went virtual: we are seeing clinical trials going virtual. Omix, where we built all these studies that I talked about, is a virtual, remote-first clinical trials platform, where you can recruit patients from anywhere in the country. There are certain rare diseases for which you will not be able to find a concentrated set of people in a single city, where they would have to go to a single site for data collection. Now, with these tools and technologies, including the privacy-preserving technologies, we can reach patients anywhere. So, what Sharecare saw is an acceleration of many of our technologies, with clinical trials being one, and telehealth and mental health being others. We saw a rising need for mental health services because a lot of people were at home and not getting care on time, so how do you address that? All these technologies that Doc.ai had built got significant adoption, and Sharecare saw the need for augmenting their current offering with what we brought to the table.

Sharecare also saw an increase in the adoption of their own digital health tools; employers and employees need tools at home to manage their stress levels, health, and wellbeing. Those tools have become very important and critical because of the pandemic. In fact, in some ways, the pandemic has caused this transition, and we're hoping that many more digital tools become available for everyone in the country, so that they can access care on demand, without being restricted by where they live or how long they have to wait.

VN: Obviously your expertise is in AI. We've talked about that a bunch already in this conversation, but that's a term that seems to be thrown out there quite a bit. It might actually be losing its meaning or it might take on different meanings, depending on who you're talking to. So, for our listeners, how do you define AI? What exactly do you mean when you say artificial intelligence?

AS: It means many different things, and it means something different when we talk about healthcare. 

Let's go back to a statement I made earlier, which is that healthcare is fragmented. We talk about how you get care in different places, you get your prescription from a different place, you eat several different types of food; there are discrete data points and there are continuous data points, all of which have an effect on understanding your health. So, the first order of the problem is, how do you even collect these different types of data? AI is instrumental in helping or facilitating even the data collection side. We talked about how you can get your health records using your phone; you can get your pharmacy prescriptions and lab records, all of which can be mediated through APIs and several other types of integrations, putting consumers in charge on their phone to do that. But then, how can we think about other types of health signals, like your phenotypic data and genetic data? We can import that and understand what's there, while keeping your data on the device. So, AI can facilitate or accelerate interesting types of data collection.

So, I would define AI, first, on the data collection side of things. Can we use smarter, better ways to import data? One interesting thing that I've observed is that most people drop off if you throw a username and password at them to pull data from a different place, when they actually have a prescription bottle right in front of them, so they know what they're taking. So, if you want to integrate, you have to make it much, much easier for users to pull their data into one single place, and that's where I see AI, because we can use your phone's camera, your GPS location, and a bunch of other types of information that your phone offers to extract certain information, and bring the user in on that transaction. So, AI actually helps with that; there, it's about smarter data collection. 
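
As an illustration of what that smarter data collection can look like, here is a hedged sketch of camera-based capture using the open-source pytesseract OCR library. The regular expressions and field names are invented; a production system would use a trained vision model and a drug vocabulary such as RxNorm rather than these toy patterns:

```python
# Illustrative only: OCR a photo of a prescription bottle instead of
# asking the user for a patient-portal username and password.

import re
import pytesseract           # open-source OCR wrapper (needs tesseract installed)
from PIL import Image

def extract_prescription(image_path: str) -> dict:
    """Pull rough dose/refill fields out of a bottle photo."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # Toy patterns; real systems match against a drug-name vocabulary.
    dose = re.search(r"\d+\s?(?:mg|mcg|ml)", text, re.IGNORECASE)
    refills = re.search(r"refills?\s*:?\s*(\d+)", text, re.IGNORECASE)
    return {
        "raw_text": text,
        "dose": dose.group(0) if dose else None,
        "refills": refills.group(1) if refills else None,
    }

print(extract_prescription("bottle.jpg"))
```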

The second part of AI is, now you have multivariate types of data, what I call "multi-omics" or "polyomics" data. You're getting genetics, you're getting phenotypic data, you're getting metabolomic data, and so on and so forth. How do you make sense of this? This is a hard problem in healthcare, because there is not enough data, so you are dealing with the world of sparse datasets. In the world of machine learning, AI, and statistics, there are certain tools at hand to deal with that, but where we can get creative is in bringing in other types of data. There is corresponding data in the public domain that has some correlation value. There are also other ways where you actually engage users to make sure that what you're trying to predict can be reinforced by their feedback. So, the feedback loop is very important. To make sense of what's in this humongous amount of data, which is still sparse, we really need machines to help us. That's a domain where we need machine learning and AI to facilitate data understanding, going from data understanding to building models, and from building models to putting it back as a feedback loop to the user. So, basically, going from data collection to data understanding to some output, like a model, and then using that to get feedback from the users or the right stakeholders: that's how I see AI.

AI is more than a tool; it's a culture. If you don't think about data first, if you're not data driven in healthcare, then, because there are so many silos, you're basically doing a lossy transformation every time you go from one silo to another. Instead, if you're able to bring everything together in one single point of view, bringing all the different stakeholders that matter for healthcare (your clinician, your physician, your actuarial person, your patient, your data scientist, and your data engineer) into one single point of view, the data point of view, then you're able to build something that's much better for the user. We call it data fluency, so that's really what AI is about from my point of view.

VN: Sharecare uses data fluency and federated AI, and I was wondering if in layman's terms you can explain to me exactly what those mean and how they differ from typical analytics and intelligence that most companies use.

AS: I'll split them into two parts. First, data fluency: I talked about data being fragmented and siloed, so what does that mean? Even data creation, where data is acquired on an individual, is isolated across the different places you can procure data from. I'm talking about a traditional health system. So, your doctors don't necessarily have your Fitbit data, or the data you're generating about what food you eat or where you've been and the exposures you may have had. That information is fragmented; it's in different places. So, that's one problem. The second problem is that when you actually bring all of this together somehow, you still have different stakeholders that need to look at this data in one single point of view. What a clinician looks for is very different from what a data scientist looks for. There is a domain expert, and then there is somebody who understands the idea of data better. Then you also have engineers working on transforming data from one place to another. Healthcare being siloed is one thing, but this also creates, at a micro level, many different silos. So, you have different systems and you have to jump through hoops to understand what's in the data. And while you jump through those hoops, because inherent assumptions have been made, there is some level of loss in those transformations.

A classic example of this: your data engineers ingest data and put it in a database; then a data scientist takes a copy of the data and, because there are too many fields, may accidentally miss certain things; they take a subset of it and try to build a model, the report of which is then exported into a Word or Excel file that's reviewed by a clinician. All the system boundaries that you're crossing are usually one-way functions; it's very hard to go back. What ends up happening is the classic problem with healthcare: you don't generate enough signal, and you don't have any feedback loop between the different stakeholders. So, data fluency is the idea of getting rid of all of these silos and instead investing in tools and technologies that help us bring any type of health data into one single point of view. Let's not take data out and put it into other subsystems, where people look at charts and reports from a separate reporting system, or export it into an Excel file and look at it in a different way; those are the old ways of doing things and they lack feedback. Instead, if you keep the data in one single place and build Jupyter-based notebooks (we use a lot of Python technologies, and notebooks let you look at the data and intrinsically build different views on it), then all the stakeholders are looking at different types of views: the clinician looks at it more like probabilities of what's happening across patients, a data scientist looks at the machine learning model, an actuarial person looks at certain other functions, but all of them are actually looking at the same data, just presented in slightly different views. What they do on these systems immediately updates the underlying code that's generating all of this. That rapid ability to collaborate across different stakeholders is what data fluency is. Doc.ai and Sharecare have invested significantly in making all our data pipelines support this data fluency.
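
As a toy illustration of that notebook-centric workflow (the data, column names, and views here are invented, not Sharecare's schema), a single shared dataset can serve several stakeholders at once without ever being exported to a one-way Word or Excel silo:

```python
# One shared dataset, multiple stakeholder views in the same notebook.
# All data and field names are fabricated for illustration.

import pandas as pd

claims = pd.DataFrame({
    "member_id":     [1, 1, 2, 3, 3, 3],
    "condition":     ["diabetes", "diabetes", "asthma",
                      "diabetes", "asthma", "asthma"],
    "cost_usd":      [420.0, 180.0, 95.0, 610.0, 70.0, 130.0],
    "steps_per_day": [3200, 3400, 8100, 2500, 2600, 2400],
})

# Clinician's view: how many members have each condition?
prevalence = claims.groupby("condition")["member_id"].nunique()

# Actuary's view: cost distribution per condition.
cost_view = claims.groupby("condition")["cost_usd"].agg(["sum", "mean"])

# Data scientist's view: a per-member feature matrix for modeling.
features = claims.groupby("member_id").agg(
    total_cost=("cost_usd", "sum"),
    avg_steps=("steps_per_day", "mean"),
)

print(prevalence, cost_view, features, sep="\n\n")
```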

The second thing that you mentioned is federated AI, and this goes back to the point I was making earlier about privacy-preserving technologies. One thing you want is to preserve and secure the sensitive data the user has: your genetic data, for example, or your facial data or videos. You want to leave that on the device, which makes a lot of sense because that's the most private information a person has. But you still want to let the person participate in the larger context of healthcare, which means understanding that person's individual health, or using the data to look across other users around the country and around the world, to establish baselines and help them improve their health. In order to do that, we still need to learn about these different types of data in several different ways. What federated AI, or federated learning in this case, helps us do is leave the data where it is, on the phone; we have a baseline model that we have trained on the server using other types of data, but now we can bring that model onto the device, not just to do a prediction on the collected data, but also to improve that model on each device and aggregate the model weights, not the data, back to the server. So, if you have enough people using this, we're continuously learning a model that is a better representation of health. Of course, there are many challenges: we still have to solve problems around reversibility of the model, and there's a whole range of investment we have made in our research to understand how to keep the model safe. So, that's what I would call federated AI. 

That idea is also extensible beyond just phones. I mentioned data fluency, which is bringing all the different types of data into one single place and having all the key stakeholders collaborate. Now, what if two different organizations are trying to do this and want to collaborate? Let's say you are a provider with a lot of medical records data, but you want to collaborate with a lab records company, and there may be organizational issues with exchanging that data. We could use federated AI to learn across the two different systems, on different types of data, for either the same users or sometimes different users, without actually having to move the data to a central location. This is a very powerful concept that lets different organizations collaborate when they have a common interest, while keeping the data secure and private without compromising any individuals.
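
Here is a toy sketch of the federated-averaging idea Sharma describes. It follows the generic FedAvg recipe from the research literature, not Doc.ai's proprietary stack, and the devices and data are simulated: each "phone" improves a shared model on its own private data and returns only weights, which the server averages:

```python
# Toy federated averaging: the server sees only model weights, never data.
# Devices, data, and the logistic-regression model are all simulated.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device: a few steps of logistic-regression SGD on local data.
    The raw (X, y) never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

rng = np.random.default_rng(0)
global_w = np.zeros(3)

for _round in range(10):
    device_weights = []
    for _ in range(5):  # five simulated phones with private data
        X = rng.normal(size=(20, 3))
        y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
        device_weights.append(local_update(global_w, X, y))
    # The server aggregates weights only; the data stayed on each device.
    global_w = np.mean(device_weights, axis=0)

print("learned weights:", global_w)
```

Real deployments add pieces this sketch omits, such as secure aggregation and defenses against model inversion, which is the "reversibility" problem Sharma mentions.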

VN: Using those tools that you just talked about, what are some of the insights that you've been able to glean from your data?

AS: We have a tool called Genewall, which lets you bring in your genetic data and keep it on the device. None of that data is sent back to Sharecare, for example. You can get yourself sequenced by 23andMe or Ancestry or any clinical-grade sequencer; if you have access to the raw data, we let you put it on the phone, and it stays on the phone. Now, we have certain genetic risk models that we can run, and those outputs are shown directly on the device. Think of what's possible now: if a certain set of users are interested, we can have them opt in and we can look at a population level and see what the risks look like for certain conditions. Again, these are models; they have to be clinically validated, and we make sure that there is enough rigor when you're dealing with different conditions and genetics, but we can do this now while keeping the data on the device for the user. So, we're learning interesting things from that. 

The allergy model that I talked about is another thing I'm pretty excited to learn from. What we built was an allergy model that had an accuracy of more than 80%; can we now take that model, put it on the device, let users opt in, and continue to train and personalize that model for the individual? And, by doing so, can we see if the accuracy of the model goes up? That's the next experiment we want to attempt. We've seen evidence of this kind of thing beyond Sharecare. For example, people are looking at MRI data from one hospital, bringing in similar types of data from another hospital, and seeing if they can build an MRI-based detection model for certain things without actually having to exchange data. We are seeing this work practically, even in healthcare. It has already been used by several non-healthcare companies to improve certain things, especially the keyboards on your phone. The fact that the keyboard learns, over time, certain common words that you use, which may or may not be in the English dictionary, so it can autocomplete: that's a great example of how federated learning is being used. None of those words are actually sent to any cloud provider; the model learns on the device and, overall, as a population, it can improve based on how people use it in a given geography.

VN: We've been learning about and studying so many different healthcare companies and I'm trying to position Sharecare in the ecosystem. It seems like a horizontal platform, aggregating data and also providing data intelligence to many constituents. It sounds a bit like Komodo Health, if you're familiar, which is trying to create a healthcare fingerprint with longitudinal data. I think you're also adding mobile, which might be a little bit different, but do you see any overlap or do you see them having the same mission?

AS: Unfortunately, I don't know Komodo Health so I can't speak to the specifics there, but your description is right: it's a digital health hub where, irrespective of where you are in your care journey, we want to bring you to the platform, either through web or mobile. We are able to track your different episodic events, either through claims or benefits or eligibility data, or the EMR at a hospital you go to. Then we can also add in the layer of these continuous data points, using the different opt-in mechanisms and AI technologies that Doc.ai has built, to let you build a more continuous profile. Then, using different AI strategies, we will be able to understand your journey better. What does your future look like? Can we help you better with that? And then, how do you use that? If we know that we need to intervene, how do we get you to advocates or care managers sooner? So, that's really what Sharecare can do. It is, to some extent, what you mentioned, but it's also more than that, because we can actually also get you care on time, based on where you are.

VN: Amongst all the constituents, who would you say is your target customer?

AS: There are many, many different audiences that we sell to, the biggest one being our enterprise line of business, which is mostly about different payers and health plans that want to offer digital health solutions to employers. We work directly with the employers, or through their health plan. That, I would say, is our biggest source of customers. What we do there is onboard employers and offer digital health and wellness solutions for their employees to use.

There's also a provider solution, where we work with providers directly, especially in the value-based care setting that the industry is slowly shifting towards. There are more than 6,000 individual providers and maybe about 75 health plans using the system, where we provide enterprise tools that bring in the right type of data, so they can understand and figure out what they need to do next. 

We also have a solutions group where we work with pharma companies to understand and position how they can recruit better, based on certain conditions; this is a separate line of business. The last thing I'll mention is that we also have a community wellbeing index team that offers social determinants of health and community wellbeing data, which can be used by employers, or anyone interested in improving a certain community's health over time.

VN: Healthcare is 20% of GDP, and a lot of the way we can reduce some of those costs is by taking a look at chronic conditions, which account for 80 to 90% of the cost. How do you see AI reducing overall healthcare costs? 

AS: You're right, in the sense that, I think, 5% of patients account for 50% or more of the expenses that an employer actually has to face. So, being able to intervene sooner is the most important thing you could do, and part of the idea there is: can we do that by keeping users engaged on a platform and using different types of data? Can you predict if somebody's risk suddenly changes, say, from a risk stratification perspective, if it changes quickly over the next few months? And, if so, can we intervene sooner? That's the primary thing that AI and data can help with.

For a lot of patients, to be honest, they know that their journey is difficult but, at the same time, if they can get care on time, they can manage it better. Sometimes what happens is that they either don't have the right framework, or they don't have the right help, to go get care, and then, eventually, it becomes a significant risk for their health. The advocacy solutions that we have at Sharecare are able to figure that out sooner, working with both the patient directly and our clinical advocates, to help them understand it sooner so they can get the right care at the right time, before the risks become significant. Where AI helps is that, by looking at data coming from many different people, as well as the continuous data coming from users with the right opt-ins, as well as certain other public data sets that we can crunch different types of models on, we can look at these baselines over time and see who is standard deviations away. Can we help and nudge them to go get care or talk to the right person sooner? That's really what AI can do, and I actually believe that's possible. Now, it's not possible with AI by itself; you absolutely need to pair the user with another person, either a clinical advocate or a care nurse, behind the scenes, to help them and facilitate getting care on time. AI is about the tools that bring the two actors together sooner so that this happens.
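
As a minimal illustration of that "standard deviations away from baseline" idea (the signal, threshold, and data here are all invented), each member is compared against their own history, and outliers are routed to a human advocate:

```python
# Toy risk-drift detector: flag members whose latest signal drifts far
# from their own baseline so a human advocate can reach out. The data,
# signal (daily steps), and threshold k are illustrative only.

import numpy as np

def flag_outlier(history: np.ndarray, recent: float, k: float = 2.0) -> bool:
    """True if `recent` is more than k standard deviations from the
    member's own historical baseline."""
    mu, sigma = history.mean(), history.std()
    return bool(sigma > 0 and abs(recent - mu) > k * sigma)

members = {
    "A": (np.array([7200, 6900, 7500, 7100]), 7000),  # steady pattern
    "B": (np.array([8000, 8200, 7900, 8100]), 2300),  # sharp drop
}
for member, (history, recent) in members.items():
    if flag_outlier(history, recent):
        print(f"member {member}: route to a care advocate")
```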

VN: Where does the doctor fall into all of this? How are you affecting them? Does your AI make their job easier? Are you allowing them to focus on higher risk patients?

AS: That's definitely one case. The way to look at it is, even before it gets to a doctor, care nurses get involved and help steer people to the right place sooner. One of the things that happens, traditionally, if you use different systems without all the different types of data, is that many people get listed as high risk, but following up with them, historically, hasn't worked. The reason is that they go back to the doctor having been told they're high risk, and they don't see any benefit come out of it. So, how do you find the key people that need to be addressed sooner? The way I see that happening is that we have advocates and care nurses who can intervene sooner and triage people better in the system, because getting a doctor's appointment directly has historically been a challenge. You don't usually have access to other people who can help you get there sooner, but with these digital tools, as well as people tools, we can actually triage people better in the system. 

The second way, obviously, is passing along all of this context. What the tools and AI have identified, what the care manager understands, what the user has understood: packaging all of that and passing on that context to the right point of care is the second way we see doctors using this to make better decisions.

VN: It's tough to talk about anybody in the healthcare system at this point and not talk about COVID. I know we talked about it a little earlier in this conversation, where you mentioned the move to digital, and the effect that that's had. What is that going to look like going forward? I think that people aren't really sure about whether or not patients are going to continue to use telehealth going forward; are they used to it now, or are they going to go back to the way it used to be? What is healthcare going to look like going forward now that COVID has changed so many things, or do you think it will revert back a little bit to the way it was before?

AS: It's going to be a hybrid; it's going to be in between. What we understand about healthcare is that the care element has to do with the human. As much as I can understand from a tool or a technology what I need to do, I really want to talk to a person, so that's very important. The human touch cannot be replaced. Telemedicine, while it has accelerated access, has diminished a little bit of what the human touch felt like. So, telemedicine probably won't go away, because the biggest benefit I see is being able to get someone on hand quickly. But then you need to go from there to get access to a doctor in person, and to get that sooner rather than having to wait, especially with certain chronic conditions, where the wait times can be significant. Being able to get triaged sooner is what I see telemedicine and care advocacy platforms offering. So, it's going to be in between. 

With issues like mental health and others, there is really very limited access for people, and that can be facilitated from anywhere. The initial triage can always be virtual, but then immediately being able to take that to an in-person consultation is what the future of care will look like.

VN: So, considering that world that you're talking about, that hybrid world, where does Sharecare fit into that?

AS: Sharecare is absolutely playing in both of those worlds. We already have virtual tools and platforms, including the clinical trials that Bambi was referring to earlier, where we can run clinical trials on our platforms. There are tools for clinical trials, tools for onboarding employees and bringing all of their data together from different places, and tools for continuing to help them manage that data. But then, behind the scenes, we already have advocates and care nurses who help triage high-risk patients or any employee who has questions. We can bring in people to address and answer that: you can do telehealth consultations, you can do phone calls, but they can also help you triage and get your appointment scheduled sooner and quicker, because they know the real risks if that doesn't happen. So, Sharecare was actually designed to operate in this world already.

