AI, medicine, and Big Tech (Oh my!)
In our last newsletter I talked about horizon scanning and the rising need for tech that fits the new paradigm of aging in a digital world. This week I want to talk about AI, medicine, and Big Tech.

First, a quick review. Task-specific AI refers to models trained on specialized datasets containing a limited type of data to accomplish a specific task, for example, screening medical images for tumors or predicting patient outcomes from hospital intake data. These models have been on the rise in medicine for the last few years, but thanks to the acceleration of technical development over the last three years, this flavor of model is now considered “traditional” AI, and foundation models have taken the medical arena by storm. Foundation models are usually “multi-modal”, meaning they can take in different types of data - images, text, speech, sensor readings, etc. - making them more flexible than task-specific AI. This flexibility is especially important in medicine, because medicine is inherently multi-modal: medical professionals rely on patient reporting, visual assessment, quantified measurement, and imaging, to list just a few of the data types used in patient diagnostics. Foundation models are currently being developed and refined to assist with reporting and lower the documentation burden on medical professionals, to improve health literacy and translation for better doctor-patient communication, to sharpen diagnostic accuracy, and beyond. However, you still need data streams relevant to your desired application to train these models.
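For the more technically curious, here is a toy sketch of the basic idea behind “multi-modal”: one encoder per data type, all projecting into a shared representation that a single prediction head can use. Everything here, the layer sizes, the bag-of-words text input, the two-class output, is invented for illustration; real foundation models use far larger transformer architectures, and this is not any particular company’s system.

```python
# Toy sketch (PyTorch): how a multi-modal model might fuse different
# data types into one shared representation. Illustrative only; all
# dimensions and input formats below are hypothetical.
import torch
import torch.nn as nn

class ToyMultiModalModel(nn.Module):
    def __init__(self, embed_dim: int = 128, num_outputs: int = 2):
        super().__init__()
        # One encoder per modality, each mapping raw input into a
        # shared embedding space of size embed_dim.
        self.image_encoder = nn.Sequential(
            nn.Flatten(),                     # e.g. a 64x64 grayscale scan
            nn.Linear(64 * 64, embed_dim),
            nn.ReLU(),
        )
        self.text_encoder = nn.Sequential(    # e.g. a bag-of-words vector
            nn.Linear(5000, embed_dim),
            nn.ReLU(),
        )
        self.sensor_encoder = nn.Sequential(  # e.g. 16 vitals readings
            nn.Linear(16, embed_dim),
            nn.ReLU(),
        )
        # Fusion head: concatenate the per-modality embeddings and
        # predict an outcome (e.g. a diagnostic label).
        self.head = nn.Linear(3 * embed_dim, num_outputs)

    def forward(self, image, text, sensors):
        fused = torch.cat(
            [self.image_encoder(image),
             self.text_encoder(text),
             self.sensor_encoder(sensors)],
            dim=-1,
        )
        return self.head(fused)

# A task-specific model, by contrast, handles exactly one input type:
task_specific = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 2),  # images in, tumor / no-tumor out
)

model = ToyMultiModalModel()
batch = (torch.randn(4, 1, 64, 64),  # 4 fake scans
         torch.randn(4, 5000),       # 4 fake clinical-note vectors
         torch.randn(4, 16))         # 4 fake sensor readings
print(model(*batch).shape)           # torch.Size([4, 2])
```

The point of the sketch is the shape of the architecture, not the details: a task-specific model accepts one kind of input, while a multi-modal model can combine whatever relevant data streams you can feed it, which is exactly why acquiring those streams matters so much.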
So where does Big Tech fit in? Foundation models require a lot of data and a lot of processing power, and developing them requires skilled researchers and engineers. Hardware-centric Apple spent $29.9 billion on research and development in 2023; software-centric Google spent $45.4 billion that same year. So Big Tech has the advantage in computing resources, researchers, and funding, but not all data is equally accessible. Medical data, especially with expert context, is hard to access and slow to acquire, even for Big Tech. Medicine also demands a level of data quality that surpasses most other applications. As I’ve discussed before, Apple is now building lab- and medical-grade sensors into their devices, but their primary interest is in selling more hardware, so they rely on external companies and research partners to build on top of their devices (see: Apple Research, where you can volunteer to join health studies run by research groups partnering with Apple). Apple devices facilitate data collection and participant access, but external experts contextualize the data. Google is more software-centric - their acquisition of Fitbit, completed in 2021, is still their primary presence in medical-adjacent hardware - but they are also running similar partner studies. They have a leg up on Apple in software, but they rely on researchers to collect data through their own measurement tools, data that then feeds the training of AI models. Both of these companies rely on partnerships with research hospitals, university labs, and startups to drive data acquisition, contextualization, and development. It’s a symbiotic ecosystem 🧠
Want to know a bit more about my background and inspiration? Check out this segment from my interview on the Tech Business Podcast with Paul Essery! The full interview is here.