New Matter: Inside the Minds of SLAS Scientists

Measuring Biomolecules with Quantum Electrochemical Spectroscopy

April 10, 2023 SLAS Episode 146

This week we're exploring a new analytical technique, quantum electrochemical spectroscopy (QES), developed by Probius, an SLAS2023 Innovation AveNEW company. Joining us to explain how QES uses molecular vibrations to analyze biological specimens are Co-founder and CTO Chaitanya Gupta, Ph.D., and SVP of Marketing & BD Juan Cruz Cuevas, Ph.D.

Key Learning Points

  • What is quantum electrochemical spectroscopy? 
  • The problems/issues this technology solves 
  • The level of throughput for QES
  • What you need to know to use QES in your research
  • The future advances for QES


Innovation AveNEW
Innovation AveNEW is a specially designated area of the annual SLAS Conference and Exhibition floor that provides an opportunity for emerging and start-up companies to actively engage with exhibition attendees and purchasing decision-makers from more than 40 countries, while SLAS covers the costs of exhibition fees, travel and lodging (for one representative).

About SLAS
SLAS (Society for Laboratory Automation and Screening) is an international professional society of academic, industry and government life sciences researchers and the developers and providers of laboratory automation technology. The SLAS mission is to bring together researchers in academia, industry and government to advance life sciences discovery and technology via education, knowledge exchange and global community building.  For more information about SLAS, visit www.slas.org.

Upcoming SLAS Events:

SLAS Building Biology in 3D Symposium

  • April 16-17, 2024
  • Jupiter, FL, USA

SLAS Europe 2024 Conference and Exhibition

  • May 27-29, 2024
  • Barcelona, Spain


Hannah Rosen: Hello everyone, and welcome to New Matter, the SLAS podcast where we interview life science professionals. I'm your host, Hannah Rosen, and joining us today is Chaitanya Gupta, also going by CG to be kind to me and my terrible name pronunciations, and Juan Cuevas. They are both coming to us from Probius, which was an Innovation AveNEW company at SLAS2023, and they are joining us today to tell us all about how Probius is using quantum electrochemical spectroscopy to measure biomolecules. So, welcome to the podcast, both of you. 

Chaitanya Gupta: Thank you, happy to be here.  

Juan Cuevas: Thank you, Hannah, I'm glad to be here as well. 

Hannah Rosen: Well, we're happy to have you guys. So, to start off with, I would love it if you could both just kind of give us a brief description of your professional backgrounds and areas of expertise. Maybe CG ought to go first. 

Chaitanya Gupta: So, I'm Chaitanya, I'm Co-founder and CTO of Probius. And as Hannah mentioned, we're commercializing a spectroscopy-like platform called QES. I have a PhD in chemical engineering, with a minor in physics, from the University of Illinois at Urbana-Champaign. Completed my PhD there, came to Stanford to do a postdoc, seeded the idea behind this new sensing technology at Stanford and, fast forward five years later, decided to spin it out into a commercial enterprise because we were seeing a lot of demand for this particular sensing paradigm from our friends over at the medical school, who were just across the street. And so that's how Probius got started. 

Juan Cuevas: So, my name is Juan Cuevas. I did a bachelor's in biochemistry in Argentina, then my PhD in pharmacy in Barcelona, Spain. And then I moved to the dark side, to the companies, to the enterprise world. I've been working in the genomics world for a while with... with extinct companies like Affymetrix, then Thermo Fisher. And then I moved to proteomics, at Seer, and more or less a year ago I moved to Probius. I moved to Probius because what I like to do is to take technologies, make them a product, and help scientists move to the next wave of things, and I definitely got super impressed by what Chaitanya and the team developed with QES, and I could not say no. 

Hannah Rosen: Awesome, thank you both. So, you know, before we dive into the technical aspects of what you guys are doing, I'd love to hear a little bit about you... your experience on Innovation AveNEW and at the SLAS 2023 conference in general, how was it for you? 

Juan Cuevas: So, the CEO, Emmanuel, and I went to SLAS, and we participated in Innovation AveNEW and the Ignite Award. Overall, the experience was fantastic. I did not have very high expectations, because I had never participated in... in... in this part of SLAS. I tried to go in my previous life, but then the pandemic happened. And then, when we went with Probius, we... we went to Innovation AveNEW because it was a very good, nice first exposure to... to the world. This was the first conference for us. I have to say I was impressed by how well organized it is, how much support you have, how you are placed in the middle of the conference space so you get traffic, how you are promoted by the apps and everything that is communication-wise. And at the end, on the results: getting a lot of people at the mini booth looking at you, talking to you, asking the question, “What is this? What do you do?”, which is a great icebreaker. So, in... in numbers, we got as many people visiting us as what I experienced in my past lives with other companies, and we got many contacts that are very interested in continuing with us the exploration of how to use this QES technology. So, super happy. 

Hannah Rosen: Great! I'm so glad that we could provide a great first conference experience for Probius. That's quite an honor for us.  

Juan Cuevas: Thank you. We... note, we did not win the Ignite Award. We are not going to blame anyone for that. But it... still, it was a fantastic experience, and now I'm a proponent of... of Ignite and Innovation AveNEW. I already told a couple of friends at startups that they have to consider this when they start. 

Hannah Rosen: Great! Well, we love to hear that. And I was not part of the judging panel, so I promise I had nothing... I had no say in the results there. [laughing] 

Juan Cuevas: [Laughing] 

Hannah Rosen: Oh, great. Well, I would love for you guys... because, you know, your company is actually my first time encountering this concept of quantum electrochemical spectroscopy. So, you know, I'd love it if you could tell me and all of our listeners, you know, what exactly is QES. 

Chaitanya Gupta: I can take that. So, quantum electrochemical spectroscopy, or QES, is a new analytical technique that we developed. The... the fundamental hypothesis here is that vibrations can be a new unit of biological information. It's a unit of biological information that transcends multiple length scales. So, you have vibrations between atoms at a bond level, you have vibrations at a molecular level, and you also have mechanical vibrations in supramolecular structures like cell membranes. And so that concept of vibration can potentially describe a chemical entity at an atomic level all the way to a cellular level. And so, you have this unifying theme, or unifying construct, that is impacted by the chemical, biological, mechanical and electrical properties of the environment around it. And so, you potentially have a new way of defining biology, using vibrations as this unit construct of information. And so, the idea that we came up with is: rather than using, you know, nucleotides as the unit of genetic information, amino acids as the unit of proteomic information, metabolites as the unit of metabolomic information, what if we had one unit of information that transcended all these different windows of biology? Then you have a way of describing a sample at multiple length scales using that unified theme.  

And so, that... that was the idea behind the company: let's use vibrations as the unit of information, and let's figure out a way to measure these vibrations in a sample over a wide range of values with very high precision, using a technology that is scalable. QES is really that approach. QES is fundamentally based on a very old and somewhat obscure spectroscopy technique called inelastic electron tunneling spectroscopy (IETS). So, interestingly enough, it was invented by accident at the Ford Motor Research Labs in the 1950s. What the researchers there were doing is they were trying to study the tunneling of electrons. So, tunneling is basically a wave-like transition of electrons from one energy level to another. And to study that, they were basically taking two metal plates that were very closely spaced, and they were applying a bias, a voltage bias, between the plates. And as a consequence of the bias, electrons were pushed from one plate to another across this very narrow gap. And what they expected to see was just, you know, the normal tunneling current as a function of voltage. Instead, they ended up seeing these discretized disturbances show up in the measurement of the electronic current, and... and they couldn't figure out what was causing these additional signatures. And it turned out that some of the oil in the vacuum pump that was being used to generate the vacuum between the plates was leaking back and was sort of showing up in the gap between those two metallic plates. And what these scientists at the Ford Motor Research Labs were measuring was the vibrational spectrum of that oil, as measured by that transitioning electron. And so the electron, as it's moving across the gap between the two plates, is scattering off the vibrational modes of the oil-like species between those plates, and that scattering event is showing up as a signature in the measured current of that transducer.  

That was the genesis of the IETS method, and it basically showed that there is an electronic equivalent to traditional photon-based spectroscopy techniques. So, the idea of using photons to interrogate vibrational modes had been around for decades before that. And what these researchers showed is that you can actually use electrons to measure that same vibrational spectrum, you know, if you have an appropriately designed transducer. The problem is IETS is not very practical, and the reason is that to be able to measure that scattering between the electron and the vibrational mode as a discrete signature in the measured current, you have to get rid of all the thermal disturbances which can interrupt that exchange of energy between the electron and the vibrational mode. That means you have to cool the system down to, you know, sub-10-Kelvin temperatures, and you have to have a very high vacuum so that the gap between the two plates is highly evacuated except for the molecular species that you want to analyze. And all of that was hard to do in the, you know, late '50s and '60s, and so IETS never really took off as an analytical approach because of these limitations.  
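To get a feel for why classic IETS needs cryogenic cooling, a back-of-the-envelope comparison of thermal energy against a typical bond-vibration energy helps. This is an illustrative sketch, not figures from the episode: the constants are textbook values, and the 2900 cm⁻¹ C-H stretch is just a representative mode.

```python
# Back-of-the-envelope check of why classic IETS needs cryogenic cooling:
# vibrational steps in the tunneling current are resolvable only if thermal
# smearing (~ a few k_B*T) is small compared to the vibrational quantum.
K_B = 8.617333e-5   # Boltzmann constant, eV/K
HC = 1.239842e-4    # h*c in eV*cm, so E [eV] = HC * wavenumber [cm^-1]

def thermal_energy_ev(temp_k):
    """Thermal energy k_B*T in eV at the given temperature."""
    return K_B * temp_k

def mode_energy_ev(wavenumber_cm):
    """Energy of a vibrational mode given its wavenumber in cm^-1."""
    return HC * wavenumber_cm

ch_stretch = mode_energy_ev(2900)  # typical C-H stretch, ~0.36 eV
for t in (10, 300):
    kt = thermal_energy_ev(t)
    print(f"T = {t:3d} K: kT = {kt * 1000:6.2f} meV, "
          f"C-H stretch / kT = {ch_stretch / kt:6.1f}")
```

At 10 K the mode energy is hundreds of times larger than kT, so the inelastic step stands out; at room temperature the ratio collapses by a factor of 30, which is the disturbance QES works around by amplifying the signal instead of cooling the system.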

And so, our contribution here was in realizing that there was another way around this problem of thermal disturbances. So rather than trying to tamp down the thermal disturbances by turning the dial on a cryostat and reducing the temperature, what if we could boost the signal of the interaction between the electron and the vibrational mode? Right? So you're not trying to damp down the disturbance, you're trying to push up, or amplify, the signal. So that's an interesting workaround for this issue of thermal disturbances essentially swamping out the signal of interest: you're creating a transducer that allows you to amplify the signal, and now you can actually measure the signal despite the presence of that thermal disturbance. And so, with that approach, with an appropriately designed transducer that allows us to perform this amplification, we can now measure vibrational signals using electronic transduction. And we can do so at room temperature and in liquid environments. And... and so now we have a way of transducing these molecular vibrations as units of biological information in real biological samples, without having to resort to ultra-low temperatures and vacuum-like environments. And so, once we are able to generate a vibrational spectrum of the sample using this electronic transduction approach, we basically have a multiscale and highly discretized representation of the sample that becomes the equivalent of a digital twin, where you've collected all the information that you possibly can about the sample at multiple scales of biology. That information then sits in a, you know, cloud storage environment, and it gets compared against a set of references, like you do in spectroscopy, to determine exactly what is present in the sample. 
So the beauty of this approach is that we're not using any a priori hypothesis to define what it is we are looking for in the sample. We just collect all the data that is available in the sample at multiple scales of information, and once that data is available as a digital twin of the sample, we can then pull up the appropriate set of references to compare that sample signature against, to figure out what's... what's in the sample. 

Hannah Rosen: So, it's sort of almost like a high content screening sort of approach where you just collect everything and then decide what you want out of it, once you have all the data. 

Chaitanya Gupta: That's... that... that captures it very well. 

Hannah Rosen: Well, I mean, that's a fascinating and incredible journey for this technology. So essentially what you're saying is that everything from, you know, individual atoms to proteins to cells, and structures in the cells... everything vibrates at a different frequency, essentially, and you're able to record all of that and figure out what's vibrating where. So could you essentially then say, you know, if I had a single cell, you could tell me all the... the... the proteins that are present, down to each individual atom that makes up that cell? 

Chaitanya Gupta: So, we have over a period of time demonstrated the ability to identify atomic species, peptides, oligonucleotides, single amino acids, whole proteins, all the way to single cells using the same platform, the same hardware front end to collect the data. But what changes is the set of references that you compare the data against. So, the reference for a single cell will look very different compared to the reference for a protein or the reference for an amino acid. The other aspect of this is, because we... we rely on vibrations to characterize these species, we've also shown the ability to distinguish between molecules that are very similar to one another. So, for example, you take an amino acid and you replace a hydrogen on it with a deuterium. Because the mass of the deuterium is different, the bond between the deuterium and the rest of the amino acid is going to vibrate slower, and because it's vibrating slower, we can detect that change, right? So, we can detect very, very small and subtle changes in the molecule, because those seemingly small changes have an outsized impact on the vibrational frequency fingerprint of that particular molecule, and that allows us to distinguish between molecules that may otherwise be very similar to one another. So, the point is, I guess, we can measure species across different length scales of biology, but we can also do it with very high resolution, and that high resolution allows us to separate out, or distinguish between, two molecular species, or, you know, two atomic species or two cellular species, that are otherwise similar to one another but are differentiated in maybe one or two dimensions. 
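The hydrogen-to-deuterium example CG gives can be quantified with the standard harmonic-oscillator estimate: a bond's vibrational frequency scales as the inverse square root of its reduced mass. The sketch below uses a generic C-H stretch as the reference; the 2900 cm⁻¹ value is a typical literature figure, not a number from the episode.

```python
import math

# Harmonic-oscillator estimate of the H -> D isotope shift: nu is
# proportional to sqrt(k/mu), where mu is the reduced mass of the bond.
# Swapping H (1 amu) for D (2 amu) raises mu, so the bond vibrates slower.
def reduced_mass(m1, m2):
    """Reduced mass of a two-body oscillator, same units as the inputs."""
    return m1 * m2 / (m1 + m2)

def isotope_shift(nu_ref, mu_ref, mu_new):
    """Scale a reference frequency by sqrt(mu_ref / mu_new)."""
    return nu_ref * math.sqrt(mu_ref / mu_new)

mu_ch = reduced_mass(12.0, 1.0)   # C-H reduced mass, amu
mu_cd = reduced_mass(12.0, 2.0)   # C-D reduced mass, amu
nu_ch = 2900.0                    # typical C-H stretch, cm^-1
nu_cd = isotope_shift(nu_ch, mu_ch, mu_cd)
print(f"C-H stretch: {nu_ch:.0f} cm^-1 -> C-D stretch: {nu_cd:.0f} cm^-1")
```

The predicted C-D stretch lands near 2100 cm⁻¹, a drop of roughly 770 cm⁻¹: exactly the kind of large, easily resolved change in the vibrational fingerprint that makes otherwise near-identical molecules distinguishable.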

Hannah Rosen: Yeah, I mean, that's remarkable. I can see why you need machine learning to sift through all this data, because it must just be an unbelievable amount. I'm curious... I mean, the way that you're recording, it must be in very fine detail, because what I wonder is, is there ever a situation where the vibrations of two different structures interact in a way that mimics the vibration of something else? And, you know, with signal to noise, how are you able to filter out the noise of all the other things vibrating and interacting together, to make sure that the signal you're getting is really what you think it is? 

Chaitanya Gupta: Yeah, absolutely. The... the key is really defining the appropriate reference that you would compare the signature against, right? So, going back to the example you gave of, you know, molecule A, molecule B, and an A/B complex, right? So, what we've seen for those kinds of examples is that an A/B complex will have a vibrational spectrum that has elements of A alone, elements of B alone, and then something that is different from both A and B. And that something, that span of vibrational frequencies that is distinguished from that of A alone and that of B alone, is really what defines the interaction domain between A and B. And so, you know, let's say you have the references for A and B, and then you collect a... a signature that looks like, you know, some mix of A and B with a set of new vibrational signatures. That instantly tells you that yes, you've got some elements of A in there, you've got some elements of B in there, but then you also have this interaction that's resulting in these new sets of vibrational features that we have never seen before, and so maybe that would point you to the fact that A and B are interacting with one another. So, it's using this sort of step-by-step approach to building out the references, and the complexity of that reference database, that allows us to tease out information about individual molecular species as well as supramolecular structures that could be composed of these individual species. 
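The reference-matching logic CG describes can be illustrated with a toy peak-classifier: peaks of an A/B-complex spectrum either match reference A, match reference B, or are novel and so hint at an interaction domain. Everything here is hypothetical, the peak positions and the 5 cm⁻¹ matching tolerance are made up for illustration, and this is not Probius's actual algorithm.

```python
# Illustrative sketch of reference matching: split the peaks of an A/B-complex
# spectrum into "from A", "from B", and novel (candidate interaction) bands.
TOL = 5  # matching tolerance in cm^-1 (assumed for illustration)

def match(peak, reference):
    """True if the peak falls within TOL of any reference peak."""
    return any(abs(peak - r) <= TOL for r in reference)

def classify_peaks(sample, ref_a, ref_b):
    """Assign each sample peak to reference A, reference B, or 'novel'."""
    report = {"A": [], "B": [], "novel": []}
    for p in sample:
        if match(p, ref_a):
            report["A"].append(p)
        elif match(p, ref_b):
            report["B"].append(p)
        else:
            report["novel"].append(p)
    return report

ref_a = [1050, 1450, 2900]                   # hypothetical peaks of A alone
ref_b = [900, 1600, 3300]                    # hypothetical peaks of B alone
complex_ab = [1052, 1600, 2898, 3302, 1210]  # measured A/B complex
print(classify_peaks(complex_ab, ref_a, ref_b))
# The unmatched 1210 cm^-1 band is the kind of new feature that would flag
# an A-B interaction domain.
```

Real spectra would of course need intensity information, noise models, and far richer references, but the step-by-step "known elements plus unexplained remainder" decomposition is the idea being described.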

Hannah Rosen: That's amazing. I mean, how long did it take you to train the machine learning algorithm for sifting out all of these different vibrations? I mean, that just sounds like an almost impossible task. 

Chaitanya Gupta: No, but that... that's the key thing, right? We're... we're looking at the signature in its entirety. So, what we're trying to do is, you know, let's say I want to detect TNF alpha in serum. Right, TNF alpha is a complex molecule, and so what I'm going to develop as part of my reference data set is a baseline signature for what TNF alpha looks like. And so, to be able to do that, I'll create a set of standard additions of TNF alpha spiked into the electrolyte without any kind of sample matrix. I'll also create a set of standard additions of a bunch of off-target species, so species that are similar to but different from TNF alpha. That cohort of samples is my reference set, on which I will build a model for what TNF alpha should look like. Then I'll use that model to try and predict the concentrations of TNF alpha in a set of samples where TNF alpha has been spiked into, let's say, whole blood. So I'll... I'll prepare another set of validation samples where I spike TNF alpha into whole blood, and I'll use the model that I built using my pure electrolyte samples to try and predict how much TNF alpha is in those whole blood specimens. Because I've introduced whole blood into the mixture, I'm going to get a bunch of errors and offsets, or biases, in my prediction. So now I've developed a model, and I know what the offsets and biases are as a consequence of the introduction of the sample matrix; these two things together allow me to make an accurate prediction of TNF alpha in the actual sample in which I want to perform the assay.  

So, it's a systematic process of train, validate and test that you have to walk through for every single analyte for which you want to develop these references. Now, that said, we've been able to build out a fairly automated and, I'd say, medium-to-high-throughput version of our platform that allows us to run through the generation of these reference samples fairly quickly. As an example, we just went through a pilot recently where we stood up a biomarker panel of about 20 different biomarkers, so proteins and metabolites, all in the matter of a few weeks. And with that, you know, panel of 20-odd biomarkers, we were able to essentially do a quantitative estimation of those biomarkers in rat samples where the rats had been, you know, exposed to a drug, as an example. So, the point I guess I'm trying to make is that it's fairly quick to turn around a... a set of references and models for an analyte, given the highly automated nature of the platform, given the fact that we don't rely on chemicals and reagents, and given the fact that we don't rely on sample prep. From the user's perspective, literally all they have to do is pipette 4 microliters of the sample into the consumable, pair the consumable with the instrument, and that is it. The rest of it is all done in the back end in the machine learning environment, which is highly automated and, you know, can be parallelized. And so it can be extremely fast. 
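The spike-and-calibrate workflow just described, fitting a response model on clean standards and then learning the offset that a complex matrix introduces from spiked validation samples, can be sketched in a few lines. All concentrations and instrument responses below are made-up illustrative numbers, not Probius data, and a real pipeline would use multivariate spectra rather than a single scalar response.

```python
# Minimal sketch of calibrate-then-correct: fit on clean standards, learn the
# matrix offset from a known spike in whole blood, predict an unknown sample.
def fit_line(xs, ys):
    """Ordinary least-squares slope/intercept for a 1-D calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Step 1: standard additions in clean electrolyte (conc -> response)
conc = [0.0, 1.0, 2.0, 4.0]
signal = [0.10, 1.12, 2.08, 4.11]  # hypothetical instrument responses
slope, intercept = fit_line(conc, signal)

def predict(sig, matrix_offset=0.0):
    """Invert the calibration, optionally removing a known matrix offset."""
    return (sig - intercept - matrix_offset) / slope

# Step 2: a known spike in whole blood reveals the systematic matrix offset
blood_spike_conc, blood_spike_signal = 2.0, 2.38
matrix_offset = blood_spike_signal - (slope * blood_spike_conc + intercept)

# Step 3: corrected prediction for an unknown whole-blood measurement
print(f"uncorrected: {predict(2.9):.2f}   corrected: "
      f"{predict(2.9, matrix_offset):.2f}")
```

The corrected estimate comes out lower than the naive one because the blood matrix inflates the raw signal in this toy example; characterizing and removing that bias is the role the validation samples play in the described workflow.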

Hannah Rosen: That's amazing. And so, this is something where people can purchase the equipment, have it in their lab, and just do it from their... their own lab? They don't need to send samples off to you guys and have you process them. They could just buy it and they're... they're ready to go? 

Chaitanya Gupta: Absolutely, absolutely. And the... the instrument, I mean, if you've seen it, I'm not sure that you have, is about the size of an iPhone. I... I'd mentioned at the very beginning that the motivation behind QES was to enable a way to develop a high-resolution digital fingerprint of the sample in a scalable manner, right? So, the scaling part of it is what has enabled the, you know, the... the... the footprint of that instrument to be about the size of an iPhone, and that's something you can literally keep on your desk. You can connect it to your local Wi-Fi, or you can use an Ethernet cable to connect it to your local area network. The instrument is controlled via a user interface that is accessed through a web portal, so that web portal can be accessed on your phone. It can be accessed on a tablet. It can be accessed on a standalone computer. It doesn't have to sit right next to the instrument. So, you can literally control the instrument from your home. The only thing the user has to do is pipette 4 microliters of sample into that consumable element, pair the element with the instrument, and then they just press go and the measurement happens. 

Hannah Rosen: That's amazing. Have you thought about using this for, you know, like, diagnostic purposes? This sounds like an amazing thing to take into the field, you know, if you need to go out and do some, you know, in the field diagnostics. It just sounds like a perfect way to go about doing that. 

Juan Cuevas: I'll... I'll take that one. So, definitely, Chaitanya, Emmanuel, the founder, and I have thought about that. We have discussed that at length. Our end goal is to help practice more decentralized healthcare with this technology, and the instrument as described would be great in that setting. But there is a path we need to follow, you know, there are regulations, there are clinical trials. So there is a path we are going to walk to try to get there, and the best way to start, we think, is by starting with preclinical applications, maybe then translational, and then going into clinical and IVD. So that's... that's what we are going to work on in the next, let's say, few years. 

Hannah Rosen: I mean, that's very incredible. So, I mean it's... I... I can... I feel like I almost don't even need to ask you this next question now, because I can see it for myself. But, you know, for you specifically, what interested you initially in pursuing this idea of quantum electrochemical spectroscopy? 

Chaitanya Gupta: So, it's... it's somewhat of a long story, but I'll... I'll try and keep it short. So, I had the unfortunate experience of trying to develop a... a de novo ELISA assay for botulinum neurotoxin. This was part of a DoD-sponsored project, and it was a terrible experience from the perspective of a first-time grad student who's trying to understand how to do an ELISA assay, right? It was just fairly complex. It was highly dependent on the chemicals and reagents that I would purchase from manufacturers. It was time consuming, and I was pulling my hair out by the end of it. And so at that point I sat down and... and tried to think about why, you know, assay development would happen this way. Like, what's... what's the reason? So, it's... it's interesting. If you think about it, you know, the... the whole idea behind traditional ELISAs or immunoassays is this notion of using a probe of some kind to capture the target of interest, and then that capture event is detected using some kind of, you know, either an optical label or an electrical detection technology, or maybe even mechanical methods. But the... the central theme is this idea that we need some kind of a probe to capture the target. Or, in the case of mass spec, it would be around, you know, doing sample prep to try and get rid of everything else, leave just the target, and then look at a spectrum of that target and figure out, you know, whether that target is there or not.  

So, it's very hypothesis driven. When a researcher is presented with a sample, they have to formulate a biological hypothesis first. Based on that biological hypothesis, they will decide, OK, I will look for biomarkers X, Y and Z in these samples. And based on their choice of X, Y and Z, they will then decide the tools and the workflows that they need to examine the samples for X, Y and Z. So, it's a very hypothesis-driven, hypothesis-first approach, and it relies on this idea of using chemicals, reagents and sample prep to perform a signal amplification from the target, right? Basically, when you're doing sample prep, you're getting rid of everything that's not the target and leaving just the target of interest. That's one signal amplification mechanism. Or when you're using a probe, you capture the target, you're again capturing and then isolating the target from the rest of the sample, thereby amplifying the signal from the target of interest. You have a hypothesis-first approach that's trying to amplify the signal from the target of interest to be able to validate the biological hypothesis that you made at the very onset of the workflow. And this idea of using probes, right? The specificity of the probe comes from the... the notion of biological signal transduction. So, when signals get transmitted from one signaling pathway to another, it's via molecules that bind to proteins or other species with a very high degree of specificity. And what we've tried to do is we've co-opted that signal transduction method and applied it to biosensing.  

However, sensing itself does not work like that in biological organisms. If you think of the sense of sight, the sense of taste, the sense of smell, it... it doesn't work on this one-to-one specificity, right? The way taste or smell works is that you typically have an odor molecule that the thousand-odd receptors in your nasal canal will sense. Those thousand-odd receptors will generate a multiplexed, multi-dimensional signal of that odor molecule that will then go to your brain and the brain will reference that multidimensional signature against a set of references in the memory to figure out exactly what that odor molecule corresponds to. What smell does that odor molecule correspond to? And so, it's this idea of generating a high dimensional yet not super specific signature that can then go to the back end, namely your brain, where the brain introduces the specificity by comparing it against a set of references. So that's a very different way from the biological signal transduction approach where you're taking a probe and you're trying to capture that particular target with a high degree of specificity. And so, these were the questions in my mind. So, it was like, you know, if... if biological signaling works a certain way and we are sort of co-opting signaling for sensing, what if instead of trying to do that, we actually tried to use the mechanism of biological sensing for our biological assays, which are fundamentally a sensing paradigm, right?  

And so, that's... that's what we tried to develop: a way by which we can generate a broad-spectrum, multi-dimensional signal associated with the sample, then send that signature to some kind of back end that would compare it against a set of references, and from that comparison we would get information about what is present in the sample, very much like the sense of smell and the sense of taste. And so that was sort of the genesis of the QES approach. But to be able to get there, you need to have a unit of information that is consistent across length scales of biology, right? Otherwise, you're capturing information about a very small piece of the sample, and so if you're only looking at small molecules, then you have information about a very small subset of the sample, not the entirety of the sample. And so we needed something that would get us to a multiscale representation of the sample. And that's when we thought of vibrations, and a scalable way to measure vibrations was the IETS approach. And so we had to figure out a way to make IETS practical, scalable and usable, to be able to generate that broad-spectrum signature and then have that sensing mechanism work the same way that biological sensing works. Does that make sense? 

Hannah Rosen: Yeah, it... it does. It's very impressive. Quite an ambitious thing to set out to do, and even more impressive that you were able to accomplish it so... 

Chaitanya Gupta: And yeah, I mean, I... I... I do want to give a shout out here to DARPA and the Department of Defense, right? They were the ones who first saw the idea behind this proposal, and they funded us even though it was a crazy idea. They were willing to take the risk on a, you know, 30-something-year-old postgrad, you know, an immigrant who really had no, uh, you know, prior experience in this area, and... and they did, and the outcome of that today is Probius. So, shout out to them. 

Hannah Rosen: Yeah, that's... that's fantastic. And a great plug for why we need to be funding the crazier ideas, you know. Sometimes I feel like so often people just want to fund the ones where we're pretty much guaranteed a result. But no, this is a great example of, let's... no, let's go for the crazy idea. Well, that's fantastic. So, you know, what type of research do you think that QES is ideal for? 

Juan Cuevas: So, as Chaitanya explained, it's a pretty broad approach, where we can measure vibrations, and vibrations are the biological unit of information we gather, right? So that gives us a lot to capture. But we are going to start in a more narrow way. We are a small company, we cannot do it all, so the places we chose are three: preclinical research, biomarker discovery, and biomanufacturing. And I'll explain very briefly why each of those. For preclinical research, there is a problem we are trying to solve. In particular, for example, when you use mouse models, the sample amount you have is very limited, because to sample a mouse, obviously it's a small organism, so you cannot extract a lot of blood or a lot of sample from the mouse. So you have to make choices on what you analyze, and you have to make a biased choice, as Chaitanya explained, of which analyte you are going to analyze, for example, with an immunoassay. So, the beauty of our platform is that with only the four microliters you can get the vibrational signature and then decide where you go after, right? For example, you go after inflammation signals or neurodegeneration signals or, you know, the whole cytokine map, whatever it is, or metabolites, and so on and so forth. So, we have practiced that with models of diabetes in mice, and we have practiced that with models of liver toxicity in rats, and we have some... some... some things there to support that.  

That's preclinical. For biomarker discovery, as Chaitanya also explained, the signal we capture is very complex and high-dimensional, so it's very nice food for machine learning and AI. So, we have been using this to try to find biomarkers, and the example here is tuberculosis. We are doing a second stage of a collaboration with a Stanford group that is trying to detect tuberculosis from a non-infectious sample. The sample you normally take to detect tuberculosis is sputum, and if it is positive, it's an infectious sample, so you have to take it to a lab that protects the operator, right? Instead of that, if you do it from blood plasma, the mycobacterium is not there. So that's what we were able to achieve: we discovered a signature that correlates with the infection status of tuberculosis. And in addition, by doing an a posteriori analysis afterwards, we were also able to classify for HIV, even though we had no data at the beginning on that comorbidity. The last application area we are going to focus on at the beginning of our road map is biomanufacturing, or bioprocessing, because the problem there is to create analytical methods to characterize your process, to characterize your molecule, to characterize your protein of interest, right? A lot of these proteins are non-immunogenic, so you cannot use immune-based assays, and a lot of these assays are difficult to develop on, you know, mass spec, LC-MS, whichever technology. So we bring a very simple way of doing the standard addition that Chaitanya explained, and the ability to quantify those processes, or to detect and normalize different batches in that processing. 
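For readers unfamiliar with it, the standard-addition quantification Juan refers to is a classical analytical-chemistry method and can be sketched roughly as follows. This is a generic illustration with made-up numbers, not Probius code: you spike known amounts of the analyte into aliquots of the sample, fit the instrument response against the added concentration, and extrapolate back to recover the original concentration.

```python
# Generic standard-addition sketch (illustrative only, not Probius code).
import numpy as np

def standard_addition_concentration(added, signal):
    """Estimate the unspiked analyte concentration.

    added  -- known spiked concentrations (same units as the result)
    signal -- instrument response for each spiked aliquot
    Assumes the response is linear over the spiked range.
    """
    slope, intercept = np.polyfit(added, signal, 1)
    # The fitted line crosses zero signal at added = -C0,
    # so the original concentration is intercept / slope.
    return intercept / slope

# Example: a sample containing 2.0 units of analyte, assuming a
# perfectly linear response (signal = 3 * (C0 + added)).
added = np.array([0.0, 1.0, 2.0, 4.0])
signal = 3.0 * (2.0 + added)
print(round(standard_addition_concentration(added, signal), 3))  # prints 2.0
```

The advantage in a matrix like serum or plasma is that the calibration happens in the sample itself, so matrix effects cancel out without any separate sample preparation.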

Hannah Rosen: So yeah, it sounds like for a lot of situations where this technique may not be usable right now, it's simply because you haven't gotten to them yet. Do you see any situations where it's going to be really difficult for you to perfect this technique for use in research, or anything you think this technology will never quite be able to do? 

Juan Cuevas: I'll give you a few, and then definitely we'll punt to Chaitanya, because he has a lot of experience with the challenges in the technology. So, there are places where we know things are challenging, and, you know, some of these problems, as Chaitanya said, might be solvable in the future. There are some small molecules where the vibrational modes are a little bit sparse, and we cannot detect those yet. For example, some metals we cannot detect directly, so we would have to do, like, a sample prep, and that defeats the purpose of not doing sample preparation. So that might be something we improve on. The other example is compounds that are volatile, or that are non-soluble in our electrolyte, which is aqueous, so water-soluble compounds only. Those are things that we think are solvable, but it's not something we have tested or tried yet. But those are two that come to my mind. Chaitanya, what else do you think would be a limitation of QES? 

Chaitanya Gupta: Yeah, I mean, those are sort of the primary fundamental limitations. So, for example, this technique would not be good at trying to detect, at parts per billion, how much lead there is in drinking water, because lead is, you know, an atomic species. There's typically a tradeoff between the size of the molecule and the lower limit of detection: the smaller the molecule, the less rich its vibrational signature, and so your lower limit of detection is kind of on the high side. But for things like proteins, which have a very, very rich vibrational signature, you can detect them as low as picograms per milliliter in complex samples like serum, plasma or whole blood. The other thing that this platform may not be suitable for in its current avatar is kinetic measurements: measurements where you're trying to understand how things are changing as a function of time, but over very short timescales, right? If things are changing over the time period of the scan itself, then those changes are going to register as drift in the measurement, and you're not going to be able to make a whole lot of sense of what's going on from the raw data. So it's not particularly suited for those kinds of measurements. Take, say, determining the reaction constant of the binding of an antigen to a probe, where that time constant is on the order of minutes: it's for those kinds of applications that the platform would not be suitable in its current avatar. We might be able to get to a point in the future where the scan is faster, and in that scenario we may be able to measure these kinds of time-dependent reactions, but we're not there yet. 

Hannah Rosen: That makes sense. What's the level of throughput that you're looking at with this technique? You know, are you able to do multiple samples simultaneously? How long does it take to run a sample? 

Juan Cuevas: So, the instrument we have now, which is, Chaitanya correct me if I'm wrong, the fourth iteration of the prototypes you put together, can run six samples at a time. So you can allocate six aliquots of sample at a time, and the time it takes to acquire the data is 30 minutes, so it's pretty fast. The beauty of this instrument is that it's in an SBS format, which the SLAS community is very familiar with. So you can tile it, you can, you know, put it on a liquid handler, and by having only one pipetting step it's very easy to automate. One note here is that we have not completed our automated protocols yet. But what we heard from customers, when we did voice-of-the-customer work to see what was important and not important, is that for many of them, not all of them, it was very important to have a flexible throughput more than a high throughput, because for the applications they are thinking about at this point, whether that's biomanufacturing, preclinical research or biomarker discovery, sometimes they have high demand, sometimes they have low demand. So they don't want to wait and batch; they want to run on time. And that gives them an elasticity in throughput. 

Hannah Rosen: Right. So, if there's a researcher out there listening who has heard everything you've said and thinks, I'm in, I want to do this, I want to use QES, what are the major things they need to know, and what are the immediate steps they should take? 

Juan Cuevas: One of the beauties of the instrument is that we can put it in a case and ship it. So one of the things we are doing is: if you want to try it and you want to work with us, let's discuss. Let's see if our interests, you know, align, and we'll ship you the instrument. You don't have to get an MTA; you just try it, wherever you are. As we said before, we are going step by step, we are trying to be careful. So, where we are now is what we call a limited release: we are choosing a set of 10 to 15 partners that are going to be working with us in these early commercial days. We are very happy to hear your ideas and move from there, and if you're happy and we're happy, we'll ship you the instrument so you can try it for a couple of weeks before committing to using it. 

Hannah Rosen: So, looking towards the next couple of years, you've mentioned some of this already, but what advances are you really focusing on and hoping to make with this technology? 

Chaitanya Gupta: Yeah, I can take that. So, the first step is really increasing the menu of reference data and models for analytes, and we would do this in kind of a hybrid scenario where we would develop some of these reference data and models ourselves, but we would also be happy to work with customers to help us build these. We'd be looking to add analytes and signatures in inflammation, metabolism and neurodegeneration, which is sort of our starting point, but we look to expand beyond that as well: you know, chronic diseases, oncology, etcetera. So that's one starting point, increasing the menu of references we have available so that the user can basically ping the sample for as many different analyte species as they want to look at. On the hardware side, we are going to try to continually improve on the throughput, scaling the system down so that we can enable 24, 48, you know, 96-plus wells and run that many samples in 30 minutes. That would put us in a high-throughput, hands-off data acquisition mode, sort of in the high-throughput screening category, if you will.  

There is also work to be done on the sensors themselves. You know, I was talking about increasing the speed of the scan. One way to do that is to have a multiplicity of sensors on an individual chip, where the individual sensors target different segments of the voltage scan, and then you stitch all the responses together to create the composite scan of the sample. That would allow us to reduce the time per scan from 30 minutes all the way down to a few minutes. So those are some of the technical improvements that we see very near-term. Long-term, I mean, the sky is the limit, right? If you think about how we are positioning this platform, it's this idea that a user can click on a drop-down menu, select the kind of analysis they want to perform on their sample, and the machine learning pipeline will perform that analysis on the vibrational-signature-based digital twin of that sample. Because what we're delivering to the end user is not a reagent, it's not a sample prep method, it's essentially a model or a reference data set, you can provide these services as a subscription: think of a Netflix-like approach where you choose which analytes you want to look at in exchange for a subscription fee. And like Netflix, once we have a community of users, we could eventually point users to useful analyses they may want to perform, given what the community at large is doing and given what the users themselves are interested in. So, a recommendation engine that would point users, in a knowledge-based manner, to specific analyses they might want to perform on their samples. Ultimately we see this as truly a knowledge platform, where the data is converted into useful information for the end user, and there are multiple hierarchies of this useful information. So that would be development happening on the back-end side, on the machine learning front specifically. 
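The scan-stitching idea Chaitanya describes can be sketched in a few lines. This is an illustrative toy, not the Probius implementation: each of N parallel sensors sweeps only its own segment of the full voltage range, and the composite scan is just the segments merged in voltage order, so wall-clock acquisition time drops by roughly a factor of N.

```python
# Toy sketch of stitching parallel sensor segments into one composite
# voltage scan (illustrative only; all names here are made up).
import numpy as np

def stitch_segments(segments):
    """Combine per-sensor (voltages, responses) segments into one scan.

    segments -- list of (voltages, responses) array pairs, one per sensor,
                covering adjacent, non-overlapping voltage ranges.
    Returns a single (voltages, responses) pair sorted by voltage.
    """
    v = np.concatenate([s[0] for s in segments])
    r = np.concatenate([s[1] for s in segments])
    order = np.argsort(v)
    return v[order], r[order]

# Four sensors each cover a quarter of a -1 V..+1 V sweep in parallel,
# so acquisition takes roughly a quarter of the single-sensor scan time.
edges = np.linspace(-1.0, 1.0, 5)
segments = []
for lo, hi in zip(edges[:-1], edges[1:]):
    v = np.linspace(lo, hi, 50, endpoint=False)
    segments.append((v, np.sin(3 * v)))  # stand-in sensor response
v_full, r_full = stitch_segments(segments)
print(len(v_full))  # prints 200
```

The real engineering work, of course, is in making the parallel sensors behave identically enough that the stitched segments form a single consistent signature; the sketch only shows the bookkeeping.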

Hannah Rosen: Yeah, that all sounds really exciting. Again, really ambitious, but you've already proven that you can take on ambitious projects with great success. Unfortunately, we are out of time, but I really want to thank you both, Chaitanya and Juan, for joining me today. I've learned a ton, I'm really excited about this, and I'm really excited to see where Probius goes in the future. I really hope we'll see you at some future SLAS events, and I'm looking forward to seeing where you guys go. 

Chaitanya Gupta: Thank you so much for the opportunity to talk about Probius, yeah. 

Juan Cuevas: Thank you. Thank you, Hannah, thank you to the New Matter podcast, and we'll definitely see each other at the next SLAS meeting. 

 
