Eswar Iyer, CEO of Aikium

Welcome to Partnology’s Biotech Leader Spotlight Series, where we highlight the remarkable accomplishments and visionary leadership of biotech industry pioneers. This series is about showcasing the groundbreaking strides made by exceptional leaders who have transformed scientific possibilities into tangible realities. Through insightful interviews, we invite you to join us in following the inspiring journeys of these executives who continue to shape the landscape of the biotech industry. This week we are recognizing:

Eswar Iyer is the Co-founder and CEO of Aikium Inc., a therapeutics company pioneering new treatments with Yotta-ML², the world’s first AI-driven trillion-protein wet lab screening platform. With over 100 patents spanning single-cell and spatial multiomics, protein and tissue engineering, hardware, and AI, Eswar has dedicated his career to advancing high-definition biological design and discovery. He completed a postdoc with George Church at Harvard’s Wyss Institute, working on groundbreaking synthetic biology approaches, and holds a PhD from George Mason University. His innovations have driven multiple industry-defining platforms and contributed to technical diligence for acquisitions totaling $480 million. Today, Eswar leads Aikium in developing targeted protein therapeutics for diseases with significant unmet needs, focusing on areas where traditional approaches like antibodies and small molecules fall short.

You transitioned from scientist to CEO in under a decade. Walk me through that journey – what key moments or decisions led you to where you are today?

That’s true — I transitioned from being a scientist to becoming a CEO, and it’s been an interesting journey. I’ve always wanted to be an innovator. For me, success means matching my best skill to the greatest need and building a career around that. My strongest skill has always been solving technically challenging problems, and I genuinely enjoy doing that. I also wanted my work to serve the broader community.

Innovation, to me, is the bridge between technical problem-solving and societal impact — it’s about building something useful, not just something that satisfies personal curiosity. That’s always been my orientation: to keep learning and evolving.

Over the past decade, even during my PhD and postdoc, I pursued innovation—filing patents, working in George Church’s lab, and getting early exposure to the startup world. Later, at 10x Genomics, I helped develop several products and build a strong patent portfolio. That experience really shaped my path.

Eventually, starting a company with my incredible co-founders felt like a natural next step. I wasn’t chasing a title—I was chasing a problem. Working with the right people and surrounding myself with the right team pushed me in this direction.

There are a few guiding principles I’ve found helpful—both personally and with co-founders. First, while you can’t predict the market or many external factors, thinking deeply about a problem from first principles can give you a real advantage. It helps anchor your decisions in substance rather than trends.

Second, surround yourself with the best people. You’re the average of the five people you work most closely with. Working with smart, driven collaborators helps you identify problems more quickly—both in depth and in breadth—and navigate uncertainty with more clarity.

It’s hard to say whether any specific experience or lesson I’ve had would directly apply to someone else, but I think these principles are more widely useful to anyone exploring this space. 

What was the founding insight or unmet need that led to the creation of Aikium? What did you see that others in the drug discovery space missed?

One of our core beliefs is: if an idea seems like a good idea, it’s usually not a good idea. I’ll attribute that insight to Ed Boyden, who I worked with — along with George Church. They both made similar comments that really stuck with me. The thinking is: if something seems obviously good, it’s probably because many other people are already working on it. You don’t want to follow the crowd. You want to peel back five layers and ask: what is the fundamental challenge here? Why should we exist?

We asked ourselves, if I were investing my father’s pension fund into this, why would it be worth it? That’s the level of seriousness we brought to evaluating the opportunity.

As we dug deeper — beyond the surface-level hype around AI — we had conversations with key opinion leaders, did extensive literature reviews, and uncovered a consistent theme: there are many therapeutic targets that current technologies like small molecules or antibodies simply can’t access. That insight revealed a big enough challenge to devote the next decade to. It also resonated on a values level — it felt like work that could truly serve humanity. And if we succeeded, the value created would be recognized and rewarded.

More specifically, we identified a major bottleneck: a sparsity of high-quality data available for machine learning training in protein engineering. My co-founder, Venkatesh Mysore — an AI expert with experience at NVIDIA, Atomwise, and D. E. Shaw Research — made a prescient observation early on: algorithms won’t be the moat. Someone will always write a better one in three months. The moat is in the data.

That insight turned out to be exactly right. So we focused on building a platform that could generate the high-value data needed to train machine learning models in protein engineering — data that would allow us to design molecules that traditional approaches couldn’t reach. That was the foundational insight that led us to create the company.

Your platform, Yotta-ML², screens a trillion AI-designed proteins. Can you walk us through what makes this scale possible—and why it matters for tackling “undruggable” diseases?

Our flagship platform is called Yotta-ML², which stands for Yotta-scale Machine Learning on Massive Libraries. “Yotta” refers to 10²⁴ — one of the largest unit prefixes in the metric system — and reflects the scale at which we’re operating. Here’s how it works:

We start with AI-designed molecules, generated based on pre-trained models and guided by specific properties we impose. This design process produces a vast number of molecular variants, which we can then test experimentally. The core idea is to learn as many structure-function relationships as possible, as quickly as possible. The natural question is: How can we screen a trillion molecules?

The key is in barcoding. Every molecule we synthesize is tagged with a unique barcode. This enables us to efficiently encode, track, and test a trillion molecules in a single tube.

Traditional techniques like yeast display involve large cells expressing a small protein on the surface, with the DNA inside the cell encoding the protein. You select for the protein, then sequence the DNA to identify it.

Phage display — a parallel innovation — uses bacteriophage particles to present small proteins for screening, but the particles are still relatively large and not optimized for scaling.

What we’ve built is a more elegant system: a method to covalently and stably link each protein with a molecular barcode — the instructions needed to make it — in a compact and highly efficient format. This allows us to screen orders of magnitude more molecules per experiment than legacy methods.
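To make the barcoding idea concrete, here is a toy simulation of pooled, barcode-indexed screening. Everything in it is a stand-in: the linkage chemistry, barcode length, copy numbers, and selection step in Aikium’s actual platform are proprietary, so this sketch only shows the bookkeeping that makes a single-tube experiment deconvolvable.

```python
import random
from collections import Counter

BASES = "ACGT"

def random_barcode(length=30):
    # A 30-mer barcode has 4**30 ≈ 10^18 possible sequences, ample
    # headroom to uniquely tag on the order of a trillion molecules.
    return "".join(random.choices(BASES, k=length))

# 1. At synthesis, each AI-designed protein is linked to a unique
#    barcode that records which design it is.
designs = {random_barcode(): f"design_{i}" for i in range(1_000)}

# Hypothetical ground truth: a handful of designs actually bind the target.
binders = set(random.sample(sorted(designs), 10))

# 2. Pooled selection in a single tube: each physical copy of a molecule
#    survives with high probability if it binds, low probability otherwise.
surviving_reads = []
for barcode in designs:
    copies = 100  # copies of this molecule present in the pool
    p = 0.9 if barcode in binders else 0.001
    survivors = sum(random.random() < p for _ in range(copies))
    surviving_reads.extend([barcode] * survivors)

# 3. Sequencing the surviving barcodes and counting them recovers the
#    hits, without ever isolating molecules one at a time.
for barcode, count in Counter(surviving_reads).most_common(10):
    print(designs[barcode], count)
```

The point is that identity travels with the molecule: one pooled selection followed by sequencing is enough to tell which members of a vast library survived, with no need to handle any molecule individually.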

Several innovations, tricks, and trade secrets had to come together for this platform to work as a seamless engine, including:

  • The AI to design molecules
  • Efficient, on-demand DNA library assembly and QC at scale
  • Validation that we are generating a trillion uniquely barcoded and AI-designed molecules with high fidelity
  • A rapid screening system to test the proteins

It’s a full-stack innovation platform, where every layer — from design to build to test — has been orchestrated to operate at unprecedented scale.

Aikium integrates AI, synthetic biology, and protein engineering. How do you think about orchestrating these disciplines into a coherent platform strategy?

That’s a big differentiator at Aikium. Many companies typically lean heavily into AI and then test only a small number of those designs. Others focus primarily on synthetic biology—building out the wet lab—and add AI as an afterthought or a more recent layer. Some try to do both, but often at a smaller scale.

Given our particular technical capabilities and background, it was very clear to us that, to build a powerful engine, AI and wet lab operations must be deeply integrated. The design-build-test cycle needs to flow seamlessly—almost like an assembly line—while being iterative, automation-compatible, and highly scalable. We took all of that into account from the outset. It’s not an accident; having experience architecting complex platforms, we were intentional in how we built this one.

AI is only as powerful as the data it learns from, and a wet lab is only as useful as the AI’s ability to interpret and adapt from it. So, we combined both systems to enable continuous, reciprocal learning.

Once the AI designs the molecules, we still need to produce them at larger scale and run biophysical assays to evaluate key characteristics: are they stable, soluble, monomeric? Do they have the right binding properties? What do we see in size-exclusion chromatography, and so on?

This is where the protein engineering aspect becomes critical—we use that feedback to rapidly evolve the molecules. So these three components—AI, wet lab, and protein engineering—must be tightly orchestrated to keep up the pace.

Ultimately, the speed of our design-build-test cycle is everything. The more high-quality iterations we can run in a shorter period of time, the faster we learn, and the faster we reach our goals. We achieve this by scaling our libraries, our AI capabilities, and our testing capacity, while tightly integrating all of them. That’s how we’re able to operate at a higher RPM than other teams in this space.
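As a loose illustration of that design-build-test loop, here is a minimal active-learning sketch in Python. The “wet lab” is a toy scoring function against a hidden optimum, and the “model” is a simple per-position residue profile updated each round (in the spirit of a cross-entropy method); these are illustrative assumptions, not Aikium’s actual models or assays.

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids
LENGTH = 12
HIDDEN_OPTIMUM = "".join(random.choices(ALPHABET, k=LENGTH))

def wet_lab_assay(seq):
    # Stand-in for a pooled screen: count positional matches to the hidden
    # optimum. A real assay measures binding, stability, solubility, etc.
    return sum(a == b for a, b in zip(seq, HIDDEN_OPTIMUM))

def design(profile, n):
    # DESIGN: the model samples each position from its learned profile.
    return ["".join(random.choices(ALPHABET, weights=profile[i])[0]
                    for i in range(LENGTH))
            for _ in range(n)]

def retrain(scored, keep=50):
    # LEARN: re-estimate per-position residue frequencies from the best
    # designs this round; the +1 pseudocount keeps exploration alive.
    top = [seq for seq, _ in sorted(scored, key=lambda x: -x[1])[:keep]]
    return [[1 + sum(seq[i] == aa for seq in top) for aa in ALPHABET]
            for i in range(LENGTH)]

profile = [[1.0] * len(ALPHABET) for _ in range(LENGTH)]  # uniform prior
for cycle in range(8):
    library = design(profile, n=500)                         # BUILD a library
    scored = [(seq, wet_lab_assay(seq)) for seq in library]  # TEST it in the pool
    profile = retrain(scored)                                # LEARN and redesign
    print(f"cycle {cycle}: best score {max(s for _, s in scored)}/{LENGTH}")
```

Each pass through the loop is one turn of the engine: the faster assay results feed back into the model, the fewer cycles it takes for the designs to converge.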

You’ve worked on cerebral organoids and raised bioethical questions in your Harvard work. How do those experiences inform your thinking today, especially as synthetic biology becomes programmable at scale?

One of the things I worked on in George Church’s lab at Harvard was figuring out how to build cerebral organoids at scale, such that each one could serve as a variant for high-throughput drug discovery. I was probably one of the first to culture cerebral organoids in George’s lab, just to try it out and learn the challenges and understand the gaps for innovation.

What I quickly realized, though, was that cerebral organoids are highly variable. So I needed more defined, reproducible, biomimetic structures—human cells that more accurately represent human disease biology. That led me to start experimenting with 3D printing, photolithographic patterning of proteins, hacking standard inkjet printers to print morphogens, and experimenting with scaffold stiffness. The insight I had was that biology builds reproducible tissues and organs by using spatial patterning—by programming molecular gradients. I wanted to replicate that: to program the structure and reproducibility into the organoids, while also capturing phenotypic and genotypic readouts. That was the inspiration behind much of my work, from organoids to spatial omics and beyond.

In fact, this perspective was shaped by my earlier background in molecular neuroscience, where I worked on one of the simplest model organisms, the fruit fly, during my PhD with Dr. Daniel Cox at George Mason University. I used to laugh when people said they worked on flies, and I’m sure some listeners might still feel that way—but it’s actually one of the most powerful systems for rapid design-build-test cycles. You can isolate a single neuron, knock out a specific gene, and ask precisely what happens. The statistical and experimental control you get from that kind of system showed me just how powerful reproducibility can be. That inspired me to think differently about building organoids.

As we began developing more reproducible organoid structures, we also encountered some deeper, bioethical questions. For example, if you grow a brain organoid for eight or nine months, what are the implications? What defines its personhood? Is it thinking? Is it sentient in any way? The original bioethics frameworks were written in the 1970s and ’80s, under the assumption that biological development would follow certain sequences—body before brain, etc. But now, we can short-circuit that. We can make a brain without the rest of the body. That raises profound implications. These technologies are incredibly powerful and could transform how we develop medicines—far more effectively than screening in rats or mice—but they also push us into new ethical territory.

Fortunately, George’s lab encouraged deep thinking on these topics. I had the opportunity to collaborate with Jeantine Lunshof, one of the leading bioethicists in the field, along with George and John Aach, to explore these issues. We published a paper and continued to examine the broader implications of this work.

That experience exposed me to the “softer side” of science, which is just as important as the technical side. Overall, I think it helped shape me into a more well-rounded scientist—one who considers not only the technology and discovery process, but also its societal impact and the responsibility that comes with it.

What do you think biotech as an industry gets wrong when it comes to AI? Where is the hype overblown, and where are we underestimating its potential?

I think I have a very specific answer to this, but broadly speaking, the industry is at an inflection point. AI has come of age, and it’s already beginning to drive major changes. Even since last year, pharma companies have started positioning AI as a central part of their strategy. Transformations like this are rare—maybe once a decade or even once in a career—and it’s striking how quickly the impact is being recognized.

That said, where people may be getting it wrong—or where the hype exceeds reality—is in the how. Everyone agrees that AI will add value by making drug discovery faster, cheaper, and better. The question is: how will that value be realized?

There are three main ways to improve an AI model:

  1. More compute power – This benefits everyone equally.
  2. Better model architecture – For example, building a model with 100 million parameters versus 10 million can improve performance.
  3. More and higher-quality data – This is the real differentiator.

OpenAI demonstrated in their research on scaling laws that while adding parameters improves performance, the gains flatten out unless the volume and quality of training data scale along with the model. That’s a powerful insight.
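For reference, the joint scaling law fitted by Kaplan et al. in “Scaling Laws for Neural Language Models” (2020), the OpenAI work referenced here, writes test loss as a function of both parameter count N and dataset size D:

```latex
L(N, D) = \left[ \left( \frac{N_c}{N} \right)^{\alpha_N / \alpha_D} + \frac{D_c}{D} \right]^{\alpha_D},
\qquad \alpha_N \approx 0.076, \quad \alpha_D \approx 0.095
```

Whichever term is larger dominates the loss, so growing N while D stays fixed eventually buys almost nothing; the data term becomes the binding constraint, which is exactly the argument for investing in data over model size.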

In five years, companies with access to high-quality proprietary data will have a much greater impact because they’ll be able to build fundamentally better models. This is where the real value lies. For example, Scale AI focused on annotating large datasets—Meta recognized that value and paid billions for a major stake in the company.

It’s easy to overestimate what AI will do in one year and underestimate what it will do in five or ten. Right now, most companies are focused on building bigger models with more parameters because it’s the most accessible path. Our approach at Aikium takes a contrarian view—we’re focusing on building better data. The models will follow.

Another gap between hype and reality is the nature of biological data itself. Today’s biological models are trained on extremely sparse datasets. Language, for instance, is complex, but biology is orders of magnitude more so. Each protein can interact with thousands of others in highly contextual ways. Yet we have three to four orders of magnitude less data in biology than in language.

If we can enrich our understanding of protein-protein interactions and gather higher quality data in this space, we believe we’ll be able to build far more powerful models. This data scarcity—and the opportunity to solve it—is where I believe some of the biggest breakthroughs in AI for drug discovery will emerge over the next five years.