In the labs of the future, AI is becoming an alchemist, conjuring up new materials by the thousands. Traditional one-sample-at-a-time screening can only scratch the surface: in five decades, researchers have evaluated only a few thousand of the millions of possible metallic-glass alloys.
In late 2023, Google DeepMind reported that its new “GNoME” AI had predicted 2.2 million novel crystal structures, including 380,000 candidates likely to be stable in reality. These include thousands of layered compounds akin to graphene and hundreds of potential battery electrolytes.
Yet this computational bounty raises a central engineering question: how do we move from AI proposals to real-world materials? Building a material still requires the gritty work of synthesis, testing, and scale-up. As NREL materials data scientist Steven R. Spurgeon observed, “the true revolution in autonomous science isn’t just about accelerating discovery but about completely reshaping the path from idea to impact.”
In other words, AI can point the way, but engineers must still walk it—testing, refining, and manufacturing what the computer imagines.
From computation to creation
In one recent demonstration, an autonomous lab produced 41 new compounds in 17 days, all first proposed by AI. These experiments came after a deep-learning model sifted through the possibilities to find high-probability candidates, a decisive shift away from the brute-force searches of the past.
Historically, researchers would spend years (or lifetimes) tinkering in the lab to stumble on new alloys and polymers. Modern AI flips the workflow: instead of humans scrambling for serendipity, algorithms predict which combinations of elements or molecules are worth making in the first place.
Cornell engineers, for instance, recently built physics-informed generative networks that embed crystal symmetries and chemistry rules into the AI itself. The result: entirely new crystal structures that are mathematically possible and chemically sensible.
Behind this “algorithmic alchemy” is the sobering fact that material spaces are unimaginably huge. Consider a typical alloy: even fixing three or four base elements, the possible compositions and microstructures easily number in the millions. Argonne National Lab’s Polybot robot illustrates the combinatorial challenge: to make a conducting polymer film with ideal conductivity and low defects, the team faced nearly a million different processing recipes—far too many for humans to test by hand.

Artistic rendering depicting Polybot, the AI-driven automated material laboratory. Credits: Argonne National Laboratory
Similarly, battery researchers estimate that the landscape of solid-state compounds contains millions or even billions of viable candidates. Every factor, from element ratios to heat treatments and additives, multiplies the options. Traditional R&D can only sample a tiny fraction of this space. AI, by contrast, can rapidly screen and narrow down promising regions using physics simulations and data-driven models, turning an intractable search into a focused plan of attack.
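To see how quickly those factors compound, consider a back-of-the-envelope count. The parameter names and level counts below are illustrative assumptions, not Argonne's actual process variables:

```python
# Rough combinatorics behind a "million recipes" search space.
# Each processing parameter gets a plausible number of discrete
# settings; the total is simply their product.
levels = {
    "coating speed": 20,
    "temperature": 15,
    "additive mix": 60,
    "anneal time": 10,
    "solvent ratio": 6,
}

total = 1
for name, n in levels.items():
    total *= n

print(f"{total:,} distinct recipes")  # 20*15*60*10*6 = 1,080,000
```

Five modest knobs already exceed a million combinations, which is why exhaustive human testing is off the table.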
The tools of the trade
Materials scientists now rely on machine-learning techniques to navigate these vast design spaces. One cornerstone is Bayesian optimization, an active-learning strategy that picks the most informative next experiment. Instead of testing formulas at random, the AI estimates which unexplored candidate is most likely to improve the target property (strength or conductivity, for example), balancing exploration of uncertain regions against exploitation of known good ones, and runs that experiment next.
This means researchers can reach their goals with far fewer trials. For example, a study of shape-memory alloys showed how Bayesian-guided loops found new low-hysteresis alloys in 36 steps, a tiny fraction of the ~800,000 combinations in the search space. The open-access literature now bristles with Bayesian-driven materials discoveries, from promising photovoltaic absorbers to ultra-hard alloys.
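A minimal sketch of such a loop, using a toy Gaussian-process surrogate and an upper-confidence-bound acquisition rule. The "property curve" here is a made-up stand-in for a real measurement, and the kernel and parameters are illustrative choices, not any lab's actual setup:

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    # Squared-exponential kernel between two sets of 1-D design points.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Gaussian-process posterior mean and std-dev at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)   # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 0.0))

def property_of(x):
    # Stand-in "experiment": a hidden property curve the loop must maximize.
    return np.sin(6 * x) * (1 - x) + x

rng = np.random.default_rng(0)
candidates = np.linspace(0, 1, 200)       # the "design space" of compositions
X = list(rng.uniform(0, 1, 3))            # three initial random experiments
y = [property_of(x) for x in X]

for step in range(12):
    mu, sigma = gp_posterior(np.array(X), np.array(y), candidates)
    ucb = mu + 2.0 * sigma                # upper-confidence-bound acquisition
    x_next = candidates[np.argmax(ucb)]   # most promising untested recipe
    X.append(x_next)
    y.append(property_of(x_next))         # "run" the experiment

print(f"best property found: {max(y):.3f} after {len(y)} experiments")
```

Fifteen "experiments" suffice to home in on the best region of a 200-point grid; the same logic, with real instruments supplying `property_of`, is what drives closed-loop discovery.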
Neural networks play a complementary role by learning materials’ complex rules. Graph neural networks (GNNs) in particular shine at modeling crystals: they treat each atom as a “node” and each bond as an “edge” (imagine a social network of atoms). These networks can ingest data from hundreds of thousands of known compounds and predict properties (like stability or band gap) for completely new combinations.
In one recent example, researchers trained a GNN on theoretical crystal structures and then deliberately included high-energy “decoy” structures in the training set. This taught the model to distinguish truly stable arrangements from impostors; the result was five times better accuracy at predicting stability than a conventional model.
Graph-based models have even powered AlphaFold-like breakthroughs in solids: Google’s GNoME architecture used graph networks (along with active learning) to zero in on low-energy crystal candidates.
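The core message-passing idea, stripped of everything that makes GNoME's real architecture powerful, can be sketched in a few lines. The atom features, bond adjacency, and random weights below are all toy assumptions:

```python
import numpy as np

# Toy crystal graph: 4 atoms (nodes) with 3-dim feature vectors
# (e.g. one-hot element type) and a bond adjacency matrix (edges).
features = np.array([
    [1, 0, 0],   # atom 0: element A
    [0, 1, 0],   # atom 1: element B
    [0, 1, 0],   # atom 2: element B
    [0, 0, 1],   # atom 3: element C
], dtype=float)
adjacency = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

rng = np.random.default_rng(1)
W_msg = rng.normal(size=(3, 8))   # stand-in for learned message weights
W_out = rng.normal(size=8)        # stand-in for learned readout weights

def message_pass(h, A, W):
    # Each atom sums its bonded neighbors' features, then a learned
    # transform with a ReLU nonlinearity updates its representation.
    neighbor_sum = A @ h
    return np.maximum(0.0, (h + neighbor_sum) @ W)

h = message_pass(features, adjacency, W_msg)   # one round of message passing
graph_vector = h.mean(axis=0)                  # pool atoms into one vector
predicted_energy = float(graph_vector @ W_out) # scalar property prediction
print(f"predicted property (toy units): {predicted_energy:.3f}")
```

Real models stack many such rounds, use distances and bond types as edge features, and train the weights on hundreds of thousands of known compounds; the structure, atoms as nodes exchanging messages over bonds, is the same.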
On top of predictive models, generative AI is now entering materials science. Just as generative models can craft images or molecules, they can propose brand-new compounds or alloy formulas. IBM Research, for instance, released “foundation models” trained on billions of molecular structures. These models can screen millions of known chemicals for desired traits and generate novel molecules entirely new to nature, bypassing the traditional trial-and-error route.
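The generate-then-screen idea can be illustrated with a deliberately tiny stand-in for those foundation models: a bigram model over element symbols, trained on a handful of example alloys. The alloy list and sampling scheme are illustrative assumptions, not IBM's actual method:

```python
import numpy as np
from collections import defaultdict

# A tiny autoregressive "generative model" over alloy compositions:
# learn element-to-element transition counts from known formulas,
# then sample new candidate formulas one element at a time.
known_alloys = [
    ["Al", "Cu", "Mg"], ["Al", "Zn", "Mg"], ["Ti", "Al", "V"],
    ["Fe", "Cr", "Ni"], ["Al", "Cu", "Li"], ["Ni", "Cr", "Mo"],
]

counts = defaultdict(lambda: defaultdict(int))
for formula in known_alloys:
    for a, b in zip(["<s>"] + formula, formula):
        counts[a][b] += 1          # count transitions, incl. a start token

rng = np.random.default_rng(2)

def sample_formula(n_elements=3):
    # Walk the transition table, sampling each next element in turn.
    out, prev = [], "<s>"
    for _ in range(n_elements):
        nxt = counts.get(prev) or counts["<s>"]  # back off at dead ends
        elems = list(nxt)
        probs = np.array([nxt[e] for e in elems], float)
        prev = rng.choice(elems, p=probs / probs.sum())
        out.append(str(prev))
    return out

candidates = [sample_formula() for _ in range(5)]
for c in candidates:
    print("-".join(c))
```

Production systems replace the bigram table with transformers trained on billions of structures and add a screening step that scores each sampled candidate, but the pipeline shape, sample then filter, is the one described above.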
In short, machine learning techniques—Bayesian search, graph networks, and generative models—are rapidly turning materials discovery into an engineering-driven optimization problem.
In practice: Startups and national labs in action
These AI tools are already in deployment. Citrine Informatics, a materials-AI company, helped aerospace engineers develop a new aluminum alloy in days instead of years. By simulating thousands of powder and nanoparticle mixes, Citrine’s platform homed in on a high-strength alloy called AL 7A77, now the first 3D-printable aluminum powder to meet aerospace specs.

The aluminum 7A77.60L powder. (Credit and copyright: HRL Laboratories)
Even NASA’s Marshall Space Flight Center is a customer for this AI-designed material. Automotive and battery firms also collaborate with platforms like Citrine’s to rapidly screen polymer electrolytes and coatings. In one cited example, Citrine’s ML pipeline filtered through 2,500+ polymer candidates in five months, a task that used to take far longer.
National labs are pushing the envelope with far more ambitious projects. At Lawrence Berkeley Lab, Google DeepMind’s team and lab scientists worked together to turn AI predictions into real crystals. GNoME’s 380,000 high-confidence structures were added to open databases, and lab researchers have already synthesized dozens of AI-suggested materials. In fact, over 700 of GNoME’s predicted crystals have independently been made in labs worldwide, confirming that the “model’s predictions of stable crystals accurately reflect reality”.
Argonne National Laboratory and the University of Chicago built “Polybot”, a robot that automatically fabricates and tests polymer films. Polybot tackled a famously complex problem: making an electrically conductive plastic without defects. Because even slight changes in processing (coating speed, temperature, additive mix) can change the outcome, the possible process recipes run into the millions.
Rather than test these recipes by hand, Argonne’s team let Polybot run experiments 24/7, guided by AI. Jie Xu of Argonne emphasized, “Polybot operates on its own, with a robot running the experiments based on AI-driven decisions”. The robot analyzed film quality on the fly and proposed the next experiments. In this way, the team achieved film conductivities “comparable to the highest standards currently achievable,” and even produced production-scale “recipes” for industry use.
Across all these projects, a theme emerges: AI is rapidly accelerating the front end of materials R&D, but engineering bottlenecks remain on the path to real products. Synthesizing a pencil-size sample in a robotic lab is far from producing tonnes of it at semiconductor purity or certifying it for aircraft use. This “valley of death”, where promising lab discoveries stall on the way to market, is well known in engineering.
The revolution won’t be complete unless researchers start designing materials to be “born ready” for scale-up. That means adding manufacturability metrics and lifecycle considerations into the AI’s goal, not treating them as an afterthought.
It also means creating modular, standardized lab equipment and data systems so an AI’s recipe can be handed off seamlessly to pilot plants or factories. Even the most promising AI-derived substance is worthless if it can’t be reliably made at scale.
For now, the central question of this frontier remains: AI can propose materials at a scale once unimaginable, but can those materials be validated, scaled, and qualified for industry? The answer will require not just smarter algorithms, but new engineering practices.
As one DeepMind researcher put it, we are compressing “millennia to months” in materials innovation, yet we must ensure that timeline feeds into real products. The coming years will see engineers and scientists working hand-in-hand—AI suggesting designs, robots churning out test samples, and teams figuring out manufacturing hurdles—turning algorithmic alchemy into tangible advances in energy, electronics, and beyond.