AMD Unveils Yotta-Scale Data Centre for Next Era of AI
As demand for AI training and inference accelerates, AMD has unveiled a comprehensive data centre roadmap designed to support the transition to yotta-scale computing, introducing new platforms and accelerators that could reshape enterprise AI infrastructure.
At CES 2026 in Las Vegas, AMD outlined how it plans to underpin the next phase of AI through large-scale data centre innovation, unveiling new platforms and accelerators designed for what it calls the era of yotta-scale computing.
The announcements span rack-scale infrastructure, enterprise accelerators and next-generation GPUs, all aimed at hyperscale and enterprise data centre environments grappling with unprecedented computational demands.
In the show’s opening keynote, AMD chair and CEO Dr Lisa Su highlighted how demand for AI training and inference is driving rapid growth in global compute capacity and reshaping data centre design.
The company positioned compute infrastructure as the foundation of AI’s expansion, noting that global capacity is expected to grow from around 100 zettaflops today to more than 10 yottaflops within five years.
“At CES, our partners joined us to show what’s possible when the industry comes together to bring AI everywhere, for everyone,” Su told the keynote audience.
“As AI adoption accelerates, we are entering the era of yotta-scale computing, driven by unprecedented growth in both training and inference.
“AMD is building the compute foundation for this next phase of AI through end-to-end technology leadership, open platforms and deep co-innovation with partners across the ecosystem.”
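AMD's projection above quotes two figures: roughly 100 zettaflops of global compute capacity today, growing to more than 10 yottaflops within five years. As a back-of-the-envelope check, a minimal sketch (the unit conversions and growth-rate calculation are illustrative, not AMD's own methodology) shows this is a 100x increase overall, implying compute capacity more than doubling every year:

```python
# Back-of-the-envelope check of the projection quoted above.
# Assumed figures from the article: ~100 zettaflops (1e23 FLOPS) today,
# growing to 10 yottaflops (1e25 FLOPS) within five years.
today = 100e21        # 100 zettaflops, expressed in FLOPS
projected = 10e24     # 10 yottaflops, expressed in FLOPS
years = 5

growth_factor = projected / today              # overall multiple over 5 years
annual_growth = growth_factor ** (1 / years)   # implied compound annual multiplier

print(f"Total growth: {growth_factor:.0f}x over {years} years")
print(f"Implied annual growth: {annual_growth:.2f}x per year")
```

On these assumptions the total growth is 100x, and the implied compound annual growth rate is roughly 2.5x per year, i.e. capacity more than doubling annually.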
Helios platform targets exascale performance
Central to AMD’s technology vision is the AMD Helios rack-scale platform, which the company described as its blueprint for yotta-scale data centre infrastructure.
Helios is designed to deliver up to three AI exaflops of performance in a single rack, targeting trillion-parameter model training and large-scale inference workloads that are becoming standard requirements for advanced AI applications.
According to AMD, supporting this growth will require modular, open rack designs that can evolve across multiple product generations while maintaining energy efficiency and bandwidth at scale.
The platform combines AMD Instinct MI455X GPUs with AMD EPYC Venice CPUs and AMD Pensando Vulcano NICs to enable high-speed scale-out networking.
These components are unified through the AMD ROCm software ecosystem, reinforcing the company’s commitment to open platforms within the data centre.
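To put the per-rack figure in context, a rough scale illustration (my own arithmetic, not an AMD deployment estimate; note that "AI exaflops" typically refers to low-precision throughput, so the figures are not strictly comparable to general capacity numbers) relates the 3-exaflop Helios rack to the 10-yottaflop capacity figure cited earlier:

```python
# Rough scale illustration using the two figures quoted in the article:
# up to 3 AI exaflops per Helios rack, and a projected industry-wide
# capacity of more than 10 yottaflops. Not an AMD estimate.
EXA = 1e18
YOTTA = 1e24

rack_flops = 3 * EXA          # one Helios rack, per AMD's stated peak
target_flops = 10 * YOTTA     # projected global compute capacity

racks = target_flops / rack_flops
print(f"Helios-class racks to supply 10 yottaflops: ~{racks:,.0f}")
```

Even at 3 exaflops per rack, reaching that capacity would take on the order of millions of racks, which helps explain AMD's emphasis on energy efficiency and designs that can evolve across product generations.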
At CES, AMD offered an early look at Helios and unveiled the full AMD Instinct MI400 Series accelerator portfolio while also previewing its next-generation MI500 Series GPUs, scheduled for launch in 2027.

Enterprise-focused accelerators address on-premises deployments
A key addition to the MI400 Series is the AMD Instinct MI440X GPU, designed specifically for on-premises enterprise AI deployments.
The MI440X targets scalable training, fine-tuning and inference workloads in a compact eight-GPU configuration that can integrate into existing data centre infrastructure, addressing the needs of organisations that require sovereignty over their AI computing resources.
The MI440X complements the recently announced MI430X GPUs, which focus on high-precision scientific computing, HPC and sovereign AI workloads.
These accelerators are set to power major AI factory supercomputers globally, including Discovery at Oak Ridge National Laboratory in Tennessee, US, and the Alice Recoque system, France’s first exascale supercomputer.
AMD also shared further details on its Instinct MI500 Series GPUs.
Built on the AMD CDNA 6 architecture, advanced 2nm process technology and HBM4E memory, the MI500 Series is expected to deliver up to a 1,000x increase in AI performance compared with the MI300X GPUs introduced in 2023.
The company positioned the platform as delivering leadership performance across every level of the data centre stack.
Technology partnerships demonstrate infrastructure impact
While the announcements spanned from data centres to the edge, AMD emphasised the role of large-scale infrastructure in enabling AI innovation across industries.
During the keynote, partners including OpenAI, AstraZeneca, Illumina and Blue Origin highlighted how AMD-powered data centres are supporting breakthroughs in areas such as life sciences, generative AI and advanced research.
AMD also reinforced its involvement in the US government’s Genesis Mission, a public-private initiative aimed at securing long-term leadership in AI technologies.
Alongside its infrastructure roadmap, AMD announced a US$150m commitment to expand access to AI education, supporting efforts to bring AI into more classrooms and communities.
Together, these initiatives underline how AMD sees data centre innovation as central to both technological progress and broader societal impact in the AI era.
