The server racks are silent. There are no fans spinning at 10,000 RPM, no GPU arrays drawing thousands of watts, no rivers of cooling water beneath the floor. Instead, inside two small facilities in Melbourne and Singapore, living human brain cells are processing information and, according to their creators, doing it with an energy efficiency no silicon chip can hope to match.

This is not science fiction. On March 10, 2026, Australian biotech startup Cortical Labs announced the launch of what it calls the world’s first biological data centers — facilities built around its CL1 biological compute units, each housing hundreds of thousands of lab-grown human neurons wired directly onto silicon.

It is one of the most significant demonstrations that the word “biocomputer” is no longer a metaphor.


From Pong to Doom to Data Centers

Cortical Labs has been building toward this moment for years. The company first turned heads with DishBrain, an earlier prototype that taught a culture of neurons to play the classic game Pong — a rudimentary but striking proof that living cells could be coaxed into performing computational tasks. Last month, the company went further: its CL1 unit learned to navigate Doom, a far more complex 3D environment, within a single week of stimulation.

The jump from playing retro video games to powering actual data center infrastructure is enormous — and intentional. Cortical Labs is not just doing neuroscience for its own sake. It is pitching biological computing as a serious, scalable alternative to GPU-based AI infrastructure at a time when the world is running out of cheap power to feed its machine learning ambitions.


What Is a CL1 Unit?

At the heart of these biological data centers is the CL1, a self-contained wetware computing device first unveiled at MWC 2025. Each unit is built around a multi-electrode array: a chip seeded with roughly 800,000 human neurons, grown in a laboratory from blood-derived stem cells.

The system works by sending precisely calibrated electrical signals into this neuron layer. The cells respond, rewire, and adapt — much as a brain does when learning a new skill. Cortical Labs’ proprietary software stack, called the Biological Intelligence Operating System (biOS), mediates this interaction, translating neuron firing patterns into computational output.

Unlike a GPU or CPU, the neurons are not programmed in any conventional sense. They are stimulated and observed. Over time, they self-organize in response to the inputs they receive — an emergent form of learning that does not require the massive pre-training pipelines that define today’s large language models.
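Conceptually, this stimulate-and-observe loop resembles the closed-loop feedback scheme Cortical Labs described for DishBrain, where successful behavior was rewarded with predictable stimulation and failure was followed by unpredictable noise. The sketch below is purely illustrative; every function name is hypothetical, since the actual biOS interface is proprietary.

```python
import random

# Hypothetical closed-loop sketch of stimulate/observe learning.
# In the published DishBrain work, "good" behaviour earned structured,
# predictable feedback and "bad" behaviour earned noisy stimulation.
# Nothing here reflects the real biOS API.

def read_firing_pattern(culture):
    """Stand-in for decoding electrode activity into a discrete action."""
    return random.choice([0, 1])

def step_environment(action):
    """Toy environment: returns True when the action was 'correct'."""
    return action == 1  # placeholder success rule

def stimulate(culture, predictable):
    """Stand-in for delivering structured vs. noisy electrical feedback."""
    culture["history"].append("structured" if predictable else "noise")

culture = {"history": []}
for _ in range(10):
    action = read_firing_pattern(culture)
    success = step_environment(action)
    stimulate(culture, predictable=success)  # feedback, not programming

print(culture["history"])
```

The point of the sketch is the control flow: the software never writes a program into the cells; it only shapes the statistics of the feedback they receive.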

A nutrient-rich solution circulates through each unit to keep the cells alive. The CL1 monitors temperature, gas mixture, and waste filtration automatically. Under these conditions, the neurons remain viable for up to six months.

Each unit carries a price tag of approximately $35,000.


The Energy Argument

If there is one number that justifies the entire project, it is this: 30 watts.

That is the power draw of a single CL1 unit. For context, a single high-end GPU used for AI inference draws roughly 300 to 700 watts under load, and a multi-GPU server can exceed 6,000 watts. A full rack of CL1 units draws between 850 and 1,000 watts total, less than a household microwave oven.
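A quick back-of-envelope calculation makes the gap concrete. The 30 W CL1 figure comes from the company; the 700 W GPU figure is an assumed round number for a high-end accelerator running continuously.

```python
# Back-of-envelope annual energy comparison (illustrative figures only).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

cl1_watts = 30    # reported draw of a single CL1 unit
gpu_watts = 700   # assumed draw of a high-end AI GPU under sustained load

cl1_kwh = cl1_watts * HOURS_PER_YEAR / 1000  # ~263 kWh per year
gpu_kwh = gpu_watts * HOURS_PER_YEAR / 1000  # ~6,132 kWh per year

print(f"CL1: {cl1_kwh:.0f} kWh/yr, GPU: {gpu_kwh:.0f} kWh/yr, "
      f"ratio: {gpu_kwh / cl1_kwh:.1f}x")
```

Under these assumptions a single GPU uses more than twenty times the energy of a CL1 unit over a year of continuous operation; whether the CL1 delivers comparable useful work per watt is, of course, the open question.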

The implications are stark. Global data center electricity consumption is already measured in hundreds of terawatt-hours per year, and AI workloads are accelerating that figure rapidly. Governments and technology companies are scrambling to build new power plants, negotiate nuclear energy deals, and retrofit their cooling infrastructure — all to feed an insatiable appetite for compute.

Cortical Labs CEO Hon Weng Chong made the contrast explicit, telling reporters that each CL1 consumes less electricity than a handheld calculator. The company’s position is that biological computing does not just do things differently — it does them with a fundamentally different energy footprint, one rooted in billions of years of biological optimization rather than decades of semiconductor engineering.


Melbourne and Singapore: Two Very Different Deployments

The two facilities represent different stages of confidence and ambition.

The Melbourne data center, launched as the first of its kind in the world, is explicitly a proof of concept. It will house 120 CL1 units and serve as the primary testbed for refining the technology, validating reliability across continuous operation, and establishing operational protocols for biological compute infrastructure.

The Singapore facility is a more ambitious proposition. Built in partnership with sustainability-focused data center operator DayOne Data Centers, it will begin with a pilot rack of just 20 CL1 units deployed at the Yong Loo Lin School of Medicine at the National University of Singapore — a deliberate choice to embed the initial phase within an academic and research context. From there, the Singapore site is planned to scale up to as many as 1,000 CL1 units, pending validation and regulatory approvals.

The Singapore partnership is notable. DayOne has positioned itself as a sustainability-focused operator, and its alignment with Cortical Labs signals a genuine belief that biological computing could eventually fit into a commercially viable data center business model — not merely as a curiosity, but as a deployable infrastructure technology.


Synthetic Biological Intelligence: A New Paradigm?

Cortical Labs calls its approach Synthetic Biological Intelligence (SBI) — a deliberate reframing that distinguishes it from both traditional AI and from neuroscience research. The claim is that neurons offer something silicon cannot easily replicate: adaptive, self-organizing computation that improves with use without requiring explicit training epochs.

Whether this constitutes a genuinely new paradigm or a clever framing is a matter of debate in the research community. Reinhold Scherer of the University of Essex has noted that while the technology opens exciting doors for exploration in learning and training, neurons fundamentally cannot be programmed the way conventional computers can — they must be guided. That distinction has significant implications for what kinds of tasks biological computers can and cannot perform.

The honest assessment from most outside researchers is that CL1 units, despite their novelty, are nowhere close to replacing GPU clusters for large-scale AI training or inference. The computing capacity is modest. The tasks demonstrated so far — Pong, Doom — are trivially simple by the standards of modern machine learning. And the challenges of reliably interfacing biological matter with software at scale remain formidable.

But that may not be the right benchmark to apply. Cortical Labs is not trying to out-FLOP an H100 GPU. It is exploring an entirely different point in the design space: ultra-low-power computation for adaptive tasks, potentially suited to edge inference, biological research applications, or domains where continuous learning from sparse signals is more valuable than raw throughput.


Why This Belongs in the Biocomputer Story

The emergence of biological data centers is exactly why the definition of “biocomputer” needs to expand beyond its narrow origins. For decades, the term carried either a reductive meaning — the brain as metaphor — or a speculative one — DNA-based logic gates in a test tube. What Cortical Labs is building occupies a third position: living neurons as actual compute substrate, interfaced with electronics, maintained by software, and deployed at infrastructure scale.

This is the convergence of synthetic biology, neuroscience, and computer engineering. It is the same impulse that drives research into brain-computer interfaces, organoid intelligence, and neuromorphic chips — but here, the living material itself is the chip.

The two-dimensional neuron layers Cortical Labs uses are, as even the company acknowledges, simple compared to actual brain architecture. The human brain contains roughly 86 billion neurons; each CL1 holds about 800,000. The gap is vast. But every field has to start somewhere, and the fact that these cells are already learning to navigate a first-person shooter inside a commercial data center product is not a small thing.


What Comes Next

Cortical Labs plans to run further proof-of-concept programs at both facilities over the coming months. The focus will be on demonstrating reliable, repeatable performance across a range of tasks — moving beyond games toward applications with practical research or commercial value.

The broader challenge is validation: not just technical, but regulatory and ethical. Deploying human-derived biological material as commercial computing infrastructure raises questions that the semiconductor industry never had to grapple with. How are the neurons sourced and consented for? What are the obligations of an operator running living tissue? How is “end of life” for a CL1 unit handled responsibly?

These are not hypothetical concerns. They are the kinds of questions that will shape whether biological computing remains a laboratory novelty or evolves into a recognized category of infrastructure — one that sits alongside GPUs, FPGAs, and neuromorphic chips in the toolkit of 21st-century computation.

For now, in a pair of modest facilities on opposite ends of Asia-Pacific, human brain cells are doing something they have never done before: running a data center. Whether that is the beginning of a revolution or a fascinating dead end is a question the next few years will answer.


Cortical Labs is based in Melbourne, Australia. Its CL1 biological compute units were first announced at MWC 2025. The Singapore biological data center is being developed in partnership with DayOne Data Centers at the National University of Singapore’s Yong Loo Lin School of Medicine.

Featured image: AI-generated illustration via Gemini.