Developer Blog
FEB 2026

Pavan Dhadge

Testing AGI in an Alien Universe

There is a common thought experiment about artificial intelligence: train an AI model only on data available up to the 1900s, restrict the domain to physics, and let it run. Can it independently rediscover theories like relativity, using experiments provided via simulations or automated result engines?

When I first thought about this, it sounded like a perfect benchmark. If a model passed it, could we claim to have achieved AGI? I am not so sure anymore.

The Limits of One Domain

A test like this feels direct, but it also hides a lot. It only tests intelligence within physics. Physics is a beautiful and complex system, but it is just one shape of the world.

A model should achieve similar results in domains like biology, economics, and language. Real intelligence is not stuck in one lane; it moves between them. If an AI can only do math and physics, it might just be a very good calculator. It needs to show that it can handle the messy, unpredictable nature of other fields using the same core logic.

The Hidden Assumptions

I used to think that keeping the training data old was enough to make the test fair. But that is missing half the picture. Architecture and training biases may encode modern assumptions, even without modern data.

# We build the bias into the tools themselves
$ ./train-model --data=1900s --architecture=2026-design

When we build the machine today, we build it with the knowledge of today. The shape of the neural network or the way we set up the reward system might quietly push it toward relativity. It is like giving someone a maze where the floors slope slightly toward the exit. It finds the answer, but we might have secretly guided it there.
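A toy sketch of the sloped-maze problem, with all data and feature choices invented for illustration: two "architectures" see the same measurements of an inverse-square force, but one of them has the answer baked into its feature design.

```python
import numpy as np

# Hypothetical illustration: synthetic "1900s-era" observations of a force
# that secretly follows an inverse-square law.
rng = np.random.default_rng(0)
r = rng.uniform(1.0, 10.0, 200)
G = 6.674e-11
force = G / r**2

# Architecture A bakes in the modern prior: we hand the model a 1/r^2 feature,
# so a linear fit recovers G trivially. The "discovery" was ours, not the model's.
X_biased = (1.0 / r**2).reshape(-1, 1)
coef_biased, *_ = np.linalg.lstsq(X_biased, force, rcond=None)
print(coef_biased[0])  # ~G: the answer was encoded in the feature design

# Architecture B sees only raw r and must infer the functional form itself,
# here by fitting the exponent k in force = c * r^k on a log-log scale.
k, log_c = np.polyfit(np.log(r), np.log(force), 1)
print(k)  # ~-2.0, but only because the model searched for it
```

Both fits succeed, which is exactly the point: from the outside, the results look identical, and only the feature design reveals where the help came from.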

The Need for an Alien World

To truly control this, we need to remove human history entirely. We need to put the model in a space where we cannot help it, even by accident.

We should place the AI in a simulated universe with consistent but alien physical laws. Give it different gravitational constants. Give it non-Euclidean space. Give it a reality that no human has ever studied or understood.

# Booting up a clean room
$ simulate-universe --rules=alien --geometry=strange

In this space, there is no historical leakage. We cannot accidentally encode the right answer into the architecture because we do not know the right answer ourselves.
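A minimal sketch of what such a sandbox could look like. Every name, constant, and law here is invented for illustration: the force falls off as r^-2.7 with a made-up constant, so no human-era training data can contain the right answer.

```python
import numpy as np

ALIEN_G = 3.9          # invented gravitational constant, not our universe's
ALIEN_EXPONENT = 2.7   # invented: deliberately not the inverse-square law

def alien_force(m1, m2, r):
    """Pairwise attraction under the alien law (hypothetical)."""
    return ALIEN_G * m1 * m2 / r**ALIEN_EXPONENT

def run_experiment(r_values, m1=1.0, m2=1.0, noise=0.0, rng=None):
    """Automated result engine: return (optionally noisy) force
    measurements at the requested separations."""
    rng = rng or np.random.default_rng(42)
    f = alien_force(m1, m2, np.asarray(r_values, dtype=float))
    return f + rng.normal(0.0, noise, size=f.shape)

print(run_experiment([1.0, 2.0, 4.0]))
```

The model only ever sees the output of `run_experiment`, never the constants, which keeps the "no historical leakage" property honest.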

It Teaches Us How It Actually Thinks

This alien environment pushes the AI to actually learn. When it is in a universe it was not built for, it has to be a true observer: look at the raw data, build hypotheses, run its own tests, and work out the exact shape of that strange system.

It turns mysterious behavior into something it can inspect and understand from scratch. Every rule it finds is a small piece of proof that it is actually thinking, not just remembering.
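As a hypothetical sketch of that loop, here an observer that has never seen the simulator's source recovers a hidden force law purely from measurements. The "alien" law (constant 3.9, exponent 2.7) is invented for illustration.

```python
import numpy as np

def hidden_law(r):
    """The simulator's secret rule; the observer never sees this code."""
    return 3.9 / r**2.7

# Step 1: observe raw data at chosen separations.
r = np.linspace(1.0, 10.0, 100)
measurements = hidden_law(r)

# Step 2: hypothesize a functional form, force = c * r^k, and test it
# with a log-log linear fit that estimates k and c from the data alone.
k, log_c = np.polyfit(np.log(r), np.log(measurements), 1)
c = np.exp(log_c)
print(round(k, 3), round(c, 3))  # recovers k ≈ -2.7, c ≈ 3.9
```

The recovered rule can then be checked against fresh experiments, which is what turns a mysterious black box into an inspectable, falsifiable theory.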

Final Thoughts

I do not think true intelligence is about looking backward to solve what humans have already figured out. It is about making sense of the completely unknown.

If we place an AI in a strange, alien simulation and it can derive the physics for that reality, it proves it has real, general reasoning. And if it can do that for an alien world, it can surely do the same for ours.