While listening to a recent talk by Carlo Rovelli on the history of science, I found myself trying to imagine how an AI could embody the same ideals.
What Do We Mean By Science?
There has been a lot of debate about what proper “science” is, and I won’t pretend to be an expert here. Rather, I’ll appeal to two quotes by Mr Rovelli during his recent talk.
To slightly paraphrase, a scientific theory is “an effective, good organization of our experiences”. I think this is a good summary of what a theory is:
- our theories for particle physics explain what happens when we run colliders
- our theories for chemistry explain what happens when we mix compounds
- our theories for astronomy explain what happens when we look at the sky

…and so forth.
Then science becomes the process by which we find “a better organization of our experiences” or “a better conceptual way to think about reality”.
I think that this is a good framework to explore what it would mean for an AI to do science.
Building a Model for Science
I think the key challenge for modeling science as a process will be finding a framework to propose experiments.
An experiment isn’t merely a random assembly of parts that generates physical data, but a careful mechanism for discriminating between different possible models. That is, we’re not sampling data at random; we’re sampling purposefully at the bifurcation points of possible models. The question of how to model models within an AI, so that it can locate these bifurcation points and build tests to determine which model is contradicted by data, is the crux of scientific AI.
I believe that effective semantics and internal languages suggest an answer:
- data should be generated from experiments
- from which we generate an effective semantics model
- for which a set of possible internal languages is proposed
- from which the system finds contradictory statements between internal languages
- for which new experiments are devised
- from which new data is generated
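As a toy sketch of the loop above: imagine two candidate models (standing in for internal languages) that both fit the initial data, and a procedure that runs the next experiment wherever the survivors disagree most. Everything here is hypothetical illustration, not a proposed implementation; the model names, the search space, and the hidden `true_process` are all made up for the example.

```python
def true_process(x):
    """Nature: the hidden process that experiments sample from."""
    return x * x

# Two candidate models, both consistent with the single seed
# observation (x=2, y=4) -- they bifurcate everywhere else.
models = {
    "linear":    lambda x: 4 * x - 4,
    "quadratic": lambda x: x * x,
}

observations = {2: true_process(2)}  # initial experimental data

def consistent(name, data, tol=1e-9):
    """Is this model uncontradicted by the data so far?"""
    return all(abs(models[name](x) - y) <= tol for x, y in data.items())

def next_experiment(candidates, search_space):
    """Pick the input where surviving models disagree most:
    the bifurcation point of the candidate models."""
    def disagreement(x):
        preds = [models[m](x) for m in candidates]
        return max(preds) - min(preds)
    return max(search_space, key=disagreement)

survivors = set(models)
while len(survivors) > 1:
    x = next_experiment(survivors, range(0, 6))
    observations[x] = true_process(x)  # run the experiment, get new data
    survivors = {m for m in survivors if consistent(m, observations)}

print(survivors)  # the model(s) not contradicted by the data
```

Here the "experiment" at the point of maximal disagreement immediately contradicts one model, closing the loop; a real system would, of course, need far richer model spaces than two hand-written functions.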
To view that schematically: