Open questions
- Is it too early to begin working on automating scientific research right now? I would guess that 90% of the scaffolding and duct-taping done today will be useless two model generations down the line. On the other hand, it really matters to be well-positioned when the time is ripe.
- I would expect it to become increasingly obvious over time that one can build an AI scientist. So there will eventually be massive resource investments regardless of my contributions. Should I be content with being the one who makes this happen a few months faster? Or can one somehow make sure that this development “starts on the right track” toward enormous societal benefit?
  If I am being honest with myself, I think I would still be enthusiastic about building the machine that discovers all the theorems and secrets of the universe even if someone else would have built it a week later. It just seems too amazing not to build.
- Will the most economically valuable uses of the models be developed almost exclusively within the AI labs? Or will the labs provide access for doing RL on their models, so that a landscape of companies like FutureHouse emerges where most of the applied work eventually happens?
- Do we need more than hill-climbing something close to current architectures with clever RL on the right benchmarks in order to fully automate all human cognitive tasks at roughly human sample efficiency?
Originally written as part of a cold email to John Schulman in July 2025. If you have any thoughts or also find these questions exciting, send me an email!