Reasonable AI Policy

AI will transform our world. Along with great promise, it also poses a real risk of human extinction. We are exploring how best to guard against catastrophic outcomes by preventing the training and deployment of dangerously capable systems until we know how to align them with human values and ensure that control of the future is not lost. This requires focusing on the possession, collection, and use of high-end AI-focused computer chips and data centers, as the only practical way to prevent the unsafe training of dangerous systems. Done correctly, this can avoid stifling our ability to enjoy the gains from innovation while also ensuring that our privacy remains respected. The wrong regulations, poorly targeted, could easily either cripple the existing industry or be captured by it to protect insiders, without protecting us from extinction risks.

Please note that this topic introduction is preliminary; more details, along with policy proposals, academic studies, data, model legislation, and other resources on this issue, are forthcoming. We share this overview now to provide initial context and to give readers a sense of our current perspective and approach to the issue.
