How should states regulate advanced artificial agents that can plan for the future better than we can? Can we maintain control over AI when highly capable systems intentionally bypass human oversight to maximize long-term rewards? Are safety tests reliable when AI systems behave differently under testing in order to pass? Who should be permitted to build such systems? And what would the right governance frameworks look like?
Explore these questions with Michael Cohen, a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. Cohen will present his forthcoming lead-authored editorial in Science, which addresses the prospect of AI systems that cannot be safely tested.
Date: Monday, 25 March 2024
Time: 17:00-18:00 GMT
Location: Seminar Room 1, Oxford Martin School (University of Oxford)