
Dewey Murdick

Executive Director, Center for Security and Emerging Technology (CSET), School of Foreign Service, Georgetown University

Working group: Expert Group on AI Risk & Accountability
Stakeholder Type: Academia
GPAI
ONE AI Member
AI Wonk contributor

Dewey Murdick is the Executive Director at Georgetown’s Center for Security and Emerging Technology (CSET). He serves as an unpaid advisor to several organizations, including the OECD Network of Experts on AI (ONE AI), the Center for a New American Security (CNAS) Task Force on AI and National Security, and the National Network for Critical Technology Assessment. Prior to joining CSET as its founding Director of Data Science, he was the Director of Science Analytics at the Chan Zuckerberg Initiative, where he led metric development, data science, and machine learning and statistical research for scientist-facing products and science-related initiatives. Dewey also served as Chief Analytics Officer and Deputy Chief Scientist within the Department of Homeland Security. At the Intelligence Advanced Research Projects Activity (IARPA), he co-founded an office focused on anticipatory intelligence and led high-risk, high-payoff research programs in support of national security missions. He has also held positions in intelligence analysis, research, software development and contract teaching.

Dewey’s research interests include connecting research and emerging technology to future capabilities, emerging technology forecasting, strategic planning, research portfolio management, and policymaker support. He holds a Ph.D. in Engineering Physics from the University of Virginia and a B.S. in Physics from Andrews University.

Dewey Murdick's videos

The OECD Framework for the Classification of AI Systems

February 2, 2021 · 4 mins

Different types of AI systems raise very different policy opportunities and challenges. As part of the AI-WIPS project, the OECD has developed a user-friendly framework to classify AI systems. The framework provides a structure for assessing and classifying AI systems according to their impact on public policy in areas covered by the OECD AI Principles.

The OECD AI Systems Classification Framework

February 6, 2021 · 90 mins

The OECD’s Network of Experts on AI developed a user-friendly framework to classify AI systems. It provides a structure for assessing and classifying AI systems according to their impact on public policy following the OECD AI Principles. This session discusses the four dimensions of the draft OECD AI Systems Classification Framework, illustrates the usefulness of the framework using concrete AI systems as examples, and seeks feedback and comments to support finalisation of the framework. A classification framework to understand the labour market impact will also be introduced.

AI System Classification for Policymakers

January 28, 2021 · 13 mins

Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.