I think AI governance should involve much broader participation than what we've seen so far. The current approach - primarily driven by researchers, tech companies, and policymakers - leaves out too many voices that will be affected by these systems.
Democratic Participation: We need genuine public input, not just expert committees. This could include citizens' assemblies, deliberative polling, or other methods that help ordinary people engage with these complex issues. Taiwan's vTaiwan platform and Ireland's citizens' assemblies on contentious issues offer models.
Affected Communities First: People who will bear the brunt of AI impacts - workers whose jobs might be automated, communities subjected to algorithmic policing, students evaluated by AI systems - should have the strongest voice in setting rules, not just token consultation.
Multiple Governance Layers: Different contexts call for different rules, with international coordination on existential risks, national laws for broad frameworks, local regulations for community-specific concerns, and industry standards for technical implementation.
Iterative and Adaptive: Unlike traditional laws, which change slowly, AI governance needs mechanisms for rapid updates as the technology evolves. This might mean relying more on regulatory agencies with flexible rulemaking authority than on rigid legislative processes.
Transparency in the Process: The rulemaking itself should be open - who's involved, what evidence they're considering, how decisions get made. Much current AI governance happens behind closed doors.
Global but Not Uniform: Different societies will reasonably want different approaches to AI, but some coordination is needed for systems that cross borders or pose global risks.
The challenge is that democratic processes are slow while AI development is fast, but I think we need to resolve that tension rather than defaulting to technocratic solutions.