Writing (selection)

My current writing centers on the regulation, governance and ethics of artificial intelligence in the European Union and internationally. On this page and on my Google Scholar account you can find a selection of writing on that topic. Additionally, I have covered the European AI ecosystem since 2018, with over 40k words in my newsletter.

Other research on emerging technologies, ethics and the cognitive sciences has led me to contribute to Enabling Better Mental Health via the Ethical Adoption of Technologies as a member of the World Economic Forum's Global Future Council on Neurotechnologies, and to publish a popular article on the use of AI in mental health practice on the World Economic Forum's Agenda.

Prior to doubling down on the governance of emerging technologies and artificial intelligence, I worked on long-term policymaking and on rights for future generations with the United Nations and national governments, some of which is explained in this co-authored report.

I have spent many years in the design and performing arts industries, and many of my current projects aim to combine these backgrounds.



While AI principles seemingly continue to mushroom, it remains difficult to verify whether (and to what degree) the recommendations outlined therein have been acted upon and implemented by organizations. A multidisciplinary group of 58 researchers from institutions across the globe recently attempted to address this problem. They published a report entitled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims”. The report proposes 10 mechanisms to improve the verifiability of claims made about a given AI-based system. These mechanisms not only enable developers and industry to make verifiable claims, but also allow civil society, government and other stakeholders to evaluate those claims.

The report is divided into three sections covering institutional mechanisms, software mechanisms and hardware mechanisms. Each section outlines relevant recommendations, ranging from privacy-preserving ML and compute support for academia to bias and safety bounties.


Compared to other global powers, the European Union (EU) is rarely considered a leading player in the development of artificial intelligence (AI). Why is this, and does this in fact accurately reflect the EU’s activities related to AI? What would it take for the EU to take a more leading role in AI, and to be internationally recognised as such?


This new report surveys core components of the EU’s current AI ecosystem, providing the crucial background context for answering these questions. It outlines the EU’s high-level strategy and vision for AI, before looking at three crucial components the EU will need to implement this vision: funding, talent, and collaboration. The report aims to provide deeper insight into EU activities related to AI, to rectify any misconceptions about the EU’s level of involvement in AI development, and identify priorities for strengthening the current ecosystem.


Some key takeaways from this review include:


  • There is a clear emphasis on ethics and responsibility in the EU’s AI strategy and vision, especially relative to the US and China. The importance of ethics is clear in key publications laying out the EU’s AI strategy, in the makeup of the groups established to implement this strategy (particularly the High-Level Expert Group on AI, which recently published a set of Draft Ethics Guidelines on AI), and in recent EU regulation, most notably the General Data Protection Regulation (GDPR). If the EU can quickly and effectively establish itself as a leader in ethical AI, this could give it a unique competitive advantage.

  • One barrier to the EU’s global competitiveness in AI development is a relative lack of VC investment and startup funding. However, the EU is beginning to address these funding challenges, for example with the newly proposed VentureEU fund and the European Fund for Strategic Investment. Though the European funding landscape is slowly changing, it remains to be seen whether these initiatives will be enough to make a meaningful impact.

  • Another challenge for the EU is ‘brain drain’ of talented researchers and developers to other continents. Part of the problem is that academic salaries are often not high enough to attract and retain top AI researchers. A number of different strategies for addressing this have been proposed - including boosting academic salaries and numbers of PhD positions, increasing visas to attract overseas talent, and increasing training and reskilling initiatives - but attracting and retaining top AI talent remains a significant barrier to achieving the EU’s vision.

  • The EU’s AI ecosystem could be strengthened by increased collaboration between member states, building on the EU’s track record of major collaborative projects including the Human Brain Project and CERN. There are several ongoing and upcoming EU-wide collaborative initiatives, including for example the European Lab for Learning and Intelligent Systems (ELLIS). Successful collaborations could also help address the above two challenges by attracting both more funding and talent.


By providing an overview of the EU AI ecosystem and highlighting some important initiatives, this report hopes to open a wider conversation about what EU leadership in AI could look like and what might be needed to get there.


"Artificial Intelligence (AI) is expected to have enormous impact in addressing many of the greatest societal challenges that face us today, e.g. ageing, transport, and the environment. It is expected that it will help improve the quality of life of citizens both at home and at work. In addition, it will contribute greatly to increasing European industrial competitiveness across all sectors, including small and medium-sized enterprises and non-tech industries.

Europe has a leading edge in AI and robotics, as acknowledged by the excellent scientific standing of European researchers, including a number of worldwide AI experts originating from Europe. This strong expertise is also reflected in the level of investment in Europe from world leading companies, either in existing labs or companies, or in creating new major R&D labs in Europe. In addition, Europe has a vibrant start-up landscape. However, these AI resources are scattered throughout Europe, and we must acknowledge that international competition is fierce. Therefore, in order to fully exploit the potential of AI for the benefit of the European economy and society and to guarantee Europe’s leading position in AI, it is essential to join forces at the European level to capitalise on our strengths."

This report is an initial snapshot of the European AI landscape.

Articles and Papers


Article surveying how an AI mega-project could be established in a timely manner in the European Union. It argues that an immediately implementable strategy for the European Union (EU) could mix institutionalised coordination with a more decentralised model than a CERN-type institution would allow for.


Recent progress in artificial intelligence has raised a wide array of ethical and societal concerns. Accordingly, appropriate policy and governance for this emerging technology will be needed. While there has been a wave of promising scholarship on AI ethics, these communities at times appear divided between scholars who emphasize 'near-term' concerns and those focusing on 'long-term' AI policy, with debates at times hamstrung by adversarial exchanges. In this paper, we seek to map and critically examine this alleged 'gulf' between these communities, with a view to understanding the practical space for inter-community collaboration on AI policy. In order to do so, we examine, and contextualize, four alleged faultlines (epistemic, normative, pragmatic, and strategic) between these communities. We argue that on many of these dimensions, disagreements are exaggerated, and that community interests are more often aligned (and less often conflictual) than is often perceived. By drawing on the established constitutional-law notion of an 'incompletely theorized agreement', we argue that on certain (if not all) issue areas, scholars working on AI policy from near-term and long-term perspectives can converge and cooperate on mutually beneficial projects. Accordingly, we call on the communities to reach beyond their current dynamics and more productively chart shared avenues for work on responsible AI.

Read the paper here.



This document serves as a short overview of the Policy and Investment Recommendations for Trustworthy AI. I introduce their background and purpose, and then dive into the main summary of the 33 recommendations.


At the end of 2018, the European Commission released two major documents outlining a coordinated plan for AI. It includes a projected aim of €20bn in funding by 2020 and lays the foundation for coordination on AI between European nations, with an invitation for international cooperation.

This is a partial overview of the Coordinated Plan on AI that I have cut down, re-structured and edited to highlight the aspects that I am most excited about.


The Federal Government embraces the task that rapid progress in the field of artificial intelligence (AI) presents. To that end, it will harness this innovation boost for the benefit of all. We want to safeguard Germany's excellence as a research location, expand the German economy’s competitiveness and promote the various applications of AI in all areas of society. The latter will be supported in terms of societal progress and in the interest of citizens. The focus will be the benefit AI can bring to people and the environment.

We will also continue exchanges with all members and groups within society. Germany is well positioned in many areas of AI. The ‘AI made in Germany’ strategy makes use of existing areas of strength and transfers those to areas where the potential of AI hasn’t yet been fully exploited. The 2019 federal budget offers €500m to strengthen the AI strategy for 2019 and the following years. The federal government wants to provide €3bn until 2025 for the implementation of the AI strategy. It is expected that this commitment will leverage a doubling of financial resources through its impact on business, science and Germany’s sixteen states.


My take-aways are that the document is very aligned with the European Commission’s AI strategy and the Digital Day Declaration, with a strong focus on knowledge transfer (connecting R&D&I with economy and industry). There is a focus on building AI clusters, research organisations etc. (beginning with French collaboration); AI for the benefit of society (with mention of a variety of typical European values); and AI's impact on employment (including re-/up-skilling and adding AI to several subject courses).

It identifies a strong need to support non-traditional innovation and make use of existing potential. It contains various mentions of monitoring AI-related developments, e.g. through international AI observatories. The document outlines a need to combat brain drain and attract new experts; to improve access to and usage of available data without infringing on citizens’ rights; and to expand technical infrastructure for AI.

It proposes work on verifiability, transparency etc. to combat discrimination and manipulation [...]; work on standards setting; ‘AI made in Germany’; and international collaboration (G7, G20), as well as work with developing regions.

© 2020 Charlotte Stix. 


Sign up to the EuropeanAI newsletter here.