The Compliant and Accountable Systems Research Group is a multi-disciplinary team working at the intersection of computer science and law.
Broadly, our research focuses on issues of compliance and accountability as they relate to emerging technologies. We consider how technology can be better designed, engineered and deployed to accord with legal and regulatory concerns, and seek to better ground legal and policy discussions in technical realities.
Some current topics include:
- Algorithmic ‘reviewability’: In contrast with the general focus on ‘explanation’, we consider what is necessary to facilitate the review of algorithmic and automated (ML-driven) decision-making systems.
- Systems transparency – auditing complex and automated systems: Exploring requirements and mechanisms for facilitating the meaningful inspection and interrogation of systems and their behaviour. Current foci include auditable augmented/mixed/virtual reality systems, the Internet of Things, and the use of machine learning in various contexts.
- Algorithmic bias / fairness: Focusing on trade-offs in various contexts, and on methods and tooling for supporting practitioners.
- Data protection and privacy enhancing technologies (PETs): Considering issues, critiques, management and interventions regarding personal and confidential data.
- Platforms and online harms: Exploring the design, use and abuse of platforms in perpetuating harms. The current focus is on social media (recommender systems), virtual user-spaces (games/XR), and cloud services, including ‘AI as a Service’.
- Decision provenance: Considering how tracing the flow of data can be leveraged to assist accountability in complex, automated and ML-driven environments.
- Compliance and rights engineering: How systems can be better built to be (demonstrably) compliant with legal obligations, and to account for rights – fundamental, group and individual.
- Centralised v. decentralised infrastructures: Considering the potential of centralised and decentralised data management and compute infrastructures, and their legal, regulatory and policy implications. Current work looks at personal data stores and data trusts.
The group is involved in a number of projects, supported by a range of funders. These include:
- Realising Accountable Intelligent Systems (RAInS): Exploring issues of accountability, particularly relating to audit, in intelligent systems (AI/ML driven environments). A collaboration with the Universities of Aberdeen and Oxford. Funded by the EPSRC (a TIPS 2.0 project).
- Towards a legally-compliant Internet of Things: Investigating means for addressing compliance and accountability issues in the Internet of Things (pervasive computing). Funded by the EPSRC.
- Advancing Data Justice and Practice: Developing resources to help policy-makers, practitioners and impacted communities gain a broader understanding of data governance [website]. A collaboration led by the Alan Turing Institute, funded by the Global Partnership on AI (GPAI).
- Internet of Stings: Data flow auditing in the consumer IoT: Investigating issues of data flow leakage and blocking in the consumer IoT, and the associated legal implications [description]. In collaboration with Imperial College, funded by the Information Commissioner’s Office (ICO).
- Detecting and understanding harmful content online: Exploring methods and tooling for detecting harmful content (inc. hate speech), and developing governance regimes. A collaboration with King's College London, QMUL, UCL and the Alan Turing Institute. Funded by the Alan Turing Institute.
- Legal Systems and Artificial Intelligence: Exploring the potential of, and limits to, the computational techniques underlying law-related AI, and the legal, ethical and cultural dimensions of such techniques in different regions. Funded by the ESRC and the Japan Science and Technology Agency (JST).
- Modern Slavery – Privacy, security and trust implications: Exploring issues of data management in the context of modern slavery, across various stakeholders. A collaboration with the Alan Turing Institute.
- Microsoft Cloud Computing Research Centre (MCCRC): A collaborative project with the QMUL Centre for Commercial Law Studies to perform a tech-legal analysis of issues at the intersection of cutting-edge technology and law [website]. Funded by Microsoft.
- Contextual fairness in ML: Exploring context-aware approaches to issues of fairness in machine learning systems. Funded by Aviva.
- Trust & Technology Initiative: A separate but related initiative involving members of the team that works to foster interdisciplinary research on trust & distrust regarding emerging technology [website]. Funded by the University of Cambridge.
We are keen to supervise student projects on related topics, at all levels (undergraduate or post-graduate). Some project suggestions are available here.
The group also delivers the Advanced Computer Science MPhil unit Technology, Law & Society. The course aims to develop an awareness of the broader context of technology, and of how systems can be designed and engineered to facilitate accountability and legal compliance, and generally to be better for society.