The focus of the Centre for Digital Trust and Society is on barriers to and enablers of trust in digital technologies.
Our scope includes cybersecurity, but we also see digital security as part of a set of broader issues of trust and trustworthiness, distrust and trust exploitation, and trust-building and resilience.
Many interpretations of and approaches to ‘trust’ exist across technical and social science disciplines. ‘Trustworthiness’ can mean strikingly different things to computer scientists, data scientists, engineers, psychologists, and philosophers.
Likewise, each discipline has its own approaches to investigating trust. Despite its challenges, we see this diversity of views and methods as an opportunity for creative engagement across disciplines.
We identify four interlinked facets of trust that shape our activities:
- Competence - a trusted/trustworthy system must do what it was designed to do, do it reliably, safely and securely, and be capable of recovery when compromised.
- Responsibility - a trusted/trustworthy system is designed, implemented and operated for social good.
- Verification - mechanisms exist to confirm the competence and integrity of the system, for instance, technical verification, transparency of goals and methods, clear lines of accountability and responsibility, legal/regulatory measures, and public scrutiny.
- Robustness to exploitation - trust/trustworthiness is undermined when a system is exploited or manipulated. This could be by hostile state or criminal actors, for example through fraud, corruption, or disinformation campaigns. Trust is also abused by organisations that collect and use digital data to the detriment of those whose data is exploited, whether for financial gain (eg industry) or surveillance (eg government).
Systems in this context could be social, technical, or socio-technical.