FCA Insight Lecture: Dr Joanna Bryson - Artificial Intelligence and Machine Learning


On 19 April 2018, the Financial Conduct Authority (FCA) held its second Insight lecture in London, on Artificial Intelligence (AI) and Machine Learning. The FCA’s Insight website was launched in 2016 as a platform to exchange ideas, explain the thinking behind regulation and make regulators more accessible to the public.

The lecture was presented by Dr Joanna Bryson, Associate Professor at the University of Bath and Affiliate of the Center for Information Technology Policy at Princeton, who has been writing about AI and society since 1996. Dr Bryson is an internationally renowned expert on AI ethics, the development of AI, and the understanding of human intelligence, human cooperation and cultural change. The lecture focused on what society should and should not worry about in relation to AI and Machine Learning, and on whether regulation in this area is feasible.

Insight Lecture – Artificial Intelligence and Machine Learning

AI poses a range of challenges to policymakers. As a technology that is now pervasive, it is affecting democracy, security and the global economy in ways that are not yet fully understood. By increasing communication, interdependence and discoverability, we decrease privacy and individual autonomy, as the recent heightened publicity surrounding data protection makes evident. Dr Bryson argues that AI is already super-human in many domains, having mastered chess, voice forgery and video manipulation, and that in the future it will be able to capture and express all culturally communicated human knowledge. It is, however, likely that the pace of improvement will start to slow as AI catches up to the frontier of human knowledge.

In relation to regulation, the discussion focused on the need for transparency and clarity. Advocating that AI should be subject to regulation and audit is not the same as saying that AI cannot involve proprietary intellectual property or must all be open source: medicine has ten times as much intellectual property as IT, yet it is well regulated.

Dr Bryson drew a parallel between AI and architecture: just as buildings collapsed on people before construction regulation became mandatory, ICT is now ‘falling down’ on people and affecting everyone. She argues this is the normal process; following a big transformation like AI, it inevitably takes time for society to catch up and implement appropriate governance. This should not, however, be done by rewarding companies through the capping of liability for automated business processes (e.g. in relation to driverless cars), or by incentivising the obscuring of code through reduced liability for learning or ‘conscious’ algorithms.

AI requires security, and this can be achieved and maintained by entrusting governance to governments and professional societies. AI’s place in society is determined by us; it is cultural, not normative, and AI cannot be trustworthy unless it is protected by security. The question is not what AI is doing to us; it is what we are doing to each other using AI.

FCA’s approach to AI

Following the lecture, the FCA discussed the potential benefits and risks associated with the growing use of AI and how these affect its mission to serve the public interest. Through Innovate, an initiative launched in 2014 to promote competition in the interest of consumers, the FCA has supported a number of projects touching on AI. Decision making, and delineating the circumstances in which accountability applies, were highlighted as important areas of interest. Technology can help protect consumers and ensure they receive targeted services and support, but it needs to be safeguarded.

A House of Lords report on AI in the UK was published on 16 April 2018.