SAL Law Reform Committee Reports on Artificial Intelligence


The Singapore Academy of Law (SAL) Law Reform Committee (LRC) set up a Subcommittee on Robotics and Artificial Intelligence (Subcommittee) to review and make recommendations on the application of the law to AI systems. The reports, part of the Subcommittee's series on the "impact of robotics and artificial intelligence on the law", are intended to encourage systematic thought and debate among policymaking and industry stakeholders, so that public policy on AI does not lag behind the exponential growth in commercial use of AI. Many of the principles in the reports draw on AI reports and guidelines published in other jurisdictions, including the UK, the European Parliament, Australia, Japan and the United States.

The first two reports in the series, published in July 2020, are:

(1) Applying Ethical Principles for Artificial Intelligence in Regulatory Reform, and

(2) Rethinking Database Rights and Data Ownership in an AI World. 

The remaining two reports in the series will cover the application of criminal law to the operation of AI systems and technologies, and the attribution of civil liability for accidents involving automated cars.

Report 1: Applying Ethical Principles for Artificial Intelligence in Regulatory Reform

Overview: The Subcommittee has identified a growing consensus that AI systems must be human-centred and built on strong ethical foundations. This report identifies ethical principles relevant to legal reform for AI, and provides examples of human-centred approaches for each issue. The ethical principles discussed in the report are as follows:

  1. Law and Fundamental Interests: AI systems should comply with the law, and not be used for unethical or criminal activities, especially as AI systems may be vulnerable to cyberattacks. Two difficulties with imposing criminal or civil liability in respect of AI systems are: (a) the lack of a traditional “mental state” of knowledge or intention attributable to a person, and (b) the fact that a decision made by an AI system usually involves a long causation chain from system creation to deployment.
  2. Considering Effects, Wellbeing and Safety: AI system designers and deployers should consider the likely effects of the AI system throughout its lifecycle, including safety, security, legal, ethical and other issues, by conducting risk and impact assessments and evaluations. AI systems should be rational, fair and free of intentional or unintentional biases. AI systems should also be designed and deployed to maximise holistic wellbeing and safety metrics and minimise harm, by considering factors such as human emotions, empathy and personal privacy.
  3. Risk Management, Respect for Values and Culture, Ethical Use of Data: AI systems must go through rigorous testing, from laboratory tests to controlled real-world environments. Policymakers should consider “safe by design” requirements, under which designers prepare, as far as reasonably practicable, a design plan that eliminates foreseeable design risks, including unintended programming paths and risks that arise when the AI system is scaled up for use in the wider community. In addition, AI systems should be designed to avoid inherent bias or values, while making decisions within the risk appetite for privacy and ethics that is acceptable to the community’s culture. Use of personal data should also comply with data protection laws and good practices.
  4. Transparency, Accountability: How and why an AI system made a particular decision should be discoverable, i.e. through traceability, explainability, verifiability and interpretability of the AI systems and their outcomes. The public can then rely on “honest” interactions, including transparency on how trade-offs are made in AI decision-making. Policymakers will need to consider the apportionment of liability between AI system designers, manufacturers, deployers and users, and the recording of key high-risk areas and parameters so that responsible parties can be traced.

Report 2: Rethinking Database Rights and Data Ownership in an AI World

Overview: This report considers key data-related and intellectual property laws on databases and data ownership, especially over “big data” databases used for AI systems. These systems deal with huge datasets and databases of personal and non-personal data that feed into AI systems using advanced technology and analytical methods. Any deficiencies in laws on data or databases may have ripple effects on laws managing AI systems.

Topic 1: Databases

  • Existing Legal Protections: The report analyses whether the protection of databases as “literary work” under the Singapore Copyright Act is adequate. The law currently focuses on a human author’s intellectual effort and use of creativity or mental labour. In contrast, big data compilations often have no single author and consist of data collected automatically into raw machine-generated databases. The focus on the creative element excludes valuable databases from protection.
  • Recommendations: Introducing a sui generis database right (such as that which exists in the European Union) is not an appropriate response to the limitations of Singapore law. Instead, the report calls for greater clarity (e.g. through IPOS administrative guidance) in recognising the significant intellectual effort required to implement efficient and scalable electronic databases, such as building collection systems and routines that ensure data quality in the database, and on whether such processes can meet the originality requirement. Clarity should also be provided on originality in other contexts, e.g. software designers translating business rules into software code. The fundamentals of authorship under the current Copyright Act should also be re-examined, especially as larger portions of database processes become automated.

Topic 2: Data Ownership

  • Current Status: The report reviews whether data collected by AI (especially by IoT devices), whether as individual data elements or combinations of them, should be granted property rights. Personal data is protected in Singapore under data protection laws, but the data subject is not granted legal ownership of their data; ownership of personal data is instead sometimes claimed by organisations through contract. For non-personal data, the entity that created or controls the data retains sovereignty over it.
  • Merits of Granting Property Rights over Data: There are various arguments for granting property rights over data, such as providing a clear method to protect privacy, relying on existing property laws for established protection, and giving individuals greater rights against large corporations that exploit their data. Currently, data is protected through a mix of copyright, confidentiality and privacy laws. Copyright law protects the “expression of ideas”. Confidentiality protects non-public information from disclosure for a contractual term. Privacy laws primarily protect the individual, as opposed to the information per se.
  • Recommendations: The report concludes that creating a property right for data is not necessary. Introducing other rights (e.g. the data portability obligation under the PDPA) is a good example of adding specific data control methods that are both protective of individual rights and supportive of data innovation. There may also be room to consider whether rights developed for personal data (e.g. data portability) should be extended to non-personal data. This may drive growth in the volume and variety of services relying on machine-generated data.

Link to Reports: The SAL’s official reports can be found here.