Is an intelligent AI tool right for your business?

The National Cyber Security Centre has recently released guidance on assessing intelligent tools for cyber security (available here). This guidance provides a valuable opportunity for businesses of any size to reflect on how they should (or equally, should not) adopt or deploy intelligent technologies. While it is tempting for businesses to leap to embrace artificial intelligence (“AI”), the guidance sets out the key factors which should be considered before introducing an intelligent system, namely:

  • the business’ own needs;
  • the nature of the technology underpinning any products being considered; and
  • whether an intelligent tool will provide an overall gain in security.

The guidance opens with an overview of AI and considers four aspects of intelligent security: establishing the need for AI, dealing with data, available skills and resources, and getting the most from AI. In this note, we provide an overview of the guidance, which should serve as a useful checklist for businesses considering their next step, or a first foray, into AI.

AI today

Today, AI is already being used for a wide variety of applications. In cyber security, for example, machine learning algorithms are increasingly being used to classify messages, IP addresses, or events as potentially malicious or warranting further investigation. AI tools are also being used to visualise data, such as communications being sent over networks, in order to identify patterns and detect anomalies which may indicate cyber security risks. However, many machine learning tools and algorithms are themselves open to adversarial attacks and can produce erroneous results in extreme or unusual cases. In addition, AI which is treated as a black box in the cloud is often open to reverse engineering, even though software developers may be unaware this is possible (see further our previous article here).
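
To make the anomaly-detection use case above concrete, here is a minimal sketch of the kind of classification the guidance contemplates, assuming the scikit-learn library is available; the feature columns and values are invented for illustration and do not come from the NCSC guidance.

```python
# Minimal sketch of ML-based anomaly detection of the kind described above.
# Assumes scikit-learn; the feature values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one network event: [bytes sent, connections/min, failed logins]
normal_events = np.array([
    [500, 3, 0],
    [620, 4, 1],
    [480, 2, 0],
    [550, 5, 0],
])

# Train on traffic assumed to be benign, then score new events.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_events)

new_events = np.array([
    [530, 4, 0],       # looks like normal traffic
    [90000, 250, 40],  # large transfer, many connections, failed logins
])
for event, label in zip(new_events, model.predict(new_events)):
    # predict() returns 1 for inliers and -1 for anomalies
    status = "flag for investigation" if label == -1 else "ok"
    print(event, "->", status)
```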

Establishing the need for AI – is it necessary for your business?

Comparing tools

The choice of intelligent technology for your business is the first hurdle in adopting AI. While it is relatively straightforward to compare products in mature markets, e.g. anti-virus software, it may not be possible, or at least not simple, to ensure you are choosing the best option where a product is new, comparatively untested and a first-mover in its area, and you are forced to compare your options at a more macro level. To mitigate this risk, businesses need to allow enough time to speak to vendors and to fully understand and assess the options available to them before committing to a product.

Once you are satisfied with your choice of tool, you should carefully consider the wider context of implementing AI, such as:

  • What kinds of issues are you intending for the tool to solve? Could those issues be addressed by modifying your current approaches?
  • Will the solution impact any core business functions?
  • What would the impact be of the solution being incorrect?
  • How will the tool improve on information / technologies / processes currently available? Will the proposed solution replace or change the process currently used in relation to the issue/function?

Responsibility – where does it fall?

As the powerful autonomous artificial intelligence of science fiction films remains (as yet) unavailable, the intelligent technologies of today are effectively limited to new ways of processing data through computers. It is therefore important to be aware of the decision-making boundaries and reliability of the tool you select. Various factors (including certain factors controlled by you – see our discussion below on data quality) feed into a tool’s reliability, but as an overarching principle an intelligent tool may not always make the ‘right’ decision. In each case, there remains a question about with whom responsibility falls, and your business needs to weigh carefully the risks of full automation against the risks of introducing the possibility of user error, training or configuration error, or bias. You need to consider whether you are more comfortable with a tool which: (a) supplies an individual with information to aid in making a decision; or (b) is fully automated.

After making yourself comfortable with the above, the guidance suggests you undertake an assessment of whether the problem is one which is best solved by AI. For example:

  • Are there any specific legal considerations? Consider what sort of data your organisation deals with – if the tool uses sensitive information, you need to ensure that your data handling processes are properly followed. Ensure that the tool’s functionality is not inappropriately limited as a result of data processing restrictions. Be aware that the data protection laws restrict the use of algorithms in certain instances. Understand that even unintentionally introducing operator or data bias into the algorithm could lead to potentially discriminatory (and therefore illegal) behaviours.
  • Do I need to understand how the tool reaches its decisions? Complex AI solutions often operate as ‘black boxes’, where it is impossible to understand the processes the algorithms use to reach their decisions. If that is the case with your chosen solution, you need to ensure that your business is comfortable not fully understanding how the tool makes its decisions. Does the lack of transparency or apparent lack of control pose an operational risk? If yes, how will you manage that risk? Do you need a kill switch (a minimal sketch of such a control follows this list)? How will a temporary unavailability of the tool while you deal with an issue (e.g. unexpected bias) impact your business? What continuity measures do you need to put in place?
  • Governance processes – it is possible that key decisions for your business may be made by a tool without human intervention. It could be difficult for your staff to understand if, and when, they should intervene in a decision made by an intelligent tool. The person in charge of deciding whether to intervene should feel empowered to do so. Some tools do not provide an opportunity to intervene. What control framework do you need to put in place? How will you monitor the tool and its inputs and outputs? How will your technical and organisational measures adapt to reflect changes in the tool and its risk profile as it develops and learns?
  • Correct handling of data – there is significant overlap in the issues that businesses must consider in SaaS contracts and in implementing AI. For example, what security is available to data in transit? How and where will the vendor store your data? Businesses should ensure appropriate due diligence is undertaken on the vendor, so that your data handling requirements are met, and so that you are not in breach of any data protection laws.
  • Am I comfortable with aggregation or synthetisation of my data? Data synthetisation between customers by vendors is common in intelligent technologies. While there can be benefit to this (in many cases, AI develops and becomes more useful the more data it is able to process and ‘learn’ from), there is also risk: what if, for example, a third party’s data introduces a bias, or a virus? The third-party data should also be relevant to your needs – if the external data is ‘out of scope’ and does not assist in ‘training’ the AI that you utilise, it is possible that the risks of synthetisation outweigh the benefits. Conversely, if the product is ‘static’, it will not adapt after its initial ‘training’, so you need to be comfortable that the tool will remain accurate as your business develops and changes.
  • Is it worth the cost? Unlike many technologies available today, many AI products are not ‘plug and play’. Consider whether additional tools are required to be able to collect, store, or read the data produced by the tool. Will you need to hire additional staff, or spend significant amounts upskilling your existing staff? What cost and effort are required to implement, train, monitor and manage the tool and its data feeds / outputs?
  • Is it worth the risk? Ensure you fully understand the tool’s inherent vulnerabilities (in the context of your organisation) and whether the tool can “fail safely”, i.e., without unnecessary risk to any people or your organisation. Will additional protections (e.g., new cyber security measures) be required given the new, valuable data you will have available? Pools of data could be prime targets for data theft.
  • What is required on an ongoing basis? Is the tool so specialised that you are completely reliant on the vendor for support? If so, and the vendor ratchets up the cost of its support products, or no longer offers the product, your business may find itself in a vulnerable position. Try to identify where any possible risks or gaps in the tool’s support may lie, as it is likely that you will require support for the product at some point (if not frequently) in the tool’s lifetime.
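
To illustrate the kill switch and human-intervention points above, here is a minimal sketch of a human-in-the-loop wrapper around a scoring tool. All names (score_event, KILL_SWITCH, REVIEW_THRESHOLD) and thresholds are hypothetical, not taken from the guidance or any vendor product.

```python
# Minimal sketch of a human-in-the-loop wrapper with a kill switch, of the
# kind the governance questions above point towards. All names and
# thresholds are hypothetical.
KILL_SWITCH = False       # operations staff can flip this to bypass the tool
REVIEW_THRESHOLD = 0.8    # scores below this go to a human reviewer

def score_event(event: dict) -> float:
    """Stand-in for the vendor tool's confidence that an event is benign."""
    return 0.95 if event.get("failed_logins", 0) == 0 else 0.4

def decide(event: dict) -> str:
    if KILL_SWITCH:
        # Fall back to the pre-existing manual process while the tool is
        # suspended (e.g. while investigating unexpected bias).
        return "manual review (tool disabled)"
    confidence = score_event(event)
    if confidence >= REVIEW_THRESHOLD:
        return "auto-allow"
    # Low confidence: supply the information to a person rather than
    # acting automatically (option (a) discussed earlier).
    return f"escalate to analyst (confidence={confidence:.2f})"

print(decide({"failed_logins": 0}))   # auto-allow
print(decide({"failed_logins": 7}))   # escalate to analyst
```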

Driven by data – understanding how the product deals with your information

Many AI systems are reliant on the quality and quantity of the data fed into them. At an initial stage, it is important to understand from the vendor what sort of data the particular tool requires, whether you are able to provide your data in the right format, and why any sensitive or protected information is needed. Similarly, knowing what makes for high-quality data will improve your output (a minimal sketch of some basic data-quality checks follows the list below). For example:

  • Comprehensiveness – if the data contains blanks, is incomplete or is otherwise unrepresentative, the tool may not have the full set of information it requires to form appropriate processes or find valid solutions.
  • Diversity – in general the more diverse the data on which the AI is able to be ‘trained’, the better its decision making, as it won’t be confined to a narrow spectrum of circumstances.
  • Accuracy – incorrect labelling or other misrepresentation of data can be fatal to the success of a tool. For this reason, it is important to monitor decisions made by the tool, to ensure that operator feedback hasn’t unintentionally introduced a bias.
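
As trailed above, here is a minimal sketch of basic checks against these three criteria, assuming the training data sits in a pandas DataFrame with a 'label' column; the column names and values are invented for illustration.

```python
# Minimal sketch of the data-quality checks described above.
# Column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "source_ip":  ["10.0.0.1", "10.0.0.2", None, "10.0.0.4"],
    "bytes_sent": [500, 620, 480, None],
    "label":      ["benign", "benign", "benign", "malicious"],
})

# Comprehensiveness: what fraction of each column is missing?
print(df.isna().mean())

# Diversity / accuracy: is one class so dominant that the tool will rarely
# see (and so rarely 'learn' from) the other?
print(df["label"].value_counts(normalize=True))
```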

Utilising your intelligent tools

Once you have selected and implemented your intelligent tool, you should ensure your business is in the habit of continually assessing and evaluating the tool’s benefits and limitations (a minimal monitoring sketch follows the list below). For example, you should ensure that:

  • the tool remains within the scope of work for which it was specifically obtained – outside that scope, its accuracy may be limited.
  • the data does ‘enough’ – is the output providing you with sufficient information to make the decisions for which you obtained the tool? For example, the guidance emphasises the distinction between being able to: (a) detect an anomaly; and (b) understand whether the threat is ‘real’.
  • you have fixes and fall-backs – without a ‘plan B’ for when the tool fails or is down, your business may face periods in which it is unable to operate, or may make errors.
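
As trailed above, a minimal monitoring sketch: compare the tool’s recent alert rate against the rate observed when it was assessed, and trigger the fall-back process if it drifts. The baseline, threshold and figures are all invented for illustration.

```python
# Minimal sketch of ongoing monitoring of a deployed tool: a drift in the
# alert rate may mean the tool is operating outside the scope it was
# obtained for. All figures are invented for illustration.
BASELINE_ALERT_RATE = 0.02   # alert rate observed when the tool was assessed
MAX_DRIFT = 3.0              # tolerate up to 3x the baseline before acting

def check_alert_rate(alerts: int, events: int) -> str:
    rate = alerts / events
    if rate > BASELINE_ALERT_RATE * MAX_DRIFT:
        # Engage the 'plan B' process and a human review.
        return f"drift detected (rate={rate:.3f}): engage fall-back process"
    return f"within expected range (rate={rate:.3f})"

print(check_alert_rate(alerts=12, events=1000))   # within expected range
print(check_alert_rate(alerts=90, events=1000))   # drift detected
```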

Comment

The National Cyber Security Centre guidance (available here) provides a useful basis for businesses to consider whether AI, in its current form, is a worthwhile investment, given its possible risks and financial costs. Businesses need to consider carefully whether their needs can, and should, be met by augmenting their current processes and procedures before introducing intelligent technology. Equally, businesses should not be too quick to dismiss AI on account of the time and financial investment required to implement intelligent tools properly, as such tools could provide the springboard for the next stage of their growth.