Attacks on computer networks are increasingly common, often causing severe economic and reputational damage to organisations. Consequently, Intrusion Response Systems, which seek to respond automatically to alerts generated by Intrusion Detection Systems, have recently become an active area of research. Current Intrusion Response Systems typically seek optimal responses under a general, balanced policy, such as weighing the cost and benefit to the network as a whole. However, organisations are encouraged to prepare Incident Response Policies, which outline prioritisations and performance measures for their response. These policies are highly individualised to the organisation and are often influenced by the type of data present within the network. Building on this, several subsections of a network may have differing Incident Response Policies; for example, in a Cyber-Physical network, a Control Area Network may have a much stricter policy in order to preserve a physical process. In this work we utilise a Deep Reinforcement Learning approach that allows the customisation of Reward Functions, which in turn facilitates the creation of Response Profiles aligned with differing Incident Response Policies. Evaluation of the Profiles is performed in a Cyber-Physical System testbed consisting of Web and Business local area networks configured using Mininet and integrated with a Tennessee Eastman Process plant running in MATLAB. Experimentation demonstrates the ability of a Reinforcement Learning agent to converge on near-optimal responses to multi-stage attack scenarios in accordance with its Response Profile.
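As a minimal sketch of how a Response Profile might parameterise a Reward Function, the snippet below weights per-objective scores by profile-specific coefficients; the class, objective names, and weight values are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class ResponseProfile:
    # Hypothetical objective weights (assumed to sum to 1.0);
    # the objective names are illustrative, not from the paper.
    w_availability: float    # keep services reachable
    w_containment: float     # isolate compromised hosts
    w_process_safety: float  # preserve the physical process

def reward(profile: ResponseProfile,
           availability: float,
           containment: float,
           process_safety: float) -> float:
    """Weighted sum of per-objective scores, each in [0, 1]."""
    return (profile.w_availability * availability
            + profile.w_containment * containment
            + profile.w_process_safety * process_safety)

# A stricter Control Area Network profile weights process safety heavily,
# while a Web network profile favours service availability.
can_profile = ResponseProfile(0.1, 0.2, 0.7)
web_profile = ResponseProfile(0.6, 0.3, 0.1)
```

Under this kind of scheme the learning algorithm itself is unchanged; swapping the profile changes which responses the agent converges towards, e.g. the CAN profile rewards actions that sacrifice availability to protect the physical process.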
Host publication: IEEE International Conference on Cyber Security and Resilience (IEEE CSR)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Published: 16 Aug 2022