Deep reinforcement learning for building honeypots against runtime DoS attack
International Journal of Intelligent Systems (IF 8.709), Pub Date: 2021-10-01, DOI: 10.1002/int.22708
Selvakumar Veluchamy, Ruba Soundar Kathavarayan

A honeypot is a decoy network environment used to protect legitimate network resources from attack: it presents an attractive target so that attackers direct their operations at the decoy rather than at real assets. A Denial of Service (DoS) attack is a malicious act aimed at interrupting access to a computer network; it can cause the computers on the network to squander their resources serving illegitimate requests, disrupting the network's services to legitimate users. Existing techniques struggle to detect such attacks reliably at runtime, and the proposed honeypot method is designed to overcome these challenges. In this manuscript, Deep Adaptive Reinforcement Learning for Honeypots (DARLH) is proposed. Within the honeypot environment, the DARLH system implements Deep Adaptive Reinforcement Learning (DARL) with Intrusion Detection System (IDS) agents and a Deep Recurrent Neural Network (DRNN) with an IDS agent for observing multiple runtime DoS attacks. At the next level, the system integrates the DRNN and DARL IDS agents into modules for effective runtime attack detection. The Knowledge Discovery in Databases (KDD) data set, the UNSW-NB20 data set, and the Bot-IoT data set are used to construct the DoS attack scenarios. The method is implemented in Python 3.7. The experimental outcomes are compared with those of existing methods, such as Game and Naïve-Bayes Honeypot, Block Chain Honeypot, and Recurrent Neural Network-based Signature Generation and Detection, under external DoS, internal DoS, brute-force, DoS, web, and botnet attacks. From this comparison, the proposed method offers 5%–10% better outcomes than the existing methods, and the test results confirm that its performance is more efficient than that of the existing systems.
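To make the reinforcement-learning idea behind the abstract concrete, the following is a minimal, illustrative sketch only, not the authors' DARLH implementation: a toy tabular Q-learning IDS agent that learns whether to allow traffic or redirect it to a honeypot. The environment, states, actions, and reward scheme are all simplified assumptions; the actual paper uses deep adaptive RL combined with a deep recurrent neural network over real intrusion data sets.

```python
import random

# Toy setting: each step, incoming traffic is labeled "normal" or "dos".
# The agent (a stand-in for the paper's IDS agent) chooses to "allow"
# the traffic or "redirect_to_honeypot". It earns +1 for redirecting
# DoS traffic or allowing normal traffic, and -1 otherwise.
STATES = ["normal", "dos"]
ACTIONS = ["allow", "redirect_to_honeypot"]

def reward(state, action):
    good = (state == "dos" and action == "redirect_to_honeypot") or \
           (state == "normal" and action == "allow")
    return 1.0 if good else -1.0

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)              # simulated traffic label
        if rng.random() < epsilon:          # explore
            a = rng.choice(ACTIONS)
        else:                               # exploit current estimate
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        # one-step (bandit-style) Q update; no successor state here
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

if __name__ == "__main__":
    q = train()
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
    print(policy)  # learned policy: redirect DoS, allow normal traffic
```

The sketch uses a one-step (bandit-style) update rather than a full Bellman backup, since the toy environment has no state transitions; the paper's deep adaptive RL agent would instead learn over sequences of observations via the DRNN.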