## ARM IN GROWING WORLD OF CYBER-ATTACKS:

## UNSUPERVISED MACHINE LEARNING — PART 2

Hello, wonderful folks. If you haven’t already, please read our blog on Clustering. Today, we’ll talk about a growing concept in data science and machine learning that can help us tackle problems a basic model with a basic method cannot. So let’s go over some of the issues that association rule mining can help us solve to make our lives easier. ARM helps us discover connections and co-occurrences among large data sets, which aids us in analysing them more efficiently for customer analytics, market basket analysis, product clustering, catalogue design, and store layout.

## ASSOCIATION RULE MINING:

It is one of the many intriguing methods available to us in unsupervised machine learning. It enables us to uncover the relationships hidden in enormous datasets and determine how frequently a specific item appears in a transaction. Although I’m sure you already have a sufficient number of instances in your head after reading this definition, allow me to list a few for the sake of my own satisfaction.

For example: supermarket product analysis, e-commerce websites, social media sites, and lastly, recommendation engines.

## Let’s look at it through the lens of the if-then rule

If (x) and (y) and (z), then A:

- x, y, z: Antecedent (describes a condition)
- A: Consequent (describes the result of the condition)
- Length of a rule: Number of antecedents
- Item set: All items {x, y, z, A}

ARM looks only for co-occurrence (or association) among features (columns) of a dataset. It neither signifies causation nor correlation.

▪ X → Y does not mean X causes Y.

▪ X → Y can be different from Y → X, unlike correlation.

▪ For example, Beer → Men involves no causality. And “If Beer, then Men” does not mean “If Men, then Beer”. This leads to the possibility of many rules.

SPEAKING OF MANY RULES, HOW MANY RULES CAN THERE BE if the number of items becomes very large?

## GENERAL FORMULA TO KEEP IN MIND IF DATASET IS LARGE:

## R = 3^d − 2^(d+1) + 1

- Where R is the number of possible association rules and ‘d’ is the number of items.
- The number of possible association rules explodes exponentially as the number of items grows.
- Many of these rules are meaningless: a rule like HDTV → Voltage Stabiliser makes sense,
- but Voltage Stabiliser → HDTV doesn’t.
- Being an unsupervised approach, no models are built and no error can be calculated. So how can we evaluate these rules and avoid meaningless or irrelevant ones?
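To see how quickly the rule count blows up, here is a minimal Python sketch of the formula above (the item counts chosen are just for illustration):

```python
# R = 3^d - 2^(d+1) + 1: number of possible association rules over d items
def rule_count(d: int) -> int:
    return 3 ** d - 2 ** (d + 1) + 1

for d in (2, 5, 10, 20):
    print(f"d = {d:2d} -> {rule_count(d):,} possible rules")
```

With just 20 items there are already over 3 billion candidate rules, which is exactly why we need metrics to prune them.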

Before implementing association rule mining, let’s look at some key terminology.

## TERMINOLOGIES IN ARM:

Support Count, Frequent item set, and Association rule are the three key concepts to understand.

1. **Support Count** is just another way of saying how often the data items occur.

2. An **association rule** is an if-then implication of the form X → Y between two itemsets.

3. A **frequent item** **set** is one whose support count is greater than or equal to a minimum support threshold.

There are several metrics to compute the strength of the connection; the following are the top 3:

## SUPPORT :

→ As we saw above, the support count tells us how often a data item occurs, which simply means we can calculate how often a particular item appears across the transactions.

## SUPPORT (A ⇒ B) = P(A ∪ B)

## Let us evaluate the rule: if CCAvg is medium, then loan = accept

- Support is the fraction of transactions in which an itemset occurs, i.e., where both antecedent and consequent occur together.
- Support, s( X → Y) = 3/13 = 23%.
- Which means this rule is applicable to 23% of the dataset.
- Recall this is like the Joint Probability, i.e., P(X AND Y) OR P(ITEMSET)
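Support is easy to compute by hand or in code. Here is a small sketch using a hypothetical four-transaction basket dataset (not the blog’s loan data):

```python
# Hypothetical mini market-basket dataset
transactions = [
    {"milk", "bread"},
    {"milk", "beer"},
    {"bread", "beer"},
    {"milk", "bread", "beer"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

print(support({"milk", "bread"}, transactions))  # 2 of 4 transactions -> 0.5
```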

## CONFIDENCE:

→ Suppose we have four products. Because the human brain quickly recognises silly things, let’s give it a lame name. This leaves us with PRODUCT-A, PRODUCT-B, PRODUCT-C, and PRODUCT-D. Now, confidence enables us to determine how probable it is that PRODUCT-D will be acquired at the same time as PRODUCT-A. In essence, it enables us to connect two things that are probably going to be bought together.

## Let us evaluate the rule: if CCAvg is medium, then loan = accept

Confidence is the fraction of transactions containing the antecedent items in which the consequent items also occur, i.e., how often the consequent shows up given the antecedent is already in the “cart or basket”.

- Confidence, c(X → Y) = 3/3 = 100%, i.e., the loan is accepted every time the CCAvg is Medium.
- Recall this is like the Conditional Probability, i.e.,

P(Y|X) = P(X and Y) / P(X) = Support / P(antecedent)
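The same idea in code: confidence is the support of the full itemset divided by the support of the antecedent. The basket data below is hypothetical:

```python
# Hypothetical mini market-basket dataset
transactions = [
    {"milk", "bread"},
    {"milk", "beer"},
    {"bread", "beer"},
    {"milk", "bread", "beer"},
]

def support(itemset):
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    # c(A -> B) = support(A and B) / support(A)
    return support(set(antecedent) | set(consequent)) / support(antecedent)

print(confidence({"milk"}, {"bread"}))  # (2/4) / (3/4) = 2/3
```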

## LIFT IN ASSOCIATION RULES:

## Let us evaluate the rule: if pregnant, then female

Let’s first talk about Support and try to evaluate the above rule:

**→ SUPPORT = s(pregnant → female) = 7/13 = 54%**

Support tells us what percentage of the total records have both pregnant and female (Scientifically it sounds stupid but just take it as a lame example).

Talking about Confidence, it goes like:

**→ CONFIDENCE = c(pregnant → female) = 7/7 = 100%**

Confidence tells us the posterior probability of being female given that she is pregnant i.e P(female|pregnant).

## CONCLUSION:

- Now, we can notice from the above scenario that we have a high Confidence compared to the Support. But consider one case: what if the prior probability of being female itself is very high, i.e., P(female)?
- Then the above rule is not really adding much value (scientifically, and now statistically too :D).
- So we have this constant curiosity of knowing how much LIFT the rule gives as compared to the prior knowledge.

## 3. LIFT:

## CONSIDER THE SAME DATASET ABOVE

So, with the idea of LIFT in mind, let’s assess the rule: If pregnant, then Female.

**LIFT, l(pregnant → female) = Confidence / p(consequent)**

**= P(female|pregnant) / P(female)**

**= (7/7) / (10/13)**

**= 1.3**

**Which can also be written as Support / (P(antecedent) × P(consequent))**

which is equivalent to the formula P(X and Y) / (P(X) × P(Y))

This can, therefore, also be interpreted as the ratio of the probability of all the items in a rule occurring together to the probability of them occurring together if there were no association between them.

Recall the Independence rule of probability. In case you want to revise it:

## INDEPENDENCE RULE OF PROBABILITY :

In probability, two events are said to be independent if knowing that one event occurred has no effect on the likelihood of the other occurring. For example, the likelihood of a fair coin showing “heads” on any flip is 1/2, regardless of what the previous flips showed.

## SOME OF THE PRINCIPLES OF LIFT TO BE NOTED

**1. Lift = 1:** X and Y are independent and there is no association between them.

**2. Lift > 1**: X and Y occur together more often than expected if there were no association between them; i.e., they are positively correlated.

**3. Lift < 1:** X and Y occur together less often than expected if there were no association between them; i.e., they are negatively correlated.
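We can reproduce the blog’s pregnant → female numbers in a few lines (13 records, 10 female, 7 pregnant, and every pregnant record is female):

```python
# Counts taken from the example above
n, n_pregnant, n_female, n_both = 13, 7, 10, 7

support_rule = n_both / n                # 7/13 ≈ 0.54
confidence_rule = n_both / n_pregnant    # 7/7 = 1.0
lift = confidence_rule / (n_female / n)  # 1.0 / (10/13) = 1.3

print(round(support_rule, 2), confidence_rule, round(lift, 1))
```

A lift of 1.3 > 1 tells us that pregnant and female co-occur more often than they would if the two were independent.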

## APRIORI ALGORITHM → ASSOCIATION RULE MINING:

**GENERATING ASSOCIATION RULES:**

This is a very fun, interesting, and easy-to-learn concept, where we follow a two-step process for generating the association rules:

**1. FREQUENT ITEM SETS GENERATION:** Finding all the item sets that satisfy the minsup threshold.

**2. RULE GENERATION:** Extract all the high confidence rules from the above frequent item sets.

Computational requirements for the 1st step are generally more expensive than those for the 2nd step. If there are ‘k’ items, then there are 2^k − 1 potential frequent itemsets.


## THE APRIORI PRINCIPLE:

1. If an item set is frequent, then all of its subsets must also be frequent.

2. Conversely, if an itemset is infrequent, then all of its supersets must also be infrequent.

So, if {Pizza, Milk} is infrequent, all other itemsets containing both these items will also be infrequent as support for an itemset never exceeds support for its subsets.

## APRIORI ALGORITHM EXAMPLE BASED ON REAL MARKET CASE:

Let us assume a Support Threshold, minsup of 60%, i.e., a minimum support count of 3 out of 5 transactions.

The algorithm scans the database and lists candidate 1-itemsets, C1. It then filters C1 to identify frequent 1-itemsets, F1, based on minsup.
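A minimal level-wise Apriori sketch, using a hypothetical five-transaction dataset (the blog’s own table is not reproduced here) with minsup = 60%, i.e., a support count of at least 3:

```python
# Hypothetical five-transaction market dataset
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_count = 3  # minsup of 60% over 5 transactions

def count(itemset):
    return sum(1 for t in transactions if itemset <= t)

# Pass 1: candidate 1-itemsets C1, filtered to frequent 1-itemsets F1
level = {frozenset([item]) for t in transactions for item in t}
level = {s for s in level if count(s) >= min_count}

frequent = set()
k = 1
while level:
    frequent |= level
    k += 1
    # Join step: union frequent (k-1)-itemsets into k-itemset candidates
    candidates = {a | b for a in level for b in level if len(a | b) == k}
    # Prune step (Apriori principle): every (k-1)-subset must be frequent,
    # then keep only candidates meeting the minimum support count
    level = {c for c in candidates
             if all(c - {i} in frequent for i in c) and count(c) >= min_count}

for s in sorted(frequent, key=lambda s: (len(s), sorted(s))):
    print(sorted(s), count(s))
```

On this data the algorithm finds four frequent 1-itemsets and four frequent 2-itemsets, and no 3-itemset survives the pruning.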

## THERE ARE ALWAYS SOME LIMITATIONS WHEN WE TALK ABOUT THE ALGORITHMS:

In the field of machine learning algorithms, the Apriori algorithm is exceptionally strong, simple, elegant, and powerful. However, with tremendous power comes great responsibility, so here are certain LIMITATIONS we must be aware of:

1. Requires large memory space to store the exponentially increasing candidates generated with increasing items.

2. Scans the database multiple times causing computational complexity.

## FREQUENT PATTERN GROWTH ALGORITHM → ASSOCIATION RULE MINING:

So far, we have seen that the Apriori algorithm uses the generate-and-test approach; the FP-Growth algorithm, in contrast, uses a different approach for mining frequent itemsets.

## SO EXACTLY WHAT APPROACH?

1. **The FP-Growth algorithm** encodes the dataset using a compact data structure called FP-Tree and extracts frequent item sets directly from this tree without the need for generating candidate item sets.

2. The dataset is scanned only twice: once to determine the support count of each item, and once to construct the FP-Tree by running through each transaction.

## LET’S UNDERSTAND IT THROUGH A SIMPLE EXAMPLE:
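Here is a compact sketch of the two scans on hypothetical data: the first scan counts item supports, and the second inserts each transaction (items ordered by descending support) into a nested-dict FP-Tree. This is only the tree-building half, not a full FP-Growth miner:

```python
from collections import Counter

# Hypothetical transactions
transactions = [
    ["bread", "milk"],
    ["bread", "diapers", "beer"],
    ["milk", "diapers", "beer"],
    ["bread", "milk", "diapers"],
]
min_count = 2

# Scan 1: support count of every item, dropping infrequent ones
counts = Counter(item for t in transactions for item in t)
order = {item: c for item, c in counts.items() if c >= min_count}

# Scan 2: insert transactions, items sorted by descending support
root = {}  # each node maps item -> [count, children]
for t in transactions:
    items = sorted((i for i in t if i in order), key=lambda i: (-order[i], i))
    node = root
    for item in items:
        entry = node.setdefault(item, [0, {}])
        entry[0] += 1       # shared prefixes just bump the count
        node = entry[1]     # descend into the child subtree

print(root)
```

Because transactions share sorted prefixes, common paths are stored once with a counter; that compression is exactly what lets FP-Growth skip candidate generation.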

## FP-GROWTH ALGORITHM → LIMITATIONS:

1. The FP-Tree is more cumbersome and expensive to build.

2. The FP-Tree may not fit in memory for very large datasets.

## IS ASSOCIATION RULE MINING HELPFUL IN CYBER SECURITY?

## A BIG YES !

Living in our fast-paced technological world, we now encounter cybercrime so regularly that it has become nearly impossible to avoid entirely. On the bright side, we have evolved so much in the field of technology that we now have many gems who have professional expertise in spotting potential security threats, infiltration behaviours in cloud computing platforms, and much more. But the trouble is, we don’t have enough of these professionals accessible 24/7 to help us with the expanding range of cyber attacks. We do, however, have technology that can help us achieve things we’ve never dreamed of.

According to recent research, adversaries have advanced their tactics, techniques, and procedures (TTPs) for conducting cyberattacks, making them less predictable, more persistent, inventive, and better funded. Many organisations have chosen to incorporate Cyber Threat Intelligence (CTI) in their security posture to properly attribute cyberattacks. Nevertheless, in order to properly exploit the tremendous amount of information in CTI for threat attribution, an organisation must focus on identifying the special insights underlying the copious data in order to achieve successful cyberattack identification.

Association rule mining assists us in developing an association ruleset for use in the CTI’s attribution procedure. In the CTI, the Apriori algorithm is utilised to create association rulesets throughout the association analysis phase. In order to quantify the scalability, accuracy, and efficiency of the algorithm, indicators such as support (s), confidence (c), and lift (l) are utilised. According to the findings, ASSOCIATION RULE MINING efficiently identifies the qualities, relationships among features, and identification-level group of cyberattacks in CTI. This analysis has the potential to be developed into a cyber threat hunting process, resulting in a more preventive cybersecurity culture.

## REFERENCE OF THE ABOVE RESEARCH PAPER STUDY:

https://thesai.org/Publications/ViewPaper?Volume=12&Issue=4&Code=IJACSA&SerialNo=18


## FOLLOW US FOR THE SAME FUN TO LEARN DATA SCIENCE BLOGS AND ARTICLES:💙

**LINKEDIN:** https://www.linkedin.com/company/dsmcs/

**INSTAGRAM:** https://www.instagram.com/datasciencemeetscybersecurity/?hl=en

**GITHUB:** https://github.com/Vidhi1290

**TWITTER:** https://twitter.com/VidhiWaghela

**MEDIUM:** https://medium.com/@datasciencemeetscybersecurity-

**WEBSITE:** https://www.datasciencemeetscybersecurity.com/