Friday, November 29, 2019

WHAT ARTIFICIAL INTELLIGENCE CANNOT DO, a grim note to the top 100 intellectuals of this planet, Part 7 - Capt Ajit Vadakayil



THIS POST IS CONTINUED FROM PART 6, BELOW--





Local Interpretable Model-Agnostic Explanations (LIME) is an algorithm that provides a novel technique for explaining the outcome of any predictive model in an interpretable and faithful manner.

It works by training an interpretable model locally around a prediction you want to explain.
To better understand how LIME works, let's consider two distinct types of interpretability:

Global interpretability: Global interpretations help us understand the entire conditional distribution modeled by the trained response function, but global interpretations can be approximate or based on averages.

Local interpretability: Local interpretations promote understanding of a single data point or of a small region of the distribution, such as a cluster of input records and their corresponding predictions, or decile of predictions and their corresponding input rows. 

Because small sections of the conditional distribution are more likely to be linear, local explanations can be more accurate than global explanations.

LIME is designed to provide local interpretability, so it is most accurate for a specific decision or result.

Locally faithful explanations capture the classifier behavior in the neighborhood of the instance to be explained. To learn a local explanation, LIME approximates the classifier's decision boundary around a specific instance using an interpretable model. 

LIME is model-agnostic, which means it considers the model to be a black-box and makes no assumptions about the model behavior. This makes LIME applicable to any predictive model.

In order to learn the behavior of the underlying model, LIME perturbs the inputs and sees how the predictions change. The key intuition behind LIME is that it is much easier to approximate a black-box model by a simple model locally than by a single global model.

Most Machine Learning algorithms are black boxes, but LIME has a bold value proposition: explain the results of any predictive model. The tool can explain models trained with text, categorical, or continuous data.

While the techniques above offer practical steps that data scientists can take, LIME is an actual method developed by researchers to gain greater transparency on what’s happening inside an algorithm. The researchers explain that LIME can explain “the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.”

What this means in practice is that the LIME model develops an approximation of the model by testing it out to see what happens when certain aspects within the model are changed. Essentially it’s about trying to recreate the output from the same input through a process of experimentation.
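To make this concrete, here is a minimal sketch of how LIME is typically used on tabular data with the open-source lime package. The random-forest classifier and the iris dataset below are stand-ins for whatever black-box model and data you actually want to explain, so treat it as an illustration rather than a prescription.

```python
# Minimal LIME sketch (assumes the open-source `lime` and scikit-learn packages).
# The random-forest "black box" and the iris data are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME perturbs the chosen instance, queries the black box on the perturbations,
# and fits a weighted linear (interpretable) model in that local neighbourhood.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

instance = X[25]
explanation = explainer.explain_instance(instance, black_box.predict_proba, num_features=4)

# Each tuple is (feature condition, local weight) for the explained prediction.
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule}: {weight:+.3f}")
```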





As the ‘AI era’ of increasingly complex, smart, autonomous, big-data-based tech comes upon us, the algorithms that fuel it are getting under more and more scrutiny.

Whether you’re a data scientist or not, it becomes obvious that the inner workings of machine learning, deep learning, and black-box neural networks are not exactly transparent.

In the wake of high-profile news reports concerning user data breaches, leaks, violations, and biased algorithms, that is rapidly becoming one of the biggest — if not the biggest — sources of problems on the way to mass AI integration in both the public and private sectors.

Here’s where the push for better AI interpretability and explainability takes root.

By now, far more justifiable apprehensions, grounded in socio-economic reality, have taken root in the public consciousness:--

● When AI is making judgements and appraising risks, why and how does it come to the conclusions it presents?
● What is considered failure and success? Why?
● If there’s an error or a biased logic, how do we know?
● How do we identify and fix such issues?
● Are we sure we can trust AI?


These are the questions that need to be answered in order to be able to rely on AI, and be sure about its accountability. Here's where AI interpretability and explainability come into play.

AI Interpretability vs Explainability

Interpretability is about the extent to which a cause and effect can be observed within a system. Or, to put it another way, it is the extent to which you are able to predict what is going to happen, given a change in input or algorithmic parameters. It’s being able to look at an algorithm and go yes-- I can see what’s happening here.

Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms. It’s easy to miss the subtle difference with interpretability, but consider it like this: interpretability is about being able to discern the mechanics without necessarily knowing why. Explainability is being able to quite literally explain what is happening.

Where machine learning and AI is concerned, “interpretability” and “explainability” are often used interchangeably, though it’s not correct for 100% of situations. While closely related, these terms denote different aspects of predictability and understanding one can have of complex systems, algorithms, and vast sets of data. See below:--

● Interpretability refers to the ability to observe cause-and-effect situations in a system, and, essentially, predict which changes will cause what type of shifts in the results (without necessarily understanding the nitty-gritty of it all).
● Explainability is basically the ability to understand and explain ‘in human terms’ what is happening with the model; how exactly it works under the hood.


The difference is subtle, but it is there. While the two usually co-exist, some situations might require one and not the other: for example, explaining the logic behind a predictive model to the higher-ups of the banking or pharmaceutical industry, or demonstrating the measures taken to minimize or eliminate the possibility of bias in the risk assessment models they rely on for legal and compliance purposes.



Important Properties Of Explainability
Portability: The range of machine learning models with which the explanation method can be used.
Expressive Power: The structure of the explanations that a method is able to generate.
Translucency: How much the explanation method relies on looking into the machine learning model itself. Low-translucency methods tend to have higher portability.
Algorithmic Complexity: The computational complexity of the method that generates the explanations.

Fidelity: High fidelity is one of the most important properties of an explanation, since a low-fidelity explanation fails to describe how the machine learning model actually behaves.



Interpretability
Interpretability is the degree to which a human can consistently predict a model's result without needing to know the reasoning behind the scenes. The higher the interpretability of a machine learning model, the easier it is to understand why certain decisions or predictions were made.

Evaluation Of Interpretability
Application Level Evaluation: This is the real task. It means putting the explanation into the product and having the end user test it.
Human Level Evaluation: This is a simplified application-level evaluation in which the experiments are carried out by laypersons, which makes the experiments cheaper and testers easier to find.
Function Level Evaluation: This approach, also known as a proxy task, requires no humans; it works best when the class of model being used has already been evaluated elsewhere in a human-level evaluation.

Understanding The Difference
You can see the difference with a simple example. Consider a school student performing a titration experiment: being able to predict what will happen at each step until the outcome appears is interpretability, while being able to explain the chemistry behind why it happens is explainability.



Black box AI systems for automated decision making, often based on machine learning over big data, map a user's features into a class predicting the behavioural traits of individuals, such as credit risk, health status, etc., without exposing the reasons why.

Explainable AI (XAI) is an emerging field in machine learning that aims to address how black-box decisions of AI systems are made. One way to gain explainability in AI systems is to use machine learning algorithms that are inherently explainable.




Why Does Machine Learning Need to Be Explainable?

Being able to present and explain extremely complex mathematical functions behind predictive models in understandable terms to human beings is an increasingly necessary condition for real-world AI applications.

As algorithms become more complicated, fears of undetected bias, mistakes, and miscomprehensions creeping into decision-making grow among policymakers, regulators, and the general public. 

In such an environment, interpretability and explainability are crucial for achieving fair, accountable and transparent (FAT) machine learning, complying with the needs and standards for:---

1. Business adoption
It is paramount for any business predictions to be easily explained to a boss, a customer, or a commercial legal adviser. Simply speaking, when any justification for an important business decision is reduced to “the algorithm made us do it,” you’ll have a hard time making anyone — be it investors, CEOs, CIOs, end customers, or legal auditors — buy the fairness, reliability, and business logic of this algorithm.

2. Regulatory oversight
Applying regulations, such as the GDPR, regional and local laws, to machine learning models can only be fully achieved with the FAT principles at the core. For example, Article 22 of the GDPR specifically states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

In turn, Articles 13 and 15 stress repeatedly that data subjects have a right to the disclosure of “meaningful information about the logic involved” and of “the significance and the envisaged consequences” of automated decision-making.

To make a GDPR-compliant AI is to make an interpretable, explainable AI. In the world of rapidly developing and spreading laws regarding data, that can soon mean “to make any compliant AI is to make an interpretable, explainable AI.”

3. Minimizing bias
The problem of algorithmic bias and the dangers it can harbor when allowed into machine learning systems are well-known and documented. While the main reason behind biased AI is the poor quality of data fed into it, the lack of transparency in the proceedings and, as a result, inability to quickly detect bias are among the key factors here, as well.

Imagine the times when interpretable and explainable AI becomes the norm. Then the ability to understand not only the fundamental techniques used in a model but also particular cause-and-effect ties found in those specific algorithms would allow for faster and better bias detection. This has the potential to eliminate the problem itself, or at least to allow for a much quicker and more effective solution to it, which is one of the main socio-economic reasons behind the current push for both fair and ethical AI.

4. Model documentation
Regardless of the type and scope of a software development project, probably no one has ever described documentation keeping as fun. Yet it must be done, and predictive models are no exception.
Where AI, machine learning, and especially black-box deep learning are concerned, in some cases this usually tedious task can become impossible altogether. 

Basically speaking, black-box modeling can be great for dealing with data regardless of a particular mathematical structure of the model, but if you need to document the specifics — be it for a commercial, educational, or other project — you're out of luck. Such a model would need to become both interpretable and explainable in order for efficient documentation to be created.

While questions of transparency and ethics may feel abstract for the data scientist on the ground, there are, in fact, a number of practical things that can be done to improve an algorithm’s interpretability and explainability.

When humans make decisions, they have the ability to explain the thought process behind them. They can explain the rationale, whether it's driven by observation, intuition, experience or logical thinking ability. Basic ML algorithms like decision trees can be explained by following the tree path which led to the decision. But when it comes to complex AI algorithms, the deep layers are often incomprehensible to human intuition and are quite opaque.

Data scientists may have trouble explaining why their algorithm reached a decision, and the lay end-user may not simply trust the machine's predictions without contextual proof and reasoning.

There are three requirements which should be fulfilled by the system :---

1) Explain the intent behind the system and how it affects the parties concerned

2) Explain the data sources you use and how you audit outcomes

3) Explain how inputs in a model lead to outputs.

Interpret means to explain or to present in understandable terms. In the context of ML systems, interpretability is the ability to explain, or to present in understandable terms, a model's behaviour to a human. Interpretable AI, or Transparent AI, refers to techniques in artificial intelligence (AI) which can be trusted and easily understood by humans.

It contrasts with the concept of the "black box" in machine learning, where even the designers cannot explain why the AI arrived at a specific decision. Interpretability is about the extent to which a cause and effect can be observed within a system. Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms.



Explainability is motivated by the lack of transparency of black-box approaches, which do not foster trust and acceptance of AI generally and ML specifically. Rising legal and privacy requirements, e.g. the new European General Data Protection Regulation, will make black-box approaches difficult to use in business, because they are often unable to explain why a machine decision has been made.

The neural networks employed by conventional AI must be trained on data, but they don't have to understand it the way humans do. They "see" data as a series of numbers, label those numbers based on how they were trained and solve problems using pattern recognition. When presented with data, a neural net asks itself if it has seen it before and, if so, how it labeled it previously.

In contrast, cognitive AI is based on concepts. A concept can be described at the strict relational level, or natural language components can be added that allow the AI to explain itself. A cognitive AI says to itself: “I have been educated to understand this kind of problem. You're presenting me with a set of features, so I need to manipulate those features relative to my education.”

The more data the model is given, the better it gets. So, unlike traditional data management and cleaning systems, machine learning algorithms improve with scale.

When it comes to powering particular functions, AI can do a large portion of the work for us. By deliberately making machine learning smarter about how it uses, rates and analyzes data, we can reduce coding hours as well as worry less about faulty data.

Machine learning methods are often based on neural networks, which can basically be seen as black boxes that turn input into output. Not being able to access the knowledge within the machine is a constant headache for developers, and many times for users as well.

Researchers are studying other significant variables, like how much the attacker actually knows about the AI system. For example, in what we call “white-box” attacks, the adversary knows the model and its features. In “gray-box” attacks, they don’t know the model, but do know the features. In “black-box” attacks, they know neither the model nor the features. 

Even in a black-box scenario, adversaries remain undaunted. They can persistently use brute-force attacks to break through and manipulate the AI malware classifier. This is an example of what is called “transferability”—the use of one model to trick another model.

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation.

Transparency rarely comes for free, and there are often trade-offs between the accuracy and the explainability of a solution.

The technical challenge of explaining AI decisions is sometimes known as the interpretability problem.  Another consideration is info-besity (overload of information), thus, full transparency may not be always possible or even required.

DeepLIFT (Deep Learning Important Features)

DeepLIFT is a useful technique in the particularly tricky area of deep learning. It works through a form of backpropagation: it takes the output, then attempts to pull it apart by ‘reading’ the various neurons that have gone into developing that original output.

Essentially, it’s a way of digging back into the feature selection inside of the algorithm (as the name indicates).

Layer-wise relevance propagation
Layer-wise relevance propagation is similar to DeepLIFT, in that it works backwards from the output, identifying the most relevant neurons within the neural network until you return to the input (say, for example, an image).


DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. 

DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass.
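To make the 'reference activation' idea concrete, here is a tiny hand-rolled sketch in plain NumPy (not the official deeplift implementation) of DeepLIFT's linear rule for a single dense neuron: each input's contribution is its difference from the reference input scaled by its weight, and the contributions sum exactly to the change in the neuron's output.

```python
# Hand-rolled illustration of DeepLIFT's linear rule for one dense neuron.
# This is a teaching sketch, not the official `deeplift` library.
import numpy as np

w = np.array([0.8, -1.5, 0.3])      # neuron weights
b = 0.1                             # neuron bias

x_ref = np.array([0.0, 0.0, 0.0])   # "reference" input (e.g. an all-zero baseline)
x = np.array([1.0, 2.0, -1.0])      # actual input we want to explain

y = w @ x + b                       # actual activation
y_ref = w @ x_ref + b               # reference activation

# Linear rule: contribution_i = w_i * (x_i - x_ref_i)
contributions = w * (x - x_ref)

print("activation difference :", y - y_ref)            # -2.5
print("sum of contributions  :", contributions.sum())  # -2.5 (matches exactly)
print("per-feature scores    :", contributions)        # [ 0.8 -3.0 -0.3]
```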

Interpretability is the degree to which a human can understand the cause of a decision

Boolean Decision Rules via Column Generation: This algorithm provides access to classes which implement a directly interpretable supervised learning method for binary classification, one that learns a Boolean rule in disjunctive normal form (DNF) or conjunctive normal form (CNF) using column generation (CG). For classification problems, Boolean Decision Rules tends to return simple models that can be quickly understood (a small illustrative sketch of a DNF rule appears after this list).

Generalised Linear Rule Models: Generalised Linear Rule Models are applicable for both classification and regression problems. For classification problems, Generalised Linear Rule Models can achieve higher accuracy while retaining the interpretability of a linear model. 

ProfWeight: This algorithm can be applied to neural networks in order to produce instance weights that can then be applied to the training data to learn an interpretable model.

Teaching AI to Explain Its Decisions: This algorithm is an explainability framework that leverages domain-relevant explanations in the training dataset to predict both labels and explanations for new instances.  

Contrastive Explanations Method: The basic version of this algorithm, for classification with numerical features, can be used to compute contrastive explanations for image and tabular data.

Contrastive Explanations Method with Monotonic Attribute Functions: This algorithm is a Contrastive Image explainer which leverages Monotonic Attribute Functions. The main idea behind this algorithm is to explain images using high level semantically meaningful attributes that may either be directly available or learned through supervised or unsupervised methods

Disentangled Inferred Prior Variational Auto-Encoder (DIP-VAE): This algorithm is an unsupervised representation learning algorithm which usually takes a given feature and learns a new representation in a disentangled manner in order to make the resulting features more understandable. 


ProtoDash: This algorithm is a way of understanding a dataset with the help of prototypes. It provides exemplar-based explanations for summarising a dataset as well as for explaining predictions made by an AI model. It employs a fast gradient-based algorithm to find prototypes along with their (non-negative) importance weights.
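For intuition on the first entry in the list above, a learned Boolean DNF rule is simply an OR of ANDs over feature conditions, which is why such models can be read directly. The sketch below uses invented loan-approval features and thresholds purely for illustration; it is not output from any particular rule learner.

```python
# Illustration only: a hypothetical DNF rule of the kind a Boolean rule
# learner might return, written as plain Python so it can be read directly.
def approve_loan(income: float, debt_ratio: float, late_payments: int) -> bool:
    clause_1 = (income > 50_000) and (debt_ratio < 0.35)     # AND-clause 1
    clause_2 = (late_payments == 0) and (debt_ratio < 0.50)  # AND-clause 2
    return clause_1 or clause_2                              # DNF = clause_1 OR clause_2

print(approve_loan(income=62_000, debt_ratio=0.30, late_payments=1))  # True  (clause 1 fires)
print(approve_loan(income=30_000, debt_ratio=0.45, late_payments=0))  # True  (clause 2 fires)
print(approve_loan(income=30_000, debt_ratio=0.60, late_payments=2))  # False (no clause fires)
```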


Explainability may not be very important when you are classifying images of cats and dogs – but as ML models are used for more extensive and critical problems, XAI becomes extremely important. If an ML model is predicting the presence of a disease like diabetes from a patient’s test results, doctors need substantial evidence as to why the decision was made before suggesting any treatment.


Currently, AI models are evaluated using metrics such as accuracy or F1 score on validation data. Real-world data may come from a slightly different distribution than training data, and the evaluation metric may be unjustifiable. Hence, the explanation, along with a prediction, can transform an untrustworthy model into a trustworthy one.
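For reference, this is how those validation metrics are usually computed with scikit-learn; the labels and predictions below are made up purely for illustration.

```python
# Computing the evaluation metrics mentioned above (scikit-learn).
# y_true / y_pred are invented labels purely for illustration.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("accuracy:", accuracy_score(y_true, y_pred))  # fraction of correct predictions (0.8)
print("F1 score:", f1_score(y_true, y_pred))        # harmonic mean of precision and recall (0.8)
```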


There are three crucial building blocks to develop an explainable AI system:--

Explanation interface
The explanation generated by the explainable model should be shown to humans in human-understandable formats. There are many state-of-the-art human-computer interaction techniques available to generate compelling explanations. Data visualization models, natural language understanding and generation, conversational systems, etc. can be used for the interface.

Psychological model of explanation--

Humans take most decisions unconsciously, without having any explanation for them. Hence, psychological theories can help developers as well as evaluators. More powerful explanations can be generated by taking psychological requirements into account. For example, a user can rate the clarity of a generated explanation, which helps gauge user satisfaction, and the model can be continuously improved based on those ratings.

Explainability can be a mediator between AI and society. It is also a useful tool for identifying issues in ML models, artifacts in the training data and biases in the model, for improving the model, for verifying results, and most importantly for obtaining an explanation. Even though explainable AI is complex, it will be one of the most active research areas in the future.

Distrust, unfairness, bias and ethical ramifications of automated ML decisions are now increasingly common.

Imagine an advanced fighter aircraft is patrolling a hostile conflict area and a bogie suddenly appears on radar accelerating aggressively at them. The pilot, with the assistance of an Artificial Intelligence co-pilot, has a fraction of a second to decide what action to take – ignore, avoid, flee, bluff, or attack.  
The costs associated with False Positive and False Negative are substantial – a wrong decision that could potentially provoke a war or lead to the death of the pilot.  What is one to do…and why?

A false positive state is when the IDS identifies an activity as an attack but the activity is acceptable behavior. A false positive is a false alarm. A false negative state is the most serious and dangerous state. This is when the IDS identifies an activity as acceptable when the activity is actually an attack.
A false positive is an outcome where the model incorrectly predicts the positive class. And a false negative is an outcome where the model incorrectly predicts the negative class.

In application security testing, false positives alone don’t determine the full accuracy. False positives are just one of the four aspects that determine its accuracy – the other three being ‘true positives,’ ‘true negatives,’ and ‘false negatives.’

False Positives (FP): Tests with fake vulnerabilities that were incorrectly reported as vulnerable

True Positives (TP): Tests with real vulnerabilities that were correctly reported as vulnerable

False Negatives (FN): Tests with real vulnerabilities that were not correctly reported as vulnerable

True Negatives (TN): Tests with fake vulnerabilities that were correctly not reported as vulnerable

Therefore, a true positive rate (TPR) is the rate at which real vulnerabilities were reported, correctly. A false positive rate (FPR) is the rate at which fake vulnerabilities were reported as real, incorrectly.
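A small self-contained sketch of how those four counts and the two rates fall out of a confusion matrix; the scanner results below are invented for illustration.

```python
# Confusion-matrix counts and the rates defined above (scikit-learn).
# 1 = real vulnerability ("positive"), 0 = fake; labels are invented.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # ground truth: real vs. fake vulnerabilities
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]   # what the scanner reported

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)   # true positive rate: real vulnerabilities correctly reported
fpr = fp / (fp + tn)   # false positive rate: fake vulnerabilities reported as real

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")   # TP=3 FP=2 FN=1 TN=4
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")      # TPR=0.75  FPR=0.33
```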

Explainable Artificial Intelligence (XAI) is critical for physicians, engineers, technicians, physicists, chemists, scientists and other specialists whose work is governed by the exactness of the model’s results, and who simply must understand and trust the models and modeling results. XAI is a legal mandate in regulated verticals such as banking, insurance, telecommunications and others. For AI to take hold in healthcare, it has to be explainable.

There is no mature auditing framework in place for AI, nor any AI-specific regulations, standards or mandates. Precedents don’t exist. Auditability, explainability, transparency and replicability (reproducibility) are often suggested as means of avoiding bias.

Explainability is intrinsically challenging because explanations are often incomplete because they omit things that cannot be explained understandably. Algorithms are inherently challenging to explain. Take, for instance, algorithms using “ensemble” methodologies. Explaining how one model works is hard enough. Explaining how several models work both individually and together is exponentially more difficult.

Transparency is usually a good thing. However, if it requires disclosing source code or the engineering details underpinning an AI application, it could raise intellectual property concerns. And again, transparency about something that may be unexplainable in laymen’s terms would be of limited use.




Many AI algorithms are really black boxes: partially, or not understood both by those who create them and those who interact with them. Obviously, this is problematic: there are risks both for the companies and organizations that deploy these AIs, and the people who interact with them. More explainable AIs seem to be in everyone's best interests. Nevertheless, good intentions and practice often clash: there are real, pragmatic reasons why many AIs are not engineered in such a way that they are easily explained.

A model can be a black box for one of two reasons: (a) the function that the model computes is far too complicated for any human to comprehend, or (b) the model may in actual fact be simple, but its details are proprietary and not available for inspection.

Machine learning is a subset of Artificial Intelligence (AI) that focuses on getting machines to make decisions by feeding them data.


Users need to know the “whys” behind the workings, such as why an algorithm reached its recommendations—from making factual findings with legal repercussions to arriving at business decisions, such as lending, that have regulatory repercussions—and why certain factors (and not others) were so critical in a given instance.

As domains like healthcare look to deploy artificial intelligence and deep learning systems, where questions of accountability and transparency are particularly important, if we’re unable to properly deliver improved interpretability, and ultimately explainability, in our algorithms, we’ll seriously be limiting the potential impact of artificial intelligence.

There are 8 underlying reasons why an AI solution can become hard or impossible to explain.
  
Reason 1: The way data is generated is not understood
The base resource that machine learning engineers work with is data. However, the exact meaning and source of this data is often nebulous, and prone to misinterpretation. Data might come from a CRM, be self-reported and collected through a survey, purchased from a third-party provider, ... To make matters worse, machine learning engineers often only have a label to work with, and no further details. For example, we could have a dataset that contains a user for each row, and one column named post_count. A seasoned machine learning engineer will immediately start asking questions: count of posts since when? Does this include deleted posts? What is the exact definition of a post? Sadly, while answering this for a single column is often doable (but resource-intensive), answering it for thousands of columns is both extremely time-consuming and complex.

This brings us to our second underlying reason...

Reason 2: The data given to an algorithm is feature-rich
In a quest to have more predictive power, and thanks to the ever growing computational power of our computers, most machine learning practitioners tend to work with very large, very feature-rich datasets. By feature-rich, we mean that for every observation (e.g. a person whose personality we want to predict, our row in the previous example), we have many different types of data (e.g. timestamped posts, their interactions with other users, their signup date, ..., our columns in the previous example). It's quite common to have thousands (and many, many more) different types of data in many machine learning problems.

Reason 3: The way data is processed is complex
Machine learning engineers often don't just take the data as-is and feed it to an algorithm; they process it ahead of time. Data can be enriched (creating additional data types from existing ones, such as turning a date into another variable that says whether it's a national holiday or not), combined (such as reducing the output of many sensors to just a few signals) and linked (by getting data from other data sources). Each of these operations brings additional complexity in understanding and explaining the base data the algorithm is learning from.

Reason 4: The way additional training data is generated (augmentation) is complex
Many use cases of machine learning allow for the generation of additional training data, called augmentation. However, these generative approaches to getting more and better training data can often be complex, and modify the learnings of the algorithm in subtle, unintuitive ways.

Reason 5: The algorithms that are used don't balance complexity and explanatory power (regularization)
It's often difficult to balance the predictive power of a model against its complexity. Luckily, there is a slew of techniques available today that do just that for machine learning engineers, called "regularization" techniques. These techniques weigh the cost of adding complexity against the additional explanatory power that this complexity brings, and attempt to strike a good balance. The under- or mis-application of regularization can lead to very, very complex models.

Reason 6: The algorithms that are used are allowed to learn unintuitive relationships (non-linearity)
Linear relationships are ones where an increase in one variable causes a set increase (or decrease) in another variable. For example, the relationship between signups to a new service and profits could be linear: for every new signup, your profit increases by a set amount. Some machine learning models can only learn linear relationships (such as the aptly named "linear regression"). These models tend to be easier to explain, but also miss out on a lot of nuance. For example, your profits might initially increase with every signup, but then decrease after a certain number of signups, because you need additional support staff for your service. While some models can learn these relationships, they are often much trickier to explain.
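The sketch below illustrates the point with synthetic signup/profit numbers (an assumption for illustration, using scikit-learn): a linear model cannot capture a profit curve that first rises and then falls, while a small tree-based model can, at the cost of being harder to explain.

```python
# Linear vs. non-linear fit on a made-up "profit vs. signups" curve.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

signups = np.arange(0, 200).reshape(-1, 1)
# Profit rises with signups, then falls once extra support staff are needed.
profit = 50 * signups.ravel() - 0.3 * signups.ravel() ** 2

linear = LinearRegression().fit(signups, profit)
tree = DecisionTreeRegressor(max_depth=4).fit(signups, profit)

x_new = np.array([[20], [100], [180]])
print("true profit :", profit[[20, 100, 180]])           # rises, peaks, then falls
print("linear model:", linear.predict(x_new).round(0))   # a straight line misses the downturn
print("tree model  :", tree.predict(x_new).round(0))     # captures the rise and the fall
```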

Reason 7: The algorithms that are used are combined (ensembling)
Many complex AI applications don't rely on a single algorithm, but a whole host of algorithms. This "chaining" of algorithms is called "ensembling". This practice is extremely common in machine learning today, but adds complexity: if a single algorithm is hard to explain, imagine having to explain the combined output of 50-100 algorithms working together.
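A minimal sketch of ensembling with scikit-learn's VotingClassifier, using synthetic data; the three member models are arbitrary choices for illustration.

```python
# A small ensemble (combining several models) with scikit-learn's VotingClassifier.
# The member models and the synthetic data are placeholders for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nbayes", GaussianNB()),
    ],
    voting="soft",  # average the predicted probabilities of the member models
).fit(X, y)

# The output blends three models' outputs, which is exactly why explaining it
# is harder than explaining any single one of them.
print(ensemble.predict(X[:5]))
```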

Reason 8: There is no additional explanatory layer used

Rather than trying to make models explainable through simplicity (and, as such, often sacrificing explanatory power), another approach has emerged in the last couple of years that aims to add a glass layer on top of black-box models, one that figuratively allows us to peer inside them. These methods, such as SHapley Additive exPlanations (SHAP), use both the data and the black-box model to explain the prediction generated by the model in question.
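Here is a minimal sketch of that "glass layer" approach using the open-source shap package; the gradient-boosting model and synthetic data are placeholders for whatever black box you actually need to explain.

```python
# Minimal SHAP sketch (assumes the open-source `shap` and scikit-learn packages).
# The gradient-boosting model and the synthetic data are placeholders.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer looks at the trained black box and attributes each individual
# prediction to the input features (Shapley values).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

print("per-feature contributions for the first instance:")
print(shap_values)
```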

Neural networks are, by design, non-deterministic. Like human minds, though on a much more limited scale, they can make inferences, deductions, or predictions without revealing how. That's a problem for an institution whose algorithms determine whether to approve an applicant's request for credit. 

Laws in the U.S. and elsewhere require credit reporting agencies to be transparent about their processes. That becomes almost impossible if the financial institutions controlling the data on which they report can't explain what's going on for themselves.

So if an individual's credit application is turned down, it would seem the processes that led to that decision belong to a mechanism that's opaque by design.





Again, Explainable AI (XAI), Interpretable AI, or Transparent AI refer to techniques in artificial intelligence (AI) which can be trusted and easily understood by humans, and whose results can be understood by human experts. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision.

XAI is an implementation of the social right to explanation. Some claim that transparency rarely comes for free and that there are often trade-offs between the accuracy and the explainability of a solution.

Left unchecked, lack of transparency can lead to biased outcomes that put people and businesses at risk. The answer to this is explainable AI.

As AI algorithms increase in complexity, it becomes more  difficult to make sense of how they work. In some cases, Interpretable and explainable AI will be essential for  business and the public to understand, trust and effectively manage ‘intelligent’ machines. Organisations  that design and use algorithms need to take care in producing models that are as simple as possible, to explain how complex machines work.

To benefit from AI, businesses have to consider not just the mechanics of production ML but also managing any customer and/or community concerns. Left unaddressed, these concerns can materialize in customer churn, corporate embarrassment, brand value loss, or legal risk.

Trust is a complex and expansive topic, but at its core, there is a need to understand and explain ML and feel confident that the ML is operating correctly, within expected parameters and free from malicious intrusion. In particular, the decisions made by the production ML should be explainable - i.e. a human-interpretable explanation must be provided. 

This is becoming a requirement in regulations such as the GDPR’s Right to Explanation clause. Explainability is closely tied to fairness - the need to be convinced that the AI is not accidentally or intentionally rendering biased decisions.

Employed across industries, AI applications unlock smartphones using facial recognition, make driving decisions in autonomous vehicles, recommend entertainment options based on user preferences, assist the process of pharmaceutical development, judge the creditworthiness of potential homebuyers, and screen applicants for job interviews. 

AI automates, quickens, and improves data processing by finding patterns in the data, adapting to new data, and learning from experience. In theory, AI is objective—but in reality, AI systems are informed by human intelligence, which is of course far from perfect.

As AI becomes ubiquitous in its applications across industries, so does its potential for bias and discrimination. Understanding the inherent biases in underlying data and developing automated decision systems with explainable results will be key to addressing and correcting the potential for unfair, inaccurate, biased, and discriminatory AI systems.

Facebook says it performs a public service by mining digital traces to identify people at risk for suicide. Google says its smart home can detect when people are getting sick. Though these companies may have good intentions, their explanations also serve as smoke screens that conceal their true motivation: profit.

Informing and influencing consumers with traditional advertising is an accepted part of commerce. However, manipulating and exploiting them through behavioral ads that leverage their medical conditions and related susceptibilities is unethical and dangerous. It can trap people in unhealthy cycles of behavior and worsen their health. Targeted individuals and society suffer while corporations and their advertising partners prosper.

Emergent medical data can also promote algorithmic discrimination, in which automated decision-making exploits vulnerable populations such as children, seniors, people with disabilities, immigrants, and low-income individuals. Machine learning algorithms use digital traces to sort members of these and other groups into health-related categories called market segments, which are assigned positive or negative weights.

 For instance, an algorithm designed to attract new job candidates might negatively weight people who use wheelchairs or are visually impaired. Based on their negative ratings, the algorithm might deny them access to the job postings and applications. In this way, automated decision-making screens people in negatively weighted categories out of life opportunities without considering their desires or qualifications. 

Because emergent medical data are mined secretly and fed into black-box algorithms that increasingly make important decisions, they can be used to discriminate against consumers in ways that are difficult to detect. On the basis of emergent medical data, people might be denied access to housing, jobs, insurance, and other important resources without even knowing it

In recent years, advances in computer science have yielded algorithms so powerful that their creators have presented them as tools that can help us make decisions more efficiently and impartially. But the idea that algorithms are unbiased is a fantasy; in fact, they still end up reflecting human biases. And as they become ever more ubiquitous, we need to get clear on what they should — and should not — be allowed to do.

We need an algorithmic bill of rights to protect us from the many risks AI is introducing into our lives. One proposal is the Algorithmic Accountability Act: if passed, it would require companies to audit their algorithms for bias and discrimination.

Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.

Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.

Related to transparency is the demand for explainability. All algorithmic systems should carry something akin to a nutritional label laying out what went into them

Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.

A demand for the right to consent has been gathering steam as more people realize that images of their faces are being used to power facial recognition technology. NBC reported that IBM had scraped a million photos of faces from the website Flickr — without the subjects’ or photographers’ permission. The news sparked a backlash.

People may have consented to having their photos up on Flickr, but they hadn’t imagined their images would be used to train a technology that could one day be used to surveil them. Some states, like Oregon and Washington, are currently considering bills to regulate facial recognition. 

Imagine you’re applying for a new job. Your prospective bosses inform you that your interview will be conducted by a robot — a practice that’s already in use today. Regardless of what they tout as the benefits of this AI system, you should have the right to give or withhold consent. Permission must be granted, not taken for granted.



Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes.



THIS BIAS HAS NOTHING TO DO WITH VARIANCE..

SO I WILL CALL IT BIAS2 ..


AI Bias vs. Human Bias – highlights how artificial intelligence (AI), just like humans, is subject to bias2. This is not because AI determines something to be true or false for any illogical reasons. It’s because latent human bias2 may exist in machine learning, starting with the creation of an algorithm to the interpretation of data and subsequent interactions.

As algorithms become more complicated, fears of undetected bias2, mistakes, and miscomprehensions creeping into decision-making grow among policymakers, regulators, and the general public

When one examines a data sample, it is imperative to check whether the sample is representative of the population of interest. A non-representative sample where some groups are over- or under-represented inevitably introduces bias2 in the statistical analysis. A dataset may be non-representative due to sampling error and non-sampling errors.

Whereas error makes up all flaws in a study’s results, bias2 refers only to error that is systematic in nature. Whenever a researcher conducts a probability survey they must include a margin of error and a confidence level. This allows any person to understand just how much effect random sampling error could have on a study’s results.
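For example, the familiar margin of error for a sample proportion at a 95% confidence level can be computed directly; the survey numbers below are invented for illustration.

```python
# Margin of error for a sample proportion (illustrative numbers).
import math

n = 1000   # sample size
p = 0.52   # proportion observed in the sample
z = 1.96   # z-score for a 95% confidence level

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {margin_of_error:.3f}")  # about +/- 0.031, i.e. 3.1 points
```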

Bias2 cannot be measured using statistics, because it comes from the research process itself. Because of its systematic nature, bias2 slants the data in an artificial direction that will provide false information to the researcher. For this reason, eliminating bias2 should be the number one priority of all researchers.


Sampling errors refer to the difference between a population value and a sample estimate that exists only because of the sample that happened to be selected. Sampling errors are especially problematic when the sample size is small relative to the size of the population. For example, suppose we sample 100 residents to estimate the average US household income.

Non-sampling errors are typically more serious and may arise from many different sources such as errors in data collection, non-response, and selection bias2. 

Typical examples include poorly phrased data-collection questions, web-only data collection that leave out people who don’t have easy access to the internet, over-representation of people that feel particularly strongly about a subject, and responses that may not reflect one’s true opinion.



In theory, AI is objective—but in reality, AI systems are informed by subjective human intelligence. ML models are opaque and inherently biased. A machine learning algorithm gets its knowledge from data, and if the data are somehow biased then the decisions made by the algorithm will be biased as well.

Machine learning systems are, by design, not rule-based. Indeed, their entire objective is to determine what the rules are or might be, when we don't know them to begin with. If human cognitive biases actually can imprint themselves upon machine learning, their only way into the system is through the data.

While algorithm bias2 occurs at the development stage, there are other places where it could affect the ML process as a whole, wherein established techniques can make a major difference. One such touchpoint is the data sampling stage. In short, when the machine model interacts with a data sample, the intent is for that sample to fully replicate the problem space that the machine will ultimately operate within.

However, there are instances where the sample does not fully convey the entire environment and as such, the model is not entirely prepared to accommodate its new settings with optimal flexibility. Consider, for example, a bicycle that is designed to perform on both mountainous terrains and roadways with equal ease. Yet, it is only tested in mountainous conditions. In this case, the training data would have sample bias2 and the resulting model might not operate in both environments with equal optimization because its training was incomplete and incomprehensive.

To avoid this, developers can follow myriad techniques to ensure that the sample data they utilize is congruent with the realistic population at hand. This will require taking multiple samples from said populations and testing them to gauge their representativeness before using them at the sampling stage.

For example, if you want to use AI to make recommendations on who best to hire, feed the algorithm data about successful candidates in the past, and it will compare those to current candidates and spit out its recommendations.

Whether the AI algorithms are themselves biased is also an open question. Machine-learning algorithms haven’t been optimized for any definition of fairness. They have been optimized to do a task.

Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge due to many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. 

Algorithmic bias is found across platforms, including but not limited to search engine results and social media platforms, and can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. 

The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the 2018 European Union's General Data Protection Regulation.

In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower sampling probability than others. It results in a biased sample, a non-random sample of a population (or non-human factors) in which all individuals, or instances, were not equally likely to have been selected. If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling.


While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. 
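A short sketch of what "adjusting values measured on different scales to a common scale" looks like in practice, using scikit-learn's standard scalers; the income and age values are invented.

```python
# Rescaling features measured on very different scales to a common scale.
# The two columns (annual income in dollars, age in years) are invented.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([
    [35_000.0, 22.0],
    [58_000.0, 41.0],
    [120_000.0, 36.0],
    [74_000.0, 59.0],
])

print(StandardScaler().fit_transform(X))  # each column: zero mean, unit variance
print(MinMaxScaler().fit_transform(X))    # each column rescaled to the range [0, 1]
```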

Take hiring as an example: if you give a computer a data set with 10 Palestinian Muslim candidates and 300 white Jewish candidates and ask it to predict the best person for the job, we all know what the results will be. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.

Medical sources sometimes refer to sampling bias as ascertainment bias



NEW VACCINES AND NEW GMO FOOD ARE FIRST TRIED OUT IN THIRD WORLD NATIONS, USING THE POPULATION AS GUINEA PIGS ..USING SOME ARTIFICIAL INTELLIGENCE BIASED ALGORITHMS... 

KILL OFF PALESTINIANS  AND  ROMA GYPSIES  –DON’T WASTE MONEY ON THEM..

Data sets about CONSCIOUS humans are particularly susceptible to bias, while data about the physical world are less susceptible.  Human-generated data is the biggest source of bias

Neural networks use deep learning algorithms, creating connections organically as they evolve. At this stage, AI programs become far more difficult to screen for traces of bias, as they are not running off a strict set of initial data parameters.

Data provides the building blocks in the learning phase of AI. Neural networks, machine learning, deep learning – they all have one thing in common: They need huge amounts of data to become better. AI can only outgrow itself if fed with enormous amounts of data

Humans typically select the data used to train machine learning algorithms and create parameters for the machines to "learn" from new data over time. Even without discriminatory intent, the training data may reflect unconscious or historic bias. For example, if the training data shows that people of a certain gender or race have fulfilled certain criteria in the past, the algorithm may "learn" to select those individuals at the exclusion of others.

Four factors drive public distrust of algorithmic decisions:-- 
Amplification of Biases: Machine learning algorithms amplify biases – systemic or unintentional – in the training data.

Opacity of Algorithms: Machine learning algorithms are black boxes for end users. This lack of transparency – irrespective of whether it’s intentional or intrinsic – heightens concerns about the basis on which decisions are made.

Dehumanization of Processes: Machine learning algorithms increasingly require minimal-to-no human intervention to make decisions. The idea of autonomous machines making critical, life-changing decisions evokes highly polarized emotions.

Accountability of Decisions: Most organizations struggle to report and justify the decisions algorithms produce and fail to provide mitigation steps to address unfairness or other adverse outcomes. Consequently, end-users are powerless to improve their probability of success in the future.

What happens with all that data? Tech companies feed our digital traces into machine learning algorithms and, like modern day alchemists turning lead into gold, transform seemingly mundane information into sensitive and valuable health data.

Machine learning finds patterns in data. ‘AI Bias’ means that it might find the wrong patterns - a system for spotting skin cancer might be paying more attention to whether the photo was taken in a doctor’s office. ML doesn’t ‘understand’ anything - it just looks for patterns in numbers, and if the sample data isn’t representative, the output won’t be either. 

Meanwhile, the mechanics of ML might make this hard to spot. The most obvious and immediately concerning place where this issue can come up is in human diversity, and there are plenty of reasons why data about people might come with embedded biases.

The ‘AI bias’ or ‘machine learning bias’ problem: a system for finding patterns in data might find the wrong patterns, and you might not realise it.


Questions persist on how to handle biased algorithms, our ability to contest automated decisions, and accountability when machines make the decisions.   In reality, machine learning models reproduce the inequalities that shape the data they’re fed.


When the data are incomplete, incorrect, or outdated-- if there is insufficient data to make certain  conclusions, or the data are out of date, results will naturally be inaccurate. Unfortunately, biased data and biased parameters are the rule rather than the exception. Because data are produced by humans, the information carries all the natural human bias within it.

Researchers have  begun trying to figure out how to best deal with and mitigate bias, including whether it is possible to  teach ML systems to learn without bias;  however, this research is still in its nascent stages. For the  time being, there is no cure for bias in AI systems.

The use of historical data that is biased-- because ML systems use an existing body of data to identify patterns, any bias in that data is naturally reproduced.

When developers choose to include parameters that are proxies for known bias-- for example, although developers of an algorithm may intentionally seek to avoid racial bias by not including race as a parameter, the algorithm will still have racially biased results if it includes common proxies for race, like  income, education, or postal code.

When developers allow systems to conflate correlation with causation. Take credit scores as an example. People with a low income tend to have lower credit scores, for a variety of reasons. If an ML  system used to build credit scores includes the credit scores of your Facebook friends as a parameter, it will result in lower scores among those with low-income backgrounds, even if they have otherwise strong financial indicators, simply because of the credit scores of their friends.

Today, algorithmic decision-making is largely digital. In many cases it employs statistical methods. Before AI, algorithms were deterministic—that is, pre-programmed and unchanging. Because they are based in statistical modeling,  these algorithms suffer from the same problems as traditional statistics, such as poorly sampled data, biased data, and measurement errors.

Bias can  be perpetuated through a feedback loop if the model’s own biased predictions are repeatedly fed back into it, becoming its own biased source data for the next round of predictions. In the machine learning context, we no longer just face the risk of garbage in, garbage out—when there’s garbage in, more and more garbage may be generated through the ML pipeline if one does not monitor and address potential sources of bias.

One key to de-biasing data is to ensure that a representative sample is collected in the first place. Bias from sampling errors can be mitigated by collecting larger samples and adopting data collection techniques such as stratified random sampling.
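One common way to apply stratified sampling during model building is scikit-learn's stratified split, which keeps each group's share of the sample equal to its share of the full dataset; the 90/10 class split below is invented.

```python
# Stratified sampling with scikit-learn: class proportions in the drawn sample
# match the proportions in the full dataset. The labels are invented.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)
y = np.array([0] * 900 + [1] * 100)   # an imbalanced 90/10 population

X_sample, _, y_sample, _ = train_test_split(
    X, y, train_size=200, stratify=y, random_state=0
)

print("population positive rate:", y.mean())         # 0.10
print("sample positive rate    :", y_sample.mean())  # 0.10, preserved by stratification
```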


Bias from non-sampling errors is much more varied and harder to tackle, but one should still strive to minimize these kinds of errors through means such as proper training, establishing a clear purpose and procedure for data collection, and conducting careful data validation.





Companies think AI is a neutral arbiter because it is a creation of science, but it is not: it is a reflection of humans -- warts, beauty, and all. This is a high-consequence problem. Most AI systems need to see millions of examples to learn to do a task.

But using real-world data to train these algorithms means that historical and contemporary biases against marginalized groups get baked into the programs. It is humans who are biased, and the data we generate trains the AI to be biased. It is a human problem that humans need to take ownership of.

There are ways, however, to try to maintain objectivity and avoid bias with qualitative data analysis:--
Use multiple people to code the data.
Have participants review your results.
Verify with more data sources.
Check for alternative explanations.
Review findings with peers.


AI is a two-edged sword: it can be used by the good and the bad, and biases can be amplified. Biases that exist in the accumulated data will lead to biases in the understanding and outcomes of AI systems. These systems do not have common sense yet; computers are super intelligent, but only in narrow areas. They can create fake news and churn out fake images and fake narratives.


 IN REALITY G6 NATIONS ARE BEGGARS—AI  CONVERTS THEM INTO SUPER RICH NATIONS.

We are beginning to understand both the repercussions of using selective datasets and how AI algorithms can incorporate and exacerbate the unconscious biases of their developers. We create algorithms to detect patterns in data, and we often use a top-down approach, yet AI is not able to intuit its way to a solution for certain problems or to explain how it reached a conclusion. Furthermore, if that data is flawed by systematic historical biases, those biases will be replicated at scale.

To borrow a phrase: bias in, bias out.

We have approached AI development from the top-down, largely dictated by the viewpoints of developed nations and first-world cultures. No surprise then that the biases we see in the output of these systems reflect the unconscious biases of these perspectives.

Bias can be thought of as error caused by incorrect assumptions in the learning algorithm. Bias can also be introduced through the training data, if the training data is not representative of the population it was drawn from.

Diversifying data is certainly one step toward alleviating those biases, as it would allow for more globalized inputs that may hold very different priorities and insights. But no amount of diversified data will fix all the issues if it is fed into a model with inherent biases.

Rather than top-down approaches that seek to impose a model on data that may be beyond its contexts, we should approach AI as an iterative, evolutionary system. If we flip the current model to be built-up from data rather than imposing upon it, then we can develop an evidence-based, idea-rich approach to building scalable AI-systems. The results could provide insights and understanding beyond our current modes of thinking.

A “top-down” approach recommends coding values in a rigid set of rules that the system must comply with. The other approach is often called “bottom-up,” and it relies on machine learning (such as inverse reinforcement learning) to allow AI systems to adopt our values by observing human behavior in relevant scenarios.

The other advantage to such a bottom-up approach is that the system could be much more flexible and reactive. It could adapt as the data changes and as new perspectives are incorporated.

Consider the system as a scaffold of incremental insights so that, should any piece prove inadequate, the entire system does not fail. We could also account for much more diversified input from around the globe, developing iterative signals to achieve cumulative models to which AI can respond.

Biased AI systems are likely to become an increasingly widespread problem as artificial intelligence moves out of the data science labs and into the real world. Researchers at IBM are working on automated bias-detection algorithms, which are trained to mimic human anti-bias processes we use when making decisions, to mitigate against our own inbuilt biases.

This includes evaluating the consistency with which we (or machines) make decisions. If there is a difference in the solution chosen to two different problems, despite the fundamentals of each situation being similar, then there may be bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.
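
As a rough sketch of that consistency check (purely illustrative; the model, data frame, and column names are hypothetical), one can score each case twice, once with only the protected attribute flipped, and measure how often the decision changes:

import pandas as pd

def counterfactual_flip_rate(model, X: pd.DataFrame, protected_col: str) -> float:
    """Fraction of rows whose predicted class changes when only the
    protected attribute is altered (assumes a binary 0/1 encoding)."""
    X_flipped = X.copy()
    X_flipped[protected_col] = 1 - X_flipped[protected_col]
    original = model.predict(X)
    flipped = model.predict(X_flipped)
    return float((original != flipped).mean())

# Example (hypothetical names): counterfactual_flip_rate(clf, X_test, "gender_male")

A flip rate well above zero suggests the model is leaning on the protected attribute rather than on the fundamentals of each case.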

While this is interesting and vital work, the potential for bias to derail drives for equality and fairness runs deeper, to levels which may not be so easy to fix with algorithms.

A "top-down" approach may not produce solutions to every problem, and may even stifle innovation.
Biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent.

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Injecting deliberate bias into algorithmic decision making could be devastatingly simple and effective. This might involve replicating or accelerating pre-existing factors that produce bias. Many algorithms are already fed biased data. Attackers could continue to use such data sets to train algorithms, with foreknowledge of the bias they contained. 

The plausible deniability this would enable is what makes these attacks so insidious and potentially effective. Attackers would surf the waves of attention trained on bias in the tech industry, exacerbating polarization around issues of diversity and inclusion.

The idea of “poisoning” algorithms by tampering with training data is not wholly novel. Top U.S. intelligence officials have warned that cyber attackers may stealthily access and then alter data to compromise its integrity. Proving malicious intent would be a significant challenge to address, and such attacks are therefore hard to deter.

Bias is a systemic challenge—one requiring holistic solutions. Proposed fixes to unintentional bias in artificial intelligence seek to advance workforce diversity, expand access to diversified training data, and build in algorithmic transparency (the ability to see how algorithms produce results).

As with technological advances throughout history, we must continue to examine how we implement algorithms in society and what outcomes they produce. Identifying and addressing bias in those who develop algorithms, and the data used to train them, will go a long way to ensuring that artificial intelligence systems benefit us all, not just those who would exploit them.

However, because machines can treat similarly-situated people and objects differently, research is starting to reveal some troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Given this, some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups.  

For example, automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, like longer prison sentences or higher bails imposed on people of color.

Pre-existing human biases may creep in at different stages -- framing of the problem, selection and preparation of input data, tuning of model parameters and weights, interpretation of the model outputs, etc. -- intentionally or unintentionally making the decision-making algorithms biased.

Algorithms and data must be externally audited for bias and made available for public scrutiny whenever possible. Workplace must be made more diverse to detect and prevent blind spots. Cognitive bias training must be required. 

Regulations must be relaxed to allow use of sensitive data to detect and alleviate bias. Effort should be made to enhance algorithm literacy among users. Research on algorithmic techniques for reducing human bias in models should be encouraged.

Bias is the difference between a model’s estimated values and the “true” values for a variable.

Machine learning bias, also known as algorithm bias or AI bias,  occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process. Three types of bias can be distinguished: information bias, selection bias, and confounding.
Three keys to managing bias when building AI--
Choose the right learning model for the problem.
Choose a representative training data set.
Monitor performance using real data.

Sample Bias/Selection Bias:   This type of bias rears its ugly head when the distribution of the training data fails to reflect the actual environment in which the machine learning model will be running.

If the training data covers only a small set of the things you're interested in, and you then test the model on something outside that set, it will get it wrong. It will be 'biased' based on the sample it was given. The algorithm isn't wrong; it wasn't given enough different types of data to cover the space it is going to be applied in. That's a big factor in poor performance for machine learning algorithms. You have to get the data right.

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities

Bias of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
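
This definition can be checked by simulation. A minimal sketch (illustrative only): the "naive" sample variance that divides by n is a biased estimator of the true variance, while dividing by n-1 is unbiased.

import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0                                # variance of N(0, 2^2)
n, trials = 10, 100_000

samples = rng.normal(0.0, 2.0, size=(trials, n))
naive_var = samples.var(axis=1, ddof=0)       # divide by n   -> biased
unbiased_var = samples.var(axis=1, ddof=1)    # divide by n-1 -> unbiased

print("bias of naive estimator   :", naive_var.mean() - true_var)     # about -0.4
print("bias of unbiased estimator:", unbiased_var.mean() - true_var)  # about  0.0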

AI algorithms are built by humans; training data is assembled, cleaned, labeled and annotated by humans. Data scientists need to be acutely aware of these biases and how to avoid them through a consistent, iterative approach, continuously testing the model, and by bringing in well-trained humans to assist.

A “top-down” approach recommends coding values in a rigid set of rules that the system must comply with. It has the benefit of tight control, but does not allow for the uncertainty and dynamism AI systems are so adept at processing. 

The other approach is often called “bottom-up,” and it relies on machine learning (such as inverse reinforcement learning) to allow AI systems to adopt our values by observing human behavior in relevant scenarios. However, this approach runs the risk of misinterpreting behavior or learning from skewed data.

Top-Down is inefficient and slow, but keeps a tight rein.
Bottom-Up is flexible, but risky and bias-prone.

Solution:  Hybridise – Top-Down for Basic Norms, Bottom-Up for Socialization

We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.

Many of the standard practices in deep learning are not designed with bias detection in mind. Deep-learning models are tested for performance before they are deployed, creating what would seem to be a perfect opportunity for catching bias.

 But in practice, testing usually looks like this: computer scientists randomly split their data before training into one group that’s actually used for training and another that’s reserved for validation once training is done. That means the data you use to test the performance of your model has the same biases as the data you used to train it. Thus, it will fail to flag skewed or prejudiced results.

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns that you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset.

 One way you can test for this is by checking if there is any under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding more information to supplement that underrepresentation.
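
A small sketch of that representation check (the data frames and the column name are assumptions, not from the original text): compare how often each group appears in the training data against a reference population, and flag large gaps.

import pandas as pd

def representation_report(train: pd.DataFrame, population: pd.DataFrame,
                          group_col: str, tolerance: float = 0.05) -> pd.DataFrame:
    """Share of each group in the training data vs. a reference population."""
    train_share = train[group_col].value_counts(normalize=True)
    pop_share = population[group_col].value_counts(normalize=True)
    report = pd.DataFrame({"train_share": train_share,
                           "population_share": pop_share}).fillna(0.0)
    report["gap"] = report["train_share"] - report["population_share"]
    report["flag"] = report["gap"].abs() > tolerance
    return report

# Example (hypothetical): print(representation_report(train_df, census_df, "postal_region"))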

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets or oversight that led to a mistake

The use of AI in areas like criminal justice can also have devastating consequences if left unchecked.

AI is currently used in a black-box manner. In layman’s terms, this means the only thing of value is its output, not its decision-making process. The reason for this is simple: the decision making of most AI models boils down to mathematical optimization over a set of probabilities.

“I optimized a mathematical function” is a bullshit explanation.  

Things have gotten so opaque that even seminal experts in the field are unable to explain why an AI model works. The field has taken a turn for the worse when physicists attempt to explain AI models with quantum mechanics.



One practical compromise between the needs of XAI and the realities of current AI models is the glass-box algorithm. A glass-box algorithm is a unique creature: it quantifies the uncertainty in its predictions, so that the user can understand when those predictions are unreliable.
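
One way to sketch this idea (an assumption about implementation, not necessarily what the author means by a glass box) is to use the disagreement between the trees of a random forest as a rough uncertainty signal alongside each prediction:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# per-tree predictions for a handful of rows
per_tree = np.stack([tree.predict(X[:5]) for tree in forest.estimators_])
prediction = per_tree.mean(axis=0)       # the usual forest prediction
uncertainty = per_tree.std(axis=0)       # disagreement between the trees

for p, u in zip(prediction, uncertainty):
    print(f"prediction={p:8.2f}  +/- {u:6.2f}")

When the trees disagree strongly, the prediction can be flagged as unreliable and routed to a human for review.
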
Investors using black box methods conceal their true risk under the guise of proprietary technology, leaving regulators and investors without the true picture of operations that is needed to assess risk accurately.

Hedge funds and some of the world’s largest investment managers now routinely use a black box or black box like model to manage their complicated investment strategies.

Depending on what algorithms are used, it is possible that no one, including the algorithm’s creators, can easily explain why the model generated the results that it did

The same problem is relevant in the banking industry as well. If regulators pose a question: how AI has reached at a conclusion with regard to a banking problem, banks should be able to explain the same. 

For example, if an AI solution dealing with anti-money laundering compliance comes up with an anomalous behaviour or suspicious activity in a transaction, the bank using the solution should be able to explain the reason why the solution has arrived at that decision. Such an audit is not possible with a black box AI model

The main problem with a black box model is its inability to identify possible biases in the machine learning algorithms. Biases can come through prejudices of designers and faulty training data, and these biases lead to unfair and wrong decisions. Bias can also happen when model developers do not implement the proper business context to come up with legitimate outputs.

AI-powered algorithms are increasingly used for decisions that affect our daily lives. Therefore, if an algorithm runs awry, the consequences can be disastrous. For a company it can cause serious reputational damage and lead to fines of tens of millions of dollars

In the banking industry, which is subject to stricter regulatory oversight across the globe, an incorrect decision can cost billions of dollars for an institution. If a bank wants to employ AI, it is imperative for it to subject the particular solution to rigorous, dynamic model risk management and validation. 
The bank must ensure that the proposed AI solution has the required transparency depending on the use case.

Worst of all, it may hurt customers, for instance by unintentionally treating them unfairly if there are biases in the algorithm or training data. This may lead to a serious breach of trust, which can take decades to rebuild.

Black box AI complicates the ability for programmers to filter out inappropriate content and measure bias, as developers can't know which parts of the input are weighed and analyzed to create the output.

Explainable AI or interpretable AI or transparent AI deals with techniques in artificial intelligence which can make machine learning algorithms trustworthy and easily understandable by humans. Explainability has emerged as a critical requirement for AI in many cases and has become a new research area in AI.


Banks must exercise the necessary oversight to prevent their AI models from becoming black boxes. As of now, the AI use cases are mostly in low-risk banking environments, where human beings still take the final decision, with machines just providing valuable assistance in decision making.

 In future, banks will be under pressure to remove some of the human oversight for cost savings amid increasing scale of operations. At that point, banks cannot run with risky black box models that can lead to inefficiencies and risks. 

They need to ensure that their AI solutions are trustworthy and have the required transparency to satisfy internal and external audits. In short, the bright future of AI in banking could be assured only through explainable AI.

The first challenge in building an explainable AI system is to create a bunch of new or modified machine learning algorithms to produce explainable models. Explainable models should be able to generate an explanation without hampering the performance.

The best way to do so is to ensure levels of transparency in the algorithm’s innate structure. In particular, algorithms must be intrinsically traceable, giving enough visibility without impairing their performance. With visibility, at the very least, humans will be able to stop and redirect AI decisions if the situation presents itself.


Many of the XAI algorithms developed to date are relatively simple, like decision trees, and can only be used in limited circumstances.
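
A minimal sketch of that "simple, interpretable model" idea, using a shallow decision tree whose learned rules can be printed and read directly (illustrative only):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text turns the fitted tree into human-readable if/else rules
print(export_text(tree, feature_names=list(data.feature_names)))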



Imperative programming. All programming can be understood in the abstract sense as a kind of specification. Imperative programming is a specification that tells a computer the exact and detailed sequence of steps to perform. These will also include conditions to test, processes to execute and alternative paths to follow (i.e. conditions, functions and loops). All of the more popular languages we have heard of (i.e. JavaScript, Java, Python, C etc.) are imperative languages. When a programmer writes an imperative program, he formulates in his mind the exact sequence of tasks that need to be composed to arrive at a solution.


Declarative programming. This kind of programming does not burden the user with the details of the exact sequence of steps that must be performed. Rather, a user only needs to specify (or declare) the form of the final solution. The burden of figuring out the exact steps to execute to arrive at the specified solution is algorithmically discovered by the system. Spreadsheets are an example of this kind of programming. With Spreadsheets, you don’t specify how a computer should compute its results, rather you only need to specify the dependencies between the cells to compute a final result. 

You could have a long chain of dependencies, and the spreadsheet will figure out which to calculate first. The query language SQL is also a well-known example of declarative programming. A SQL processor optimizes the kinds of retrievals it needs to execute to arrive at a table of data that satisfies the user-specified query. Other examples of declarative programming are Haskell (i.e. functional programming) and Prolog (i.e. logic programming). Mathematics, of the symbolic computation kind, can also be classified as declarative programming.

Imperative code is where you explicitly spell out each step of how you want something done, whereas with declarative code you merely say what it is that you want done



Imperative - you instruct a machine what to do step by step. Example: assembly language.

Declarative - you instruct a machine what you want to get, and it is supposed to figure out how to do it. Example: SQL.
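
A toy illustration of the distinction (not from the original text): the same result computed imperatively, step by step, and declaratively, by stating what is wanted.

orders = [120, 45, 300, 80, 210]

# Imperative: spell out every step (loop, condition, accumulator).
total = 0
for amount in orders:
    if amount > 100:
        total += amount
print(total)                                # 630

# Declarative in spirit: state what you want; the steps are left to the runtime.
print(sum(a for a in orders if a > 100))    # 630

# The equivalent declarative SQL would be:
#   SELECT SUM(amount) FROM orders WHERE amount > 100;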






Generative programming (a term also used to describe program generators, or alternatively “organic programming”) has its origins in connectionist-inspired artificial intelligence. It derives from methods coming from Deep Learning, evolutionary algorithms and reinforcement learning. This kind of programming is best visually demonstrated by what are known as Generative Adversarial Networks.


Constraint programming, differentiable programming (i.e. Deep Learning) and generative programming share a common trait: the program or algorithm that discovers the solution is fixed. In other words, a programmer does not need to write the program that translates the specification into a solution. Unfortunately, though, the fixed program is applicable only in narrow domains. This is related to the “No Free Lunch” theorem in machine learning: you can’t use a linear programming algorithm to solve an integer programming problem. Deep Learning, however, has a unique kind of generality, in that the same algorithm (i.e. stochastic gradient descent) appears to be applicable to many problems.



  1. https://timesofindia.indiatimes.com/india/500-indians-alerted-about-government-backed-phishing-google/articleshow/72285551.cms

    THE MAIN MOTIVE OF PHISHING EMAILS IS TO TRICK USERS INTO CLICKING EMAILS OR LINKS AND CAUSE THEM MONETARY LOSS.

    PHISHING ATTACKS ARE MADE BY CYBERCRIMINALS TO GRAB SENSITIVE INFORMATION (I.E. BANKING INFORMATION, CREDIT CARD INFORMATION, STEALING OF CUSTOMER DATA AND PASSWORDS) AND MISUSE THEM.

    HACKERS SPREAD THEIR PHISHING NET TO CATCH DIFFERENT TYPES OF PHISH. BE IT A SMALL PHISH OR A BIG WHALE, THEY ARE ALWAYS AT A PROFIT.

    PHISHING ATTACKS ARE DONE BY CYBERCRIMINALS, WHO TRICK THE VICTIM, BY CONCEALING THEIR IDENTITY BY MASKING THEMSELVES AS A TRUSTED IDENTITY AND LURING THEM INTO OPENING DECEPTIVE EMAILS FOR STEALING SENSITIVE INFORMATION. THESE ATTACKS ARE SUCCESSFUL BECAUSE OF LACK OF SECURITY KNOWLEDGE, AMONGST THE MASSES. IN SHORT, PHISHING ATTACK IS A DISGUISED ATTACK MADE BY HACKER IN A VERY SOPHISTICATED WAY.

    ON THE CONTRARY PHISHING SCAMS ARE THOSE WHEREIN THOUSANDS OF USERS ARE TARGETED AT A TIME BY CYBERCRIMINALS. FOR E.G. FAKE GOOGLE MAIL’S LOGIN PAGE IS CREATED AND EMAILS ARE SENT STATING TO CHECK THEIR ACCOUNTS. HUGE SCAMS LEAD TO HUGE LOSSES. SURVEYS SHOW A PHISHING INCREASE OF 250 PER CENT APPROXIMATELY, AS PER MICROSOFT. CHECK OUT THE DETAILS.

    THERE ARE MANY TYPES OF PHISHING ATTACKS AND PHISHING SCAMS CARRIED OUT BY HACKERS. A FEW OF THEM ARE:

    EMAIL PHISHING:
    MANY BUSINESS OWNERS ARE UNAWARE ABOUT THE INSECURE AND FRAUD LINKS AND EMAILS. FOR E.G. THE VICTIM GETS AN E-MAIL FROM THE HACKER TO CHECK SOME UNKNOWN TRANSACTIONS IN THEIR BUSINESS BANK ACCOUNT, WITH A FAKE LINK ATTACHED TO A SITE WHICH IS ALMOST AS GOOD AS REAL. WITHOUT THINKING FOR A SECOND, THE VICTIM OPENS THE FAKE LINK AND ENTERS THE ACCOUNT DETAILS AND PASSWORDS. THAT’S IT. YOU ARE ATTACKED.

    SPEAR PHISHING:
    SPEAR PHISHING IS AN EMAIL ATTACK DONE BY A FOE PRETENDING TO BE YOUR FRIEND. TO MAKE THEIR ATTACK SUCCESSFUL, THESE FRAUDSTERS INVEST IN A LOT OF TIME TO GATHER SPECIFIC INFORMATION ABOUT THEIR VICTIMS; I.E. VICTIM’S NAME, POSITION IN COMPANY, HIS CONTACT INFORMATION ETC.

    THEY LATER CUSTOMISE THEIR EMAILS, WITH THE GATHERED INFORMATION, THUS TRICKING THE VICTIM TO BELIEVE THAT THE EMAIL IS SENT FROM A TRUSTWORTHY SOURCE.

    FAKE URL AND EMAIL LINKS ARE ATTACHED IN THE EMAIL ASKING FOR PRIVATE INFORMATION. SPEAR PHISHING EMAILS ARE TARGETED TOWARDS INDIVIDUALS AS WELL AS COMPANIES TO STEAL SENSITIVE INFORMATION FOR MAKING MILLIONS.

    DOMAIN SPOOFING:
    HERE THE ATTACKER FORGES THE DOMAIN OF THE COMPANY, TO IMPERSONATE ITS VICTIMS. SINCE THE VICTIM RECEIVES AN EMAIL WITH THE SAME DOMAIN NAME OF THE COMPANY, THEY BELIEVE THAT IT’S FROM TRUSTED SOURCES, AND HENCE ARE VICTIMISED.

    BEFORE A FEW YEARS THERE WERE ONLY 2 TYPES OF PHISHING ATTACKS.

    EMAIL PHISHING & DOMAIN SPOOFING. EITHER THE EMAIL NAME WAS FORGED, OR THE DOMAIN NAME WAS FORGED TO ATTACK VICTIMS. BUT AS TIME FLIES, CYBERCRIMINALS COME UP WITH VARIOUS TYPES OF ATTACKS WHICH ARE MENTIONED BELOW:
    WHALING:
    WHALING PHISHING ATTACK OR CEO FRAUD AS THE NAME SUGGESTS ARE TARGETED ON HIGH PROFILE INDIVIDUALS LIKE CEO, CFO, COO OR SENIOR EXECUTIVES OF A COMPANY. THE ATTACK IS ALMOST LIKE SPEAR PHISHING; THE ONLY DIFFERENCE IS THAT THE TARGETS ARE LIKE WHALES IN A SEA AND NOT FISH. HENCE THE NAME “WHALING” IS GIVEN FOR THESE PHISHING ATTACKS.

    FRAUDSTERS TAKE MONTHS TO RESEARCH THESE HIGH VIPS, THEIR CONTACTS AND THEIR TRUSTED SOURCES, FOR SENDING FAKE EMAILS TO GET SENSITIVE INFORMATION, AND LATER STEAL IMPORTANT DATA AND CASH THUS HAMPERING THE BUSINESS. SINCE THEY TARGET SENIOR MANAGEMENTS, THE BUSINESS LOSSES CAN BE HUGE WHICH MAKES WHALING ATTACKS MORE DANGEROUS.

    VISHING:
    VOIP (VOICE) + PHISHING = VISHING.

    TILL NOW PHISHING ATTACKS WERE MADE BY SENDING EMAILS. BUT WHEN ATTACKS ARE DONE BY TARGETING MOBILE NUMBERS, IT’S CALLED VISHING OR VOICE PHISHING.


    CONTINUED TO 2-
    1. CONTINUED FROM 1--

      IN VISHING ATTACKS, THE FRAUDSTERS CALL ON MOBILE, AND ASK FOR PERSONAL INFORMATION, POSING THEMSELVES AS A TRUST-WORTHY IDENTITY. FOR E.G. THEY MAY PRETEND TO BE A BANK EMPLOYEE, EXTRACT BANK ACCOUNT NUMBERS, ATM NUMBERS OR PASSWORDS, AND ONCE YOU HAVE HANDED THAT INFORMATION, IT’S LIKE GIVING THESE THIEVES, ACCESS TO YOUR ACCOUNTS AND FINANCES.

      SMISHING:
      SMS + PHISHING = SMISHING.

      JUST LIKE VISHING, MODE OF SMISHING ATTACKS IS ALSO RELATED TO MOBILES. HERE THE ATTACKER SENDS A SMS MESSAGE TO THE TARGET PERSON, TO OPEN A LINK OR AN SMS ALERT. ONCE THEY OPEN THE FAKE MESSAGE OR ALERT, THE VIRUS OR MALWARE IS INSTANTLY DOWNLOADED IN THE MOBILE. IN THIS WAY, THE ATTACKER CAN GET ALL THE DESIRED INFORMATION STORED ON YOUR MOBILE, USEFUL FOR STEALING YOUR MONEY.

      CLONE PHISHING:
      CLONE MEANS DUPLICATE OR IDENTICAL. GIVING JUSTICE TO THE NAME, CLONE PHISHING IS WHEN AN EMAIL IS CLONED BY THE FRAUDSTER, TO CREATE ANOTHER IDENTICAL AND PERFECT EMAIL TO TRAP EMPLOYEES.

      SINCE IT’S A PERFECT REPLICA OF THE ORIGINAL ONE, FRAUDSTERS TAKE ADVANTAGE OF ITS LEGITIMATE LOOK AND ARE SUCCESSFUL IN THEIR MALICIOUS INTENTIONS.

      SEARCH ENGINE PHISHING:
      THIS IS A NEW TYPE OF PHISHING WHEREIN THE FRAUDSTER MAKES WEB SITE COMPRISING OF ATTRACTIVE BUT FAKE PRODUCTS, FAKE SCHEMES OR FAKE OFFERS TO ATTRACT CUSTOMERS. THEY EVEN TIE-UP WITH FRAUDULENT BANKS FOR FAKE INTEREST SCHEMES. THEY GET THEIR WEBSITE INDEXED BY SEARCH ENGINES AND LATER WAIT FOR THEIR PREY.

      ONCE A CUSTOMER VISITS THEIR PAGE AND ENTERS THEIR PERSONAL INFORMATION TO PURCHASE PRODUCT, OR FOR ANY OTHER PURPOSE, THEIR INFORMATION GOES IN THE HANDS OF FRAUDSTERS, WHO CAN CAUSE THEM HUGE DAMAGES.

      WATERING HOLE PHISHING:
      IN THIS TYPE OF PHISHING, THE ATTACKER KEEPS A CLOSE WATCH ON THEIR TARGETS. THEY OBSERVE THE SITES WHICH THEIR TARGETS USUALLY VISIT AND INFECT THOSE SITES WITH MALWARE. IT’S A WAIT AND WATCH SITUATION, WHEREIN THE ATTACKER WAITS FOR THE TARGET TO RE-VISIT THE MALICIOUS SITE. ONCE THE TARGETED PERSON OPENS THE SITE AGAIN, MALWARE IS INFECTED IN THE COMPUTER OF THE PERSON, WHICH GRABS ALL THE REQUIRED PERSONAL DETAILS OR CUSTOMER INFORMATION LEADING TO DATA BREACH.

      THOUGH THE CYBERHACKERS WHO TARGET PHISHING ATTACKS ON INDIVIDUALS OR COMPANIES ARE MASTER MINDS, THERE ARE CERTAIN PRECAUTIONARY MEASURES, WHICH CAN PREVENT THEM FROM SUCCEEDING. LET’S HAVE A LOOK.

      PRECAUTIONS & PREVENTIONS OF PHISHING ATTACKS:--
      RE-CHECK URL BEFORE CLICKING UNKNOWN OR SUSPICIOUS LINKS
      DO NOT OPEN SUSPICIOUS EMAILS OR SHORT LINKS
      CHANGE PASSWORDS FREQUENTLY
      EDUCATE AND TRAIN YOUR EMPLOYEES FOR IDENTIFYING AND CEASING PHISHING ATTACKS
      RE-CHECK FOR SECURED SITES; I.E. HTTPS SITES
      INSTALL LATEST ANTI-VIRUS SOFTWARE, ANTI-PHISHING SOFTWARE AND ANTI-PHISHING TOOLBARS
      DON’T INSTALL ANYTHING FROM UNKNOWN SOURCES
      ALWAYS OPT FOR 2-FACTOR AUTHENTICATION
      TRUST YOUR INSTINCTS
      UPDATE YOUR SYSTEMS WITH LATEST SECURITY MEASURES
      INSTALL WEB-FILTERING TOOLS FOR MALICIOUS EMAILS
      USE SSL SECURITY FOR ENCRYPTION
      REPORT PHISHING ATTACKS AND SCAMS TO APWG (ANTI-PHISHING WORKING GROUP)

      AI PROVIDES A LEVEL OF PROTECTION IN THE CYBERSECURITY REALM THAT IS UNFEASIBLE FOR HUMAN OPERATORS.. GOOGLE USES MACHINE LEARNING TO WEED OUT VIOLENT IMAGES, DETECT PHISHING AND MALWARE, AND FILTER COMMENTS. THIS SECURITY AND FILTERING ARE OF AN ORDER OF MAGNITUDE AND THOROUGHNESS THAT NO HUMAN-BASED EFFORT COULD EQUAL.

      ONE OF THE MOST NOTORIOUS PIECES OF CONTEMPORARY MALWARE – THE EMOTET TROJAN – IS A PRIME EXAMPLE OF A PROTOTYPE-AI ATTACK. EMOTET’S MAIN DISTRIBUTION MECHANISM IS SPAM-PHISHING, USUALLY VIA INVOICE SCAMS THAT TRICK USERS INTO CLICKING ON MALICIOUS EMAIL ATTACHMENTS.

      THE EMOTET AUTHORS HAVE RECENTLY ADDED ANOTHER MODULE TO THEIR TROJAN, WHICH STEALS EMAIL DATA FROM INFECTED VICTIMS. THE INTENTION BEHIND THIS EMAIL EXFILTRATION CAPABILITY WAS PREVIOUSLY UNCLEAR, BUT EMOTET HAS RECENTLY BEEN OBSERVED SENDING OUT CONTEXTUALIZED PHISHING EMAILS AT SCALE.


      CONTINUED TO 3-

    2. CONTINUED FROM 2--

      THIS MEANS IT CAN AUTOMATICALLY INSERT ITSELF INTO PRE-EXISTING EMAIL THREADS, ADVISING THE VICTIM TO CLICK ON A MALICIOUS ATTACHMENT, WHICH THEN APPEARS IN THE FINAL, MALICIOUS EMAIL. THIS INSERTION OF THE MALWARE INTO PRE-EXISTING EMAILS GIVES THE PHISHING EMAIL MORE CONTEXT, THEREBY MAKING IT APPEAR MORE LEGITIMATE.

      EMOTET IS A TROJAN THAT IS PRIMARILY SPREAD THROUGH SPAM EMAILS (MALSPAM). THE INFECTION MAY ARRIVE EITHER VIA MALICIOUS SCRIPT, MACRO-ENABLED DOCUMENT FILES, OR MALICIOUS LINK. ... EMOTET IS POLYMORPHIC, WHICH MEANS IT CAN CHANGE ITSELF EVERY TIME IT IS DOWNLOADED TO EVADE SIGNATURE-BASED DETECTION.

      ONCE EMOTET HAS INFECTED A HOST, A MALICIOUS FILE THAT IS PART OF THE MALWARE IS ABLE TO INTERCEPT, LOG, AND SAVE OUTGOING NETWORK TRAFFIC VIA A WEB BROWSER LEADING TO SENSITIVE DATA BEING COMPILED TO ACCESS THE VICTIM'S BANK ACCOUNT(S). EMOTET IS A MEMBER OF THE FEODO TROJAN FAMILY OF TROJAN MALWARE.

      ONCE ON A COMPUTER, EMOTET DOWNLOADS AND EXECUTES A SPREADER MODULE THAT CONTAINS A PASSWORD LIST THAT IT USES TO ATTEMPT TO BRUTE FORCE ACCESS TO OTHER MACHINES ON THE SAME NETWORK. ... THE EMAILS TYPICALLY CONTAIN A MALICIOUS LINK OR ATTACHMENT WHICH IF LAUNCHED WILL RESULT IN THEM BECOMING INFECTED WITH TROJAN.EMOTET..

      A BANKER TROJAN IS A MALICIOUS COMPUTER PROGRAM DESIGNED TO GAIN ACCESS TO CONFIDENTIAL INFORMATION STORED OR PROCESSED THROUGH ONLINE BANKING SYSTEMS. BANKER TROJAN IS A FORM OF TROJAN HORSE AND CAN APPEAR AS A LEGITIMATE PIECE OF SOFTWARE UNTIL IT IS INSTALLED ON AN ELECTRONIC DEVICE.

      EVERY DAY, ARTIFICIAL INTELLIGENCE ENABLES WINDOWS DEFENDER AV TO STOP COUNTLESS MALWARE OUTBREAKS IN THEIR TRACKS.

      YET THE CRIMINALS BEHIND THE CREATION OF EMOTET COULD EASILY LEVERAGE AI TO SUPERCHARGE THIS ATTACK. ., BY LEVERAGING AN AI’S ABILITY TO LEARN AND REPLICATE NATURAL LANGUAGE BY ANALYSING THE CONTEXT OF THE EMAIL THREAD, THESE PHISHING EMAILS COULD BECOME HIGHLY TAILORED TO INDIVIDUALS.

      THIS WOULD MEAN THAT AN AI-POWERED EMOTET TROJAN COULD CREATE AND INSERT ENTIRELY CUSTOMIZED, MORE BELIEVABLE PHISHING EMAILS. CRUCIALLY, IT WOULD BE ABLE TO SEND THESE OUT AT SCALE, WHICH WOULD ALLOW CRIMINALS TO INCREASE THE YIELD OF THEIR OPERATIONS ENORMOUSLY.

      SPEAR PHISHING AGAIN---
      IN SPEAR PHISHING (TARGETED PHISHING), EMAILS WITH INFECTED ATTACHMENTS OR LINKS ARE SENT TO INDIVIDUALS OR ORGANISATIONS IN ORDER TO ACCESS CONFIDENTIAL INFORMATION. WHEN OPENING THE LINK OR ATTACHMENT, MALWARE IS RELEASED, OR THE RECIPIENT IS LED TO A WEBSITE WITH MALWARE THAT INFECTS THE RECIPIENT'S COMPUTER.

      DURING THE 2016 US PRESIDENTIAL CAMPAIGN, FANCY BEAR – A HACKER GROUP AFFILIATED WITH RUSSIAN MILITARY INTELLIGENCE ( SIC ) – USED SPEAR PHISHING TO STEAL EMAILS FROM INDIVIDUALS AND ORGANISATIONS ASSOCIATED WITH THE US DEMOCRATIC PARTY.

      THE ONLINE ENTITIES DCLEAKS AND GUCCIFER 2.0 LEAKED THE DATA VIA MEDIA OUTLETS AND WIKILEAKS TO DAMAGE HILLARY CLINTON'S CAMPAIGN. IN JULY 2018, SPECIAL COUNSEL ROBERT MUELLER INDICTED RUSSIAN INTELLIGENCE OFFICERS ALLEGED TO BE BEHIND THE ATTACK ( SIC) . ANOTHER STATE-SPONSORED RUSSIAN HACKER GROUP, COZY BEAR, HAS USED SPEAR PHISHING TO TARGET NORWEGIAN AND DUTCH AUTHORITIES. THIS PROMPTED THE DECISION TO COUNT THE VOTES FOR THE 2017 DUTCH GENERAL ELECTION BY HAND.

      AI-BASED SYSTEMS ARE ABLE TO ADAPT TO CONTINUOUSLY CHANGING THREATS AND CAN MORE EASILY HANDLE NEW AND UNSEEN ATTACKS. THE PATTERN AND ANOMALY SYSTEMS CAN ALSO HELP TO IMPROVE OVERALL SECURITY BY CATEGORIZING ATTACKS AND IMPROVING SPAM AND PHISHING DETECTION.

      RATHER THAN REQUIRING USERS TO MANUALLY FLAG SUSPICIOUS MESSAGES, THESE SYSTEMS CAN AUTOMATICALLY DETECT MESSAGES THAT DON'T FIT THE USUAL PATTERN AND QUARANTINE THEM FOR FUTURE INSPECTION OR AUTOMATIC DELETION. THESE INTELLIGENT SYSTEMS CAN ALSO AUTONOMOUSLY MONITOR SOFTWARE SYSTEMS AND AUTOMATICALLY APPLY SOFTWARE PATCHES WHEN CERTAIN PATTERNS ARE DISCOVERED.
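
      A SMALL ILLUSTRATIVE SKETCH OF SUCH PATTERN-BASED FILTERING (NOT FROM THE ORIGINAL COMMENT; THE TINY DATASET BELOW IS MADE UP): A BAG-OF-WORDS TEXT CLASSIFIER TRAINED ON LABELLED EMAILS, FOR EXAMPLE WITH SCIKIT-LEARN.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "verify your account now by clicking this link",
    "your invoice is attached, open immediately",
    "team lunch moved to 1pm tomorrow",
    "minutes from yesterday's project meeting attached",
]
labels = [1, 1, 0, 0]     # 1 = phishing, 0 = legitimate (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# with real training data, messages that fit the phishing pattern can be quarantined
print(model.predict(["click here to reset your bank password"]))   # likely [1]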

      capt ajit vadakayil
      ..





  1. THE HONGKONG PROTESTS ARE FUNDED AND CONTROLLED BY THE JEWISH OLIGARCHY..

    EVEN CHINESE DO NOT KNOW THAT JEW MAO AND JEW MAURICE STRONG WERE DEEP STATE AGENTS...

    JEWS WHO HAVE MONOPOLISED THE MAFIA AND CRIME IN HONGKONG DO NOT WANT TO BE EXTRADITED TO THE CHINESE MAINLAND..

    MACAU GAMBLING IS FAR MORE THAN LAS VEGAS.. DRUG MONEY IS LAUNDERED HERE.

    ROTHSCHILD CONTROLLED PORTUGAL LEGALIZED GAMBLING IN MACAU IN 1850..

    ROTHSCHILD RULED INDIA..NOT THE BRITISH KING OR PARLIAMENT.. HE GREW OPIUM IN INDIA AND SOLD IT IN CHINA.. HIS DRUG MONEY WAS LAUNDERED IN HONGKONG HSBC BANK.

    KATHIAWARI JEW GANDHI WAS ROTHSCHILDs AGENT WHEN IT CAME TO SUPPORTING OPIUM CULTIVATION IN INDIA..

    http://ajitvadakayil.blogspot.com/2019/07/how-gandhi-converted-opium-to-indigo-in.html

    INDIAN FARMERS WHO REFUSED TO CULTIVATE OPIUM WERE SHIPPED OFF ENMASSE AS SLAVES ABROAD WITH FAMILY..

    http://ajitvadakayil.blogspot.com/2010/04/indentured-coolie-slavery-reinvented.html

    INDIAN AND AMERICAN OLIGARCHY ( CRYPTO JEWS ) WERE ALL DRUG RUNNERS OF JEW ROTHSCHILD..

    http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html

    http://ajitvadakayil.blogspot.com/2010/12/dirty-secrets-of-boston-tea-party-capt.html

    DRUG CARTELS OF COLOMBIA/ MEXICO USE HONGKONG TO LAUNDER THEIR DRUG MONEY..

    GAMBLING TOURISM IS MACAU'S BIGGEST SOURCE OF REVENUE, MAKING UP MORE THAN 54% OF THE ECONOMY. VISITORS ARE MADE UP LARGELY OF CHINESE NATIONALS FROM MAINLAND CHINA AND HONG KONG.

    HONGKONG IS NOW FLOODED WITH DRUGS .. DUE TO HIGH STRESS AT WORK, PEOPLE ARE ADDICTED .. HOUSE RENT IN HONGKONG IS VERY HIGH DUE TO THE JEWISH OLIGARCHS WHO CONTROL HONGKONG.

    IN 2012, HSBC HOLDINGS PAID US$ 1.9 BILLION TO RESOLVE CLAIMS IT ALLOWED DRUG CARTELS IN MEXICO AND COLOMBIA TO LAUNDER PROCEEDS THROUGH ITS BANKS. HSBC WAS FOUNDED BY ROTHSCHILD.

    CHINA'S EXCESSIVELY STRICT FOREIGN EXCHANGE CONTROLS ARE INDIRECTLY BREEDING MONEY LAUNDERING, PROVIDING A HUGE DEMAND FOR UNDERGROUND KOSHER MAFIA BANKS.

    FACTORY MANUFACTURERS CONVERT HONG KONG DOLLARS AND RENMINBI WITH UNDERGROUND BANKS FOR CONVENIENCE WHILE CASINOS IN MACAU OFFER RECEIPTS TO GIVE LEGITIMACY TO SUSPECT CURRENCY FLOWS.

    INDIA WAS NO 1 EXPORTED OF PRECURSOR CHEMICALS LIKE EPHEDRINE TO MEXICO FOR PRODUCING METH.. TODAY CHINA ( GUANGDONG ) HAS TAKEN OVER POLE POSITION..

    http://ajitvadakayil.blogspot.com/2017/02/breaking-bad-tv-serial-review-where.html

    EL CHAPO AND HIS DEPUTY IGNACIO "NACHO" CORONEL VILLARREAL USED HONG KONG TO LAUNDER BILLIONS OF DOLLARS..TO GET SOME IDEA WATCH NETFLIX SERIES “NARCOS MEXICO” AND “EL CHAPO”.

    BALLS TO THE DECOY OF "FREEDOM " FOR HONGKONG CITIZENS.. IT IS ALL ABOUT FREEDOM FOR JEWISH MAFIA TO USE HONGKONG TO LAUNDER DRUG MONEY.

    JEW ROTHSCHILD COULD SELL INDIAN OPIUM IN CHINA ONLY BECAUSE THE CHINESE MAFIA AND SEA PIRATES WAS CONTROLLED BY HIM AND JEW SASSOON.

    COLOMBIAN/ MEXICAN DRUG CARTEL KINGS FEAR EXTRADITION TO USA.. SAME NOW WITH HONGKONG MONEY LAUNDERING MAFIA..

    https://ajitvadakayil.blogspot.com/2019/11/paradox-redemption-victory-in-defeat.html

    THE 2019 HONG KONG PROTESTS HAVE BEEN LARGELY DESCRIBED AS "LEADERLESS".. BALLS, IT IS 100% CONTROLLED BY JEWS

    PROTESTERS COMMONLY USED LIHKG, ( LIKE REDDIT ) AN ONLINE FORUM, AN OPTIONALLY END-TO-END ENCRYPTED MESSAGING SERVICE, TO COMMUNICATE AND BRAINSTORM IDEAS FOR PROTESTS AND MAKE COLLECTIVE DECISIONS ..

    THE KOSHER WEBSITE IS WELL-KNOWN FOR BEING THE ULTIMATE PLATFORM FOR DISCUSSING THE STRATEGIES FOR THE LEADERLESS ANTI-EXTRADITION BILL PROTESTS IN 2019..

    CONTINUED TO 2-

    1. CONTINUED FROM 1-

      HONGKONG PROTESTERS USE LIHKG TO MICROMANAGE STRIKE STRATEGIES , CALL FOR BACKUP OR ARRANGE LOGISTICS SUPPLIES FOR THOSE ON THE FRONT LINES OF CLASHES WITH POLICE.

      LIHKG CALLS ON RESIDENTS TO SKIP WORK AND CLASSES AND VANDALISE. HONGKONGERS STICK TO LIHKG AS POSTS ARE PREDOMINANTLY IN THEIR NATIVE TONGUE, CANTONESE.

      LIHKG IS A SAFE HAVEN FOR THESE PROTESTING PEOPLE CONTROLLED BY JEWSIH OLIGARCHS.

      AN ACCOUNT CAN ONLY BE CREATED WITH AN EMAIL ADDRESS PROVIDED BY AN INTERNET SERVICE PROVIDER OR HIGHER EDUCATION INSTITUTION, MEANING THE USER CANNOT HIDE THEIR IDENTITY FROM LIHKG.

      THE JEWISH OLIGARCHS KNOW THEIR PRIVATE ARMY. THE FORUM DOES NOT REQUIRE USERS TO REVEAL ANY PERSONAL INFORMATION, INCLUDING THEIR NAMES, SO THEY CAN REMAIN ANONYMOUS.

      LIHKG IS ALSO FERTILE GROUND FOR DOXXING PEOPLE NOT SUPPORTIVE OF THE MOVEMENT AGAINST THE EXTRADITION BILL. ONE POLICE OFFICER FOUND HIMSELF A TARGET OF PUBLIC MOCKERY WHEN HIS NAME AND PICTURE WERE LEAKED, ALONG WITH PRIVATE TINDER CONVERSATIONS REQUESTING SEXUAL FAVOURS IN A POLICE STATION.

      THE PHRASE “BE WATER, MY FRIEND”, ORIGINALLY SAID BY MARTIAL ARTS LEGEND BRUCE LEE, HAS BECOME A MANTRA FOR PROTESTERS, WHO HAVE TAKEN A FLUID APPROACH TO THEIR RALLIES.

      THE PHRASE HAS BEEN POPULARISED ON LIHKG AS A WAY TO PROVIDE ENCOURAGEMENT AND UNITE CITIZENS.

      INDIAN JOURNALISTS ARE ALL STUPID POTHOLE EXPERTS, RIGHT ?

      capt ajit vadakayil
      ..
  2. POOR AJIT DOVAL AND RAW

    THESE ALICES IN WONDERLAND DONT EVEN KNOW THAT URBAN NAXALS/ KASHMIRI SEPARATISTS / SPONSORING DEEP STATE NGOs ARE USING TELEGRAM FOR THEIR DESH DROHI PURPOSES..

    TELEGRAM WITH 210 MILLION ACTIVE USERS IS A CLOUD-BASED INSTANT MESSAGING AND VOICE OVER IP SERVICE. TELEGRAM CLIENT APPS ARE AVAILABLE FOR ANDROID, IOS, WINDOWS PHONE, WINDOWS NT, MACOS AND LINUX. USERS CAN SEND MESSAGES AND EXCHANGE PHOTOS, VIDEOS, STICKERS, AUDIO AND FILES OF ANY TYPE.

    THE DEEP STATE USES TELEGRAM FOR REGIME CHANGE.. TELEGRAM IS DUBBED AS A "JIHADI MESSAGING APP".

    ISIS WHICH WAS FUNDED ARMED AND CONTROLLED BY JEWISH DEEP STATE USED TELEGRAM..

    https://en.wikipedia.org/wiki/Blocking_Telegram_in_Russia

    LAUNCHED IN 2013, BY A ANTI-PUTIN RUSSIAN JEW , TELEGRAM COMPANY HAS MARKETED THE APP AS A SECURE MESSAGING PLATFORM IN A WORLD WHERE ALL OTHER FORMS OF DIGITAL COMMUNICATION SEEM TRACKABLE.

    IT HAS FEATURES SUCH AS END-TO-END ENCRYPTION (WHICH PREVENTS ANYONE EXCEPT THE SENDER AND RECEIVER FROM ACCESSING A MESSAGE), SECRET CHATROOMS, AND SELF-DESTRUCTING MESSAGES.

    USERS ON TELEGRAM CAN COMMUNICATE IN CHANNELS, GROUPS, PRIVATE MESSAGES, OR SECRET CHATS. WHILE CHANNELS ARE OPEN TO ANYONE TO JOIN (AND THUS USED BY TERRORIST GROUPS TO DISSEMINATE PROPAGANDA), SECRET CHATS ARE VIRTUALLY IMPOSSIBLE TO CRACK BECAUSE THEY’RE PROTECTED BY A SOPHISTICATED FORM OF ENCRYPTION.

    THE COMBINATION OF THESE DIFFERENT FUNCTIONS IN A SINGLE PLATFORM IS WHY GROUPS LIKE ISIS USE TELEGRAM AS A “COMMAND AND CONTROL CENTER”.. THEY CONGREGATE ON TELEGRAM, THEN THEY GO TO DIFFERENT PLATFORMS. THE INFORMATION STARTS IN THE APP, THEN SPREADS TO TWITTER, FACEBOOK.

    SECRET CHATS ARE PROTECTED BY END-TO-END ENCRYPTION. HOW THIS WORKS IS THAT EVERY USER IS GIVEN A UNIQUE DIGITAL KEY WHEN THEY SEND OUT A MESSAGE. TO ACCESS THAT MESSAGE, THE RECEIVER HAS TO HAVE A KEY THAT MATCHES THE SENDER’S EXACTLY, SO THAT MESSAGES FROM ANY ONE USER CAN ONLY BE READ BY THE INTENDED RECIPIENT.

    THIS MAKES IT ALMOST IMPOSSIBLE FOR MIDDLEMEN SUCH AS POLICE OR INTELLIGENCE AGENCIES TO ACCESS THE FLOW OF INFORMATION BETWEEN THE SENDER AND RECEIVER.

    EVEN IF POLICE CAN IDENTIFY WHO IS SPEAKING TO WHOM, AND FROM WHERE, THEY HAVE NO WAY OF KNOWING WHAT THEY’RE SAYING TO EACH OTHER. IN FACT, BECAUSE THE ENCRYPTION HAPPENS DIRECTLY BETWEEN THE TWO USERS, EVEN TELEGRAM ( BALLS , THEY KNOW ) ITSELF HAS NO WAY OF KNOWING WHAT’S IN THESE MESSAGES

    BEFORE A USER SENDS A MESSAGE IN A SECRET CHAT, THEY CAN CHOOSE TO SET A SELF-DESTRUCT TIMER ON IT, WHICH MEANS THAT SOME TIME AFTER THE MESSAGE HAS BEEN READ, IT AUTOMATICALLY AND PERMANENTLY DISAPPEARS FROM BOTH DEVICES.

    COMPARED WITH OTHER SOCIAL MEDIA PLATFORMS, TELEGRAM HAS EXTREMELY LOW BARRIERS TO ENTRY. ALL USERS HAVE TO DO TO SET UP AN ACCOUNT IS PROVIDE IS A CELLPHONE NUMBER, TO WHICH THE APP THEN SENDS AN ACCESS CODE.

    IT’S COMMON PRACTICE FOR TERRORISTS TO SUPPLY ONE CELLPHONE NUMBER TO SET UP THEIR ACCOUNT BUT USE ANOTHER TO ACTUALLY OPERATE THE ACCOUNT.

    THE SIM CARD YOU USE TO OPEN YOUR TELEGRAM ACCOUNT AND THE SIM CARD YOU ACTUALLY USE ON THE PHONE WITH THE APPLICATION DON’T HAVE TO THE SAME.

    NOT ONLY DOES THIS MAKE IT HARDER FOR LAW ENFORCEMENT OFFICIALS TO TRACK DOWN TERRORISTS THROUGH TELEGRAM, IT ALSO MAKES IT EASIER FOR TERRORISTS TO SET UP A NEW ACCOUNT ONCE THEY DISCOVER THEIR PREVIOUS ONE HAS BEEN EXPOSED TO THE POLICE.

    ANOTHER ATTRACTIVE FEATURE OF THE APP IS THAT IT’S REALLY QUITE HARD TO GET BOOTED OFF IT.

    TELEGRAM’S MESSAGING SERVICE IS POPULAR BECAUSE IT OFFERS A “SECRET CHAT” FUNCTION ENCRYPTED WITH TELEGRAM’S PROPRIETARY MTPROTO PROTOCOL.

    capt ajit vadakayil
    ..

  1. What has happened in Hong kong will be studied refined and applied globally. Chilling scenarios. Hope GOI also learns from this.
    1. “MONEY LAUNDERING” COVERS ALL KINDS OF METHODS USED TO CHANGE THE IDENTITY OF ILLEGALLY OBTAINED MONEY (I.E. CRIME PROCEEDS) SO THAT IT APPEARS TO HAVE ORIGINATED FROM A LEGITIMATE SOURCE.

      THE TECHNIQUES FOR LAUNDERING FUNDS VARY CONSIDERABLY AND ARE OFTEN HIGHLY INTRICATE.

      IN HONG KONG, CRIME PROCEEDS ARE GENERATED FROM VARIOUS ILLEGAL ACTIVITIES. THEY CAN BE DERIVED FROM DRUG TRAFFICKING, SMUGGLING, ILLEGAL GAMBLING, BOOKMAKING, BLACKMAIL, EXTORTION, LOAN SHARKING, TAX EVASION, CONTROLLING PROSTITUTION, CORRUPTION, ROBBERY, THEFT, FRAUD, COPYRIGHT INFRINGEMENT, INSIDER DEALING AND MARKET MANIPULATION.

      WHEN CRIME PROCEEDS ARE LAUNDERED, CRIMINALS WOULD THEN BE ABLE TO USE THE MONEY WITHOUT BEING LINKED EASILY TO THE CRIMINAL ACTIVITIES FROM WHICH THE MONEY WAS ORIGINATED.

      THE MONEY LAUNDERING MAFIA IN HONGKONG IS JEWISH CONTROLLED BY THE DEEP STATE..

      HONG KONG IS A MAJOR PIPELINE THROUGH WHICH INTERNATIONAL FRAUDSTERS, GLOBAL DRUG-TRAFFICKING CARTELS, PEOPLE-SMUGGLING GANGS AND ONLINE RACKETEERS FUNNEL THEIR ILL-GOTTEN GAINS.

      ONLINE FRAUD, INVESTMENT FRAUD, DRUGS, LOAN-SHARKING, BOOKMAKING, ILLEGAL GAMBLING, TAX EVASION AND CORRUPTION WERE ALL CRIMES ASSOCIATED WITH THE MONEY-LAUNDERING CASES IN HONGKONG

      HSBC WAS FOUNDED BY JEW ROTHSCHILD TO LAUNDER OPIUM DRUG MONEY

      http://ajitvadakayil.blogspot.com/2019/07/how-gandhi-converted-opium-to-indigo-in.html

      IN A LANDMARK CASE, THE HSBC BANK AGREED TO PAY A US$1.9 BILLION IN FINES IN 2012, AFTER ADMITTING IT KNOWINGLY MOVED HUNDREDS OF MILLIONS FOR MEXICAN DRUG CARTELS AND ILLEGALLY SERVED CLIENTS IN IRAN, MYANMAR, LIBYA, SUDAN AND CUBA IN VIOLATION OF US SANCTIONS.

      UNDER THE TERMS OF THE SETTLEMENT, FEDERAL PROSECUTORS AGREED TO DROP ALL CHARGES AFTER FIVE YEARS IF THE BANK PAID THE FINE, TOOK REMEDIAL ACTION AND AVOIDED COMMITTING NEW VIOLATIONS.

      http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html

      HE DEEP STATE ENSURED THAT AUTHORITIES FAILED TO PROSECUTE ANY SBC SENIOR EXECUTIVES AND ALLOWED THE BANK ITSELF TO WALK AWAY WITH NO CRIMINAL RECORD.

      MOST OF THE PARSI JUDGES AND LAWYERS IN INDIA ARE DESCENDANTS OF DRUG RUNNERS IN THE PAYROLL OF SASSOON AND ROTHSCHILD.

      THOUSANDS OF FILIPINO DOMESTIC WORKERS IN HONG KONG DUPED INTO PAYING FOR BOGUS JOBS IN CANADA AND BRITAIN HAVE BEEN FRAUD VICTIMS AS WELL AS UNWITTING CONTRIBUTORS TO A MONEY-LAUNDERING SCHEME THAT AUTHORITIES HAVE IGNORED

      PEOPLE LINKED TO A JEWISH MAID AGENCY UNDER SCRUTINY USED INTERNATIONAL BANKS LOCALLY TO REPEATEDLY TRANSFER MILLIONS OF HONG KONG DOLLARS IN SMALL SUMS TO BURKINA FASO, MALAYSIA, NIGERIA AND TURKEY.

      INSTEAD OF DOING HIS JOB, AJIT DOVAL IS SITTING NEXT TO MODI IN ALL HIS FOREIGN JAUNTS.. AND BOTH ARE BABES IN THE WOODS WHEN IT COMES TO WORLD INTRIGUE..

TOP MILITARY BOSSES AND NATIONAL SECURITY ADVISORS CANNOT BE BRAIN DEAD ANY MORE..   MOST OF THEM CANNOT ABSORB NEW DIGITAL TECHNOLOGY..

THE GREASE AND TACKLE AGE OF GEN PATTON / FIELD MARSHALL MANEKSHAW TYPE BLUSTER AND SWAGGER IS NOW OVER..

WARS MUST BE WON BEFORE THEY ARE FOUGHT..   

AJIT DOVAL CANNOT REST ON HIS PAST LAURELS SECURED BY BEING A DEEP ASSET INSIDE PAKISTAN..    

WE DONT GET IMPRESSED BY THE FACT THAT HE SITS BESIDES MODI,  WHEN HE HAS HIS ENDLESS FOREIGN JAUNTS.. 

AJIT DOVAL HAS FAILED TO ADVISE MODI THAT HE MUST HEED MORE THAN 300 CRITICAL SUGGESTIONS SENT BY BLOGGER CAPT AJIT VADAKAYIL, AFTER DROPPING HIS FAALTHU HUMONGOUS EGO.

 DONT MAKE ME SAY ANYTHING MORE .. 


PAKISTAN IS MERRILY HACKING ISRO/ DRDO AND KUDANKULAM NUCLEAR PLANT. 

Visual cryptography (VC) is a process in which a secret image is encrypted into shares that individually divulge no information about the original secret image. Encryption provides security by hiding the content of secret information, while watermarking hides the existence of secret information.



Visual cryptography is an algorithm used for encrypting digital media like images, text etc in which the decryption can be performed by visual mechanical operations rather than using a computer.  VC  allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image.



The basis of the technique is the superposition (overlaying) of two semi-transparent layers. Imagine two sheets of transparency covered with a seemingly random collection of black pixels. Individually, there is no discernible message printed on either one of the sheets.

Overlapping them creates additional interference to the light passing through (mathematically the equivalent of performing a Boolean OR operation on the images), but it still just looks like a random collection of pixels. Mysteriously, however, if the two grids are overlaid correctly, at just the right position, a message magically appears! The patterns are designed to reveal a message.

One image contains random pixels and the other image contains the secret information. It is impossible to retrieve the secret information from one of the images. Both transparent images or layers are required to reveal the information. The easiest way to implement Visual Cryptography is to print the two layers onto a transparent sheet.

When the random image contains truly random pixels, it can be seen as a one-time pad system and will offer unbreakable encryption. In the overlay animation you can observe the two layers sliding over each other until they are correctly aligned and the hidden information appears.

To try this yourself, you can copy the example layers 1 and 2, and print them onto a transparent sheet or thin paper. Always use a program that displays the black and white pixels correctly, and set the printer so that all pixels are printed accurately (no diffusion or photo enhancing etc.). You can also copy and paste them onto each other in a drawing program like Paint and see the result immediately, but make sure to select transparent drawing and align both layers exactly over each other.


Each pixel of the images is divided into smaller blocks. There are always the same number of white (transparent) and black blocks. If a pixel is divided into two parts, there are one white and one black block. If the pixel is divided into four equal parts, there are two white and two black blocks. The example images above use pixels that are divided into four parts.
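
A minimal sketch of the 2-out-of-2 scheme described above (illustrative only; 1 marks a black subpixel and 0 a white, transparent one): each secret pixel becomes a 2x2 block on each share, and the second share uses the same pattern for a white pixel but the complementary pattern for a black pixel.

import numpy as np

rng = np.random.default_rng(0)

# the 2x2 subpixel patterns; each contains two black and two white subpixels
patterns = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(secret):
    """secret: 2-D array of 0 (white) / 1 (black) pixels -> two share images."""
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=int)
    share2 = np.zeros((2 * h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]            # random pattern for share 1
            share1[2*i:2*i+2, 2*j:2*j+2] = p
            # white pixel: same pattern; black pixel: complementary pattern
            share2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return share1, share2

secret = np.array([[1, 0], [0, 1]])
s1, s2 = make_shares(secret)
overlay = s1 | s2        # stacking the transparencies is a Boolean OR
print(overlay)           # black pixels become solid 2x2 blocks; white ones stay half black

Stacking the transparencies corresponds to the Boolean OR mentioned earlier: black secret pixels become fully black blocks, while white pixels stay half black, which is what makes the message visible to the eye.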


If Visual Cryptography is used for secure communications, the sender will distribute one or more random layers 1 in advance to the receiver. If the sender has a message, he creates a layer 2 for a particular distributed layer 1 and sends it to the receiver. 

The receiver aligns the two layers and the secret information is revealed, without the need for an encryption device, a computer, or calculations by hand. The system is unbreakable as long as both layers don't fall into the wrong hands. If only one of the two layers is intercepted, it is impossible to retrieve the encrypted information.

Visual cryptography (VC) is an optical image encryption technique allowing the secret image to be recovered when multiple visual key images are overlapped. Conventionally, the visual key images are printed on transparent sheets and they have to be placed at the same location for overlapping.



Image encryption is the process of encoding a secret image with the help of some encryption algorithm in such a way that unauthorized users can't access it.

Encryption is a process which uses a finite set of instructions called an algorithm to convert the original message, known as plaintext, into ciphertext, its encrypted form. Cryptographic algorithms normally require a set of characters called a key to encrypt or decrypt data.
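
As a tiny, generic illustration of key-based encryption (an example of the general idea only, using the third-party Python "cryptography" package, which is not mentioned in the text above):

from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the shared secret key
cipher = Fernet(key)

plaintext = b"meet at the usual place"
ciphertext = cipher.encrypt(plaintext)   # unreadable without the key
recovered = cipher.decrypt(ciphertext)

print(ciphertext)
print(recovered)                         # b'meet at the usual place'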





"Watermarking" is the process of hiding digital information in a carrier signal; the hidden information should, but does not need to, contain a relation to the carrier signal. Digital watermarks may be used to verify the authenticity or integrity of the carrier signal or to show the identity of its owners

A digital watermark is a kind of marker covertly embedded in a noise-tolerant signal such as audio, video or image data. It is typically used to identify ownership of the copyright of such signal.

Like traditional physical watermarks, digital watermarks are often only perceptible under certain conditions, i.e. after using some algorithm. If a digital watermark distorts the carrier signal in a way that it becomes easily perceivable, it may be considered less effective depending on its purpose

Digital watermarking is defined as embedding a marker within a signal that can bear the noise. It is used to make the copyright known. Invisible modification of the least significant bits in a file is another way of digital watermarking.
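
A minimal sketch of that least-significant-bit idea (illustrative only; the cover image here is just a random stand-in): hide a short bit string in the lowest bit of each pixel value, which changes each pixel by at most 1 and is invisible to the eye.

import numpy as np

def embed_lsb(image, bits):
    """Write `bits` into the least significant bits of the first len(bits) pixels."""
    flat = image.flatten().copy()
    for k, b in enumerate(bits):
        flat[k] = (flat[k] & 0xFE) | b       # clear the LSB, then set it to the bit
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    return [int(v & 1) for v in image.flatten()[:n_bits]]

cover = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)   # stand-in image
mark = [1, 0, 1, 1, 0, 0, 1, 0]                                  # the hidden watermark

stamped = embed_lsb(cover, mark)
print(extract_lsb(stamped, len(mark)))                           # [1, 0, 1, 1, 0, 0, 1, 0]
print(np.abs(stamped.astype(int) - cover.astype(int)).max())     # at most 1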


There are two types of digital watermarking:--
Visible Digital Watermarking: Visible data is embedded as the watermark. This can be a logo or a text that denotes a digital medium's owner.
Invisible Digital Watermarking: The data embedded is invisible or, in case of audio content, inaudible.

What is the difference between watermarking and steganography?

These signals could be videos, pictures, or audio. Steganography changes the carrier in a way that only the sender and the intended recipient are able to detect the message sent through it. Watermarking is of two types, visible and invisible, whereas steganography is typically invisible.


Downtime: Any technical glitch -- a power outage, low internet speed or connectivity, or maintenance of data centres -- can result in downtime, which can be really taxing for the business.

Vendor lock-in: Shifting between clouds can be challenging due to the inherent differences in the vendor platform requirements. Migration can also lead to issues related to support, complexities of configuration and other additional costs. Such transfers often also make the data vulnerable to different security concerns due to compromises and changes made to facilitate the migration.

Limited Control: As your data is on remote servers managed by service providers, your control over it becomes limited especially for businesses which seek enhanced control over their back-end infrastructure.

Since the cloud is available through a shared responsibility model, your vendor won’t handle every single aspect of security. Not even the most sophisticated cloud vendor is completely resistant to data breaches

Most public cloud hosting services won’t be able to guarantee protection, particularly those that host all kinds of websites..

Every time your cloud computing service provider experiences a technical glitch, you might find yourself locked out from accessing your own information.

Hackers and malware are not the only ones who may target a cloud service provider. Cloud computing risks are also presented by insider threats.

The risk of government intrusion also increases when you use a cloud service. Ask yourself: is big brother more likely to snoop on your own email server, or on an email server used by a hundred companies and maintained by Microsoft?

Saying you store your data “on the cloud” compared to “on a server” isn’t exactly true. Cloud-based storage systems still use servers to hold data, but users don’t physically access them. Cloud storage providers don’t build specific servers for each user; the server space is shared between different customers as needed. You may be putting your data at risk if others using your servers upload potentially anomalous or hazardous information..

Data loss is an event where information is either temporarily unavailable or permanently lost or destroyed. This can occur through accidental deletion, overwriting, or malicious actions by users or external hackers who purposely delete data.

EXAMPLE:  Code Spaces was a company that offered source code and project management services to developers. It was built mostly on Amazon Web Services (AWS) using server and storage instances. In June 2014, a hacker gained access to the company’s AWS control panel and demanded ransom payment. When Code Spaces didn’t comply, the hacker deleted important files including EBS snapshots, S3 buckets, and more. Code Spaces was forced to shut down.

Though cloud service providers have improved their security controls in the last few years, ransomware attacks, such as the one described above, have also grown stronger and have doubled year-over-year, leaving businesses vulnerable.

Attackers today can easily evade network perimeter security and perform internal reconnaissance to locate and encrypt shared network files. By encrypting files that are accessed by many business applications across the network, attackers achieve an economy of scale faster and far more damaging than encrypting files on individual devices.

The Vectra 2019 Spotlight Report on Ransomware finds that the most significant ransomware threat — in which hackers steal your data and hold it for ransom — is malicious encryption of shared network files in cloud service providers. 

Cybercriminals are targeting organizations that are most likely to pay larger ransoms in order to regain access to files encrypted by ransomware. The costs of downtime due to operational paralysis, inability to recover backed-up data, and reputational damage are particularly catastrophic for organizations that store their data in the cloud.


Ransomware is a type of malicious software, or malware, designed to deny access to a computer system or data until a ransom is paid. Ransomware typically spreads through phishing emails or by unknowingly visiting an infected website. Ransomware can be devastating to an individual or an organization..

Having data stored "on the cloud" does not by itself mean the data is safe. One mitigation is to connect backup storage only while a backup is running: "mount" the cloud drive, copy the data, then "unmount" it afterward, logically detaching it from the system. This can significantly reduce the risk of ransomware spreading to your backups.

A data breach in which the data is held for ransom is not the same as a ransomware attack. A data breach is a security incident in which sensitive or confidential data is copied and stolen from the organisation; it can then be used in a number of ways, both for financial gain and to cause harm.

Ransomware is a particularly nasty and scary form of malware that blocks and encrypts user data, which is then held for ransom. It can block access to your personal information, or threaten to disable your devices unless you pay for the password to decrypt and unlock your data.

This can be very profitable for online criminals, and there is no guarantee that users who pay a ransom will get full access to their systems again. Plus, if payment is demanded via credit card, for example, criminals may then have access to your card details, enabling them to commit further theft and fraud.

Ransomware is malware that, when downloaded to a device, scrambles or deletes all data until a ransom is paid to restore it. Ransomware is becoming more and more common, with research suggesting that in 2019 a new organization is hit by a ransomware attack every 14 seconds.

Take the most famous example of ransomware: the WannaCry attack. WannaCry was a piece of malware that infected over 230,000 computers across 150 countries within a single day. It encrypted thousands of files and demanded $300 worth of bitcoin, per device, to restore them.

More recently, 22 cities in Texas were hit by ransomware attacks, with attackers demanding $2.5 million to restore encrypted files, leading to a federal investigation. Ransomware is especially prevalent in financial organizations, with 90% experiencing an attack in the last year.

Ransomware begins with the malicious software being downloaded onto an endpoint device, like a desktop computer, laptop or smartphone. This usually happens because of user error or ignorance of security risks. A common method of distributing malware is a phishing attack: an attacker attaches an infected document or link to an email disguised as legitimate, tricking users into opening it and installing the malware on their device.

Another popular method of spreading ransomware is the ‘trojan horse’ approach: disguising ransomware as legitimate software online, then infecting devices after users install it.

Ransomware typically works very quickly. In seconds, the malicious software will take over critical processes on your device and search for files to encrypt, meaning all of the data within them is scrambled. The ransomware will likely delete any files it cannot encrypt.

The ransomware will then infect any other hard-drives or USB devices connected to the infected host machine. Any new devices or files added to your device will also be encrypted after this point. Then, the ransomware will begin sending out signals to all of the other devices on your network, to attempt to infect them as well.

There are different types of ransomware. Some threaten to release the encrypted data to the public, which may be damaging to companies that need to protect customer or business data. There is also scareware, which floods the computer with pop-ups and demands a ransom to solve the issue. The same principle is always involved: a malicious program infects the computer and a payment is demanded to remove it.

Why is Ransomware so Effective?
Ransomware can be hugely damaging to businesses, causing loss of productivity and financial cost.  Most obviously there is the loss of files and data, which may represent hundreds of hours of work, or customer data that is critical to the smooth running of your organization.  There is also the loss of productivity as machines will be unusable. 

According to Kaspersky it takes organizations at least a week to recover their data in most cases. Then of course there is the financial loss of needing to replace infected machines, pay for an IT company to remediate against the attack and put protection in place to stop it happening again.

For these reasons many businesses feel they have no choice but to pay the ransom, although it is highly recommended that they do not. Ransomware generates over $25 million in revenue for hackers each year, which demonstrates how effective it is at extorting money from organizations. So why is ransomware so effective?

Targets Human Weaknesses
By targeting people with phishing attacks, attackers can use ransomware to bypass traditional security technologies. Email is a weak point in many businesses’ security infrastructure, and hackers exploit this by using phishing emails to trick users into opening malicious files and attachments. By using trojan horse viruses, hackers also exploit human error, causing users to inadvertently download malicious files.

The major issue here is a lack of awareness about security threats from most users, with many people unaware of what threats look like, and what they should avoid downloading or opening on the internet or in emails. This lack of security awareness helps ransomware to spread much more quickly.

Ransomware attacks are growing by a record amount, with attackers developing increasingly sophisticated malware. Many businesses do not have the strong defences needed in place to block these attacks, because they can be expensive and complicated to deploy and use. It’s often hard for IT teams to convince company executives that they need strong security defences until it’s too late and systems have already been compromised.

Out of Date Hardware and Software
Alongside not having strong defences against attacks, many organizations also rely too heavily on hardware and software that is out of date. Over time, attackers discover security vulnerabilities. Technology companies push out security updates, but many organizations have no way to verify that users are installing these updates. Many also rely heavily on older computers that are no longer supported, leaving them open to vulnerabilities.

This is one of the main reasons the WannaCry attack was so successful. It targeted many large organizations like the NHS, which for the most part ran ageing machines on operating systems that were no longer regularly supported with updates. The exploit WannaCry used to infect systems had actually been discovered two months before the attack took place and was patched by Microsoft, but devices were not updated, and the attack still spread rapidly.

How Can You Stop Ransomware?
The best way for businesses to stop ransomware attacks is to be proactive in your security approach and ensure that you have strong protections in place before ransomware can infect your systems. Here are some tips for the best protections to put in place to stop ransomware attacks:

Strong, Reputable Endpoint Anti-Virus Security
One of the most important ways to stop ransomware is to have a very strong endpoint security solution. These solutions are installed on your endpoint devices, and block any malware from infecting your systems. They also give admins the ability to see when devices have been compromised, and ensure that security updates have been installed.

Email Security, Inside and Outside the Gateway
As ransomware is commonly delivered through email, email security is crucial to stop ransomware. Secure Email gateway technologies filter email communications with URL defences and attachment sandboxing to identify threats and block them from being delivered to users. This can stop ransomware from arriving on endpoint devices and block users from inadvertently installing ransomware onto their device. 

Ransomware is also commonly delivered through phishing. Secure email gateways can block phishing attacks, but there are also Post-Delivery Protection technologies, which use machine learning algorithms to detect phishing attacks and display warning banners within emails to alert users that a message may be suspicious. This helps users avoid phishing emails that may carry a ransomware attack.
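
To make this concrete, here is a small, purely illustrative Python sketch of the kind of signals a post-delivery filter might score. Real products use trained machine-learning models over far more features; every phrase, threshold and domain below is an invented placeholder, not any vendor's actual logic.

# Toy illustration only: a few hand-written heuristics of the kind a
# post-delivery protection layer might score (real products use trained
# ML models over many more signals). All thresholds here are invented.
import re

SUSPICIOUS_PHRASES = ("verify your account", "urgent action required",
                      "password expired", "click here to unlock")

def phishing_score(sender_domain, reply_to_domain, subject, body, urls):
    """Return a crude suspicion score; higher means more suspicious."""
    score = 0
    # Mismatch between From: and Reply-To: domains is a classic phishing tell.
    if reply_to_domain and reply_to_domain != sender_domain:
        score += 2
    # Urgency / credential-reset language in the subject or body.
    text = (subject + " " + body).lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Links pointing at raw IP addresses are another common tell.
    for url in urls:
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
            score += 2
    return score

if __name__ == "__main__":
    s = phishing_score("paypal.com", "mail.ru-secure.example",
                       "Urgent action required",
                       "Please verify your account now.",
                       ["http://192.0.2.10/login"])
    print("score:", s, "-> add warning banner" if s >= 3 else "-> deliver normally")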

Web Filtering & Isolation Technologies
DNS Web filtering solutions stop users from visiting dangerous websites and downloading malicious files. This helps to block viruses that spread ransomware from being downloaded from the internet, including trojan horse viruses that disguise malware as legitimate business software. 
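
As a rough illustration of the blocklist idea, the short Python sketch below checks whether a queried domain, or any of its parent domains, appears on a blocklist. The domains shown are invented placeholders; a real DNS filtering service sits in the resolution path and serves a block page instead.

# Minimal sketch of the DNS-blocklist idea described above. The domains
# listed are placeholders, not a real threat feed.
BLOCKLIST = {"malicious-update.example", "free-invoice-download.example"}

def is_blocked(query):
    """Block a domain if it, or any parent domain, is on the blocklist."""
    labels = query.lower().rstrip(".").split(".")
    # Check "cdn.bad.example", then "bad.example", and so on up the tree.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False

for q in ("cdn.malicious-update.example", "intranet.mycompany.example"):
    print(q, "-> BLOCK" if is_blocked(q) else "-> resolve normally")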


DNS filters can also block malicious third party adverts. Web filters should be configured to aggressively block threats, and to stop users from visiting dangerous or unknown domains. Utilizing Isolation can also be an important tool to stop ransomware downloads. Isolation technologies completely remove threats away from users by isolating browsing activity in secure servers and displaying a safe render to users. 

This can help to prevent ransomware as any malicious software is executed in the secure container and does not affect the users themselves. The main benefit of Isolation is that it doesn’t impact the user’s experience whatsoever, delivering high security efficacy with a seamless browsing experience.

Security Awareness Training
The people within your organization are often your biggest security risk. In recent years there has been a huge growth in Security Awareness Training platforms, which train users about the risks they face using the internet at work and at home. Awareness Training helps to teach users what threats within email look like, and best security practices they should follow to stop ransomware, such as making sure their endpoints are updated with the latest security software. 

Security Awareness Training solutions typically also provide phishing simulation technologies. This means admins can create customized simulated phishing emails, and send them out to employees to test how effectively they can detect attacks. Phishing simulation is an ideal way to help view your security efficacy across the organization, and is a useful tool to help identify users that need more security training to help stop the spread of ransomware.

Data Backup and Recovery
If a ransomware attack succeeds and your data is compromised, the best way to protect your organization is to be able to restore the data you need quickly and minimize the downtime. The best way to protect data is to ensure that it is backed up in multiple places, including in your main storage area, on local disks, and in a cloud continuity service. In the event of a ransomware attack, backing up data means you will be able to mitigate the loss of any encrypted files and regain functionality of systems.


The best cloud data backup and recovery platforms allow businesses to recover data in the event of a disaster, are available anytime, and integrate easily with existing cloud applications and endpoint devices, on a secure and stable global cloud infrastructure.  Cloud data backup and recovery is an important tool for remediating ransomware.
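
As a minimal sketch of the versioning idea, and assuming made-up paths and retention settings, the Python snippet below copies a source folder into a new timestamped snapshot on each run and prunes the oldest ones, so that an earlier, unencrypted snapshot survives even if a later backup picks up ransomware-encrypted files.

# Minimal sketch of versioned backups. Paths and the retention count are
# assumptions for illustration; real tools copy incrementally and should
# write to offline or immutable storage.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/data/important")          # assumed source directory
BACKUP_ROOT = Path("/mnt/backup")         # ideally offline or immutable storage
KEEP_VERSIONS = 7                         # how many old snapshots to retain

def prune_old_snapshots():
    snapshots = sorted(BACKUP_ROOT.glob("snapshot-*"))
    for old in snapshots[:-KEEP_VERSIONS]:
        shutil.rmtree(old)

def run_backup():
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_ROOT / f"snapshot-{stamp}"
    shutil.copytree(SOURCE, dest)         # full copy into a fresh snapshot folder
    prune_old_snapshots()
    return dest

if __name__ == "__main__":
    print("backed up to", run_backup())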

Don’t Let Ransomware Damage Your Organization

By following the above steps, you can begin to protect your organization against damaging ransomware attacks.

The idea behind ransomware, a form of malicious software, is simple: Lock and encrypt a victim’s computer or device data, then demand a ransom to restore access.

In many cases, the victim must pay the cybercriminal within a set amount of time or risk losing access forever. And since malware attacks are often deployed by cyberthieves, paying the ransom doesn’t ensure access will be restored.

Ransomware holds your personal files hostage, keeping you from your documents, photos, and financial information. Those files are still on your computer, but the malware has encrypted your device, making the data stored on your computer or mobile device inaccessible.

While the idea behind ransomware may be simple, fighting back when you’re the victim of a malicious ransomware attack can be more complex. And if the attackers don’t give you the decryption key, you may be unable to regain access to your data or device.

Knowing the types of ransomware out there, along with some of the dos and don’ts surrounding these attacks, can go a long way toward helping protect yourself from becoming a victim of ransomware.

Types of ransomware--
Ransomware attacks can be deployed in different forms. Some variants may be more harmful than others, but they all have one thing in common: a ransom. Here are seven common types of ransomware.

Crypto malware. This form of ransomware can cause a lot of damage because it encrypts things like your files, folders, and hard-drives. One of the most familiar examples is the destructive 2017 WannaCry ransomware attack. It targeted thousands of computer systems around the world that were running Windows OS and spread itself within corporate networks globally. Victims were asked to pay ransom in Bitcoin to retrieve their data.

Lockers. Locker-ransomware is known for infecting your operating system to completely lock you out of your computer or devices, making it impossible to access any of your files or applications. This type of ransomware is most often Android-based.

Scareware. Scareware is fake software that acts like an antivirus or a cleaning tool. Scareware often claims to have found issues on your computer, demanding money to resolve the problems. Some types of scareware lock your computer. Others flood your screen with annoying alerts and pop-up messages.

Doxware. Commonly referred to as leakware or extortionware, doxware threatens to publish your stolen information online if you don’t pay the ransom. As more people store sensitive files and personal photos on their computers, it’s understandable that some people panic and pay the ransom when their files have been hijacked.

RaaS. Otherwise known as “Ransomware as a service,” RaaS is a type of malware hosted anonymously by a hacker. These cybercriminals handle everything from distributing the ransomware and collecting payments to managing decryptors — software that restores data access — in exchange for their cut of the ransom.

Mac ransomware. Mac operating systems were infiltrated by their first ransomware in 2016. Known as KeRanger, this malicious software infected Apple user systems through an app called Transmission, which was able to encrypt its victims’ files after being launched.

Ransomware on mobile devices. Ransomware began infiltrating mobile devices on a larger scale in 2014. What happens? Mobile ransomware often is delivered via a malicious app, which leaves a message on your device that says it has been locked due to illegal activity.

The origins of ransomware

How did ransomware get started? While initially targeting individuals, later ransomware attacks have been tailored toward larger groups like businesses with the intent of yielding bigger payouts. Here are some notable dates on the ransomware timeline that show how it got its start, how it progressed, and where ransomware is now.

PC Cyborg, also known as the AIDS Trojan, in the late 1980s. This was the first ransomware, released by AIDS researcher Joseph Popp. Popp carried out his attack by distributing 20,000 floppy disks to other AIDS researchers. Little did the researchers know, these disks contained malware that would encrypt their C: directory files after 90 reboots and demand payment.

GpCode in 2004. This threat implemented a weak form of RSA encryption on victims’ personal files until they paid the ransom.

WinLock in 2007. Rather than encrypting files, this form of ransomware locked its victims out of their desktops and then displayed pornographic images on their screens. In order to remove the images, victims had to pay a ransom with a paid SMS.

Reveton in 2012. This so-called law enforcement ransomware locked its victims out of their desktops while showing what appeared to be a page from an enforcement agency such as the FBI. This fake page accused victims of committing crimes and told them to pay a fine with a prepaid card.

CryptoLocker in 2013. Ransomware tactics continued to progress, especially by 2013 with this military-grade encryption that used key storage on a remote server. These attacks infiltrated over 250,000 systems and reaped $3 million before being taken offline.

Locky in 2016. So-called Locky ransomware used social engineering to deliver itself via email. When it was first released, potential victims were enticed to click on an attached Microsoft Word document, thinking the attachment was an invoice that needed to be paid. But the attachment contained malicious macros. More recent Locky ransomware has evolved into the use of JavaScript files, which are smaller files that can more easily evade anti-malware products.

WannaCry in 2017. These more recent attacks are examples of encrypting ransomware, which was able to spread anonymously between computers and disrupt businesses worldwide.

Sodinokibi in 2019. The cybercriminals who created this ransomware infiltrated managed service providers (MSPs) in order to reach their customers, such as dental offices, on a larger scale.

Ransomware remains a popular means of attack, and continues to evolve as new ransomware families are discovered.

Who are the targets of ransomware attacks?

Ransomware can spread across the Internet without specific targets. But the nature of this file-encrypting malware means that cybercriminals also are able to choose their targets. This targeting ability enables cybercriminals to go after those who can — and are more likely to — pay larger ransoms.

Dos and don’ts of ransomware--

Ransomware is a profitable market for cybercriminals and can be difficult to stop. Prevention is the most important aspect of protecting your personal data. To deter cybercriminals and help protect yourself from a ransomware attack, keep in mind these eight dos and don’ts.

1. Do use security software. To help protect your data, install and use a trusted security suite that offers more than just antivirus features. For instance, Norton 360 With LifeLock Select can help detect and protect against threats to your identity and your devices, including your mobile phones.

2. Do keep your security software up to date. New ransomware variants continue to appear, so having up-to-date internet security software will help protect you against cyberattacks.

3. Do update your operating system and other software. Software updates frequently include patches for newly discovered security vulnerabilities that could be exploited by ransomware attackers.

4. Don’t automatically open email attachments. Email is one of the main methods for delivering ransomware. Avoid opening emails and attachments from unfamiliar or untrusted sources. Phishing spam in particular can fool you into clicking on a legitimate-looking link in an email that actually contains malicious code. The malware then prevents you from accessing your data, holds that data hostage, and demands ransom.

5. Do be wary of any email attachment that advises you to enable macros to view its content. Once enabled, macro malware can infect multiple files. Unless you are absolutely sure the email is genuine and from a trusted source, delete the email.

6. Do back up important data to an external hard drive. Attackers can gain leverage over their victims by encrypting valuable files and making them inaccessible. If the victim has backup copies, the cybercriminal loses some advantage. Backup files allow victims to restore their files once the infection has been cleaned up. Ensure that backups are protected or stored offline so that attackers can’t access them.

7. Do use cloud services. This can help mitigate a ransomware infection, since many cloud services retain previous versions of files, allowing you to “roll back” to the unencrypted form.

8. Don’t pay the ransom. Keep in mind, you may not get your files back even if you pay a ransom. A cybercriminal could ask you to pay again and again, extorting money from you but never releasing your data.

With new ransomware variants appearing, it’s a good idea to do what you can to minimize your exposure. By knowing what ransomware is and following these dos and don’ts, you can help protect your computer data and personal information from being ransomware’s next target.



Cloud Backups are Not Safe from Ransomware

PureLocker is a piece of ransomware that is being used in targeted attacks against company servers, and seems to have links with notorious cybercriminal groups.


This malware, which encrypts its victims’ servers in order to demand a ransom, has been analyzed by researchers at Intezer and IBM X-Force. They called it PureLocker because it is written in the programming language PureBasic. 

This choice of language is unusual, but offers the attackers several advantages, such as the fact that cybersecurity providers often struggle to generate trustworthy detection signatures for malicious software written in this language.  PureBasic is also easily portable between Windows, Linux and OS X, which greatly facilitates attacks across platforms.







BELOW: I WILL EXPLAIN BUG ALGORITHMS IN GREATER DETAIL LATER..  THIS IS JUST A TASTER.




Swarms of drones with limited resources can effectively search an environment, using  an algorithm to make it all work – without guidance from a central computer.

The small robots spread out autonomously after they are released, video as much of an unknown environment as possible, then return to a central base with images for later analysis.

Critical military areas in India have already been mapped by foreign forces for attack without GPS.. and we don’t have a clue..

Modi is using Pakistan specific and digitally blind Ajit Doval as arm candy during his endless foreign jaunts.. 

We ask, is there a job description for a NSA  ?  

Does this critical chair require a resourceful leader who is digitally savvy..  

Does he have to pass any exams set by experts and constantly update himself to mentally hone himself ?


Below: Signals Intelligence (SIGINT) is intelligence-gathering by interception of signals, whether communications between people (communications intelligence) or from electronic signals not directly used in communication (electronic intelligence).   

Signals intelligence is a subset of intelligence collection management.  As sensitive information is often encrypted, signals intelligence in turn involves the use of cryptanalysis to decipher the messages. Traffic analysis—the study of who is signaling whom and in what quantity—is also used to integrate information again
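
A toy example of traffic analysis, using invented log records: even without reading any message contents, simply counting who communicates with whom, and how often, already yields intelligence.

# Toy sketch of traffic analysis: no message contents are read, we only
# count sender-receiver volumes. The log records below are invented.
from collections import Counter

log = [
    ("station_a", "station_b"),
    ("station_a", "station_b"),
    ("station_c", "station_a"),
    ("station_a", "station_b"),
]

volume = Counter(log)                      # (sender, receiver) -> message count
for (sender, receiver), n in volume.most_common():
    print(f"{sender} -> {receiver}: {n} messages")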


EVEN TODAY WE DON’T KNOW WHO INDIAs FOREIGN-PAYROLL DEEP STATE AGENTS ARE..

THE NSA KNOWS NOTHING AT ALL.. EVEN TODAY THEY DONT KNOW ROTHSCHILD RULED INDIA..



FROM THE YEAR 2012 TO 2016

IF YOU GOOGLE FOR "WORST JOURNALIST "

MY POST BELOW WOULD COME ON PAGE 1 AS ITEM 1 , AMONG NEARLY 70 MILLION POSTS

http://ajitvadakayil.blogspot.com/2012/08/indias-worst-journalist-barkha-dutt.html

TILL I BACKED TRUMP AGAINST ROTHSCHILDs CANDIDATE HILLARY

NOW THE POST IS SUNK.. HARDLY ANYBODY GOES BEYOND THE FIRST TEN PAGES ON GOOGLE SEARCH...

CHECK OUT BARKHA DUTTs CONVERSATIONS -- WHEELING AND DEALING..

https://www.youtube.com/watch?v=Pon2a09gYK4

capt ajit vadakayil
..





Or, is all that is required a 74 year old grease and tackle field agent (with dyed hair) who was deep inside Pakistan decades ago with a circumcised willy?

Is ego massage more important ?

Each of my 300 critical messages for fortunes of Bharatmata to Modi have been copied to Ajit Doval too..   Result ? Zilch !

NSA is not military specific , and that too in pathetic reactive mode.

If we send a highly technical complaint to NIA/ ED/ NSA/ IB/ PMO/ CBI/ CYBER CELL, we are asked to go to a police station and file an FIR with a khaini munching pandu havaldar who does not know English..

Despite my 33 part post on SHELL COMPANIES, money laundering is going on merrily.

Despite my 13 part post on BLOCKCHAIN/ BITCOIN, capital flight from Surat is still going on..

If you go to a gargantuan bellied Pandu havaldar, with red paan juice dripping from the corner of his mouth and complain about Blockchain, he will beat you up..

Let us dance !


Small robots and drones have previously used advanced navigation techniques such as camera-based SLAM (simultaneous localisation and mapping), but such mapping is too costly for the tiniest platforms.

The algorithm is the ‘swarm gradient bug algorithm’ (SGBA), which maximises the area covered by having robots travel in different directions away from the departure point, while following walls and avoiding objects as they go

Bug algorithms are a class of algorithms that simply react to objects as they come into sensor range.
Bug algorithms do not make maps of the environment but deal with obstacles on the fly. In principle, detailed maps are very convenient, because they allow a robot to navigate from any point in the map to any other point, along an optimal path.

However, the costs of making such a map on tiny robots is prohibitive. The proposed bug algorithm leads to less efficient paths but has the merit that it can even be implemented on tiny robots.

Once on the move, the drones head in their preferred direction, navigating by analysing sequential images from a down-facing camera (‘visual odometry’), modified by wall-following using laser ranging (the Crazyflies include laser rangers). Laser ranging is also used to avoid static objects.

Triggered by low battery charge, the robots return to base where stored camera images can be viewed. Return navigation is through locking on to a radio beacon (2.4GHz) located at the nominal base and tracking along the signal gradient.
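
A toy grid-world sketch of these two behaviours is given below. It is an illustration of the idea only, not the published SGBA implementation; the map, battery budget and signal model are all invented. Outbound, the "drone" keeps its preferred heading and slides along obstacles; once its battery runs low, it homes in by greedily climbing the beacon signal gradient (the real algorithm combines this with wall-following so it cannot get stuck behind walls).

# Toy grid-world sketch: outbound heading-keeping with obstacle sliding,
# then greedy gradient homing toward the beacon at the base.
GRID = [
    "##########",
    "#........#",
    "#.###....#",
    "#.#......#",
    "#.#.####.#",
    "#........#",
    "##########",
]
BASE = (1, 1)                                # beacon / launch cell (row, col)
MOVES = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W

def free(cell):
    r, c = cell
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def signal_strength(cell):
    # Stand-in for received beacon strength: decays with distance to base.
    return -(abs(cell[0] - BASE[0]) + abs(cell[1] - BASE[1]))

def step_outbound(pos, heading):
    """Keep the preferred heading; if blocked, turn right until a free cell."""
    for turn in range(4):
        h = (heading + turn) % 4
        nxt = (pos[0] + MOVES[h][0], pos[1] + MOVES[h][1])
        if free(nxt):
            return nxt, h
    return pos, heading                      # boxed in

def step_home(pos):
    """Greedily move to the free neighbour with the strongest beacon signal."""
    options = [(pos[0] + dr, pos[1] + dc) for dr, dc in MOVES]
    options = [c for c in options if free(c)] + [pos]
    return max(options, key=signal_strength)

def fly(preferred_heading, battery=8):
    pos, heading, visited = BASE, preferred_heading, {BASE}
    for _ in range(60):
        if battery > 0:
            pos, heading = step_outbound(pos, heading)
            battery -= 1
        else:
            pos = step_home(pos)
        visited.add(pos)
        if battery <= 0 and pos == BASE:
            break
    return pos, visited

final_pos, visited = fly(preferred_heading=1)   # this drone prefers east
print("returned to base:", final_pos == BASE, "| cells visited:", len(visited))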

Swarms of small and cheap robots would be able to perform tasks that are currently out of reach of large, individual robots. For instance, a swarm of small flying drones would be able to explore a disaster site much quicker than a single larger drone.

In a proof-of-concept simulated search-and-rescue scenario, the swarm was introduced into a building within which two dummies had been left.

Within six minutes, the six drones had explored ~80% of open rooms and found both ‘victims’







BELOW: HOW MANY MILITARY DRONES POWERED BY AI WILL BE FAIR TO EUROPEAN ROMA GYPSIES ?  OR TO PALESTINIANS IN ISRAEL?










I HAVE SOURCES WITHIN HSBC IN THEIR FRAUD CONTROL DEPT , ABROAD..

BALLS TO ROTHSCHILD FOUNDED BANK HSBC..

http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html

THEY ARE ADOPTING BLOCKCHAIN TO HIDE THEIR PAST CRIMES AND COVER THEIR TRACKS..

I WILL PUT A SEPARATE POST ON HSBC WHICH IS WORSE THAN BCCI..

HSBC, EUROPE'S BIGGEST BANK, PAID A $1.9 BILLION FINE IN 2012 TO AVOID PROSECUTION FOR ALLOWING AT LEAST $881 MILLION IN PROCEEDS FROM THE SALE OF ILLEGAL DRUGS TO BE LAUNDERED. IN ADDITION TO FACILITATING MONEY LAUNDERING BY DRUG CARTELS, EVIDENCE WAS FOUND OF HSBC MOVING MONEY FOR SAUDI BANKS TIED TO TERRORIST GROUPS

https://en.wikipedia.org/wiki/Dirty_Money_(2018_TV_series)


  1. SOMEBODY ASKED ME

    WHAT IS THIS "NATIONAL PRAYER BREAKFAST " HELD IN WASHINGTON DC USA EVERY YEAR ?

    IT IS A DEEP STATE EVENT..

    JEW PRESIDENT EISENHOWER WAS THE FIRST TO ATTEND IT.

    AFTER THAT EVERY US PRESIDENT HAS ATTENDED IT.. IF YOU DONT ATTEND THE JEWISH DEEP STATE WILL ELIMINATE YOU..

    https://en.wikipedia.org/wiki/National_Prayer_Breakfast

    I MAY WRITE A FULL POST ABOUT THIS.. ABOUT BASTARD JEW DOUGLAS COE, THE C STREET GANG, THE FRATERNITY OF FELLOWSHIP ( THE CHOSEN PEOPLE ),

    PRAYER IS TO JEW JESUS WHO NEVER EXISTED..

    http://ajitvadakayil.blogspot.com/2019/09/istanbul-deep-seat-of-jewish-deep-state.html

    THE JEWISH DEEP STATE MERGED GOD AND POWER SINCE THE DAYS OF JEW BENJAMIN FRANKLIN..

    http://ajitvadakayil.blogspot.com/2012/11/snuff-movies-freemason-benjamin.html

    JEWISH EXCEPTIONALISM IS ROOTED IN THE TRUTH THAT MIDGET KING DAVID, WAS A PEEPING TOM WHEN KERALA NAMBOODIRI WOMAN BATH SHEBA TOOK A NAKED BATH .. HER HUSBAND URAIH WAS BASTARD DAVIDs BEST FRIEND AND ARMY COMMANDER WHO MADE HIM KING..

    BASTARD MIDGET DAVID GOT URAIH MURDERED AND USURPED HIS WIFE BATHSHEBA..

    SEE IF YOU ARE THE CHOSEN ONE YOU CAN DO ANYTHING..

    THE FELLOWSHIP FRATERNITY IS ALL ABOUT "CHOSEN PEOPLE"

    https://en.wikipedia.org/wiki/Douglas_Coe

    FELLOWSHIP IS A CRYPTO JEW ORGANISATION.. CONTROLLED BY THE DEEP STATE..

    https://en.wikipedia.org/wiki/The_Fellowship_(Christian_organization)

    ALL IN GOOD TIME..

    capt ajit vadakayil
    ..



    PUT ABOVE COMMENT IN WEBSITES OF--
    TRUMP
    PUTIN
    AMBASSADORS TOO FROM RUSSIA/ USA
    EXTERNAL AFFAIRS MINISTER/ MINISTRY
    ALICE AJIT DOVAL
    RAW
    PMO
    ALICE PM MODI


  1. SOMEBODY CALLED ME UP AND SAID-

    CAPTAIN PLEASE WRITE A POST ON THE "NATIONAL PRAYER BREAKFAST " MEET HELD IN USA EVERY YEAR.. ONLY YOU HAVE THE CEREBRAL WHEREWITHAL TO WRITE ABOUT IT..

    INDEED

    WE HAD EVELYN SHARMA ATTENDING IN 2017.. SHE IS A BOLLYWOOD BIMBETTE WITH A GERMAN PASSPORT, A GERMAN JEW MOTHER AND A PNJAAABI PUTTAR FATHER..

    IN WHAT WAY THIS BIMBETTE REPRESENTS INDIA IS A MATTER OF DEBATE..

    LET ME SHOW A VIDEO WHERE SHE IS JIVING TO HONEY SINGH IN BIKINI..

    https://www.youtube.com/watch?v=MXJCnccDLA0

    WHO IS HONEY SINGH?

    HE IS THE DARLING OF PNJAAABI PUTTARS AND PNJAAABI KUDIS WHO WANT TO MIGRATE TO KNEDAAA.

    ONE POOR SHIELA DIXIT WAS CAUGHT ON STAGE JIVING TO THIS FILTHY SONG BELOW.. SHE GOT THE VIDEO DELETED LATER

    https://www.youtube.com/watch?v=gc3JsSq3bFE

    FOR PEOPLE WHO DO NOT KNOW PNJAAABI , CHECK OUT THE ENGLISH TRANSLATION IN THE LINK BELOW--IT IS ALL ABOUT CUNT AND PRICK..

    THIS IS NOW OUR NEW INDIAN PNJAAABI CULTURE..

    https://www.musixmatch.com/lyrics/Yo-Yo-Honey-Singh/Choot-Vol-1/translation/english

    THEN WHO ELSE ATTENDED ?

    WIFE OF CM FADNAVIS..AMRUTA ..

    WHY?

    IN 2015, COUPLE OF JEWS GOT KILLED IN PARIS.. THE NEXT DAY FADNAVIS LIT UP VT STATION IN FRENCH FLAG COLOURS.. MILLIONS OF MUSLIMS DEAD IN LIBYA/ SYRIA/ IRAQ , HE COULD NOT CARE LESS.

    FADNAVIS BABY FUCKED IT UP TOTALLY BY DISPLAYING THE NETHERLAND FLAG.. RED ON TOP, WHITE IN BETWEEN, BLUE AT BOTTOM.. AKKAL THODA JAAST HAI NAH ?

    STILL HE MUST BE REWARDED BY THE JEWISH DEEP STATE , RIGHT?

    AMRUTA FADNAVIS HOLDS THE POST OF VICE-PRESIDENT – CORPORATE HEAD (WEST INDIA) WITH AXIS BANK.

    NO WONDER AT THE POLICE HQ AT MUMBAI ( CRAWFORD MARKET ) , PRIVATE ROTHSCHILD AXIS BANK ATM HAS BEEN RECESSED INTO THE POLICE GOVT PROPERTY.. OFFICE OF COMMISSIONER OF POLICE, CRIME BRANCH BUILDING, OPP CRAWFORD MARKET, MUMBAI ..

    AXIS BANK IS ROTHSCHILDs MIGHTY BANK.. IT HAS NOTHING TO DO WITH WEE INDIAN UTI BANK AS PER WIKIPEDIA PROPAGANDA..

    https://www.ndtv.com/india-news/devendra-fadnavis-promoted-amruta-fadnaviss-bank-axis-bank-at-cost-of-state-banks-says-plea-2092631

    BELOW AMRUTA SINGS AT "UMANG " WHICH IS SOMETHING SIMILAR LIKE THIS "MEET".. HERE JEWISH BOLLYWOOD MAFIA ( PAKISTANI ISI SPONSORED ) WHEELS AND DEALS WITH MUMBAI POLICE..

    IF ANY CRYING BOLLYWOOD STAR WANTS TO FILE A CASE OF DEFAMATION AGAINST A BLOGGER ( FOR TELLING TRUTHS ) , ALL HE NEED TO DO IS TO CALL UP HIS PET POLICE TOP COP..

    https://www.youtube.com/watch?v=NS_MZDM1Jbs

    WET YOUR BEAKS ( GALA GHEELA ) WITH THE FOLLOWING WIKIPEDIA POSTS..FIRST.. BEFORE YOU READ MY POST..

    https://en.wikipedia.org/wiki/National_Prayer_Breakfast

    https://en.wikipedia.org/wiki/The_Fellowship_(Christian_organization)

    https://en.wikipedia.org/wiki/Douglas_Coe

    https://en.wikipedia.org/wiki/Abraham_Vereide

    https://en.wikipedia.org/wiki/C_Street_Center

    THIS IS A JEWISH DEEP STATE MAFIA BREAKFAST.. THIS MAFIA CREATED THE RED NAXAL CORRIDOR IN INDIA..

    http://ajitvadakayil.blogspot.com/2012/09/bauxite-mining-naxalite-menace-joshua.html

    INDIAN COLLEGIUM JUDGES IN DEEP STATE PAYROLL PLAYED KOSHER BALL.. THERE ARE REWARDS IF YOU INJURE BHARATMATA..

    THIS PRAYER BREAKFAST IS ABOUT WEAPONIZING JESUS ( WHO NEVER EXISTED ).. PRAYER GETS MURDERED IN THIS BREAKFAST MEETING, AND HOW !..

    capt ajit vadakayil
    ..




THIS POST IS NOW CONTINUED TO PART 8, BELOW--




PSSSSTT--

WHEN A WOMAN FEELS THAT A MAN CAN IMPALE HER , AND LIFT HER OFF  THE GROUND USING SHEER PP ( PRICK POWER ) AND CRY  - “LOOK MAA NO HANDS” – SHE IS YOURS..

WHEN YOU ASK A WOMAN , WHAT TYPE OF MAN SHE PREFERS— SHE WILL GIVE HAJAAAAR BULLSHIT —SENSE OF HUMOUR/ POETICAL/ HUGE BANK BALANCE/ NATTY LOOKS / SENSITIVE/ CHIVALROUS ,  BLAH BLAH FUCKIN’ BLAH

MY LEFT BALL !


IN HER WILDEST DARK WET DREAMS SHE JUST NEEDS THE VIRILE CAVEMAN WITH SILVER HAIR..





THIS POST IS NOW CONTINUED TO PART 8, BELOW--






CAPT AJIT VADAKAYIL
..