THIS POST IS CONTINUED FROM PART 12, BELOW--
CAPT AJIT VADAKAYIL SAYS AI MUST MEAN “INTELLIGENCE AUGMENTATION” IN FUTURE ..
Let this be IA
OBJECTIVE AI CANNOT HAVE A VISION,
IT CANNOT PRIORITIZE,
IT CANT GLEAN CONTEXT,
IT CANT TELL THE MORAL OF A STORY ,
IT CANT RECOGNIZE A JOKE, OR BE A JUDGE IN A JOKE CONTEST
IT CANT DRIVE CHANGE,
IT CANNOT INNOVATE,
IT CANNOT DO ROOT CAUSE ANALYSIS ,
IT CANNOT MULTI-TASK,
IT CANNOT DETECT SARCASM,
IT CANNOT DO DYNAMIC RISK ASSESSMENT ,
IT IS UNABLE TO REFINE OWN KNOWLEDGE TO WISDOM,
IT IS BLIND TO SUBJECTIVITY,
IT CANNOT EVALUATE POTENTIAL,
IT CANNOT SELF IMPROVE WITH EXPERIENCE,
IT CANNOT UNLEARN
IT IS PRONE TO CATASTROPHIC FORGETTING
IT DOES NOT UNDERSTAND BASICS OF CAUSE AND EFFECT,
IT CANNOT JUDGE SUBJECTIVELY TO VETO/ ABORT,
IT CANNOT FOSTER TEAMWORK DUE TO RESTRICTED SCOPE,
IT CANNOT MENTOR,
IT CANNOT BE CREATIVE,
IT CANNOT THINK FOR ITSELF,
IT CANNOT TEACH OR ANSWER STUDENTS’ QUESTIONS,
IT CANNOT PATENT AN INVENTION,
IT CANNOT SEE THE BIG PICTURE ,
IT CANNOT FIGURE OUT WHAT IS MORALLY WRONG,
IT CANNOT PROVIDE NATURAL JUSTICE,
IT CANNOT FORMULATE LAWS
IT CANNOT FIGURE OUT WHAT GOES AGAINST HUMAN DIGNITY
IT CAN BE FOOLED EASILY USING DECOYS WHICH CANT FOOL A CHILD,
IT CANNOT BE A SELF STARTER,
IT CANNOT UNDERSTAND APT TIMING,
IT CANNOT FEEL
IT CANNOT GET INSPIRED
IT CANNOT USE PAIN AS FEEDBACK,
IT CANNOT GET EXCITED BY ANYTHING
IT HAS NO SPONTANEITY TO MAKE THE BEST OUT OF SITUATION
IT CAN BE CONFOUNDED BY NEW SITUATIONS
IT CANNOT FIGURE OUT GREY AREAS,
IT CANNOT GLEAN WORTH OR VALUE
IT CANNOT UNDERSTAND TEAMWORK DYNAMICS
IT HAS NO INTENTION
IT HAS NO INTUITION,
IT HAS NO FREE WILL
IT HAS NO DESIRE
IT CANNOT SET A GOAL
IT CANNOT BE SUBJECTED TO THE LAWS OF KARMA
ON THE CONTRARY IT CAN SPAWN FOUL AND RUTHLESS GLOBAL FRAUD ( CLIMATE CHANGE DUE TO CO2 ) WITH DELIBERATE BLACK BOX ALGORITHMS.. THESE ARE JUST A FEW AMONG MORE THAN 60 CRITICAL INHERENT DEFICIENCIES.
HUMANS HAVE THINGS A COMPUTER CAN NEVER HAVE.. A SUBCONSCIOUS BRAIN LOBE, REM SLEEP WHICH BACKS UP BETWEEN RIGHT/ LEFT BRAIN LOBES AND FROM AAKASHA BANK, A GUT WHICH INTUITS, 30 TRILLION BODY CELLS WHICH HOLD MEMORY, A VAGUS NERVE , AN AMYGDALA , 73% WATER IN BRAIN FOR MEMORY, 10 BILLION MILES ORGANIC DNA MOBIUS WIRING ETC.
SINGULARITY , MY ASS !
1
https://ajitvadakayil.blogspot.com/2019/08/what-artificial-intelligence-cannot-do.html
2
https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do.html
3
https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do_29.html
4
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do.html
5
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_4.html
6
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_25.html
7
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_88.html
8
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_15.html
9
https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_94.html
10
https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do.html
11
https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_1.html
12
https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do.html
SOMEBODY CALLED ME UP AND ASKED ME..
CAPTAIN—
WHO IS MUHAMMAD IBN MUSA AL-KHWARIZMI WHOM MODERN HISTORIANS ARE CALLING THE “FATHER OF COMPUTER SCIENCE” AND THE “FATHER OF ALGORITHMS”??.
LISTEN –
ARAB MUHAMMAD IBN MUSA AL-KHWARIZMI WAS A BRAIN DEAD FELLOW WHOSE ENTIRE WORK WAS SOLD TO HIM, TRANSLATED INTO ARABIC, BY THE CALICUT KING FOR GOLD.
THE CALICUT KING MADE HIS MONEY BY NOT ONLY SELLING SPICES –BUT KNOWLEDGE TOO.
THE MAMANKAM FEST HELD AT TIRUNAVAYA KERALA BY THE CALICUT KING EVERY 12 YEARS WAS AN OCCASION WHERE KNOWLEDGE WAS SOLD FOR GOLD.
http://ajitvadakayil.blogspot.com/2019/10/perumal-title-of-calicut-thiyya-kings.html
EVERY ANCIENT GREEK SCHOLAR ( PYTHAGORAS/ PLATO/ SOCRATES ETC ) EXCEPT ARISTOTLE STUDIED AT KODUNGALLUR UNIVERSITY.. THE KERALA SCHOOL OF MATH WAS PART OF IT.
OUR ANCIENT BOOKS ON KNOWLEDGE DID NOT HAVE THE AUTHOR’S NAME AFFIXED ON THE COVER AS WE CONSIDERED BOOKS AS THE WORK OF SOULS, WHO WOULD BE BORN IN ANOTHER WOMAN’S WOMB AFTER DEATH.
THE GREEKS TOOK ADVANTAGE OF THIS , STOLE KNOWLEDGE FROM KERALA / INDIA AND PATENTED IT IN THEIR OWN NAMES, WITH HALF BAKED UNDERSTANDING .
WHEN THE KING OF CALICUT CAME TO KNOW THIS, HE BLACKBALLED GREEKS FROM KODUNGALLUR UNIVERSITY .. AND SUDDENLY ANCIENT GREEK KNOWLEDGE DRIED UP LIKE WATER IN THE HOT DESERT SANDS.
LATER THE CALICUT KING SOLD KNOWLEDGE TRANSLATED INTO ARABIC TO BRAIN DEAD ARABS LIKE MUHAMMAD IBN MUSA AL-KHWARIZMI FOR GOLD..
THESE ARAB MIDDLE MEN SOLD KNOWLEDGE ( LIKE MIDDLEMEN FOR SPICES) TO WHITE MEN FOR A PREMIUM.
FIBONACCI TOOK HIS ARABIC WORKS TO ITALY FROM BEJAYA , ALGERIA.
http://ajitvadakayil.blogspot.com/2010/12/perfect-six-pack-capt-ajit-vadakayil.html
EVERY VESTIGE OF ARAB KNOWLEDGE IN THE MIDDLE AGES WAS SOLD IN ARABIC TRANSLATION BY KODUNGALLUR UNIVERSITY FOR GOLD..
FROM 800 AD TO 1450 AD KODUNGALLUR UNIVERSITY, OWNED BY THE CALICUT KING, EARNED A HUGE AMOUNT OF GOLD BY SELLING READY MADE TRANSLATED KNOWLEDGE ..
THIS IS THE GOLD WHICH TIPU SULTAN STOLE FROM NORTH KERALA TEMPLE VAULTS.. ROTHSCHILD BECAME THE RICHEST MAN ON THIS PLANET BY STEALING TIPU SULTAN’S GOLD IN 1799 AD.
http://ajitvadakayil.blogspot.com/2011/10/tipu-sultan-unmasked-capt-ajit.html
WHEN TIPU SULTAN WAS BLASTING TEMPLE VAULTS, LESS THAN 1% OF THE GOLD WAS SECRETLY TRANSFERRED TO SOUTH KERALA ( TRADITIONAL ENEMIES ) OF THE CALICUT KING. LIKE HOW SADDAM HUSSAIN FLEW HIS FIGHTER JETS TO ENEMY IRAN .
THIS IS THE GOLD WHICH WAS UNEARTHED FROM PADMANABHASWAMY TEMPLE..
http://ajitvadakayil.blogspot.com/2013/01/mansa-musa-king-of-mali-and-sri.html
ALGORITHMS ARE SHORTCUTS PEOPLE USE TO TELL COMPUTERS WHAT TO DO. AT ITS MOST BASIC, AN ALGORITHM SIMPLY TELLS A COMPUTER WHAT TO DO NEXT WITH AN “AND,” “OR,” OR “NOT” STATEMENT.
THE ALGORITHM IS BASICALLY A CODE DEVELOPED TO CARRY OUT A SPECIFIC PROCESS. ALGORITHMS ARE SETS OF RULES, INITIALLY SET BY HUMANS, FOR COMPUTER PROGRAMS TO FOLLOW.
A PROGRAMMING ALGORITHM IS A COMPUTER PROCEDURE THAT IS A LOT LIKE A RECIPE (CALLED A PROCEDURE) AND TELLS YOUR COMPUTER PRECISELY WHAT STEPS TO TAKE TO SOLVE A PROBLEM OR REACH A GOAL.
THERE IS NO ARTIFICIAL INTELLIGENCE WITHOUT ALGORITHMS. ALGORITHMS ARE, IN PART, OUR OPINIONS EMBEDDED IN CODE.
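To make this concrete, here is a minimal, hypothetical sketch of such a hand-coded algorithm in Python. The function name, features and thresholds are all invented for illustration; the point is that the rules, and therefore the opinions, come from a human.

```python
# A toy "algorithm" in the recipe sense: a fixed sequence of AND/OR/NOT
# checks that tells the computer exactly what to do next.
# All names and thresholds here are hypothetical.

def loan_decision(income: float, has_defaulted: bool, years_employed: int) -> str:
    """Return 'approve' or 'reject' using hand-coded rules."""
    if (income > 30_000 and not has_defaulted) or years_employed > 10:
        return "approve"
    return "reject"

print(loan_decision(income=45_000, has_defaulted=False, years_employed=2))  # approve
print(loan_decision(income=20_000, has_defaulted=True, years_employed=3))   # reject
```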
ALGORITHMS ARE AS OLD AS DANAVA CIVILIZATION ITSELF – THIEF GREEK EUCLID’S ALGORITHM BEING ONE OF THE FIRST EXAMPLES, DATING BACK SOME 2300 YEARS.
EUCLID JUST PATENTED MATH HE LEARNT IN THE KERALA SCHOOL OF MATH IN HIS OWN NAME.. EUCLID IS A THIEF LIKE PYTHAGORAS WHO LEARNT IN THE KERALA SCHOOL OF MATH.
http://ajitvadakayil.blogspot.com/2011/01/isaac-newton-calculus-thief-capt-ajit.html
THE WORD ALGEBRA IS DERIVED FROM BRAIN DEAD AL-KHWARIZMI’S AL-JABR, ONE OF THE TWO OPERATIONS HE USED TO SOLVE QUADRATIC EQUATIONS.
ALGORISM AND ALGORITHM STEM FROM ALGORITMI, THE LATIN FORM OF HIS NAME.
CONTINUED TO 2--
'ADS' (algorithmic decision systems) rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful to make decisions.
Human intervention in the decision-making may vary,
and may even be completely out of the loop in entirely automated systems. In
many situations, the impact of the decision on people can be significant, such
as access to credit, employment, medical treatment, or judicial sentences,
among other things.
Entrusting ADS to make
or to influence such decisions raises a variety of ethical, political, legal,
or technical issues, where great care must be taken to analyse and address them
correctly.
If they are neglected, the expected benefits of these systems may be
negated by a variety of different risks for individuals (discrimination, unfair
practices, loss of autonomy, etc.), the economy (unfair practices, limited access to markets, etc.), and society
as a whole (manipulation, threat to democracy, etc.).
ADS may undermine the
fundamental principles of equality, privacy, dignity, autonomy and free will,
and may also pose risks related to health, quality of life and physical integrity. That ADS can
lead to discrimination has been extensively documented in many areas, such as the judicial system,
credit scoring, targeted advertising and employment.
Discrimination may
result from different types of biases arising from the training data, technical
constraints, or societal or individual biases.
ADS create new 'security vulnerabilities' that can
be exploited by people with malicious intent.
Since ADS play a
pivotal role in the workings of society, for example in nuclear power stations,
smart grids, hospitals and cars, hackers able to compromise these systems have
the capacity to cause major damage.
ADS such as those used for predictive policing may become overwhelming and oppressive. ADS can be misused
by states to control people, for example by identifying political opponents.
More generally, interest groups or states may be tempted to use these technologies to control and influence
citizen behaviour. These technologies can also be used to distort information
to damage the integrity of democratic discourse and the reputation of the government
or political leaders.
The two main forms of
understandability considered are transparency and explainability:---
Transparency is
defined as the availability of the ADS code with its design documentation, parameters
and the learning dataset when the ADS relies on machine learning (ML). Transparency does not necessarily mean
availability to the public. It also encompasses cases in which the code is
disclosed only to specific actors, for example for audit or certification.
Explainability is
defined as the availability of explanations about the ADS. In contrast to transparency,
explainability requires the delivery of information beyond the ADS itself. Explanations
can be of different types (operational, logical or causal); they can be either global
(about the whole algorithm) or local (about specific results); and they can
take different forms (decision trees, histograms, picture or text highlights,
examples, counterexamples, etc.).
The strengths and
weaknesses of each explanation mode should be
assessed in relation to the recipients of the explanation (e.g.
professional or individual), their level
of expertise, and their objectives (to challenge a decision, take actions to
obtain a decision, verify compliance with legal obligations, etc.).
Accountability is
another key desideratum often put forward in the context of ADS. In accordance with
previous work in this area, we see accountability as an overarching principle
characterised by the obligation to justify one's actions and the risk of
sanctions if justifications are inadequate.
Accountability can
therefore be seen as a requirement on a process (obligation to provide justification),
which applies to both intrinsic and extrinsic requirements for ADS (each case corresponding
to specific types of 'justification').
Safety: is an important
issue to consider, especially when ADS are embedded in physical systems whose
failure may cause fatal damage.
While many ADS failures
can be addressed with ad-hoc solutions, there is a strong need to define a
unified approach to prevent ADS from causing unintended harm. A minimum requirement
should be to perform extensive testing and evaluation before any large-scale deployment.
It is also important to provide accountability, including the possibility of independent
audits and to ensure a form of human oversight.
Integrity and
availability: Increasingly, ADS will be used in critical contexts. It is
therefore important to guarantee that they are secure against malicious
adversaries. ADS should not jeopardise integrity and availability. Since most
ADS rely heavily on machine learning algorithms, it is important to consider
their security properties in the context of these algorithms. Adversaries can
threaten the integrity or availability of ADS in different ways, i.e., by
polluting training datasets with fake data, attacking the machine learning (ML)
algorithm itself or exploiting the generated model (the ADS) at run-time.
Confidentiality and
privacy: An adversary may seek to compromise the confidentiality of an ADS. For
example, they may try to extract information about the training data or
retrieve the ADS model itself. These attacks raise privacy concerns as training
data is likely to contain personal data. They may also undermine intellectual
property since the ADS model and the training data may be proprietary and
confidential to the owner. Countermeasures can involve anonymising the training datasets and the generated models, i.e. designing privacy-preserving ADS.
Fairness (absence of
undesirable bias): ADS are often based on machine learning algorithms that are
trained using collected data. This process includes multiple potential sources
of unfairness. Unfair treatment may result from the content of the training
data, the way the data is labelled or the feature selection. As shown in this
study, there are different definitions of fairness, and others will be proposed
in the future.
Many definitions of fairness are actually incompatible.
Explainability: Three main approaches can be followed to implement the requirements of explainability:--
The black box
approach: this approach analyses the behaviour of the ADS without 'opening the
hood', i.e. without any knowledge of its code. Explanations are constructed
from observations of the relationships between the inputs and outputs of the
system. This is the only possible approach when the operator or provider of the
ADS is uncollaborative (does not agree to disclose the code). Examples of this category of approach include LIME (local interpretable model-agnostic explanations).
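As a rough illustration of the idea behind this family of techniques (a hand-rolled sketch, not the actual LIME library), the Python snippet below treats a hypothetical scorer as a black box that can only be queried: it perturbs the input, weights the samples by proximity, and fits a local weighted linear surrogate whose coefficients act as a local explanation. All names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque ADS we can only query (we pretend not to see this).
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * X[:, 2])))

x0 = np.array([0.5, 1.0, -0.3])              # the decision to explain
Z = x0 + rng.normal(0, 0.3, size=(500, 3))   # perturbations around x0
y = black_box(Z)                             # query the black box
w = np.exp(-np.sum((Z - x0) ** 2, axis=1))   # proximity weights: closer = heavier

# Weighted least squares for a local linear surrogate of the black box.
A = np.hstack([Z, np.ones((len(Z), 1))]) * np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
print("local feature influences:", coef[:3])  # sign/magnitude ~ local importance
```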
The white box approach: in contrast to the
black box approach, this approach assumes that analysis of the ADS code is
possible. An example of early work in this direction is the Elvira system for
the graphical explanation of Bayesian networks
Legal instruments:
Technical solutions are necessary but cannot solve all the issues raised by ADS by themselves. They must be associated with
other types of measures and in particular legal
requirements for transparency, explainability or accountability.
Different requirements
such as transparency, explainability, data protection and accountability are often
presented as ways to limit these risks but they are generally ill-defined,
seldom required by law, and difficult to implement.
Decision-making
algorithms are increasingly used in areas such as access to information, e-commerce,
recommendation systems, employment, health, justice, policing, banking and insurance.
They also give rise to
a variety of risks, such as discrimination, unfairness, manipulation or privacy
breaches.
There is a need to scrutinise the use of algorithms for decision-making, and to ask whether algorithmic decision-making can be done in a transparent and accountable way.
An algorithm is an unambiguous procedure to solve a problem or a class of problems. It is typically composed of a set of instructions or rules that take some input data and return outputs.
An algorithm can be hand-coded, by a programmer,
or generated automatically from data, as in machine learning.
Algorithms are
harnessing volumes of macro- and micro-data to influence decisions affecting
people in a range of tasks
A distinction is
sometimes drawn between predictive and prescriptive ADS, but the frontier
between the two categories is often fuzzy.
The difference between predictive analytics and prescriptive analytics
is the outcome of the analysis.
Predictive analytics provides you with the raw material for making informed decisions, while prescriptive analytics provides you with data-backed decision options that you can weigh against one another.
Predictive analytics
transforms all the scattered knowledge you have relating to how and why
something happened into models, suggesting future actions. By integrating
various techniques including data mining, modelling, machine learning (ML) and
artificial intelligence (AI), predictive analytics tools transform the data at
hand into focused marketing action.
While descriptive
analytics helps us learn more about the past, predictive analytics looks into
the future, answering the “What will happen if…” questions.
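As a toy sketch of that idea (fabricated data, and assuming scikit-learn is available): fit a model on past outcomes, then ask what will happen for a new case.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))   # e.g. hypothetical [visits_per_week, cart_value]
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)  # bought?

model = LogisticRegression().fit(X, y)          # learn from past outcomes
new_customer = np.array([[1.2, 0.3]])
print("purchase probability:", model.predict_proba(new_customer)[0, 1])
```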
In marketing, that
translates to:---
Anticipating consumer
behaviours before they happen
Optimising your
marketing campaigns around specific factors that are proven to impact sales
Finding the most
valuable leads in your CRM and pitching them with offers that will generate the
highest conversions
ADS that aim at
improving general knowledge or technology: ADS in this class use algorithms to generate new knowledge,
generally through the analysis of complex
phenomena. Algorithms are crucial in this context since they can be used
to analyse very large datasets to
extract knowledge.
They can, for example, help improve climate forecasts,
detect diseases or discover new viruses.
These ADS are used to
make decisions which have a global impact (or an impact on society) rather than
on specific individuals.
ADS that aim at
improving or developing new digital services: Applications of this category are
used to help make predictions, recommendations or decisions in various areas
such as information, finance, planning,
logistics, etc. These services aim at optimising one or several specific criteria, such as time, energy,
cost, relevance of information, etc
ADS integrated within
cyber physical systems: Within this context, ADS are used to provide autonomy to physical objects by limiting
human supervision. Examples are autonomous cars, robots or weapons. Autonomous
cars are being experimented with all over the world.
Algorithms should
replace, or at least assist, users in the way they operate vehicles and should make decisions on behalf of 'drivers'. The
goals are essentially to make roads safer and optimize connection times. Similarly, autonomous robots
are being developed to help or replace humans
in performing difficult physical tasks at work or in the home.
Examples include
robots used in factory chains, domestic robots that provide services to humans,
or robots on the battlefield. A variety
of autonomous weapons are under development to assist soldiers in action and to
limit collateral damage.
Challenging ADS
decisions: Another major issue with opaque ADS is that they make it difficult
to challenge a decision based on their results.
IF AI IS USED BY JUDGES
IN COURTS, WHY HAVE HUMAN JUDGES.. LET US REPLACE THESE CUNTS BY ROBOTS.. LET
US HAVE FUN
Most ADS operate as
'black boxes' and therefore lack transparency, making their efficiency
debatable. Since autonomous weapons embed many algorithms, they are prone
to cyber-attacks. If they were actually deployed, the risk of malfunctioning, error
or misuse should first be carefully addressed.
Understanding algorithmic decision-making: ADS can be audited systematically, but their disadvantage is that they can amplify biases and errors and make it more difficult to allocate liabilities.
In contrast with transparency, explainability requires the delivery of information beyond the ADS itself:--
– Explanations can be
of three different types: operational (informing how the system actually
works), logical (informing about the logical relationships between inputs and
results) or causal (informing about the causes for the results).
– Explanations can be
either global (about the whole algorithm) or local (about specific results).
Accountability is
another key desideratum that is often put forward in the context of ADS..
An adversary can
threaten the integrity or availability of such ADS in different ways:--
• by attacking the
training dataset, for example, by injecting fake data,
• by attacking the ML
algorithm, or
• by exploiting the
generated model (the ADS) at run-time.
Attackers may want to
retrieve some of the data used to train the system. Two main types of scenarios
can be considered:--
• 'White box' attacks
rely on the assumption that the attacker has access to the model and tries to
learn about the training data by 'inverting' it.
• 'Black box' attacks do not assume access to the model: an adversarial client can only submit queries to the model and make predictions based on the answers.
In contrast to the
'black box' approach, 'white box' explanation systems do rely on the analysis
of the ADS code. In addition to the type of explanations that they can
generate, 'white box' solutions differ in terms of the ADS they can handle
(Bayesian networks, neural networks of limited depth, deep neural networks, etc.), their way to
handle continuous data (e.g. through discretisation) and their complexity.
An example of early work in this direction is the Elvira system for the graphical explanation of Bayesian networks.
An adversary can
threaten the integrity and availability of an ADS by polluting its training
dataset, attacking its underlying algorithm or exploiting the generated model
at run-time.
'Hand-coded' ADS code can be audited, but the task is not
always easy since they generally consist of complex modules made of a large number of code lines
developed by groups of engineers. ADS that are based on machine learning are even more challenging to
understand, and therefore to explain, since their models are generated automatically from
training data. Data have many properties and features, and each of them can influence the generated
models.
One of the most widely
discussed and commented regulations passed during recent years is the European General Data Protection Regulation (GDPR).
In particular, it
introduces:--
• new rights for
individuals (such as the right to portability, stricter rules for information
and consent, enhanced erasure rights, etc.),
• new obligations for
data controllers (data protection impact assessments, data protection by design
and default, data breach notifications, etc.),
• new action levers
such as collective actions and higher sanctions,
• better coordination mechanisms between supervisory authorities and a new body, the European Data Protection Board (EDPB), which replaces the former Article 29 Working Party and which has extensive powers and binding decisions, in particular for dispute resolution between national supervisory authorities.
Explaining or detecting
biases in ADS should be considered lawful and should not be limited by trade
secret or more generally by intellectual property right laws.
The development of a
body of experts in ADS, with the ability
to cover both technical and ethical aspects, should also be encouraged.
These experts could be
integrated into development teams or serve in ADS evaluation bodies.
Because ADS are used to make decisions about people, it is of prime importance that everyone involved has a minimum of knowledge about the underlying processes, their potential and the limitations of the technologies. As such, digital literacy is essential for citizens to be able to exercise their rights in the digital society.
Enhancing the level of
understanding of the technologies involved in ADS is necessary, but not sufficient
since many issues raised by ADS are subjective and may be approached in
different ways depending on individual
perceptions and political views.
If an algorithm is designed to preclude
individuals from taking responsibility within a decision, then the designer of
the algorithm should be held accountable for the ethical implications of the
algorithm in use.
In the context of
algorithmic decision-making, an accountable decision-maker must provide its decision-subjects with reasons and
explanations for the design and operation of its automated decision-making system..
Algorithmic impact assessments (AIAs) strive to achieve four initial goals:--
Respect the public’s
right to know which systems impact their lives and how they do so by publicly
listing and describing algorithmic systems used to make significant decisions
affecting identifiable individuals or groups, including their purpose, reach,
and potential public impact;
Ensure greater accountability
of algorithmic systems by providing a meaningful and ongoing opportunity for
external researchers to review, audit, and assess these systems using methods
that allow them to identify and detect problems;
Increase public
agencies’ internal expertise and capacity to evaluate the systems they procure,
so that they can anticipate issues that might raise concerns, such as disparate
impacts or due process violations; and
Ensure that the public
has a meaningful opportunity to respond to and, if necessary, dispute an
agency’s approach to algorithmic accountability. Instilling public trust in
government agencies is crucial — if the AIA doesn’t adequately address public
concerns, then the agency must be challenged to do better.
'Rights become dangerous things if they are unreasonably hard to exercise or ineffective in results, because they give the illusion that something has been done while in fact things are no better.'
Algorithmic Impact Assessments must set forth a reasonable and practical definition of automated decision-making. In order for AIAs to be effective, agencies must publish their definition as part of a public notice and
effective, agencies must publish their definition as part of a public notice and
comment process whereby individuals, communities, researchers, and policymakers
could respond, and if necessary challenge, the definition’s scope. This would
allow push back when agencies omit essential systems that raise public
concerns.
Algorithmic Impact
Assessments should provide a comprehensive plan for giving external researchers
meaningful access to examine specific systems and gain a fuller account of
their workings. Algorithmic Impact Assessments must include an evaluation of
how a system might impact the public, and show how they plan to address any
issues, should they arise.
Algorithmic Impact
Assessment process should provide a path
for the public to pursue cases where agencies have failed to comply with the
Algorithmic Impact Assessment requirement, or where serious harms are occurring
The risk of
discrimination related to the use of ADS should be compared with the risk of
discrimination without the use of ADS.
ADS such as those used for predictive policing may become overwhelming and oppressive, like in Israel where even blockchain is used to grab Palestinian land..
Since most ADS rely heavily on
machine learning algorithms, it is important to consider their security
properties in the context of these algorithms. Adversaries can threaten the
integrity or availability of ADS in different ways, i.e., by polluting training
datasets with fake data, attacking the machine learning (ML) algorithm itself
or exploiting the generated model (the ADS) at run-time. We argue that existing
protection mechanisms remain preliminary and require more research.
ADS are often based on
machine learning algorithms that are trained using collected data. This process
includes multiple potential sources of unfairness. Unfair treatment may result
from the content of the training data, the way the data is labeled or the
feature selection..
The black box approach:
this approach analyses the behaviour of the ADS without 'opening the hood',
i.e. without any knowledge of its code. Explanations are constructed from observations
of the relationships between the inputs and outputs of the system. This is the
only possible approach when the operator or provider of the ADS is
uncollaborative (does not agree to disclose the code). Examples of this category of approach include LIME (local interpretable model-agnostic explanations).
The white box approach:
in contrast to the black box approach, this approach assumes that analysis of
the ADS code is possible. An example of early work in this direction is the
Elvira system for the graphical explanation of Bayesian networks.
The Elvira system is a tool to construct model-based decision support systems. The models supported are based on probabilistic uncertainty.
Bayesian networks are a type of Probabilistic Graphical Model that can
be used to build models from data and/or expert opinion. They can be used for a
wide range of tasks including prediction, anomaly detection, diagnostics,
automated insight, reasoning, time series prediction and decision making under
uncertainty.
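A minimal hand-rolled sketch of that idea in Python: a two-node network (Disease -> TestPositive) with invented probabilities, answering a diagnostic query by exact enumeration over the hidden variable.

```python
# Hypothetical numbers: 1% prevalence, 95% sensitivity, 5% false positives.
p_disease = 0.01
p_pos_given = {True: 0.95, False: 0.05}   # P(test positive | disease status)

# P(disease | positive test) via Bayes' rule, summing over the hidden variable.
joint = {d: (p_disease if d else 1 - p_disease) * p_pos_given[d]
         for d in (True, False)}
posterior = joint[True] / (joint[True] + joint[False])

print(f"P(disease | positive test) = {posterior:.3f}")   # ~0.161
```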
Other solutions based
on neural networks have been proposed more recently.
The constructive
approach: in contrast to the first two approaches, which assume that the ADS already
exists, the constructive approach is to design ADS taking explainability
requirements into account ('explainability by design').
Two options are
possible to achieve explainability by design: (1) relying on an algorithmic
technique which, by design, meets the intelligibility requirements while
providing sufficient accuracy, or (2) enhancing an accurate algorithm with explanation
facilities so that it can generate, in addition to its nominal results (e. g.
classification), a faithful and intelligible explanation for these results.
Higher levels of
accuracy and precision may reduce intelligibility. In addition, their evaluation
is a difficult (and often partly subjective) task.
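A minimal sketch of option (1) above, assuming scikit-learn is available: choose a model class that is intelligible by construction, such as a shallow decision tree whose entire decision logic can be printed as human-readable rules. The dataset and feature names below are invented.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole model fits on a screen: a global explanation comes for free,
# at a possible cost in accuracy compared with deeper, opaque models.
print(export_text(tree, feature_names=["income", "age", "tenure", "debt"]))
```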
(1) ADS should not be deployed without a prior algorithmic impact assessment (AIA) unless it is clear they have no significant impact on individuals' lives; and
(2) the certification
of ADS should be mandatory in certain sectors. AIA should not only focus on the
risks of using an ADS: they should also assess the risks of not using an ADS.
An algorithm is an
unambiguous procedure to solve a problem or a class of problems.
It is typically composed of a set of instructions or rules that take some input data and return outputs.
As an example, a
sorting algorithm can take a list of numbers and proceed iteratively, first
extracting the largest element of the
list, then the largest element of the rest of the list, and so on, until the
list is empty.
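That procedure, written out in Python exactly as described (repeatedly extract the largest element of what remains):

```python
def sort_by_extraction(numbers):
    remaining = list(numbers)      # work on a copy
    result = []
    while remaining:               # ...until the list is empty
        largest = max(remaining)   # extract the largest element
        remaining.remove(largest)  # of the rest of the list
        result.append(largest)
    return result                  # largest first

print(sort_by_extraction([3, 1, 4, 1, 5, 9, 2, 6]))  # [9, 6, 5, 4, 3, 2, 1, 1]
```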
An algorithm can be
hand-coded, by a programmer, or generated automatically from data, as in machine learning.
That ADS can lead to discrimination has been documented in many areas, such as the justice system, targeted advertisements and employment.
It should be noted that these discriminations do not necessarily arise from deliberate choices: they may result from different types of bias, for example bias in training data (in which case, the algorithm reproduces and systematises already existing discriminations), societal or individual bias (e.g. of designers or programmers of the ADS), or bias arising from technical constraints (e.g. limitations of computers or difficulty to formalise the non-formal).
Credit scoring is one of the domains most studied, because the use of ADS in this context can have a significant impact on individuals' lives. The use of certain ADS can also lead to discrimination against underprivileged or minority neighbourhoods.
For example, some
geo-navigational applications are designed to avoid 'unsafe neighbourhoods', which could lead to a form
of redlining and 'reinforce existing harmful and negative stereotypes about poor communities
and communities of colour'.
A major issue with opaque ADS is that they make it difficult to challenge a decision based on their results. This is in contradiction with rights of defence and principles of
adversarial proceedings.
The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which is used by judges to predict whether defendants should be detained or released on bail pending trial, was found to be biased against African-Americans.
AI is having an impact
on democracy and governance as computerized systems are being deployed to drive
objectivity in government functions.
Algorithms, which are a set of step-by-step instructions that computers follow to perform a task, have become more sophisticated and pervasive tools for automated decision-making.
Bias in algorithms can emanate from unrepresentative or incomplete
training data or the reliance on flawed information that reflects historical
inequalities.
If left unchecked, biased algorithms can lead to decisions which can have a collective, disparate impact on certain groups of people even without the programmer’s intention to discriminate. The exploration of the intended and unintended consequences of algorithms is necessary.
Algorithms with too
much data, or an over-representation, can skew the decision toward a particular
result.
BUT HEY, WHITE JEWS HAVE A NEAR MONOPOLY ON SIGNATURE BASED SADISTIC SERIAL KILLING ( LIKE THAT OF JEW TED KACZYNSKI ) ..
BUT THIS WILL NEVER
FIND ITS WAY INTO ADS.. SUCH IS THE STRANGLEHOLD OF JEWS IN AI JUDICIAL SYSTEMS
The point is that most
ADS used in this context are risk-assessment tools: based on a number of
factors about the defendants' criminal history, sociological data or
demographic features, they provide an estimation of their risk of recidivism.
As a result, they privilege one objective
(incapacitation, defined as prevention from reoffending) to the
detriment of other traditional justifications of punishment in law, such as
retribution (taking into account the severity of the crime), rehabilitation
(social reintegration) and deterrence.
ADS can contribute to
making administration decisions more efficient, transparent and accountable, provided however that they are themselves transparent and accountable.
Existing machine
learning technologies enable a high degree of automation in labour-intensive activities
such as satellite imagery analysis. A more ambitious and controversial use of
ADS in this context is to build autonomous weapon systems. A number of
countries are increasing their studies and development of such systems as they
perform increasingly elaborate functions, including identifying and killing
targets ( using drones ) with little or no human oversight or control.
The results of ADS are
often difficult to explain. This can reduce consumer trust and creates four
main risks:--
• There may be 'hidden'
biases derived from the data provided to train the system. This can be
difficult to detect and correct. In some cases, these biases can be
characterised as discriminations and be sanctioned in court.
• It can be difficult,
if not impossible, to prove that the system will always provide correct outputs,
especially for scenarios that were not represented in the training data. This
lack of verifiability can be a concern in mission-critical applications.
• In case of failure, it might be very difficult, given the models' complexity, to diagnose and correct the errors and to establish responsibilities.
• Finally, as
previously mentioned, malicious adversaries can potentially attack the systems
by poisoning the training data or identifying adversarial examples. These attacks
can be difficult to detect and prevent.
ADS can be audited systematically. Their disadvantage is that they can amplify biases and errors and make it more difficult to allocate liabilities.
Several explanation
modes can be distinguished:--
-- Explanations can be
of three different types: operational (informing how the system actually works), logical (informing
about the logical relationships between inputs
and results) or causal (informing about the causes for the results).
– Explanations can be
either global (about the whole algorithm) or local (about specific results).
– Explanations can take
different forms (decision trees, histograms, picture or text highlights,
examples, counterexamples, etc.).
The strengths and
weaknesses of each explanation mode should be assessed in relation to the recipients of the explanations (e.g.
professional or individual), their level of expertise and their objectives
(understanding the results to make a decision, challenging a decision, verifying
compliance with legal obligations, etc.).
Accountability can be seen as a requirement on a process (obligation to provide justification).
Many papers use terms
such as transparency, explainability, interpretability, accountability or fairness
with different meanings or without defining them properly (and often without
introducing clear distinctions between them).
In data mining and
machine learning, interpretability is defined as the ability to explain or to
provide the meaning in understandable terms to a human. These definitions
assume implicitly that the concepts expressed in the understandable terms
composing an explanation are self-contained and do not need further
explanations.
'Interpretability typically means that the model can be explained, a quality which is imperative in almost all real applications where a human is responsible for consequences of the model.'
ADS need to be trained
to be able to solve complex tasks.
For example, the cleaning robot should learn to handle candy wrappers differently from a dropped diamond ring.
An adversary can
threaten the integrity or availability of such ADS in different ways:--
• by attacking the
training dataset, for example, by injecting fake data,
• by attacking the ML
algorithm, or
• by exploiting the
generated model (the ADS) at run-time.
The attacks on the ML
algorithm, sometimes called 'logic attacks', require the adversary to have physical
access to the systems where the algorithm is running. These attacks are not
specific to ADS and can be mitigated by
various security measures, such as access control or hardware security.
The goal of an attack
on the training phase is to influence the generated model by compromising its
integrity or availability. Integrity attacks alter the generated model towards
a specific goal, for example to maliciously obtain a loan or to go through an
intrusion detection system (IDS).
For an ML classifier, the goal of an integrity attack could be to assign an incorrect class to a legitimate input.
In contrast,
availability attacks tend to affect the quality, performance or access to the
system. The final goal may be to create sufficient errors to make the ADS
unusable.
Although their goals are different, these attacks are similar in nature and are typically performed by altering or poisoning the training dataset by injecting adversarial data (injection attacks) or by removing or modifying existing records (modification attacks). The modification can be performed, in a supervised setting, by modifying the data labels or the data itself.
Note that these attacks
require that the adversaries have access to the pre-processed training dataset.
If this is not possible, the adversary can poison or inject the training data
before pre-processing.
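A synthetic sketch of a modification attack (assuming scikit-learn is available): the adversary flips training labels in a targeted region, and the model learned from the poisoned data degrades on exactly the inputs the attacker cares about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)                  # clean labels
X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

clean_acc = LogisticRegression().fit(X, y).score(X_test, y_test)

# Modification attack: flip labels so inputs with a large first feature
# are learned as negative (e.g. to sneak past a detector).
y_poisoned = y.copy()
y_poisoned[X[:, 0] > 1.0] = 0
poisoned_acc = LogisticRegression().fit(X, y_poisoned).score(X_test, y_test)

print(f"accuracy clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")
```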
Attacks on the
execution phase do not intend to modify the ADS generated model, but instead
seek to exploit some of its weaknesses. The idea is to compute some inputs,
called adversarial examples, which will trigger the desired, incorrect,
outputs.
When the ADS is a
classifier, the adversary seeks to have the perturbed inputs assigned to incorrect
classes.
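For a linear scorer this can be made concrete in a few lines of numpy: the worst-case small perturbation simply moves each feature against the sign of its weight (the intuition behind the fast gradient sign method). Weights and inputs below are invented.

```python
import numpy as np

w = np.array([1.5, -2.0, 0.7])    # weights of a (hypothetical) trained linear classifier
b = -0.2
x = np.array([0.9, -0.4, 0.5])    # legitimate input, scored positive

def score(v):
    return w @ v + b

eps = 0.6                         # perturbation budget per feature
x_adv = x - eps * np.sign(w)      # adversarial example: nudge against the weights

print("clean score:      ", score(x))      #  2.3  -> class 1
print("adversarial score:", score(x_adv))  # -0.22 -> flipped to class 0
```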
Most of the previous
attacks assume 'white box' scenarios, in which attackers have access to the
internal workings of the model. However, the 'black box' scenario is probably a
more realistic threat model.
For example, an attacker who wants to attack an image recognition system or a spam filter rarely has access to the internals of the model. Instead, they often have access to the system as an oracle, i.e. they can query the ADS with their own inputs and observe the generated outputs.
Attacks on 'black box'
systems, also called 'black box' attacks, are more challenging but not impossible.
A key property in this respect is adversarial example transferability, i.e. the
property that can be exploited whereby adversarial examples crafted for a given
classifier are likely to be misclassified by other models trained for the same
task.
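A toy sketch of a transferability-based 'black box' attack (assuming scikit-learn is available): query the victim as an oracle, train a local surrogate on the answers, craft an adversarial input against the surrogate, and replay it against the victim. The victim's hidden rule below is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def victim_predict(X):                              # oracle access only
    return (2 * X[:, 0] - X[:, 1] > 0).astype(int)  # hidden decision rule

Q = rng.normal(size=(2000, 2))                      # attacker's queries
surrogate = LogisticRegression().fit(Q, victim_predict(Q))

x = np.array([0.8, 0.2])                            # clean point, victim says class 1
x_adv = x - 0.8 * np.sign(surrogate.coef_[0])       # crafted on the surrogate...

print("victim on clean input:      ", victim_predict(x[None, :])[0])
print("victim on adversarial input:", victim_predict(x_adv[None, :])[0])  # ...transfers
```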
An adversary may want to compromise the confidentiality of an ADS, for example by trying to extract information about the training data or by retrieving the ADS model itself. These attacks raise privacy concerns, since training data often contain personal data.
They
may also undermine intellectual property, as the ADS model and the training
data can be proprietary and confidential to their owner.
Attackers may want to
retrieve some of the data used to train the system.
Two main types of scenarios can be considered (a toy sketch follows the list below):--
• 'White box' attacks
rely on the assumption that the attacker has access to the model and tries to
learn about the training data by 'inverting' it.
• 'Black box' attacks
do not assume access to the model: an adversarial client can only submit queries
to the model and make predictions based on the answers.
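As a crude illustration of black-box leakage about training data (a membership-inference sketch on synthetic data, assuming scikit-learn is available): overfitted models tend to be more confident on their own training points, and an attacker who can only query confidences can exploit that gap.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X_train = rng.normal(size=(200, 5))
y_train = (X_train.sum(axis=1) > 0).astype(int)
X_out = rng.normal(size=(200, 5))        # points that were NOT used in training

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

conf = lambda X: model.predict_proba(X).max(axis=1)
print("mean confidence on members:    ", conf(X_train).mean())  # typically higher
print("mean confidence on non-members:", conf(X_out).mean())
```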
ADS are often based on
machine learning algorithms trained on collected data. There are multiple potential
sources of unfairness in this process.
Unfair treatment can
result, for example, from the content of the training data, the way the data is
labelled or the feature selection.
Biased training data.
If the training data contains biases or historical discriminations, the ADS
will inherit them and incorporate them into its future decisions.
ADS, and more generally
machine learning algorithms, are systems trained to recognise and leverage
statistical patterns in data. However, they are not perfect and perform
classification or prediction errors.
The accuracy rate of an ADS is often related to the size of the training dataset: a large training dataset leads to fewer errors, while less data leads to worse predictions. ADS are often complex systems that are difficult to understand. 'Hand-coded' ADS code can be audited, but the task is not always easy since they generally consist of complex modules made of a large number of code lines developed by groups of engineers.
ADS that are based on machine learning are even more challenging to
understand, and therefore to explain, since their models are generated
automatically from training data. Data have many properties and features, and
each of them can influence the generated models.
Bias is ‘an inclination of prejudice towards or against a person, object, or position’.
Some algorithms collect
their own data based on human-selected criteria, which can also reflect the
bias of human designers.
Algorithmic bias
describes systematic and repeatable errors in a computer system that create
unfair outcomes, such as privileging one arbitrary group of users over others.
Bias can emerge due to many factors, including but not limited to the design of
the algorithm or the unintended or unanticipated use or decisions relating to
the way data is coded, collected, selected or used to train the algorithm.
Algorithmic bias is found across platforms, including but not limited to search
engine results and social media platforms, and can have impacts ranging from
inadvertent privacy violations to reinforcing social biases of race, gender,
sexuality, and ethnicity. The study of algorithmic bias is most concerned with
algorithms that reflect "systematic and unfair" discrimination.
This
bias has only recently been addressed in legal frameworks, such as the 2018
European Union's General Data Protection Regulation. Bias can enter into
algorithmic systems as a result of pre-existing cultural, social, or
institutional expectations; because of technical limitations of their design;
or by being used in unanticipated contexts or by audiences who are not
considered in the software's initial design.
Commercial algorithms are proprietary, and may be treated as trade secrets. Treating algorithms as trade secrets protects companies, such as search engines, where a transparent algorithm might reveal tactics to manipulate search rankings. This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function. Such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output.
The General Data
Protection Regulation (GDPR), the European Union's revised data protection
regime that was implemented in 2018, addresses "Automated individual decision-making,
including profiling" in Article 22. These rules prohibit
"solely" automated decisions which have a "significant" or
"legal" effect on an individual, unless they are explicitly
authorised by consent, contract, or member state law. Where they are permitted,
there must be safeguards in place, such as a right to a human-in-the-loop, and
a non-binding right to an explanation of decisions reached. While these
regulations are commonly considered to be new, nearly identical provisions have
existed across Europe since 1995, in Article 15 of the Data Protection
Directive.
The United States has no general legislation controlling algorithmic bias, approaching the problem through various state and federal laws that might vary by industry, sector, and by how an algorithm is used. Many policies are self-enforced or controlled by the Federal Trade Commission. In 2016, the Obama administration released the National Artificial Intelligence Research and Development Strategic Plan, which was intended to guide policymakers toward a critical assessment of algorithms. It recommended that researchers "design these systems so that their actions and decision-making are transparent and easily interpretable by humans, and thus can be examined for any bias they may contain, rather than just learning and repeating these biases". Intended only as guidance, the report did not create any legal precedent.
The “Algorithmic
Accountability Act of 2019” was introduced in the U.S. House of Representatives
on April 10, 2019 and referred to the House Committee on Energy and Commerce.
The bill requires an assessment of the risks posed by automated decision
systems to the privacy or security of personal information of consumers and the
risks that the systems may result in or contribute to inaccurate, unfair,
biased or discriminatory decisions impacting consumers.
Governance and
accountability issues relate to who creates the ethics standards for AI, who
governs the AI system and data, who maintains the internal controls over the
data and who is accountable when unethical practices are identified. The
internal auditors have an important role to play in this regard. They should
assess risk, determine compliance with regulations and report their findings
directly to the audit committee of the board of directors.
On July 31, 2018, a draft of the Personal Data Bill was presented in India. The draft proposes standards for the storage, processing and transmission of data. While it does not use the term algorithm, it makes provisions for "...harm resulting from any processing or any kind of processing undertaken by the fiduciary". It defines "any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal" or "any discriminatory treatment" as a source of harm that could arise from improper use of data. It also makes special provisions for people of "intersex status".
Bias can be introduced in many ways, including the following:--
• It can be present in
the (training/validation/test) input dataset. For instance, a common form of
bias is human bias (when data are labelled according to a person’s own view and
therefore reflect the bias of that person).
• It can be introduced
via the online learning process, when new, biased data are fed to the model in
real time.
• Bias may also occur
when ML models make non-linear connections between disparate data sources,
despite those sources being validated individually for certain characteristics/variables.
• It can be introduced
into the model during the development phase through inadvertent coding of
biased rules, for example (algorithmic bias).
Most commonly, data
contain bias when they are not representative of the population in question.
This can lead to
discrimination, for example when a class of people less represented in the
training dataset receives less or more
favourable outcomes simply because the system has learnt from only a few examples and is not able to generalise
correctly. However, discrimination can exist without bias or direct
discrimination; it can result from sensitive attributes serving as input
variables, regardless of bias.
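A minimal sketch of one such fairness definition, demographic parity, in plain numpy. The groups and decisions are fabricated, and the "80% rule" in the comment is only a common rule of thumb under this one definition of fairness among many incompatible ones.

```python
import numpy as np

group = np.array(["A"] * 6 + ["B"] * 6)
approved = np.array([1, 1, 1, 0, 1, 0,    # group A: 4/6 approved
                     1, 0, 0, 0, 1, 0])   # group B: 2/6 approved

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval A={rate_a:.2f}  B={rate_b:.2f}  ratio={rate_b / rate_a:.2f}")
# A ratio far below 1 (e.g. under the "80% rule" of thumb) flags possible
# disparate impact under this particular definition.
```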
Techniques exist to
prevent or detect bias (active or passive de-biasing). For example, controls can
be implemented during the data preparation and feature engineering phases to
prevent or detect bias and discrimination.
Furthermore, statistical analysis
(e.g. data skewness analysis) can be applied to the training dataset to verify
that the different classes of the target population are equally represented (under-represented
classes can be incremented by oversampling or overrepresented classes can be
reduced in size). In addition, techniques (and libraries) exist to test models
against discriminatory behaviour (e.g. using crafted test datasets that could
lead to discrimination).
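A toy numpy sketch of the oversampling remedy mentioned above: resample the under-represented class with replacement until both classes are equally represented.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)          # class 1 is under-represented

minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=80, replace=True)   # resample with replacement

X_balanced = np.vstack([X, X[extra]])
y_balanced = np.concatenate([y, y[extra]])
print("before:", np.bincount(y), " after:", np.bincount(y_balanced))  # [90 10] -> [90 90]
```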
The 5 most common types of bias:---
Confirmation bias. Occurs when the person performing the data analysis wants to prove a predetermined assumption.
Selection bias. This occurs when data is selected subjectively.
Outliers. An outlier is an extreme data value.
Overfitting and underfitting.
Confounding variables. A confounding variable is an “extra” variable that you didn't account for. They can ruin an experiment and give you useless results (a toy sketch follows below).
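A toy numpy sketch of a confounding variable: Z drives both X and Y, so X and Y correlate strongly with no causal link between them; removing Z's contribution makes the correlation vanish. Everything is synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
Z = rng.normal(size=5000)              # the "extra" variable nobody accounted for
X = Z + rng.normal(0, 0.5, 5000)
Y = Z + rng.normal(0, 0.5, 5000)

print("naive corr(X, Y):            ", np.corrcoef(X, Y)[0, 1])  # strongly positive
print("corr after controlling for Z:",
      np.corrcoef(X - Z, Y - Z)[0, 1])                           # ~0
```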
In a distribution of variables, outliers lie far from the majority of the other data points, as the corresponding values are extreme or abnormal. The outliers contained in sample data introduce bias into statistical estimates such as mean values, leading to under- or over-estimated resulting values.
An outlier is a value that escapes normality and can (and probably will) cause anomalies in the results obtained through algorithms and analytical systems.
Usually, the presence of an outlier indicates some sort of problem. An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set. Outliers can occur by chance in any distribution, but they often indicate either measurement error or that the population has a heavy-tailed distribution.
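A tiny numpy sketch of that effect: a single extreme value drags the mean, while the median barely moves.

```python
import numpy as np

data = np.array([10, 11, 9, 10, 12, 10, 11])
with_outlier = np.append(data, 150)        # one abnormal measurement

print("mean:  ", data.mean(), "->", with_outlier.mean())           # ~10.4 -> ~27.9
print("median:", np.median(data), "->", np.median(with_outlier))   # 10.0 -> 10.5
```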
Confirmation bias is
the tendency to search for, interpret, favor, and recall information in a way
that confirms or strengthens one's prior personal beliefs or hypotheses
Bias can happen in ways
that do not involve humans in the loop. In fact, algorithms used for
inferencing or training may be working perfectly well and not recognize bias
because those aberrations are so small that they escape detection.
But bias also can be
cumulative, and in some cases exponential. As such, it can cause issues much
further down the line, making it difficult to trace back to where the problem
originated.
Biasing can start wherever the data is generated.
Every measurement has
accuracy levels and tolerances, but they shift over time. Sensors have
variability, and that variability changes over time. So you have to figure out
where it is in the calibration cycle and keep trying to correct for
variability.
That’s part of the picture. But every piece of data has some variability in it, too. So if you add in all of that data randomly, you could multiply the variability.
Recommendations for
building a robust and responsive AI and data ethics capability: --
“Appoint chief data/AI
officers with ethics as part of their responsibilities.”
“Assemble
organizationally high-level ethics advisory groups.”
“Incorporate privacy
and ethics-oriented risk and liability assessments into decision-making or
governance structures.”
“Provide training and
guidelines on responsible data practices for employees.”
“Develop tools,
organizational practices/structures, or incentives to encourage employees to
identify potentially problematic data practices or uses.”
“Use a data
certification system or AI auditing system that assesses data sourcing and AI
use according to clear standards.”
“Include members
responsible for representing legal, ethical, and social perspectives on
technology research and project teams.”
“Create ethics
committees that can provide guidance not only on data policy, but also on
concrete decisions regarding collection, sharing, and use of data and AI.”
An AI ethics committee
should seek to address the following concerns:--
“Whether the project
under review advances organizational aims and foundational values to an extent
that it justifies any organizational and social risks or costs.”
“Whether the project is
likely to violate any hard constraints, such as legal requirements or
fundamental organizational commitments/principles.”
“Whether an impartial
citizen would judge that the organization has done due diligence in considering
the ethical implications of the project.”
“Whether it is possible
to secure the sought benefits in a way that better aligns with organizational
values and commitments and without any significant additional undue burden or
costs.”
“Whether reputational
risks could be significant enough to damage the brand value in the concerned
market or in other places where the organization operates.”
Tesla and SpaceX
founder Elon Musk issued a warning: “Mark my words, AI is far more dangerous than nukes.” “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”
AI has the potential to
hurt people in mass numbers, which puts unique responsibilities on the field.
This incredible power to do harm at scale means those of us in the AI industry
have a responsibility to put societal interest above profit.
Malicious use of AI,
could threaten digital security (e.g. through criminals training machines to
hack or socially engineer victims at human or superhuman levels of
performance), physical security (e.g. non-state actors weaponizing consumer
drones), and political security (e.g. through privacy-eliminating surveillance,
profiling, and repression, or through automated and targeted disinformation
campaigns)
Digitization is a
building block toward artificial
intelligence because it can facilitate the availability of the “Big Data” on which machine learning is based. Next on the spectrum would be for governments to rely on what we call here algorithmic
tools—that is, traditional, human-created
statistical models, indices, or scoring systems that are then used as
decision tools.
These traditional
algorithmic or statistical tools rely on humans to select the specific variables to be included in a decision aid
and the precise mathematical relationships
between those variables. Only the final step on the spectrum—machine
learning— constitutes what we will consider artificial intelligence, because
learning algorithms essentially work “on
their own” to process data and discover optimal
mathematical relationships between them.
This autonomous
self-discovery is what gives machine-learning algorithms not only their name
but also their frequent superior performance in terms of accuracy over traditional
algorithmic tools. Of course, even with
machine learning, humans must specify the objective that the learning algorithm is supposed to forecast or
optimize, and humans must undertake a number of steps to “train” the algorithm
and refine its operation.
Yet these learning
algorithms are different from traditional statistical tools because the precise
ways that data are combined and analyzed are neither determined in advance by a human analyst nor easily explainable after
the fact.
For this reason, machine learning algorithms are often described as
“black-box” algorithms because they do not afford a ready way of characterizing
how they work—other than that they can be
quite accurate in achieving the objectives they have been designed to achieve.
Biased algorithms can
lead to decisions which can have a collective, disparate impact on certain
groups of people .
AI is still an emerging
technology. Data is analyzed
through deep uses of algorithmic
programming.
There are lots of examples where the data and humans building
the algorithms had an inherent bias that was
built into the AI algorithm. In Israel
Palestinians are always at the receiving end in a most slimy and evil manner..
Israel was the first to
screw a part of the population ( Palestinians ) with blockchain by grabbing
their ancestral lands.
Israel is the first to
screw a part of its citizens ( Palestinians ) using black box algorithms for
justice
Most AI algorithms are
essentially black box pattern detectors. They can detect specific patterns
quickly, but they don’t give you the causality of why those patterns happen. We
can’t see why an algorithm has taken a decision; it’s not explained in a way
that can be understood by a human.
And we are making
decisions based on these ‘black boxes’ without knowing why. We are losing
control over the way decisions are being made – and this is a major issue.
An AI model is considered to be traceable if (a) its decisions, and (b) the datasets and processes that yield the AI model’s decisions (including those of data gathering, data labelling and the algorithms used), are documented in an easily understandable way.
Augmented intelligence
unites the strengths of people and machines when prospecting value from data.
Namely, you can augment human instinct with smart algorithms that provide fast,
data-driven predictive insights. These insights can help people redesign
functions, detect patterns, find strategic opportunities, and turn data into
action.
Decision-making and
actions will improve, provided you have feedback loops built in for continuous
improvement. Feedback loops are important for improving upon algorithms and in
making sure that when things do not happen as expected, there are mechanisms in
place to understand why.
Auditability refers to
the readiness of an AI system to undergo an assessment of its algorithms, data
and design processes
To facilitate
auditability, organisations
can consider keeping a comprehensive record of data
provenance, procurement, preprocessing, lineage, storage and security. The
record could also include qualitative input about data representations, data sufficiency, source integrity, data
timelines, data relevance, and unforeseen
data issues encountered across the workflow.
1. Accountability:
Ensure that AI actors are responsible and accountable for the proper functioning
of AI systems and for the respect of AI ethics and principles, based on their
roles, the context, and consistency with the state of the art.
2. Accuracy: Identify,
log, and articulate sources of error and uncertainty throughout the algorithm and its data sources so that
expected and worst-case implications can be understood and can inform
mitigation procedures.
3. Auditability: Enable
interested third parties to probe, understand, and review the behaviour of the
algorithm through disclosure of information that enables monitoring, checking or criticism.
4. Explainability:
Ensure that automated and algorithmic decisions and any associated data driving
those decisions can be explained to end-users and other stakeholders in non-technical
terms.
5. Fairness:
a. Ensure that
algorithmic decisions do not create discriminatory or unjust impacts across different demographic lines (e.g.
race, sex, etc.).
b. To develop and
include monitoring and accounting mechanisms to avoid unintentional discrimination when
implementing decision-making systems.
c. To consult a
diversity of voices and demographics when developing systems, applications and
algorithms.
Build trust by ensuring
that designers and operators are responsible and accountable for their systems,
applications and algorithms, and to ensure that such systems, applications and algorithms operate in a
transparent and fair manner.
To make available
externally visible and impartial avenues of redress for adverse individual or societal effects of an
algorithmic decision system, and to designate a
role to a person or office who is responsible for the timely remedy of
such issues.
Algorithm audits are
conducted if it is necessary to discover the actual operations of algorithms comprised in models. This would
have to be carried out at the request of a
regulator (as part of a forensic investigation) having jurisdiction over
the organisation or by an AI technology provider to assist its customer
organisation which has to respond to a regulator’s request.
Conducting an
algorithm audit requires technical expertise which may require engaging external experts. The
audit report may be beyond the understanding
of most individuals and organisations.
The expense and time required to
conduct an algorithm audit should be
weighed against the expected benefits obtained from the audit report. Ultimately, algorithm audits
should normally be used when it is reasonably
clear that such an audit will yield clear benefits for an investigation.
Explainability is just
one element of transparency. Transparency consists in making data, features, algorithms and training methods
available for external inspection and constitutes a basis for building trustworthy models.
Online Dispute
Resolution (ODR) has arisen in recent years as a tool for resolving
disagreements among parties using technology, growing in part out of prior developments in the field of
Alternative Dispute Resolution (ADR). ADR is
a term that refers to a range of methods such as mediation and
arbitration that aim to settle disputes without the use of litigation and the
court system.
eBay and PayPal have
developed ODR systems to handle the millions of disputes that regularly arise
on their platforms from and among users.
Online dispute
resolution (ODR) is a branch of dispute resolution which uses technology to
facilitate the resolution of disputes between parties. It primarily involves
negotiation, mediation or arbitration, or a combination of all three.
In this
respect it is often seen as being the online equivalent of alternative dispute
resolution (ADR). However, ODR can also
augment these traditional means of resolving disputes by applying innovative
techniques and online technologies to the process.
ODR is a wide field,
which may be applied to a range of disputes; from interpersonal disputes
including consumer to consumer disputes (C2C) or marital separation; to court
disputes and interstate conflict.
Today, the term ODR is
even more expansive. In its current form, ODR covers the use of any technology
to assist parties in the dispute resolution process. Consider three different
hypothetical mediation proceedings:
A mediation conducted
through a video conferencing service rather than in person.
An in-person mediation
where the parties utilize technology tools to assist in analyzing their
positions.
A mediation run
entirely by an artificial intelligence (AI) service.
All three mediations
are a form of ODR. ODR can be as straightforward as using a webcam, or as
complex as a machine learning algorithm that guides disputants to an optimal
settlement. Even the examples above merely scratch the surface of what is
possible through ODR. As legal technology improves and expands, ODR is becoming
increasingly popular and useful.
key advantages and
challenges that face all forms of ODR. First, the advantages:--
Reduced
Cost—Cost-saving is already a core advantage of alternative dispute resolution.
Instead of engaging in costly and time-intensive litigation, ADR allows parties
to minimize costs and save time in resolving their disputes. ODR enhances these
benefits. Using telecommunication technology, for instance, eliminates travel
expenses, allows for quicker communication, and gives the parties more
flexibility in scheduling. Technology tools and AI can help parties understand
their position and range of options, leading to more efficient resolution. As a
set of tools, ODR can reduce costs significantly as compared to in-person ADR.
Increased Access to
Justice—ODR allows for increased access to the legal system in multiple ways.
As discussed, ODR originated as a way to address the high number of disputes
that arose out of e-commerce transactions. Instead of relying on traditional
forms of dispute resolution, companies have incorporated ODR systems into their
websites in order to give customers a direct and efficient way to resolve their
disputes. For high-volume and low-value disputes, ODR may be the only practical
means for a consumer to resolve a dispute.
ODR also allows increased access to the legal system for traditional disputes. A direct byproduct of reduced costs is that more parties can afford to utilize ODR. Courts have begun to implement ODR tools as well. The convenience of ODR makes it significantly easier for users to engage with the court system.
Accuracy—The increased
convenience of ODR can also lead to better accuracy. Most simply, ODR helps
parties and decision-makers reach more accurate outcomes by providing better
access to information. ODR can also help avoid implicit biases on factors such
as race and socioeconomic status. One goal of AI is to avoid the inevitable
biases that are present in human decision-making. Although these tools are
still in their infancy, there is hope that ODR can provide more equitable
remedies.
ODR faces several
challenges as well: --
Fairness—Many of the advantages of ODR are not foolproof. For example, although ODR systems provided by e-commerce companies may increase consumer access to remedies, these systems almost certainly lead to better results for the company paying for them. Although ODR can be more cost-effective, these systems are not costless. Since there is currently no regulation of these systems, outcomes may become stacked in favor of the implementing party.
It is important to pay attention to who is covering the costs of the system, as they are the most likely to benefit from them. Similarly, ODR is not a silver bullet to provide optimally accurate resolutions. For instance, AI can actually exacerbate issues of implicit bias. Developers of these systems need to continually monitor the results to ensure that ODR tools are not creating more inequities than they are solving.
Privacy and
Security—The introduction of technology inevitably introduces privacy and
security risks. No tool is foolproof and therefore information shared through
ODR solutions may be at higher risk of exposure than with traditional in-person
ADR. Trust is essential in order for parties to resolve a dispute, and that
trust extends to the tools and processes they are utilizing. Developers of ODR
solutions need to pay particular attention to these concerns.
Impersonality—Disputes
are an inherently emotional and trying experience. Mediators and arbitrators do
more than act as robots crunching information and outputting settlement ranges.
The neutral third-party has to navigate a complex emotional setting and provide
an environment to help both parties feel comfortable with the proposed
solutions. The benefits of human interaction can be lost as ODR increasingly
relies on technology. This risk is particularly salient with AI solutions.
Amazon has developed
algorithms that can resolve a consumer complaint about a defective product
without requiring any human intervention.
Realizing that they
could not afford to hire enough human mediators to resolve all of these disputes or arrange for parties to
video conference with each other, these companies
leveraged the extensive amounts of data they had collected on consumer behavior and usage.
Their ODR systems aim to
prevent or amicably resolve as many disputes as possible and to decide the
remainder quickly. To do so, they generally
first diagnose the problem, working directly with the complainant; they then
move to direct negotiations (aided by technology) and ultimately allow the company
to decide the case if the parties are not able to amicably resolve matters on their own.
As the success of these systems
inspired other firms to develop similar
and increasingly sophisticated programs, algorithms have become a more prominent
dispute resolution solution, allowing companies to automate away many (if not all) of the steps of the decision-making process.
Some courts have also begun experimenting with ODR as a mechanism to resolve lawsuits without requiring the use of judicial decision-making, adopting some form of “court ODR” in cases involving small-claims civil matters, traffic violations, outstanding-warrant cases, and low-conflict family court cases.
What counts as an ODR system can vary from a simple website that facilitates
entering pleas for traffic tickets online to an
online portal for engaging in asynchronous negotiations. These are not
mandatory systems in any jurisdiction of
which we are aware, but instead they are offered as an option to avoid appearing in court. In
jurisdictions with these systems, parties
are notified of the ODR option via mailings or websites.
Parties can access the ODR
system at any time, and with the more interactive systems they can communicate
and negotiate with each other, obtain legal information and suggested resolutions from the system, and easily
manage electronic documents—all without having
to see the inside of a courtroom.
These systems can usually reach resolution in a dispute faster and at lower
cost to the parties and are far more accessible
than traditional court-centered adjudication. ODR provides an emerging avenue
for litigants and courts to engage in dispute
resolution outside of the presence of a courtroom and absent a human judge.
Court ODR systems, as
well as the private-sector iterations that inspired them, have increasingly
adopted automated processes and rely on algorithmic tools to aid in reaching
what some observers characterize as fair and low-cost solutions to the parties’ disputes.
Court systems could take these algorithms to the next
“level” of autonomy by integrating artificial
intelligence into ODR processes, allowing for increasingly automated
forms of decision-making for petty ego
related cases –like when a rich celebrity ( like a crying bollywood superstar )
wants to harass a desh bhakt man using his poodles in police, whom he has befriended during the annual
tamasha named Umang.
COMPAS, an acronym for
Correctional Offender Management Profiling for Alternative Sanctions, is a case
management and decision support tool used by U.S. courts to assess the likelihood
of a defendant becoming a recidivist. Israel uses a similar RA tool to screw
Palestinians and grab their ancestral lands.
The COMPAS software
uses an algorithm to assess potential recidivism risk. Northpointe created risk
scales for general and violent recidivism, and for pretrial misconduct.
According to the COMPAS Practitioner's Guide, the scales were designed using
behavioral and psychological constructs "of very high relevance to recidivism
and criminal careers."
Pretrial Release Risk
scale: Pretrial risk is a measure of the potential for an individual to fail to
appear and/or to commit new felonies while on release. According to the
research that informed the creation of the scale, "current charges,
pending charges, prior arrest history, previous pretrial failure, residential
stability, employment status, community ties, and substance abuse" are the
most significant indicators affecting pretrial risk scores.
General Recidivism
scale: The General Recidivism scale is designed to predict new offenses upon
release, and after the COMPAS assessment is given. The scale uses an
individual's criminal history and associates, drug involvement, and indications
of juvenile delinquency.
Violent Recidivism
scale: The Violent Recidivism score is meant to predict violent offenses
following release. The scale uses data or indicators that include a person's
"history of violence, history of non-compliance, vocational/educational
problems, the person’s age-at-intake and the person’s age-at-first-
arrest." An individual's risk score for violent recidivism is calculated
as follows:
Violent Recidivism Risk Score = (age ∗ −w) + (age-at-first-arrest ∗ −w) + (history of violence ∗ w) + (vocational education ∗ w) + (history of noncompliance ∗ w),
where w is a weight, the size of which is “determined by the strength of the item’s relationship to person offense recidivism that we observed in our study data.”
This objective bullshit is to keep white Jews
safe as Rothschild’s history states that Jews are always at the receiving end— when
in reality they are the worst criminals.
As of today, of course,
we know of no machine-learning tool that has been adopted in any court in the United States to
make an ultimate, fully automated determination on a legal or factual
question.
However, several trends
in recent years have emerged that could signal movement towards the eventual
use of such automated adjudication via
artificial intelligence. To date, the principal building blocks of artificial intelligence in the
courts comprise the digitization of court filings and processes, the
introduction of algorithmic tools for certain criminal court decisions, and the emergence of online
dispute resolution as an alternative to traditional court proceedings for small
claims.
Some courts have
created “dedicated computer kiosks” specifically designed to help litigants who
lack legal representation. In
California, for example, an “‘Online Self-Help Center’ offers PDFs that can be
filled in online and used for evictions, divorces, orders of protection, collection matters, small claims,
and other issues.”
The federal judiciary
has instituted a “comprehensive case management system” known as the Case
Management/Electronic Case Files (CM/ECF) system that allows for convenient filing and
organization of court documents, party
pleadings, and other relevant materials.
CM/ECF (Case Management/Electronic
Case Files) is the case management and electronic court filing system for most
of the United States Federal Courts. PACER, an acronym for Public Access to
Court Electronic Records, is an interface to the same system for public use.
Public Access to Court
Electronic Records (PACER) is an electronic public access
service that allows users to obtain case and docket information from federal
appellate, district, and bankruptcy courts.
If you want online access to documents filed in Central District cases,
you must have a PACER account.
CM/ECF provides more
functionality than PACER, including the ability to electronically file cases
and documents, to control electronic service and notice of documents, and to
update a user’s contact information for the electronic service of documents
At law firms, the
increasing use of algorithmic tools, including those involving machine-learning
algorithms, can be found to support the review of documents during the
discovery process. This “e-discovery” practice has been shown to have a “strong impact” on reducing the need for
human labor—plus it has spawned services
that seek to analyze trends and make legal forecasts.
Algorithmic tools have
taken root in some court systems as an aid to judicial decision-making in
criminal cases on questions of bail, sentencing, and parole— but so far
virtually none of these appear to rely on machine-learning algorithms.
No one knows exactly
how COMPAS works; its manufacturer refuses to disclose the proprietary
algorithm. We only know the final risk assessment score it spits out . . .
Something about this
story is fundamentally wrong: Why are we allowing a computer program, into
which no one in the criminal justice system has any insight, to play a role in
sending a man to prison?
The courts have not yet
started to grapple with the legal implications of these algorithmic tools.
ML models can quickly
become “black boxes”, opaque systems for which the internal behavior cannot be
easily understood, and for which therefore it is not easy to understand (and
verify) how a model has reached a certain conclusion or prediction. The opaqueness
of a ML solution may vary depending on the complexity of the underlying model
and learning mode.
For example, neural networks tend to be more opaque, due to
the intrinsic complexity of the underlying algorithm, than decision trees, the internal functioning of
which can be more easily understood by humans. This technical
opaqueness is directly linked to the opposing concept of explainability.
A model is explainable
when it is possible to generate explanations that allow humans to understand
(i) how a result is reached or (ii) on what grounds the result is based
(similar to a justification).
Explainability helps business leaders understand why a company is doing what it is doing with AI.
That means sorting out what an AI algorithm did, what data was used, and why certain conclusions were reached. If, say, a machine learning (ML) algorithm also made business decisions, those decisions need to be annotated and presented effectively. It will be incumbent on AI specialists to show that their data is free of bias and that the outcomes their programs reach are consistent: an interesting challenge for things like deep learning, where there are many, many layers of analysis and different approaches that can affect the outcome.
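One common technique for this kind of after-the-fact explanation is permutation importance: shuffle one input feature at a time and measure how much the model’s held-out accuracy drops. A hedged sketch, with an illustrative dataset and model rather than any vendor’s actual tool:--

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print("feature", i, "importance", round(result.importances_mean[i], 3))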
With algorithms being
more and more involved in decision-making processes that can have a significant
impact on the lives of those concerned, it is important to understand how they
“reason”. As a society we cannot allow certain important decisions to be made
with no explanation: “Without being able to explain decisions taken by
autonomous systems, it is difficult to justify them: it would seem
inconceivable to accept what cannot be justified in areas as crucial to the
life of an individual as access to credit, employment, accommodation, justice
and health”.
Algorithms can take
“bad” decisions due to errors or biases of human origin ( mostly deliberate in
Israel against Palestinians ) that are present in the datasets or the code. By
making their reasoning transparent, explainability helps to identify the source
of these errors and biases, and to correct them. This question is pivotal to
the future of AI, as a lack of public confidence could hinder its development.
Explainability – or
interpretability – is a component of algorithm transparency. It describes the
AI system’s property of being easily understandable by humans. The information
must therefore be presented in a form that is intelligible for experts
(programmers, data scientists, researchers, etc.) but also for the general
public.
Publishing the source
code is not enough, not only because that doesn’t systematically make it
possible to identify algorithmic bias (the running of certain algorithms cannot
be apprehended independently from the training data), but also because it is
not readable by a large majority of the public.
Furthermore, this could
be in conflict with intellectual property rights, as an algorithm’s source code
can be assimilated with a trade secret.
What’s more, X-AI (explainable AI) presents several challenges. The first is the complexity of certain algorithms, based on machine learning techniques such as deep neural networks or random forests, which are intrinsically difficult for humans to grasp; then there is the large quantity of variables that are taken into account.
Second challenge: it’s
precisely this complexity that has made algorithms more efficient. In the
current state of the art, increasing explainability is often achieved at the
expense of precision of the results.
Algorithms that explain algorithms. An algorithm refers to a set of rules/instructions that define, step by step, how a piece of work is to be executed in order to get the expected results.
In order for some
instructions to be an algorithm, it must have the following characteristics:
Clear and Unambiguous:
Algorithm should be clear and unambiguous. Each of its steps should be clear in
all aspects and must lead to only one meaning.
Well-Defined Inputs: If
an algorithm says to take inputs, it should be well-defined inputs.
Well-Defined Outputs:
The algorithm must clearly define what output will be yielded and it should be
well-defined as well.
Finite-ness: The algorithm must be finite, i.e. it should not end up in an infinite loop or similar.
Feasible: The algorithm must be simple, generic and practical, such that it can be executed with the available resources. It must not depend on some future technology.
Language Independent:
The Algorithm designed must be language-independent, i.e. it must be just plain
instructions that can be implemented in any language, and yet the output will
be same, as expected.
In order to write an algorithm, the following things are needed as prerequisites:--
The problem that is to
be solved by this algorithm.
The constraints of the
problem that must be considered while solving the problem.
The input to be taken
to solve the problem.
The output to be expected when the problem is solved.
The solution to this
problem, in the given constraints.
Then the algorithm is written with the help of the above parameters such that it solves the problem.
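As a worked illustration of that recipe (the example is ours, not from the text above), here is binary search laid out against those prerequisites:--

# Problem: find the position of a target value in a collection.
# Constraint: the collection must already be sorted.
# Input: a sorted list and a target value.
# Output: the index of the target, or -1 if absent.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:                      # finite: the range shrinks each step
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                   # well-defined output
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16], 12))  # 3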
To be directly
understandable, the algorithm should therefore have a low level of complexity
and the model should be relatively simple.
Transparency consists therefore
in making data, features, algorithms and training methods available for
external inspection and constitutes a basis for building trustworthy models.
Explainable AI is the
set of capabilities that describes a model, highlights its strengths and
weaknesses, predicts its likely behavior, and identifies any potential biases.
It has the ability to articulate the decisions of a descriptive, predictive or
prescriptive model to enable accuracy, fairness, accountability, stability and
transparency in algorithmic decision making.
Visualization
approaches for seeing and understanding the data in the context of training and
interpreting machine learning algorithms.
Algorithms: Such as
spell check or phonetic algorithms can be useful – but they can also make the
wrong suggestion.
AI initiatives are
rendered useless, and in some cases detrimental, without clean data to feed
their algorithms.
The ability to explain a model’s behavior, answering, for an ML engineer, “why did the model predict that?” For example: the prior on variable alpha must not be Gaussian, as we can see in the misaligned posterior predictive check.
The ability to translate a model to business objectives, answering in natural language, “why did the model predict that?” For example: the predicted spike in insulin levels is correlated to the recent prolonged inactivity picked up from the fitness watch.
Both definitions are
clearly useful. The low-level notion of interpretability lends itself to the
engineer's ability to develop and debug models and algorithms. High-level
transparency and explainability is just as necessary, for humans to understand
and trust predictions in areas like financial markets and medicine.
No matter the
definition, developing an AI system to be interpretable is typically
challenging and ambiguous. It is often the case that a model or algorithm is
too complex to understand or describe because its purpose is to model a complex
hypothesis or navigate a high-dimensional space, a catch-22. Not to mention
what is interpretable in one application may be useless in another.
Even with improved
methods and algorithms for explaining AI models and predictions, two core
issues must first be addressed in order to make legitimate progress towards
interpretable, transparent AI: underspecification and misalignment.
The notion of model or
algorithmic interpretability is underspecified -- that is, the AI field is
without precise metrics of interpretability. How can one argue a given model or
algorithm is more interpretable than another, or benchmark improvements in
explainability?
One method could provide beautifully detailed visualizations,
while the other provides coherent natural language rationale behind each
prediction. It can be apples-and-oranges to compare models on account of their
interpretability.
Simple exhaustive
searches are rarely sufficient for most real-world problems: the search space
(the number of places to search) quickly grows to astronomical numbers. The
result is a search that is too slow or never completes. The solution, for many
problems, is to use "heuristics" or "rules of thumb" that
prioritize choices in favor of those that are more likely to reach a goal and to do so in a smaller number of steps.
In some search methodologies heuristics
can also serve to entirely eliminate some choices that are unlikely to lead to
a goal (called "pruning the search tree"). Heuristics supply the
program with a "best guess" for the path on which the solution lies.
Heuristics limit the search for solutions to a smaller sample size.
In computer science,
artificial intelligence, and mathematical optimization, a heuristic is a
technique designed for solving a problem more quickly when classic methods are
too slow, or for finding an approximate solution when classic methods fail to
find any exact solution. In computing, heuristic refers to a problem-solving
method executed through learning-based techniques and experience.
When
exhaustive search methods are impractical, heuristic methods are used to find
efficient solutions. A heuristic algorithm is one that is designed to solve a
problem in a faster and more efficient fashion than traditional methods by
sacrificing optimality, accuracy, precision, or completeness for speed.
Heuristic algorithms are oftentimes used to solve NP-complete problems, a class of decision problems. In these problems, there is no known efficient way to find a solution quickly and accurately, although solutions can be verified when given.
Heuristics can produce a solution individually or be used to provide a good
baseline and are supplemented with optimization algorithms.
Heuristic algorithms are most often employed when approximate solutions are sufficient and exact solutions are necessarily computationally expensive. A heuristic is a technique to solve a problem faster than classic methods, or to find an approximate solution when classic methods cannot.
This is a kind of shortcut, as we often trade one of optimality, completeness, accuracy, or precision for speed. A heuristic (or a heuristic function) is used in search algorithms: at each branching step, it evaluates the available information and makes a decision on which branch to follow. It does so by ranking alternatives.
A heuristic is any device that is often effective but will not guarantee to work in every case. Early research identified three heuristics: availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called “judgment heuristics”.
Heuristics can be
mental shortcuts that ease the cognitive load of making a decision. Examples
that employ heuristics include using a rule of thumb, an educated guess, an
intuitive judgment, a guesstimate, profiling, or common sense.
A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. Heuristics are simple strategies to form judgments and make decisions by focusing on the most relevant aspects of a complex problem. As far as we know, animals have always relied on heuristics to solve adaptive problems, and so have humans.
A heuristic function,
also called simply a heuristic, is a function that ranks alternatives in search
algorithms at each branching step based on available information to decide
which branch to follow. For example, it may approximate the exact solution.
Heuristic search refers to a search strategy that attempts to optimize a problem by iteratively improving the solution based on a given heuristic function or a cost measure. Heuristic search is an AI search technique that employs a heuristic for its moves: a rule of thumb that probably leads to a solution. Heuristics help to reduce the number of alternatives from an exponential number to a polynomial number.
Examples that employ heuristics include using a rule of thumb, an educated guess, an intuitive judgment, a guesstimate, profiling, or common sense. “Heuristic” describes a rule or a method that comes from experience and helps you think through things, like the process of elimination, or the process of trial and error.
You can think of a heuristic as a shortcut. Examples of heuristics include using a rule of thumb or an educated guess. The simplest way to describe them is as follows: a heuristic is a rule, strategy or similar mental shortcut that one can use to derive a solution to a problem. A heuristic that works all of the time is known as an algorithm.
A systematic error that results from the use of a heuristic is called a cognitive bias. The heuristic question is the simpler question that you answer instead. On some occasions, substitution will occur and a heuristic answer will be endorsed by System 2. Of course, System 2 has the opportunity to reject this intuitive answer, or to modify it by incorporating other information. Heuristic inquiry involves exploring the subjective experience of a particular phenomenon within a purposive sample of individuals.
Heuristic researchers do not separate the individual from the experience but rather focus their exploration on the essential nature of the relationship or interaction between the two. The affect heuristic and decision making: the affect heuristic is a type of mental shortcut in which people make decisions that are heavily influenced by their current emotions. Essentially, your affect (a psychological term for emotional response) plays a critical role in the choices and decisions you make.
Generally speaking, a heuristic is a “rule of thumb,” or a good guide to follow when making decisions. In computer science, a heuristic has a similar meaning, but refers specifically to algorithms. As more sample data is tested, it becomes easier to create an efficient algorithm to process similar types of data.
The heuristic method is based on the psychological principle of “trial and error”. Anchoring bias occurs when you interpret subsequent information around an initial anchor. A heuristic is a problem-solving approach used to accelerate the process of finding a satisfactory solution. It is a mental shortcut that eases cognitive load when making decisions.
It is a good guess, often not made with strong reasoning. The heuristic function is a way to inform the search about the direction to a goal. It provides an informed way to guess which neighbor of a node will lead to a goal. There is nothing magical about a heuristic function. It must use only information that can be readily obtained about a node.
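A conventional minimal example of such a function is Manhattan distance on a grid, used by informed search algorithms such as A* to rank neighbours; the coordinates below are invented:--

def manhattan_distance(node, goal):
    # Estimated cost from node to goal, using only readily
    # available information: the node's own coordinates.
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Rank the neighbours of (2, 3) by estimated closeness to the goal (5, 5).
neighbours = [(1, 3), (3, 3), (2, 2), (2, 4)]
print(sorted(neighbours, key=lambda n: manhattan_distance(n, (5, 5))))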
When our heuristics fail to produce a correct judgment, it can sometimes result in a cognitive bias, which is the tendency to draw an incorrect conclusion in a certain circumstance based on cognitive factors. This mismatch between our judgment and reality is the result of a bias. “A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”
The
heuristic-systematic model is a theory of persuasion that suggests attitudes
can change in two fundamentally different ways. ... This simplified form of
attitude judgment is called heuristic processing, and it involves using rules
of thumb known as heuristics to decide what one's attitudes should be.
The accuracy-effort trade-off theory states that humans and animals use heuristics because processing every piece of information that comes into the brain takes time and effort. With heuristics, the brain can make faster and more efficient decisions, albeit at the cost of accuracy.
Heuristic evaluation is a thorough assessment of a product’s user interface, and its purpose is to detect usability issues that may occur when users interact with a product, and to identify ways to resolve them. The heuristic evaluation process is conducted against a predetermined set of usability principles known as heuristics.
The classic example of heuristic search methods is the travelling salesman problem: generate a possible solution, which can either be a point in the problem space or a path from the initial state, then test to see if this possible solution is a real solution by comparing the state reached with the set of goal states.
A heuristic function is a function that estimates the cost of getting from one place to another (from the current state to the goal state).
It is used in a decision process to try to make the best choice from a list of possibilities (to choose the move more likely to lead to the goal state). A heuristic is a mental shortcut that allows people to solve problems and make judgments quickly and efficiently. These rule-of-thumb strategies shorten decision-making time and allow people to function without constantly stopping to think about their next course of action. It sounds fancy, but you might know a heuristic as a “rule of thumb.”
Derived from a Greek word that means “to discover,” heuristic describes a rule or a method that comes from experience and helps you think through things, like the process of elimination, or the process of trial and error. The heuristic method is also a pure discovery method of learning science, independent of the teacher: the teacher sets a problem for the students and then stands aside while they discover the answer. The method requires the students to solve a number of problems experimentally.
AN EXAMPLE OF WHERE
HEURISTICS GOES WRONG IS WHENEVER YOU BELIEVE THAT CORRELATION IMPLIES
CAUSATION.
Correlation is a relationship or connection between two variables where, whenever one changes, the other is likely to also change. But a change in one variable doesn’t necessarily cause the other to change. That’s a correlation, but it’s not causation.
Spurious correlations: a spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidence or the presence of a certain third, unseen factor.
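A tiny synthetic simulation of a spurious correlation: two variables that never influence each other still correlate strongly because one hidden factor drives both (all numbers invented):--

import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=5000)          # the unseen third factor (e.g. hot weather)
ice_cream_sales = hidden + rng.normal(scale=0.3, size=5000)
drownings = hidden + rng.normal(scale=0.3, size=5000)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(round(r, 2))   # close to 0.9, yet neither variable causes the other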
To find causation, we
need explainability. In the era of artificial intelligence and big data
analysis, this topic becomes increasingly more important. AIs make data-based
recommendations. Sometimes, humans can’t see any reason for those
recommendations except that an AI made them. In other words, they lack
explainability.
Correlation is about analyzing static historical datasets and considering the correlations that might exist between observations and outcomes. However, predictions don’t change a system; that’s decision making. To make software development decisions, we need to understand the difference it would make in how a system evolves if you take an action or don’t take action. Decision making requires a causal understanding of the impact of an action.
AI technology can’t
take its previous learnings from one context and apply them to another
situation. Sure, it can identify correlations. But it has no idea which one
caused the other, or if that’s even the case.
“Too much of deep learning has focused on correlation without causation, and that often leaves deep learning systems at a loss when they are tested on conditions that aren’t quite the same as the ones they were trained on.” Giving AI the ability to understand causality would unlock a second renaissance for the technology.
Most machine learning-based data science focuses on predicting outcomes, not understanding causality. Current approaches to machine learning assume that the trained AI system will be applied to the same kind of data as the training data. In real life that is often not the case.
When humans rationalize the world, we often
think in terms of cause and effect — if we understand why something happened,
we can change our behavior to improve future outcomes. Causal inference is a
statistical tool that enables our AI and machine learning algorithms to reason
in similar ways.
Let’s say we’re looking
at data from a network of servers. We’re interested in understanding how
changes in our network settings affect latency, so we use causal inference to
proactively choose our settings based on this knowledge.
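A minimal sketch in that spirit, with an entirely invented data-generating process: a setting that truly lowers latency by 5 ms looks harmful in a naive comparison because heavily loaded servers adopt it more often; stratifying by load recovers the true effect:--

import numpy as np

rng = np.random.default_rng(1)
load = rng.integers(0, 2, size=10000)                 # confounder: low/high load
setting = (rng.random(10000) < 0.2 + 0.6 * load).astype(int)
latency = 50 + 30 * load - 5 * setting + rng.normal(0, 2, size=10000)

naive = latency[setting == 1].mean() - latency[setting == 0].mean()

# Adjust for the confounder: average per-stratum effects, weighted by stratum size.
effects, weights = [], []
for s in (0, 1):
    m = load == s
    effects.append(latency[m & (setting == 1)].mean()
                   - latency[m & (setting == 0)].mean())
    weights.append(m.mean())
adjusted = np.average(effects, weights=weights)

print(round(naive, 1), round(adjusted, 1))  # naive is biased; adjusted is near -5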
The future of AI depends on building systems with notions of causality. While machine learning methods excel at describing the real world, they are often lacking in understanding the world: simple perturbations hardly noticed by humans can cause state-of-the-art deep learning systems to misclassify road signs. The formal modeling and logic needed to support seemingly fundamental causal reasoning have been lacking in data science and AI.
Deep learning, and most machine learning (ML) methods for that matter, learn patterns or associations from data. On its own, observational data can only convey associations between variables: hence the familiar adage that correlation does not imply causation.
In correlated data, a
pair of variables are related in that one thing is likely to change when the
other does. This relationship might lead us to assume that a change to one
thing causes the change in the other.
The human brain simplifies incoming
information, so we can make sense of it. Our brains often do that by making
assumptions about things based on slight relationships, or bias. But that
thinking process isn’t foolproof. An example is when we mistake correlation for
causation. Bias can make us conclude that one thing must cause another if both
change in the same way at the same time
There are many forms of cognitive bias or irrational thinking patterns that often lead to faulty conclusions and economic decisions.
These types of cognitive bias are some reasons why people assume false causations in business and marketing: putting too much weight on your own personal beliefs, over-confidence, and other unproven sources of information often produces an illusion of causality.
It’s easy to
watch correlated data change in tandem and assume that one thing causes the
other. That’s because our brains are wired for cause-relation cognitive bias.
We need to make sense of large amounts of incoming data, so our brain
simplifies it. This process is called heuristics, and it’s often useful and
accurate. But not always.
Causation takes a step further than correlation. It says any change in the value of one variable will cause a change in the value of another variable; one variable makes the other happen. It is also referred to as cause and effect: causation is when an observed event or action appears to have caused a second event or action.
False causality is to falsely assume, when two events occur together, that one must have caused the other. While causation and correlation can exist at the same time, correlation does not imply causation. Causation explicitly applies to cases where action A causes outcome B. Correlation, on the other hand, is simply a relationship: action A relates to action B, but one event doesn’t necessarily cause the other event to happen.
A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples that employ heuristics include using trial and error, a rule of thumb, an educated guess, an intuitive judgment, a guesstimate, profiling, or common sense. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution.
Heuristics are simple strategies or mental processes that humans, animals, organizations and even some machines use to quickly form judgments, make decisions, and find solutions to complex problems. This happens when an individual, human or otherwise, focuses on the most relevant aspects of a problem or situation to formulate a solution.
Those involved in
making these decisions can also be influenced by similar past experiences as
well. This is the reason that people do not generally stress test every chair
or surface they might choose to sit on. Heuristic processes can easily be
confused with the use of human logic, and probability. While these processes
share some characteristics with heuristics, the assertion that heuristics are
not as accurate as logic and probability misses the crucial distinction between
risk and uncertainty.
Risk refers to situations where all possible outcomes of an action are known and taken into account when making a decision. In contrast, uncertainty refers to situations where pieces of information are unknown or unknowable. In situations of risk, heuristics face an accuracy-effort trade-off where their simplified decision process leads to reduced accuracy. In contrast, situations of uncertainty allow for less-is-more effects, where systematically ignoring (or in some cases lacking) information leads to more accurate inferences.
Less-is-more effects have been shown experimentally, analytically, and by computer simulations. Though both of these mental processes (logic and probability) are similar to heuristics, they are not the same: heuristics are concerned with finding a solution that is “good enough” to satisfy a need. They serve as a quick mental reference for everyday experiences and decisions.
Understanding cause and
effect would make existing AI systems smarter and more efficient. A robot that
understands that dropping things causes them to break would not need to toss
dozens of vases onto the floor to see what happens to them. “Humans don’t need to live through many examples of accidents to drive prudently. They can just imagine accidents, in order to prepare mentally if one did actually happen.”
Algorithms are not the
computer code. They are just the instructions which give a clear idea to write
the computer code.
Qualities of a good
algorithm:---
Input and output should
be defined precisely.
Each step in the
algorithm should be clear and unambiguous.
Algorithms should be the most effective among many different ways to solve a problem.
An algorithm shouldn't
have computer code. Instead, the algorithm should be written in such a way
that, it can be used in different programming languages.
An algorithm is a series of steps for solving a problem, completing a task or performing a calculation. Algorithms are usually executed by computer programs. An algorithm is a precise step-by-step plan for a computational procedure that possibly begins with an input value and yields an output value in a finite number of steps, while code is the concrete expression of such a plan in a particular programming language.
An algorithm is an idea, a process, a recipe, etc. It’s a sequence of steps, a procedure, that can be used to produce a result. It is independent of any programming language. When you then implement that algorithm by coding it in some language, that is code; code is the practical realization of an algorithm. Algorithms can be expressed using natural language, flowcharts, etc.
As a job title,
“programmer”, “software developer”, and “software engineer” can mean whatever a
given company wants them to mean. Some places call the people who create
software “engineers”, others call them “developers”, and still others call them
“programmers”. A programmer programs. A
software developer develops software.
A software engineer engineers software
systems. They’re 3 different hats that
the same people often wear at different times. In the US the word “engineer”
often has an actual legal meaning with licensing requirements one is expected
to meet before applying the term to oneself.
In order to become an actual licensed software engineer, you would have to get a degree in software engineering. Broadly, though, all three titles describe someone who designs and creates a program from scratch.
Autonomous and adaptive
analytics: this technique is the most complex and uses
forward looking predictive analytics models that automatically learn from
transactions and update results in real time using ML. This includes the
ability to self-generate new algorithmic models with suggested insights for
future tasks, based on correlations and patterns in the data that the system
has identified and on growing volumes of Big Data.
The Adaptive Learning
process monitors and learns the new changes made to the input and output values
and their associated characteristics. In addition, it learns from the events
that may alter the market behavior in real time and, hence, maintains its
accuracy at all times. Adaptive AI accepts the feedback received from the operating
environment and acts on it to make data-informed predictions.
Advanced analytics
often uses ML to gain deeper insights, make predictions or generate recommendations
for business purposes. This is done by means of suitable ML algorithms able to recognise
common patterns in large volumes of data via a learning (or ‘training’)
process. The result of the learning process is a model, which represents what
the algorithm has learnt from the training
data and which can be used to make predictions based on new input data
In machine learning, a
common task is the study and construction of algorithms that can learn from and
make predictions on data. Such algorithms work by making data-driven
predictions or decisions, through building a mathematical model from input
data. The data used to build the final model usually comes from multiple
datasets.
Model training (also called ‘learning’) consists in feeding the training dataset to the algorithm to build the model. The challenge of this phase is to build a model that fits the given dataset with sufficient accuracy and has a good generalisation capability on unseen data, i.e. a model that is a good fit.
An ML model generalises
well when its predictions on unseen observations are of a similar quality (accuracy)
to those made on test data.
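A small sketch of that check, using an illustrative dataset and model: compare accuracy on the training data against accuracy on data the model has never seen:--

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train accuracy:", model.score(X_tr, y_tr))  # often near 1.0
print("test accuracy:", model.score(X_te, y_te))   # the number that matters
# A large gap between the two is the classic sign of overfitting.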
After the training
phase, models are calibrated/tuned by adjusting their hyper-parameters. Examples
of hyper-parameters are the depth of the tree in a decision tree algorithm, the
number of trees in a random forest algorithm, the number of clusters k in a
k-means algorithm, the number of layers
in a neural network, etc.
Selection of incorrect hyper-parameters can result in
the failure of the model.
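A rough sketch of what this tuning looks like in practice: try each candidate hyper-parameter value, score the resulting model on held-out validation data, and keep the best. The function trainAndScore below is a hypothetical stand-in, not a real library call.

// A minimal hyper-parameter search over decision-tree depth.
// trainAndScore is a hypothetical stand-in for training a model of the
// given depth and returning its validation accuracy.
package main

import "fmt"

func trainAndScore(depth int) float64 {
	// Placeholder: a toy curve that peaks at depth 4, standing in for
	// real training plus evaluation on a validation set.
	return 1.0 - 0.02*float64((depth-4)*(depth-4))
}

func main() {
	bestDepth, bestScore := 0, -1.0
	for _, depth := range []int{2, 3, 4, 5, 6, 8} {
		if score := trainAndScore(depth); score > bestScore {
			bestDepth, bestScore = depth, score
		}
	}
	fmt.Printf("best depth=%d validation accuracy=%.2f\n", bestDepth, bestScore)
}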
Data mining of Big Data
is achieved by using AI programming that works with algorithms to find patterns
in the Big Data that are noteworthy. This provides insights that help
management make better-informed decisions.
Upwards of 70% of the time and energy spent in AI projects is consumed by preparing the data to be consumed by the ML algorithms. This data-management work can be broken down into a handful of phases, including: ---
Data discovery: What
data do you need, and how do you find it?
Data analysis: Is the
data useful for ML? How do you validate it?
Data preparation: What
format is the data in? How do you transform it into the correct format?
Data modeling: What
algorithm can you use to model the data?
Model evaluation: Is
the model working as expected? What changes do you need to make?
Before an algorithm can
train on a piece of data, it must be converted into a machine-readable format
that it understands, which is another critical step in the AI and ML process.
Scientists may have to encode the data a certain way, or use bucketization or
binning techniques to represent data values in certain ranges.
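A toy sketch of binning, with made-up age ranges: continuous values are mapped to a small set of range labels so the algorithm sees discrete categories instead of raw numbers.

// Bucketize ages into ranges -- a simple form of binning.
package main

import "fmt"

func ageBucket(age int) string {
	switch {
	case age < 18:
		return "0-17"
	case age < 35:
		return "18-34"
	case age < 60:
		return "35-59"
	default:
		return "60+"
	}
}

func main() {
	for _, age := range []int{7, 22, 41, 73} {
		fmt.Printf("age %d -> bucket %s\n", age, ageBucket(age))
	}
}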
Once the data is fully prepped and in the correct format, the data scientist can write an algorithm, or use a pre-written one, to model the data. This starts the productionization phase of the AI and ML journey, which has its own set of challenges.
If we want text
analysis software to perform desired tasks, we need to teach machine learning
algorithms how to analyze, understand and derive meaning from text. But how?
The simple answer is by tagging examples of text. Once a machine has enough
examples of tagged text to work with, algorithms are able to start
differentiating and making associations between pieces of text, and can even
begin to make predictions.
Clustering
Text clustering algorithms are able to make sense of and group vast quantities of unstructured data. Although less accurate than classification algorithms, clustering algorithms are faster to implement because you don't need to tag examples to train models. That means these algorithms mine information and make predictions without the use of training data, otherwise known as unsupervised machine learning.
Google is a great
example of how clustering works. When you search for a term on Google, have you
ever wondered how it takes just seconds to pull up relevant results? Google's
algorithm breaks down unstructured data from web pages and groups pages into
clusters around a set of similar words or n-grams (all possible combinations of
adjacent words or letters in a text). So, the pages from the cluster that
contains a higher count of words or n-grams relevant to the search query will
appear first within the results.
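A small sketch of how word-level n-grams are extracted (the example text is made up): every run of n adjacent words becomes one gram, the building blocks that clustering can group pages around.

// Extract all word-level n-grams from a text.
package main

import (
	"fmt"
	"strings"
)

func ngrams(text string, n int) []string {
	words := strings.Fields(strings.ToLower(text))
	var grams []string
	for i := 0; i+n <= len(words); i++ {
		grams = append(grams, strings.Join(words[i:i+n], " "))
	}
	return grams
}

func main() {
	fmt.Println(ngrams("what artificial intelligence cannot do", 2))
	// [what artificial artificial intelligence intelligence cannot cannot do]
}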
Consistent Criteria
Humans make errors.
Fact. And the more tedious and time-consuming a task is, the more errors that
are made. By using automated text analysis models that have been trained,
algorithms are able to analyze, understand, and sort through data more
accurately than humans.
We are influenced by personal experiences, thoughts, and
beliefs when reading texts, whereas algorithms are influenced by the
information they've received. By applying the same criteria to analyze all
data, algorithms are able to deliver more consistent and reliable data.
AI works by combining
large amounts of data with fast, iterative processing and intelligent
algorithms, allowing the software to learn automatically from patterns or
features in the data.
The naive Bayes algorithm uses a slight modification of the Bayes formula to determine the probability that certain words belong to text of a specific type. The 'naive' part of naive Bayes comes from the fact that the algorithm treats each word independently: a text is considered simply as a set of words, so the wider context of the words is lost.
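A minimal naive Bayes sketch, assuming two toy classes ("spam"/"ham") and made-up training texts; each word is treated independently, exactly the simplification described above. Real systems work the same way on far larger corpora.

// Tiny naive Bayes text classifier with Laplace (add-one) smoothing.
package main

import (
	"fmt"
	"math"
	"strings"
)

type model struct {
	wordCounts map[string]map[string]float64 // class -> word -> count
	totals     map[string]float64            // class -> total word count
	docs       map[string]float64            // class -> number of documents
	vocab      map[string]bool
	nDocs      float64
}

func train(examples map[string][]string) *model {
	m := &model{
		wordCounts: map[string]map[string]float64{},
		totals:     map[string]float64{},
		docs:       map[string]float64{},
		vocab:      map[string]bool{},
	}
	for class, texts := range examples {
		m.wordCounts[class] = map[string]float64{}
		for _, t := range texts {
			m.docs[class]++
			m.nDocs++
			for _, w := range strings.Fields(strings.ToLower(t)) {
				m.wordCounts[class][w]++
				m.totals[class]++
				m.vocab[w] = true
			}
		}
	}
	return m
}

// classify returns the class with the highest log-probability.
func (m *model) classify(text string) string {
	best, bestLP := "", math.Inf(-1)
	v := float64(len(m.vocab))
	for class := range m.docs {
		lp := math.Log(m.docs[class] / m.nDocs) // class prior
		for _, w := range strings.Fields(strings.ToLower(text)) {
			lp += math.Log((m.wordCounts[class][w] + 1) / (m.totals[class] + v))
		}
		if lp > bestLP {
			best, bestLP = class, lp
		}
	}
	return best
}

func main() {
	m := train(map[string][]string{
		"spam": {"win money now", "free money offer"},
		"ham":  {"meeting at noon", "lunch at noon tomorrow"},
	})
	fmt.Println(m.classify("free money"))   // spam
	fmt.Println(m.classify("noon meeting")) // ham
}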
Algorithms are being
developed to help pilot cars, guide
weapons, perform tedious or dangerous work, engage in conversations, recommend products, improve collaboration, and make consequential decisions in areas such as jurisprudence, lending, medicine, university admissions, and hiring.
But while the technologies enabling AI have been rapidly advancing, the societal impacts are only beginning to be fathomed. If automatic techniques or naïve statistical methodologies are used to train algorithms on data that contain inaccuracies or biases, those algorithms themselves might well reflect those inaccuracies or biases.
A machine learning algorithm is only safe and reliable
to the extent that it is trained on (1) sufficient volumes of data that (2) are
suitably representative of the scenarios in which the algorithm is to be deployed.
Artificial intelligence
is leading us toward a new algorithmic warfare battlefield that has no
boundaries or borders, may or may not have humans involved, and will be
impossible to understand and perhaps control across the human ecosystem in
cyberspace, geospace, and space (CGS).
As a result, the very idea of the
weaponization of artificial intelligence, where a weapon system that, once
activated across CGS, can select and engage human and non-human targets without
further intervention by a human designer or operator, is causing great concern.
The thought of an
intelligent machine or machine intelligence to have the ability to perform any
projected warfare task without any human involvement and intervention -- using only
the interaction of its embedded sensors, computer programming, and algorithms
in the human environment and ecosystem -- is becoming a reality that cannot be
ignored anymore.
The rapid development of AI weaponization is evident across
the board: navigating and utilizing unmanned naval, aerial, and terrain
vehicles, producing collateral-damage estimations, deploying “fire-and-forget”
missile systems and using stationary systems to automate everything from
personnel systems and equipment maintenance to the deployment of surveillance
drones, robots and more are all examples.
algorithms are by no
means secure—nor are they immune to bugs, malware, bias, and manipulation. And,
since machine learning uses machines to train other machines, what happens if
there is malware or manipulation of the training data?
While security risks are
everywhere, connected devices increase the ability of cybersecurity breaches
from remote locations and because the code is opaque, security is very complex.
So, when AI goes to war with other AI (irrespective of whether that is for cyber-security, geo-security, or space-security), the ongoing cybersecurity challenges will add monumental risks to the future of humanity and the human ecosystem in CGS.
There are weapons that use artificial intelligence in active
use today, including some that can search, select and engage targets on their
own, attributes often associated with defining what constitutes a lethal
autonomous weapon system (a.k.a. a killer drone/ robot).
One example is the Israel Aerospace Industries Harpy, an armed drone that can loiter high in the skies surveying large areas of land until it detects an enemy radar signal, at which point it crashes into the source of the radar, destroying both itself and the target.
The weapon needs no specific target to be launched, and a human is not necessary to its lethal decision making. A spooky strand of research seeks to build algorithms that tip human analysts off to such targets by singling out cars driving suspiciously around a surveilled city.
An actor with darker motives might use algorithms as a convenient veil for intentionally insidious decisions.
Automation’s vast
potential to make humans more efficient extends to the very human act of
committing war crimes.
If one system offers up
a faulty conclusion, it could be easy to catch the mistake before it does any
harm. But these algorithms won’t act alone. A few months ago, the U.S. Navy
tested a network of three AI systems, mounted on a satellite and two different
airplanes, that collaboratively found an enemy ship and decided which vessel in
the Navy’s fleet was best placed to destroy it, as well as what missile it
should use. The one human involved in this kill chain was a commanding officer
on the chosen destroyer, whose only job was to give the order to fire.
Eventually, the lead-up
to a strike may involve dozens or hundreds of separate algorithms, each with a
different job, passing findings not just to human overseers but also from
machine to machine. Mistakes could accrue; human judgment and machine
estimations would be impossible to parse from one another; and the results
could be wildly unpredictable.
Militaries have long
argued that AI will make conflict more precise. But that argument has a dark
flipside: An algorithm designed to minimize civilian casualties could just as
easily be used to calculate how civilian harm could be maximized.
Governments must develop rigorous and transparent mechanisms to audit algorithms that go bad, as well as the humans who employ algorithms badly.
The drones are required to communicate with each other and even respond to each other. Therefore they require sensors and they require decision-making algorithms; this is the 'AI and autonomy' stage.
AI Worldwide for Warfare
Unlike human intelligence,
AI algorithms do not possess common sense, conceptual understanding, notions of
cause-and-effect, or intuitive physics.
Their lack of common sense and their inability to generalize or to consider context make AI algorithms "brittle," meaning that they cannot handle unexpected scenarios or unfamiliar situations.
Causes that may lead to unfairness in machine learning include:---
• Biases already
included in the datasets used for learning, which are based on biased device measurements,
historically biased human decisions, erroneous reports or other reasons.
Machine learning algorithms are essentially designed to replicate these biases.
• Biases caused by
missing data, such as missing values or sample/selection biases, which result
in datasets that are not representative
of the target population.
• Biases that stem from
algorithmic objectives, which aim at minimizing overall aggregated prediction
errors and therefore benefit majority groups over minorities.
• Biases caused by
"proxy" attributes for sensitive attributes. Sensitive attributes
differentiate privileged and unprivileged groups, such as race, gender and age,
and are typically not legitimate for use in decision making. Proxy attributes
are non-sensitive attributes that can be exploited to derive sensitive
attributes. If the dataset contains proxy attributes, the machine learning algorithm can implicitly make decisions based on the sensitive attributes under the cover of using presumably legitimate attributes.
Algorithms make
predictions that mirror past patterns. This new data is then fed back into the
technological model, creating a pernicious feedback loop in which social
injustice is not only replicated, but in fact further entrenched.
It is also
worth noting that the same communities that have been overpoliced have been
severely neglected, both intentionally and unintentionally, in many other areas
of social and political life. While they are overrepresented in crime rate data
sets, they are underrepresented in many other data sets
CJI BOBDE WANTS TO USE
AI IN INDIAN JUDICIARY.. WARNING, NEVER GIVE A COMPUTER TO A MONKEY
Today 1 in 33 Americans are under some form of
correctional supervision.
Risk assessment tools
are designed to do one thing: take in the details of a defendant’s profile and
spit out a recidivism score—a single number estimating the likelihood that he
or she will reoffend. A judge then factors that score into a myriad of
decisions that can determine what type of rehabilitation services particular
defendants should receive, whether they should be held in jail before trial,
and how severe their sentences should be. A low score paves the way for a
kinder fate. A high score does precisely the opposite.
RISK ASSESSMENT CAN
NEVER BE OBJECTIVE . SUBJECTIVE (
CONSCIOUS HUMAN ) MUST HAVE VETO POWER
The logic for using
such algorithmic tools is that if you can accurately predict criminal behavior,
you can allocate resources accordingly, whether for rehabilitation or for
prison sentences. Judges are making STUPID OBJECTIVE decisions on the basis of data-driven recommendations and not SUBJECTIVE DECISIONS based on wisdom.
Machine-learning
algorithms use statistics to find patterns in data. So if you feed it
historical crime data, it will pick out the patterns associated with crime. But
those patterns are statistical correlations—nowhere near the same as
causations.
If an algorithm found, for example, that low income was correlated
with high recidivism, it would leave you none the wiser about whether low
income actually caused crime. But this is precisely what risk assessment tools
do: they turn correlative insights into causal scoring mechanisms.
Now populations that
have historically been disproportionately targeted by law
enforcement—especially low-income and minority communities—are at risk of being
slapped with high recidivism scores like Palestinians in Israel.
The algorithm could
amplify and perpetuate embedded biases and generate even more bias-tainted data
to feed a vicious cycle. Because most risk assessment algorithms are proprietary,
it’s also impossible to interrogate their decisions or hold them accountable.
Humans have empathy built in because we evolved to be social animals. An artificial intelligence built from the ground up needn't come with empathy. If we don't make sure to build empathy into such AI at the onset, it could be dangerous for us.
WE WILL NOT ALLOW STARE
DECISIS TO BE SUSTAINED BY AI. STARE DECISIS IS ILLEGAL AND HAS BEEN EXPLOITED
BY TRAITOR JUDGES IN FOREIGN PAYROLL TO CREATE THE NAXAL RED CORRIDOR AND CAUSE
ETHNIC CLEANSING OF KASHMIRI PANDITS..
WHEN IT SUITS THE AGENDA OF THESE TRAITOR
FOREIGN PAYROLL JUDGES THEY DECLARE THAT MAJORITARIANISM HARMS DEMOCRACY..
ARTIFICIAL INTELLIGENCE
IS WHEN THE CODE IS SELF AWARE .. ANY
IDIOT KNOWS THAT THIS IS IMPOSSIBLE . THE
GOAL OF AI IS TO TAKE OVER DECISIONS THAT WE USUALLY TAKE AS HUMANS.
HUMANS HAVE THINGS A COMPUTER CAN NEVER HAVE.. A SUBCONSCIOUS BRAIN LOBE, REM SLEEP WHICH BACKS UP BETWEEN RIGHT/ LEFT BRAIN LOBES AND FROM AAKASHA BANK, A GUT WHICH INTUITS, 30 TRILLION BODY CELLS WHICH HOLD MEMORY, A VAGUS NERVE , AN AMYGDALA , 73% WATER IN BRAIN FOR MEMORY, 10 BILLION MILES ORGANIC DNA MOBIUS WIRING ETC.
The software is able to understand itself, and automatically understand and respond differently to different situations. AI today is stupid AI. It's just dumb algorithms that try to do clever stuff
AI CAN BE USED TO HELP
OUT STUPID COLLEGIUM JUDICIARY WITH BODMAS
https://timesofindia.indiatimes.com/city/mumbai/mumbai-noida-kerala-raids-blow-lid-off-phone-racket/articleshow/74025539.cms
Complex AI algorithms
allow organizations to unlock insights from data that were previously
unattainable. However, the blackbox nature of these systems means it isn't
straightforward for business users to understand the logic behind the decision.
Even the data scientists that created the model may have trouble explaining why
their algorithm made a particular decision.
One way to achieve better model transparency is to adopt models from specific families that are considered explainable. Examples of these families include linear models, decision trees, rule sets, decision sets, generalized additive models and case-based reasoning methods.
It is useful to
distinguish between the concepts of procedural and distributive fairness.
Procedural justice concerns the fairness and the transparency of the processes by which decisions are made, and may be contrasted with distributive justice (fairness in the distribution of rights or resources) and retributive justice (fairness in the punishment of wrongs). Procedural justice is the idea of fairness in the processes that resolve disputes and allocate resources.
One aspect of
procedural justice is related to discussions of the administration of justice
and legal proceedings. Procedural fairness is concerned with the procedures
used by a decision maker, rather than the actual outcome reached. It requires a
fair and proper procedure be used when making a decision.
Procedural justice is
when employees perceive that the processes that lead to important outcomes are
fair and just. For example, the process of how a manager gives raises will be
seen as unfair if he only gives raises to his friends..
A policy (or an
algorithm) is said to be procedurally fair if it is fair independently of the outcomes it
produces.
Procedural fairness is
related to the legal concept of due
process. A policy (or an algorithm) is said to
be distributively fair if it produces fair outcomes.
Most ethicists take a distributive view of justice, in which a procedure's fairness rests largely on the outcomes it produces. On the other hand, people often tend toward a more procedural view, in some cases caring more about being treated fairly than about the outcomes they experience. AI algorithms often attract criticism for being distributively unfair.
Distributive justice
concerns the socially just allocation of goods. Often contrasted with just
process, which is concerned with the administration of law, distributive
justice concentrates on outcomes. This subject has been given considerable
attention in philosophy and the social sciences.
Distributive justice
theory argues that societies have a duty to individuals in need and that all
individuals have a duty to help others in need. Proponents of distributive
justice link it to human rights.
Five types of
distributive norm are defined --
Equality: Regardless of
their inputs, all group members should be given an equal share of the
rewards/costs. Equality supports that someone who contributes 20% of the
group's resources should receive as much as someone who contributes 60%.
Equity: Members'
outcomes should be based upon their inputs. Therefore, an individual who has
invested a large amount of input (e.g. time, money, energy) should receive more
from the group than someone who has contributed very little. Members of large
groups prefer to base allocations of rewards and costs on equity.
Power: Those with more authority, status, or control over the group should receive more than those in lower-level positions.
Need: Those in greatest need should be provided with the resources needed to meet those needs. These individuals should be given more resources than those who already possess them, regardless of their input.
Responsibility: Group
members who have the most should share their resources with those who have
less.
Substantive fairness means there is a fair or valid reason for the employer to dismiss an employee. Employers have the right to expect a certain standard of work and conduct from an employee and, in turn, an employee should be protected from arbitrary action.
Principles of restorative justice: crime causes harm and justice should focus on repairing that harm. The people most affected by the crime should be able to participate in its resolution.
Data Science is the study of all types of data, structured or unstructured, to gain business insights. It makes use of various techniques and algorithms that help to collect, store and analyze business data and gain valuable information. It allows business organizations to collect and organize the data, which they then use to analyze trends and present the gained information within the organization.
The professionals who apply all the techniques in data and build
models on top of that data are Data Scientists. Data Scientists use various
scientific algorithms that help to develop business strategies and make
necessary changes and improvements in the business.
A confusion matrix is a
table that is often used to describe the performance of a classification model
(or “classifier”) on a set of test data for which the true values are known. It
allows the visualization of the performance of an algorithm.
It is a matrix
where we put the actual values in the columns and the predicted values in the
rows. Thus the intersection of rows and columns becomes our metrics. A
Confusion matrix is the comparison summary of the predicted results and the
actual results in any classification problem use case.
The comparison summary
is extremely necessary to determine the performance of the model after it is
trained with some training data. There are various components that exist when
we create a confusion matrix. The components are mentioned below
Positive (P): The predicted result is Positive (example: the image is a cat).
Negative (N): The predicted result is Negative (example: the image is not a cat).
True Positive (TP): Both the predicted and the actual value are 1 (True).
True Negative (TN): Both the predicted and the actual value are 0 (False).
False Negative (FN): The predicted value is 0 (Negative) while the actual value is 1. The values do not match, hence it is a False Negative.
False Positive (FP): The predicted value is 1 (Positive) while the actual value is 0. Again the values do not match, hence it is a False Positive.
Accuracy and Components
of Confusion Matrix
After the confusion
matrix is created and we determine all the components values, it becomes quite
easy for us to calculate the accuracy. So, let us have a look at the components
to understand this better.
Classification Accuracy:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
In this formula, the sum of TP (True Positive) and TN (True Negative) is the count of correctly predicted results. Hence, to calculate the accuracy, we divide that sum by the total of all four components (and multiply by 100 for a percentage).
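A short sketch that builds the four components from made-up predicted/actual labels and computes accuracy with the formula above.

// Build a 2x2 confusion matrix (1 = "cat", 0 = "not cat") and compute
// accuracy = (TP + TN) / (TP + TN + FP + FN).
package main

import "fmt"

func main() {
	actual := []int{1, 0, 1, 1, 0, 0, 1, 0}
	predicted := []int{1, 0, 0, 1, 0, 1, 1, 0}

	var tp, tn, fp, fn int
	for i := range actual {
		switch {
		case predicted[i] == 1 && actual[i] == 1:
			tp++
		case predicted[i] == 0 && actual[i] == 0:
			tn++
		case predicted[i] == 1 && actual[i] == 0:
			fp++
		default: // predicted 0, actual 1
			fn++
		}
	}
	accuracy := float64(tp+tn) / float64(tp+tn+fp+fn)
	fmt.Printf("TP=%d TN=%d FP=%d FN=%d accuracy=%.2f\n", tp, tn, fp, fn, accuracy)
	// TP=3 TN=3 FP=1 FN=1 accuracy=0.75
}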
In the field of machine
learning and specifically the problem of statistical classification, a
confusion matrix, also known as an error matrix, is a specific table layout
that allows visualization of the performance of an algorithm, typically a
supervised learning one (in unsupervised learning it is usually called a
matching matrix).
Each row of the matrix represents the instances in a
predicted class while each column represents the instances in an actual class
(or vice versa). The name stems from the fact that it makes it easy to see if
the system is confusing two classes (i.e. commonly mislabeling one as another).
It is a special kind of
contingency table, with two dimensions ("actual" and
"predicted"), and identical sets of "classes" in both
dimensions (each combination of dimension and class is a variable in the
contingency table).
A false positive is an error resulting from a
test or algorithm indicating the
presence of a condition (for instance,
being a fraudster, having a rare disease,
or being a terrorist) that does not in fact
exist. If the overall population-level base rate is low, then even the most sophisticated algorithms often yield more false positives than true positives. This is known as the "false positives paradox."
To illustrate, suppose that each year a country faces only a small handful of commercial airline terrorist threats, and that the best available algorithm homes in on a few hundred suspects out of millions of passengers.
Though the list is tiny relative to the overall population, the great majority of people on it will be innocent.
Furthermore, because no algorithm is perfectly accurate, it is quite possible that this list won't contain all of the actual terrorists, a type of error called a false negative. The tradeoff is that expanding the list of suspects to reduce the likelihood of false negatives will increase the number of false positives, and therefore the risk of harming or treating unfairly still more innocent people. Analogous scenarios involve selecting algorithmic thresholds for deciding when to treat people at risk of a disease. There is generally a tradeoff between correctly identifying as many people with the disease as possible versus avoiding potentially risky treatments of healthy people.
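A worked illustration of the paradox with assumed numbers (10 real threats among 1,000,000 passengers, 99% of threats caught, 0.1% of innocents wrongly flagged; none of these figures are from the source):

// Expected true vs. false positives under a low base rate.
package main

import "fmt"

func main() {
	passengers := 1_000_000.0
	threats := 10.0
	sensitivity := 0.99        // fraction of real threats flagged
	falsePositiveRate := 0.001 // fraction of innocents flagged

	truePositives := threats * sensitivity
	falsePositives := (passengers - threats) * falsePositiveRate

	fmt.Printf("expected true positives:  %.0f\n", truePositives)  // ~10
	fmt.Printf("expected false positives: %.0f\n", falsePositives) // ~1000
}

Even with a seemingly tiny 0.1% error rate, innocent passengers on the list outnumber real threats by roughly 100 to 1.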
In the regulation of
algorithms, particularly artificial intelligence and its subfield of machine
learning, a right to explanation (or right to an explanation) is a right to be
given an explanation for an output of the algorithm. Such rights primarily
refer to individual rights to be given an explanation for decisions that
significantly affect an individual, particularly legally or financially.
However, today
algorithms take a myriad of decisions without consulting humans: they have
become the decision makers, and humans have been pushed into an artefact shaped
by technology.
When it comes to AI, "explanation" could mean several things: 1) how an algorithm works or how the system functions; 2) the factors or data that resulted in a decision by the algorithm or system that impacted an individual (a data subject).
An algorithm is basically code developed to carry out a specific process. It's a process or set of rules to be followed in calculations or other problem-solving operations, usually by a computer.
Algorithmic trading is heavily used by banks and trading institutions.. Algorithmic trading encompasses trading systems that are heavily reliant on complex mathematical formulas and high-speed computer programs to determine trading strategies. It is a trading system that utilizes very advanced mathematical models for making transaction decisions in the financial markets.
Algorithmic trading is a process to buy or sell a security based on some pre-defined set of rules which are backtested on historical data. These rules can be based on technical analysis, charts, indicators or even stock fundamentals.
In algorithmic trading:--
Inputs: quotes, trades
in the stock market, liquidity opportunities
Output: intelligent
trading decisions.
Algorithmic trading is
a method of executing orders using automated pre-programmed trading
instructions accounting for variables such as time, price, and volume . This
type of trading was developed to make use of the speed and data processing
advantages that computers have over human traders.
Algorithmic trading is the use of computers and computer-based models to initiate trades and match buyers with sellers; hence, it can be used for making markets and also for proprietary trading.
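A toy sketch of one such pre-defined rule, a moving-average crossover, run over made-up historical prices. This is an illustration of the backtesting idea only, not a real trading system: real systems add transaction costs, risk limits and proper out-of-sample testing.

// Buy when the short moving average crosses above the long one;
// sell on the reverse cross.
package main

import "fmt"

// sma returns the simple moving average of the n prices ending at index i.
func sma(prices []float64, i, n int) float64 {
	sum := 0.0
	for _, p := range prices[i-n+1 : i+1] {
		sum += p
	}
	return sum / float64(n)
}

func main() {
	prices := []float64{100, 101, 99, 98, 97, 99, 102, 105, 107, 106, 103, 100}
	short, long := 3, 5
	holding := false
	for i := long; i < len(prices); i++ {
		fast, slow := sma(prices, i, short), sma(prices, i, long)
		if fast > slow && !holding {
			holding = true
			fmt.Printf("day %d: BUY at %.0f\n", i, prices[i])
		} else if fast < slow && holding {
			holding = false
			fmt.Printf("day %d: SELL at %.0f\n", i, prices[i])
		}
	}
}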
Artificial intelligence refers to a class of computer programs designed to solve problems requiring inferential reasoning, decision making based on incomplete or uncertain information, classification, optimization, and perception.
On the most inflexible end of the spectrum are AI that make decisions based on preprogrammed rules from which they make inferences or evaluate options. On the most flexible end are modern AI programs that are based on machine-learning algorithms that can learn from data.
Such AI would, in contrast to rule-based AI, examine countless chess games, for example, and dynamically find patterns that it then uses to make moves; it would come up with its own scoring formula. For this sort of AI, there are no pre-programmed rules about how to solve the problem at hand, but rather only rules about how to learn from data.
Many modern
machine-learning algorithms share their pedigree with the vast array of
statistical inference tools that are employed
broadly in the physical and social sciences. They may, for example, use methods that minimize prediction error,
adjust weights assigned to various
variables, or optimize both in tandem.
For instance, a machine-learning algorithm may
be given three pieces of data, such as a
person’s height, weight, and age, and then charged with the task of predicting the time in which each person in a
dataset can run a mile.
The machine-learning algorithm would look through hundreds or thousands of examples of people with various heights, weights and ages and their mile times to devise a model. One simple way to do so would be to assign some coefficient or weight to each piece of data to predict the mile time. For example:--
Predicted Mile Time = A
x Height + B x Weight + C x Age
The algorithm may
continue to adjust A, B and C as it goes
through the examples it has been given to look for the values for A, B and C that result in the smallest error —
that is, the difference between each person in the training data’s actual mile
time and the algorithm’s predicted mile time.
This example uses the same framework as a least-squares regression, in which the square of the error of the predicting equation is minimized. Many machine-learning algorithms are directed at a similar task but use more mathematically sophisticated methods to determine weights for each variable or to minimize some defined error or "loss function."
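A minimal sketch of how an algorithm can adjust A, B and C in practice: batch gradient descent on the squared error of Predicted = A x Height + B x Weight + C x Age. The five training examples and the learning rate are invented purely for illustration.

// Fit A, B, C by gradient descent; the squared error shrinks step by step.
package main

import "fmt"

// mse returns the mean squared error of a*h + b*w + c*g against y.
func mse(a, b, c float64, h, w, g, y []float64) float64 {
	sum := 0.0
	for i := range y {
		d := a*h[i] + b*w[i] + c*g[i] - y[i]
		sum += d * d
	}
	return sum / float64(len(y))
}

func main() {
	h := []float64{1.80, 1.65, 1.75, 1.90, 1.60} // heights, metres
	w := []float64{80, 60, 75, 95, 55}           // weights, kg
	g := []float64{25, 30, 45, 22, 60}           // ages, years
	y := []float64{7.0, 8.5, 9.0, 6.5, 11.0}     // mile times, minutes

	a, b, c := 0.0, 0.0, 0.0
	lr := 0.00001 // small learning rate keeps the updates stable
	fmt.Printf("error before training: %.2f\n", mse(a, b, c, h, w, g, y))
	for step := 0; step < 300000; step++ {
		var ga, gb, gc float64
		for i := range y {
			e := a*h[i] + b*w[i] + c*g[i] - y[i]
			ga += e * h[i]
			gb += e * w[i]
			gc += e * g[i]
		}
		a, b, c = a-lr*ga, b-lr*gb, c-lr*gc
	}
	fmt.Printf("error after training:  %.2f\n", mse(a, b, c, h, w, g, y))
	fmt.Printf("A=%.3f B=%.3f C=%.3f\n", a, b, c)
}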
Machine-learning
algorithms are often given training sets of data to process. Once the algorithm trains on that data, it is
then tested with a new set of data used
for validation. The goal of tuning a machine-learning algorithm is to ensure
that the trained model will generalize, meaning that it has predictive power
when given a test dataset (and
ultimately live data).
Machine-learning
algorithms commonly (though not necessarily) make predictions through
categorization. These “classifiers” are
able to, for example, look at millions of credit reports and classify individuals
into separate credit risk categories or process images and separate the ones
containing faces from the ones that do not.
If a machine-learning algorithm is properly generalizing, it will correctly predict the appropriate classification for a particular data point. One possible reason AI may be a black box to humans is that it relies on machine-learning algorithms that internalize data in ways that are not easily audited or understood by humans. A lack of transparency may arise from the complexity of the algorithm's structure, such as with a deep neural network, which consists of thousands of artificial neurons working together in a diffuse way to solve a problem.
This reason for AI being a black box is referred to as
“complexity.” The lack of transparency may arise because the AI is
using a machine-learning algorithm that
relies on geometric relationships that humans cannot visualize, such as with support vector
machines. This reason for AI being a
black box is referred to as “dimensionality.”
The deep neural network is based
on a mathematical model called the
artificial neuron. While originally based on a simplistic model of the neurons in human and animal brains, the
artificial neuron is not meant to be a
computer-based simulation of a biological neuron. Instead, the goal of the
artificial neuron is to achieve the same ability to learn from experience as with the biological
neuron.
The ability to connect layers of
neural networks has yielded staggering results. What has emerged is the so-called “deep”
architecture of artificial neurons, referred to as Deep Neural Networks, where
several layers of interconnected neurons are used to progressively find
patterns in data or to make logical or
relational connections between data points.
Deep networks of artificial neurons have been used
to recognize images, even detecting
cancer at levels of accuracy equalling that of experienced doctors. No single
neuron in these networks encodes a distinct part of the decision-making process.
The thousands or
hundreds of thousands of neurons work
together to arrive at a decision. A layer or cluster of neurons may encode some
feature extracted from the data (e.g., an eye
or an arm in a photograph), but often what is encoded will not be intelligible
to human beings.
The net result is akin
to the way one “knows” how to ride a bike. Although one can explain the process
descriptively or even provide detailed steps, that information is unlikely to
help someone who has never ridden one before to balance on two wheels. One
learns to ride a bike by attempting to do so over and over again and develops
an intuitive understanding.
Because a neural network is learning from experience,
its decision-making process is likewise intuitive. Its knowledge cannot in most
cases be reduced to a set of instructions, nor can one in most cases point to
any neuron or group of neurons to determine what the system found interesting or important.
Its power comes from “connectionism,” the
notion that a large number of simple computational units can together perform computationally
sophisticated tasks. The complexity of
the large multi-layered networks of neurons is what gives rise to the Black Box
Problem.
Some machine-learning
algorithms are opaque to human beings because they arrive at decisions by
looking at many variables at once and finding geometric patterns among those
variables that humans cannot visualize.
Modern AI systems are built on machine-learning algorithms that are in many cases functionally black boxes to humans. At present, this poses an immediate threat to the intent and causation tests that appear in virtually every field of law. These tests, which assess what is foreseeable or the basis for decisions, will be ineffective when applied to black-box AI.
The solution to this
problem should not be strict liability or a regulatory framework of granularly
defined transparency standards for AI
design and use. Both solutions risk stifling innovation and erecting significant barriers to entry for smaller
firms.
A sliding scale system is a
better approach. It adapts the current regime of causation and intent tests, relaxing their requirements for
liability when AI is permitted to operate
autonomously or when AI lacks transparency, while preserving traditional intent
and causation tests when humans supervise AI or when the AI is transparent.
The definition of interpretable AI isn't exactly black and white. To have a productive conversation it's essential to be clear what model interpretability means to different stakeholders.
Deep learning is
fundamentally blind to cause and effect.
Unlike a real doctor, a deep learning algorithm cannot explain why a particular image may suggest disease. This means deep learning must be used cautiously in critical situations. Deep learning’s pattern recognition capabilities have revolutionized technology.
But if it can't understand cause and effect, AI will never reach its true potential, because it will never come close to replicating human intelligence. Machine learning applications involving deep learning are usually trained to accomplish a highly specific task, such as recognizing spoken commands or images of human faces.
Since its explosion in popularity in 2012, deep learning’s unparalleled
ability to recognize patterns in data has led to some incredibly important
uses, like uncovering fraud in financial activity and identifying indications
of cancer in x-ray scans.
GO OR GOLANG ( MY ELDER SON USES THIS )
Go is syntactically similar to C, but with memory safety, garbage collection, structural typing, and CSP-style concurrency. Go is a procedural, functional and concurrent language.
Go is ideal for system programming.. Go supports concurrency..
There are two major implementations:--
Google's self-hosting compiler toolchain targeting multiple
operating systems, mobile devices, and WebAssembly.
gccgo, a GCC frontend.
A third-party transpiler GopherJS compiles Go to
JavaScript for front-end web development.
Go is an open-source programming language developed by Google. It is a statically-typed compiled language. The language supports concurrent programming and allows multiple processes to run simultaneously; this is achieved using channels, goroutines, etc. Go has garbage collection, which handles memory management, and it allows the deferred execution of functions.
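A minimal illustration of the concurrency primitives just mentioned: goroutines do work concurrently, a channel collects their results, and defer schedules cleanup to run when the function returns.

// Goroutines, channels and deferred execution in a few lines.
package main

import (
	"fmt"
	"sync"
)

func main() {
	defer fmt.Println("done") // deferred: runs last, when main returns

	results := make(chan int)
	var wg sync.WaitGroup

	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(n int) { // each worker runs as its own goroutine
			defer wg.Done()
			results <- n * n
		}(i)
	}

	go func() { // close the channel once every worker has finished
		wg.Wait()
		close(results)
	}()

	for r := range results { // receive until the channel is closed
		fmt.Println("received", r)
	}
}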
Here are important reasons for using the Go language:--
It allows you to use static linking to combine all dependency libraries and modules into one single binary file based on the type of the OS and architecture.
The Go language performs more efficiently because of its CPU scalability and concurrency model.
The Go language offers support for multiple libraries and tools, so it does not require any 3rd party library.
It's a statically, strongly typed programming language with a great way to handle errors.
Here are cons/drawbacks of using the Go language:--
Go does not support generics.
Many APIs do not have an officially supported Go SDK.
Poor library support.
Fractured dependency management.
- REGARDING TORTURE / MURDER OF IB OFFICER ANKIT SHARMA BY AN ILLEGAL IMMIGRANT MUSLIM IMMIGRANT MOB LED BY AAP LEADER TAHIR HUSSAIN..
NOTHING WILL HAPPEN AS TRAITOR JUDICIARY AND MEDIA ARE ON THE SIDE OF THE ILLEGAL IMMIGRANT MUSLIMS..
INDIAN COPS / SECURITY AGENCIES HAVE NO PRIDE OR HONOR.
THIS IS WHY CJI GOGOI WAS ABLE TO TREAT CBI CHIEF LIKE A CLASS DUNCE AND MAKE HIM SIT IN A CORNER OF THE COURT ROOM THE WHOLE DAY..
IN REALITY, IT SHOULD HAVE BEEN THE OTHER WAY AROUND.. CBI DIRECTOR WHO HAS TAKEN THE OATH IS NOT SMALL FRY..
NOW-- LET US COMPARE INDIA WITH USA.
MIGUEL ÁNGEL FÉLIX GALLARDO WAS A MEXICAN COCAINE DRUG LORD WHO RAN GUNS (FOR PRESIDENT RONALD REAGAN AND CIA DIRECTOR GEORGE HERBERT WALKER BUSH SENIOR ) TO THE CONTRAS IN NICARAGUA .
MERCENARY CONTRAS WERE CREATED/ ARMED/ FUNDED BY CIA TO TOPPLE THE PATRIOT SANDINISTA GOVT OF NICARAGUA WHO KICKED OUT JEWISH OLIGARCHS WHO WERE LOOTING THE NATION..
PATRIOT SANDINISTAS WERE DUBBED AS BAAAAD COMMIES BY PRESIDENT REAGAN.
US PRESIDENT AND CIA WERE UNDERCUTTING AMERICAN DEA DEPT.. THEY GOT HUGE BRIBES FROM COCAINE DRUG LORD FELIX BYPASSING OFFICIAL PROTOCOL OF US CONGRESS SANCTION OF FUNDS..
BUT DRUG LORD MIGUEL ÁNGEL FÉLIX GALLARDO MADE A BIG MISTAKE..
HE ORDERED THE KILLING OF US DEA AGENT KIKI CAMARENA ( WITH BLESSINGS OF BUSH/ REAGAN ) WHO EXHUMED THE “GUNS FOR COCAINE” CONSPIRACY…
AS SOON AS THIS HAPPENED DEA WENT AGAINST THE WHITE HOUSE AND CIA.. THEY CREATED THEIR OWN UNOFFICIAL ROGUE HIT SQUAD TO TAKE REVENGE ..
https://en.wikipedia.org/wiki/Kiki_Camarena
DEA FOUND OUT DIRECT CIA / WHITE HOUSE INVOLVEMENT IN THE TORTURE AND MURDER OF DEA AGENT KIKI CAMARENA ..
THE ROGUE DEA SQUAD TORTURED AND KILLED WHOEVER WERE INVOLVED IN THE TORTURE OF KIKI CAMARENA ..
DEA EXTRACTED CONFESSIONS FROM A MEXICAN DOCTOR AND A MEXICAN POLICE OFFICER AFTER TORTURING THEM.
A US CIA OFFICER FELIX RODRIGUEZ HAD OVERSEEN THE ENTIRE TORTURE AND KILLING OF THE DEA AGENT ON ORDERS FROM BUSH SR AND REAGAN..
FELIX RODRIGUEZ RAN THE CONTRA SUPPLY DEPOT .. DEAD MEN TELL NO TALES..
REAGAN AND BUSH SR ORDERED THE ARREST OF DRUG LORD MIGUEL ÁNGEL FÉLIX GALLARDO TO SHUT DOWN THIS CASE BEFORE SHIT HIT THE FAN.
https://en.wikipedia.org/wiki/Miguel_%C3%81ngel_F%C3%A9lix_Gallardo
IN USA , IF YOU KILL A COP HIS MATES GO ROGUE TAKE REVENGE .. AND THIS IS UNOFFICIALLY ALLOWED.. THIS IS WHY NOBODY KILLS A COP OR CIA/ DEA OFFICERS IN US..
IT IS A DISGRACE THAT A MUSLIM SHAHRUKH POINTED A GUN AT A COP FROM SIX INCHES RANGE.. AND HE IS STILL ALIVE.. IN ANY OTHER NATION, HE WOULD HAVE BEEN SHOT DEAD ON THE SPOT.. NO JUDGES – NO JURY..
TRAITOR JUDGES IN FOREIGN PAYROLL CAUSED ETHNIC CLEANSING OF KASHMIRI PANDITS...
TRAITOR JUDGES CREATED THE NAXAL RED CORRIDOR...
THEY NEVER EMPATHISED WITH SLAIN JAWANS AND THEIR FAMILIES ..
ILLEGAL COLLEGIUM JUDICIARY HAS NO POWERS TO INTERFERE WITH BHARATMATAs INTERNAL/ EXTERNAL SECURITY....
OUR JUDICIARY IS PACKED WITH TRAITOR JUDGES IN FOREIGN PAYROLL.
WE DONT NEED THE "VISHWAAS" OF TRAITOR MUSLIMS IN PAKISTANI ISI PAYROLL.
https://ajitvadakayil.blogspot.com/2020/01/we-people-are-done-with-illegal.html
BHARATMATA IS BEING BLED BY TRAITOR JUDGES , BENAMI MEDIA JOURNALISTS AND PAKISTANI ISI FUNDED NGOs.
WE THE PEOPLE WATCH IN UTTER FRUSTRATION HOW ILLEGAL COLLEGIUM JUDICIARY IS TREATING “WE THE PEOPLE” AND THE CONSTITUTION IN CONTEMPT..
Capt ajit vadakayil
..
- THIS IS ONE OF THE MOST IMPORTANT COMMENTS EVER MADE ON THIS PLANET.
############################################################
SOMEBODY CALLED ME UP AND ASKED ME
CAPTAIN
WHY IS US DEMOCRAT PRESIDENTIAL CANDIDATE BERNIE SANDERS SUPPORTING MUSLIMS AND RUNNING DOWN HINDUS IN THIS DELHI RIOTS..
WELL
TO UNDERSTAND THIS , LEARN THE FOLLOWING SHOCKING TRUTHS
BERNIE SANDERS IS A COMMIE JEW..
COMMIE JEWS HAVE TAKEN LEADERSHIP OF MUSLIMS ALL OVER THE PLANET SINCE THE PAST SEVERAL CENTURIES ..
JEW ROTHSCHILD CREATED "SEAMLESS BOUNDARIES" IN EU.. WITH COMMON CURRENCY. AFTER THAT HUNDREDS OF THOUSANDS OF SYRIAN MUSLIMS HAVE BEEN ALLOWED TO FLOOD INTO EU AND SCANDINAVIA USING A DROWNED SYRIAN BOY AS A TRIGGER..
WHY?
THE REASON IS ALMOST ALL EU AND SCANDINAVIAN NATIONS ARE RULED BY CRYPTO JEWS..
THE IDEA IS TO SCREW CHRISTIANS / HINDUS USING MUSLIMS ( NAIVE RIGHT TO LEFT WRITING PARTY ) WHOSE TOP LEADERS WILL BE JEWS..
HAVE YOU SEEN A SINGLE TOP MUSLIMS LEADER IN INDIA OR ABROAD WITH ZEBIBA PRAYER MARK IN THE RIGHT PLACE, IF THEY HAVE IT AT ALL?.
http://ajitvadakayil.blogspot.com/2011/07/cracked-heels-and-prayer-marks-capt.html
THE JEWISH DEEP STATE IN ISTANBUL CREATED THE SUNNI/ SHIA DIVIDE..
http://ajitvadakayil.blogspot.com/2019/09/istanbul-deep-seat-of-jewish-deep-state.html
ALL MADRASSAS ON THIS PLANET HAVE BEEN CREATED AND FUNDED BY JEW ROTHSCHILD.. WAHABBI/ SALAFI FUNDS ARE JEWISH..
ISLAMIC BANKING IS JEWISH -- PAKISTANI BANK BCCI WAS A JEWISH BANK. ( I WILL WRITE A FULL POST ON THIS BANK LATER )..
JEW ROTHSCHILD CREATED THE JEWISH PATHAN CLAN ( PASHTUNS )..AND INDUCTED THEM INTO INDIA..
PAKISTANI IMRAN KHAN IS A JEW .. MALALA YOUSAFZAI IS A JEWESS. JINNAH WAS A JEW..
CRYPTO JEW AFRIDI CLAN WAS CREATED TO CONTROL THE BOLAN/ KHYBER PASSES..
EX-PRESIDENT ZAKIR HUSSAIN WHO HAS BEEN SEEN PRAYING IN THE SYNAGOGUE OF HAN MARKET DELHI IS A AFRIDI JEW. HIS GRANDSON IS SALMAN KHURSHID..
ALMOST ALL MAJOR INDIAN NATIONAL CONGRESS MUSLIMS LEADERS WERE JEWS.. MALANA ABDUL KALAM AZAD WAS A QURESHI JEW..
KHAN ABDUL GAFFAR KHAN WAS A JEW.. WE GAVE HIM BHARAT RATNA..
ALMOST ALL MAJOR MUSLIM KINGDOMS IN 1947 WERE RULED BY CRYPTO JEWS.. TIPU SULTAN WAS A JEW. NIZAM OF HYDERABAD IS A JEW..
ISIS WAS CREATED/ ARMED/ FUNDED BY JEWS.. HUNDREDS OF HARDCORE ISLAMIC ISIS SUICIDE BOMBERS NEVER KILLED A SINGLE JEW-- WHY?
THE LAST 70% OF OTTOMAN EMPIRE SULTANS WERE JEWS.. THE MOTHER OF SULTAN MEHMED II WHO FINISHED OFF THE CHRISTIAN ROMAN EMPIRE AT CONSTANTINOPLE WAS A JEWESS.
ALL MOGHUL EMPERORS AFTER HUMAYUN WERE JEWS.. HUMAYUN'S WIFE WAS A JEWESSS.
ALL OIL RICH MUSLIM KINGDOMS OF MIDDLE EAST ARE RULED BY JEW KINGS..
ROTHSCHILD USED JEW LAWRENCE OF ARABIA FOR THIS.. LAWRENCE OF ARABIA WAS MARRIED TO THE DAUGHTER OF FRENCH JEW MICHAEL HARRY NEDOU.. LATER SHEIKH ABDULLAH MARRIED THIS WOMAN AKBAR JEHAN..
WHEN OIL GETS OVER THESE JEW KINGS WILL HAND OVER POWER TO THE ARAB PEOPLE SAYING "DEMOCARASSYY WERY GOOODD" AND RUN AWAY TO THE WEST WHERE THEY HAVE SALTED AWAY THEIR ILL GOTTEN WEALTH ..
POET IQBAL WAS A JEW
JAUHAR ALI BROTHERS WERE JEWS..
JEWS CREATED THE AMU AND JAMIA UNIVERSITIES..
JEW ROTHSCHILD BUILT ALL THE MOSQUES IN KANPUR -- ATTACHED TO HIS TANNERIES.. THESE WERE THE FIRST MOSQUES TO BE FITTED WITH LOUDSPEAKERS, WHICH WERE BASICALLY "RISE AND SHINE REVEILLE CALL" TO START WORKING..
ROTHSCHILD ELIMINATED OTTOMAN SULTANS AND USED JEW MUSTAFA KEMAL ATATURK TO RULE .
ALL YOUNG TURKS WERE JEWS.. IMAGINE THE STUPID INDIAN MEDIA WERE CALLING CHANDRASHEKHAR/ RAHUL GANDHI/ PILOT/ SCINDIA AS YOUNG TURKS..
YOUNG TURK JEWS CONDUCTED THE ARMENIAN CHRISTIAN GENOCIDE AND BLAMED IT ON MUSLIMS..
http://ajitvadakayil.blogspot.com/2015/04/lawrence-of-arabia-part-two-capt-ajit.html
CONTINUED TO 2--
BELOW: SPOT THE CUNT CONTEST
SPOT THE "PIECE OF SHIT" CONTEST
THIS POST IS NOW CONTINUED TO PART 14 BELOW--
https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do_27.html
CAPT AJIT VADAKAYIL
..
I JUST WATCHED MALAYALAM MOVIE MAMANGAM ON AMAZON PRIME VIDEOS..
THE DIRECTOR OF THE MOVIE HAS JUST REPEATED ROTHSCHILDs LIES ..
https://en.wikipedia.org/wiki/Mamankam_festival
https://en.wikipedia.org/wiki/Mamangam_(2019_film)
MAMANKAM IS A FEST WHICH HAS GONE ON FOR MILLENNIA ..
6000 YEARS AGO PARASHURAMA CAME TO MAMANKAM TIRUNAVAYA WITH THE KERALA NAMBOODIRIS FROM SARASWATI RIVER BANKS ( WHO KNEW VEDAS ON ORAL ROUTE ) ALONG WITH 4 VEDAS AND 108 UPANISHADS..
THE GREEK SCHOLARS IN THE ERA OF SOCRATES WHO STUDIED AT KODUNGALLUR UNIVERSITY HAVE WRITTEN ABOUT THIS FEST..
http://ajitvadakayil.blogspot.com/2019/10/perumal-title-of-calicut-thiyya-kings.html
THE SON OF EMPEROR MARCUS AURELIUS, EMPEROR COMMODUS , " DID A MAMANKAM " AT THE ROMAN COLOSSEUM ON THE 12TH YEAR OF HIS RULE AND DIED IN 192 AD, AS HE CHEATED.. THIS WAS RECORDED BY SENATOR CASSIUS DIO.
ETRUSCAN BLOODLINE SENATORS FROM ROME CAME TO WITNESS THIS 12 YEARLY EVENT, AFTER ALL IT WAS HOMECOMING FOR THEM..
THE FIRST KING OF ROME WAS RAMA , A KERALA THIYYA ( ETRUSCAN ) WHO WAS CROWNED ON 21ST APRIL 830 BC...
http://ajitvadakayil.blogspot.com/2019/08/secrets-of-roman-pantheon-inaugurated.html
IN THE MALAYALAM MOVIE "MAMANGAM" THE KING OF CALICUT WAS SHOWN AS A COWARDLY ASSHOLE.. HE RAN AWAY FROM THE PODIUM WHEN HERO MAMOOTY CHALLENGED HIM AFTER KILLING HUNDREDS OF HIS SOLDIERS..
24 YEARS LATER MAMMOTYs 12 YEAR OLD NEPHEW AGAIN LANDED ON THE PODIUM AFTER KILLING HUNDREDS OF THE CALICUT KINGs SOLDIERS.. THE MOVIE SHOWS THE BOY TAKING A SWIPE AT THE KING, WHO WAS SAVED BY A HANGING BRASS LAMP. THIS IS A WIKIPEDIA LIE..
AND OF COURSE THE COWARDLY CUNT KING OF CALICUT RUNS AWAY FROM THE PODIUM ---LUNGI UTTHAAKE, CHARKHA GHUMAAAKE.. SO WHAT ELSE IS NEW
THE MOVIE SHOWED A DISCLAIMER THAT IT IS A "FICTIONAL MOVIE" -- TO PREVENT GETTING SUED FOR DISTORTING TRUE HISTORY..
capt ajit vadakayil
..
THE INDIAN POLICE DOES NOT WORK FOR THE JUDICIARY..
WE THE PEOPLE DEMAND FROM MODI/ AMIT SHAH--- TELL THE POLICE THAT THEY DONT HAVE TO FEAR THE JUDICIARY OR TAKE ORDERS FROM THE JUDGES..
JUST WHO ARE THESE LAWYERS TURNED JUDGES ? WHY SHOULD AN IPS POLICE OFFICER WHO HAS TAKEN THE OATH, FEAR THESE BOTTOM DREGS OF THE SCHOOL CEREBRAL BARREL?
WHY WAS CJI GOGOI ALLOWED TO TREAT CBI BOSS LIKE A CLASS DUNCE ?
MODI , AMIT SHAH , PRASAD , JAVEDEKAR AND AJIT DOVAL HAVE FAILED INDIA..
THEY HAVE ALLOWED ILLEGAL COLLEGIUM JUDICIARY TO TREAT "WE THE PEOPLE" AND THE CONSTITUTION WITH UTTER CONTEMPT..
https://ajitvadakayil.blogspot.com/2020/01/we-people-are-done-with-illegal.html
Dear Captain Sir,
Sent emails to PM Modi, Amit Shah and Delhi Police. I wasn't active from last 40 days. Please forgive me. I was really upset with PM Modi and the system for no action. Needed a break to balance out Mind and settle personal issues. My apologies. Thanks and Gratitude
FYI:
Received reply from Delhi Police -
Subject: ABOLISH ALL STUDENTS UNIONS IN COLLEGES.
Thanks for your E-mail. Your E-mail has been acknowledged by Commissioner of Police, Delhi and the same has been referred to the Deputy Commissioner of Police/HQ (his office telephone No. 23762616 Extn. 30044 & and Email-ID is dcp.hq@delhipolice.gov.in) for further necessary action vide Dy. No. is 1409/E-mail dated 08.01.2020.
dcp.hq@delhipolice.gov.in, dcp-vigilance-dl@nic.in
https://en.wikipedia.org/wiki/A._N._Shamseer
POOR PINARAYI VIJAYAN DOES NOT KNOW THAT MLA AN SHAMSEER ( WHO LOOKED LIKE AK GOPALANs TWIN BROTHER AT THE AGE OF 18 ) IS PLANNING TO TAKE OVER HIS PARTY..
SHEIKH ABDULLAH AND NEHRU LOOKED LIKE TWIN BROTHERS AT THE AGE OF 18..
GITA GOPINATH ( IMF CHIEF ) IS ROTHSCHILD AGENT COMMIE AK GOPALANs BLOOD RELATIVE..
AN SHAMSEER HAS BEEN GROOMED.. IT IS NOT BY ACCIDENT THAT HE STUDIED ( LIKE PINARAYI VIJAYAN ) AT BRENNAN COLLEGE KANNUR..
IT IS NOT BY ACCIDENT THAT AN SHAMSEER NOWADAYS APPEARS IN ENGLISH TV CHANNEL DEBATES..
POOR PINARAYI VIJAYAN.. HE CANNOT FIGHT WHITE COMMIE JEWS..
capt ajit vadakayil
..
WHEN GODHRA RIOTS HAPPENED IT WAS CRYSTAL CLEAR THAT THE MURDER OF INNOCENT WOMEN AND CHILDREN IN A RAILWAY COMPARTMENT TRIGGERED A KNEE JERK REACTION FROM HINDUS AGAINST MUSLIMS..
ReplyDeleteHINDUS WERE SICK AND TIRED OF BEING THE COWARDS WHEN IT COMES TO MUSLIM SADISM..
THE HINDU HATING ITALIAN WAITRESS HIRED A TRAITOR BONG JUDGE UC BANNERJEE AND PINNED THE BLAME ON HINDU WOMEN COOKING INSIDE THE TRAIN AT GODHRA RAILWAY STATION..
I HAVE CURSED THIS BASTARD -- HIS SOUL WILL BE IN THE FIRST ASTRAL LAYER FOREVER..
https://en.wikipedia.org/wiki/Umesh_Chandra_Banerjee
http://ajitvadakayil.blogspot.com/2012/11/babri-masjid-demolition-mughal-emperor.html
capt ajit vadakayil
..
CULTURAL TERRORIST MODI LIED IN HIS NAMASTE TRUMP ADDRESS " SANSKRIT IS ONE OF THE OLDEST LANGUAGES ON THE PLANET"..
IN FRONT OF CHINESE PRESIDENT XI, MODI LIED THAT TAMIL IS THE OLDEST LANGUAGE ON THE PLANET-- JUST FOR TAMIL VOTES..
THIS SHALL BE ON RECORD WHEN I WRITE MODIs LEGACY.
VEDAS SRUTIS WERE DOWNLOADED 400 CENTURIES AGO..AND PENNED DOWN 70 CENTURIES AGO..
THIS DOES NOT MEAN THAT SANSKRIT WAS INVENTED 70 CENTURIES AGO...
BHARTRHARI EXISTED 9000 YEARS AGO..
LAWS OF MANU WERE PENNED 9300 YEARS AGO..
PANINI WROTE HIS SANSKRIT TREATISE 7500 YEARS AGO.. 500 YEARS BEFORE VEDAS WERE PENNED DOWN..
KING MAHABALI WROTE IN SANSKRIT AND MALAYALAM 12,000 YEARS AGO..
http://ajitvadakayil.blogspot.com/2019/07/secrets-of-12000-year-old-machu-picchu.html
capt ajit vadakayil
..
google for
“ WHAT ARTIFICIAL INTELLIGENCE CANNOT DO “
See where my post lands up among 171 million posts the search engine coughs up.
With my readership, my blogs must come on page 1 .. the reason it does not is because Google uses algorithms to sink my posts..
If you use a 24 hour or a week time filter—my posts are just SUNK.
If I support homosexuality -- or write that Hindus are savages and India rose as a civilization only after Islamic invasion ( like that crypto Jew traitor from Calicut Raziman TV ) all will be fine..
Raziman TV is in the database of desh drohis.
It will affect his future generations ( like what happened to Raghuram Rajan whose father was kicked out of RAW ) ..
Congratulations are in order..
https://www.quora.com/Who-is-Captain-Ajit-Vadakayil
LAW MINISTER BIHARI KAYASTHA RAVI SHANKAR PRASAD IS THE CABINET MINISTER FOR COMMUNICATIONS, ELECTRONICS & INFORMATION TECHNOLOGY..
HE IS NOT QUALIFIED FOR THIS POST..
THIS USELESS FELLOW HAS DONE NOTHING TILL TODAY FOR THE LAW DEPT ( HARNESS JUDICIARY PLAYING GOD ) OR FOR THE IT DEPT..
MODI LOVES PRASAD AS HE GIVES ENDLESS EGO MASSAGE ..
https://twitter.com/TheChetak/status/1232667843407368192/photo/1
ReplyDeleteROTHSCHLDs AGENTS JEWESS GRETA THUNBERG AND JEWESS MALALA YOUSAFZAI..
CAPT AJIT VADAKAYIL WARNS NSA AJIT DOVAL
YOUR JOB IS NOT TO RUSH AROUND DELHI, PLAYING TO THE GALLERY, PACIFYING VICTIMS OF VIOLENCE..
YOU JOB AS NSA IS TO FIND OUT THE CULPRITS WHO SET DELHI ON FIRE-- ALMOST ALL ARE OUTSIDERS AND ILLEGAL IMMIGRANTS ( MUSLIM ) FUNDED BY PAKISTANI ISI..
ANTI-CAA IS JUST A FRONT..THESE ARE DESH DROHI MUSLIMS --
MOST OF THEM ARE ILLEGAL IMMIGRANTS WHO HAVE THE SUPPORT OF DESH DROHI FOREIGN PAYROLL JUDICIARY AND THEIR LACKEYS THE BENAMI MEDIA..
THIS FELLOW AJIT DOVAL AT THE AGE OF 75 IS DYEING HIS HAIR AND TRYING TO LOOK LIKE A BOLLYWOOD HERO..
CORONAVIRUS WILL AFFECT SOUTH KOREA AND CHINA-
ReplyDeleteIT WILL NOT AFFECT INDIANS DUE TO OUR DIET HABITS..
MAN MUST NOT EAT CARNIVOROUS ANIMALS..IN CHINA AND KOREA THEY EAT DOGS..
Thanks
CONTINUED FROM 1--
WHEN MAJOR MUSLIM LEADERS DIE IN INDIA, WHITE JEWS ATTEND THE FUNERAL --HIDING THEIR FACES.. WHY? WHEN OWAISIs FATHER DIED WE KNOW HOW MANY TOP WHITE JEW LEADERS FROM ISRAEL AND USA ATTENDED....
http://ajitvadakayil.blogspot.com/2013/04/razakers-of-mim-operation-polo-to-annex.html
AL JAZEERA SUPPORTS INDIAN MUSLIMS ALWAYS.. THIS IS A QATARI JEWISH CHANNEL..
MF HUSSAIN WAS A JEW.. HE PAINTED HINDU GODS HAVING SEXUAL ORGIES.. HE WAS GIVEN REFUGE BY THE JEWISH ROYAL FAMILY OF QATAR..
EDUCATED INDIAN MUSLIMS MUST SAVE THEIR OWN CLAN.. THEY MUST NOT ALLOW INDIAN ISLAM TO BE HIJACKED BY RIGHT TO LEFT WRITING ILLITERATES WHO ARE CONTROLLED BY JEWS.
THERE ARE MORE MUSLIMS IN INDIA THAN IN PAKISTAN..
http://ajitvadakayil.blogspot.com/2013/01/the-kashmir-conflict-capt-ajit-vadakayil.html
OMAR ABDULLAHs MOTHER IS A WHITE JEWESS MOLLY.. HIS GRANDMOTHER IS JEWESS AKBAR JEHAN.. IN WHAT WAY IS HE MUSLIM ? HIS GREAT GRANDFATHER WAS CRYPTO JEW GHIAZUDDIN GHAZI..
IN 1947 ROTHSCHILD WHO RULED INDIA WANTED INDIAN LANDMASS TO BE DIVIDED AMONG THREE JEWISH FAMILIES.. JINNAH/ NEHRU/ ABDULLAH.
THE LAST POLICE CHIEF OF THE LAST JEW MOGHUL EMPEROR WAS A JEW WITH A MUSLIM NAME..GHIAZUDDIN GHAZI ... HIS BUNGALOW WAS NAMED "YAMUNA NEHR"..
THE SURNAME "NEHRU" IS NOT KASHMIRI HINDU.. IT JUST MEAN A FELLOW LIVING IN "NEHR BUNGALOW".
http://ajitvadakayil.blogspot.com/2012/12/sir-muhammed-iqbal-knighted-for.html
KHAN MARKET DELHI OPIUM RETAIL SALES WAS CONTROLLED BY PASHTUN KHAN JEWS..
THE MUMBAI MAFIA WAS CONTROLLED BY JEW PATHANS..
MOST BOLLYWOOD KHANS ARE JEWS, WHOSE ANCESTORS WERE OPIUM DRUG STREET RUNNERS..
http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html
I AM AT THE 60% REVELATIONS SEGMENT.. I HAVE NOT YET REVEALED 2% OF SHOCKING TRUTHS REGARDING JEWS .
REAL SHOCKERS WILL COME ONLY AFTER THE 98 % SEGMENT..
capt ajit vadakayil
..
Dear captain,
DeleteFor thousands of years thiyyas ruled this planet....now for the past 2000 years jews rule by deciet...
When will thiyya rule will be back....
My dad used to say in my younger days thiyya means theeran....after reading your blogs now im an proud thiyya hindu indian...
Could feel alot of positivity and awareness in the society....long way to go..
Namaskaram
Thanks,
Jaihind jaibharat.
ReplyDeletehttps://www.koimoi.com/box-office/shubh-mangal-zyada-saavdhan-box-office-day-6/
I ASK MY READERS TO BOYCOTT THIS PERVERTED GAY MOVIE "SHUBH MANGAL ZYADA SAAVDHAN" WHICH WON THE PRAISES OF CHILDLESS MODI AND TRUMP
INDIA HAS THE LEAST HOMOSEXUALITY ON THIS PLANET BY PERCENTAGE..
I HAVE SEEN THIS PLANET FOR 40 YEARS-- I KNOW..
99% HOMOSEXUALS ARE PEDOPHILES.
http://ajitvadakayil.blogspot.com/2018/09/supreme-court-strikes-down-sec-377.html
FOR THROWING ACID ON POLICE/ HINDUS WE WANT TAHIR HUSSAIN TO GET LIFE IMPRISONMENT..
ReplyDeleteREGARDING TORTURE / MURDER OF IB OFFICER ANKIT SHARMA BY AN ILLEGAL IMMIGRANT MUSLIM IMMIGRANT MOB LED BY AAP LEADER TAHIR HUSSAIN..
ReplyDeleteNOTHING WILL HAPPEN AS TRAITOR JUDICIARY AND MEDIA ARE ON THE SIDE OF THE ILLEGAL IMMIGRANT MUSLIMS..
INDIAN COPS / SECURITY AGENCIES HAVE NO PRIDE OR HONOR.
THIS IS WHY CJI GOGOI WAS ABLE TO TREAT CBI CHIEF LIKE A CLASS DUNCE AND MAKE HIM SIT IN A CORNER OF THE COURT ROOM THE WHOLE DAY..
IN REALITY, IT SHOULD HAVE BEEN THE OTHER WAY AROUND.. CBI DIRECTOR WHO HAS TAKEN THE OATH IS NOT SMALL FRY..
NOW-- LET US COMPARE INDIA WITH USA.
MIGUEL ÁNGEL FÉLIX GALLARDO WAS A MEXICAN COCAINE DRUG LORD WHO RAN GUNS (FOR PRESIDENT RONALD REAGAN AND CIA DIRECTOR GEORGE HERBERT WALKER BUSH SENIOR ) TO THE CONTRAS IN NICARAGUA .
MERCENARY CONTRAS WERE CREATED/ ARMED/ FUNDED BY CIA TO TOPPLE THE PATRIOT SANDINISTA GOVT OF NICARAGUA WHO KICKED OUT JEWISH OLIGARCHS WHO WERE LOOTING THE NATION..
PATRIOT SANDINISTAS WERE DUBBED AS BAAAAD COMMIES BY PRESIDENT REAGAN.
US PRESIDENT AND CIA WERE UNDERCUTTING AMERICAN DEA DEPT.. THEY GOT HUGE BRIBES FROM COCAINE DRUG LORD FELIX BYPASSING OFFICIAL PROTOCOL OF US CONGRESS SANCTION OF FUNDS..
BUT DRUG LORD MIGUEL ÁNGEL FÉLIX GALLARDO MADE A BIG MISTAKE..
HE ORDERED THE KILLING OF US DEA AGENT KIKI CAMARENA ( WITH BLESSINGS OF BUSH/ REAGAN ) WHO EXHUMED THE “GUNS FOR COCAINE” CONSPIRACY…
AS SOON AS THIS HAPPENED DEA WENT AGAINST THE WHITE HOUSE AND CIA.. THEY CREATED THEIR OWN UNOFFICIAL ROGUE HIT SQUAD TO TAKE REVENGE ..
https://en.wikipedia.org/wiki/Kiki_Camarena
DEA FOUND OUT DIRECT CIA / WHITE HOUSE INVOLVEMENT IN THE TORTURE AND MURDER OF DEA AGENT KIKI CAMARENA ..
THE ROGUE DEA SQUAD TORTURED AND KILLED WHOEVER WERE INVOLVED IN THE TORTURE OF KIKI CAMARENA ..
DEA EXTRACTED CONFESSIONS FROM A MEXICAN DOCTOR AND A MEXICAN POLICE OFFICER AFTER TORTURING THEM.
A US CIA OFFICER FELIX RODRIGUEZ HAD OVERSEEN THE ENTIRE TORTURE AND KILLING OF THE DEA AGENT ON ORDERS FROM BUSH SR AND REAGAN..
FELIX RODRIGUEZ RAN THE CONTRA SUPPLY DEPOT .. DEAD MEN TELL NO TALES..
REAGAN AND BUSH SR ORDERED THE ARREST OF DRUG LORD MIGUEL ÁNGEL FÉLIX GALLARDO TO SHUT DOWN THIS CASE BEFORE SHIT HIT THE FAN.
https://en.wikipedia.org/wiki/Miguel_%C3%81ngel_F%C3%A9lix_Gallardo
IN USA , IF YOU KILL A COP HIS MATES GO ROGUE TAKE REVENGE .. AND THIS IS UNOFFICIALLY ALLOWED.. THIS IS WHY NOBODY KILLS A COP OR CIA/ DEA OFFICERS IN US..
IT IS A DISGRACE THAT A MUSLIM SHAHRUKH POINTED A GUN AT A COP FROM SIX INCHES RANGE.. AND HE IS STILL ALIVE.. IN ANY OTHER NATION, HE WOULD HAVE BEEN SHOT DEAD ON THE SPOT.. NO JUDGES – NO JURY..
TRAITOR JUDGES IN FOREIGN PAYROLL CAUSED ETHNIC CLEANSING OF KASHMIRI PANDITS...
TRAITOR JUDGES CREATED THE NAXAL RED CORRIDOR...
THEY NEVER EMPHATISED WITH SLAIN JAWANS AND THEIR FAMILIES ..
ILLEGAL COLLEGIUM JUDICIARY HAS NO POWERS TO INTERFERE WITH BHARATMATAs INTERNAL/ EXTERNAL SECURITY....
OUR JUDICIARY IS PACKED WITH TRAITOR JUDGES IN FOREIGN PAYROLL.
WE DONT NEED THE "VISHWAAS" OF TRAITOR MUSLIMS IN PAKISTANI ISI PAYROLL.
https://ajitvadakayil.blogspot.com/2020/01/we-people-are-done-with-illegal.html
BHARATMATA IS BEING BLED BY TRAITOR JUDGES , BENAMI MEDIA JOURNALISTS AND PAKISTANI ISI FUNDED NGOs.
WE THE PEOPLE WATCH IN UTTER FRUSTRATION HOW ILLEGAL COLLEGIUM JUDICIARY IS TREATING “WE THE PEOPLE” AND THE CONSTITUTION IN CONTEMPT..
Capt ajit vadakayil
..
PUT ABOVE COMMENT IN WEBSITES OF-
DeleteTRUMP
PUTIN
INDIAN AMBASSADOR TO USA/ RUSSIA
US / RUSSIAN AMBASSADOR TO INDIA
EXTERNAL AFFAIRS MINISTER/ MINISTRY
PRESIDENT OF NICARAGUA
AMBASSADOR TO / FROM NICARAGUA
PMO
PM MODI
NSA
AJIT DOVAL
RAW
IB CHIEF
IB OFFICERS
CBI
NIA
ED
AMIT SHAH
HOME MINISTRY
DEFENCE MINISTER/ MINISTRY
ALL 3 ARMED FORCE CHIEFS-- PLUS TOP CDS CHIEF
ALL DGPs OF INDIA
ALL IGs OF INDIA
ALL STATE HIGH COURT CHIEF JUSTICES
CJI BOBDE
SUPREME COURT JUDGES/ LAWYERS
ATTORNEY GENERAL
LAW MINISTER/ MINISTRY CENTRE AND STATES
ALL CMs OF INDIA
ALL STATE GOVERNORS
I&B MINISTER/ MINISTRY
LT GOVERNOR DELHI
MOHANDAS PAI
PGURUS
SWAMY
RAJIV MALHOTRA
DAVID FRAWLEY
STEPHEN KNAPP
WILLIAM DALRYMPLE
KONRAED ELST
FRANCOIS GAUTIER
NITI AYOG
AMITABH KANT
PRESIDENT OF INDIA
VP OF INDIA
SPEAKER LOK SABHA
SPEAKER RAJYA SABHA
THAMBI SUNDAR PICHAI
SATYA NADELLA
CEO OF WIKIPEDIA
QUORA CEO ANGELO D ADAMS
QUORA MODERATION TEAM
KURT OF QUORA
GAUTAM SHEWAKRAMANI
SPREAD ON SOCIAL MEDIA
SPREAD MESSAGE VIA WHATS APP
Namaste Master Ji,
DeletePosted in facebook, shared in whatsapp and sent mail to 165 + contacts
office@arunjaitley.com,
csoffice@nic.in,
cs@punjab.gov.in,
chairperson-ncw@nic.in,
cmup@nic.in,
chairpersonncw@nic.in,
feedback-mha@nic.in,
minister.yas@nic.in,
ms-ncw@nic.in,
mib.inb@nic.in,
mvnaidu@sansad.nic.in,
m.subbarayan@nic.in,
manoharparrikar@yahoo.co.in,
Sushma Swaraj <2009vidisha@gmail.com>,
sharma.rekha@gov.in,
supremecourt@nic.in,
secy.inb@nic.in,
sushma.sahu@gov.in,
jsrev@nic.in,
jsncw-wcd@nic.in,
jsabc-dea@nic.in,
jscpg-mha@nic.in,
pseam@mea.gov.in,
pstohrm@gov.in,
pp.chaudhary@sansad.nic.in,
ravis@sansad.nic.in,
rawat.alok@gov.in,
request-hrd@gov.in,
rsdalal@hry.nic.in,
drhrshvardhan@gmail.com,
Info@sureshprabhu.in,
urijitpatel@rbi.org.in,
lk-admin@nic.in,
amitabh.kant@nic.in,
ambuj.sharma38@nic.in,
17akbarroad@gmail.com,
ajaitley@sansad.nic.in,
admin@nic.in,
kashish.mittal@ias.nic.in,
kkvenu@vsnl.com,
smritizirzni@gmail.com,
smritizirani@sansad.nic.in,
abvpasom@gmail.com,
abvpbihar@rediffmail.com,
abvpcentralup@gmail.com,
abvpdelhi@gmail.com,
abvpharyana@gmail.com,
abvphp@gmail.com,
abvpkarnataka@yahoo.com,
abvpnestates@gmail.com,
abvpoffice@gmail.com,
abvptn@gmail.com,
abvputtaranchal@gmail.com,
abvpwesternup@gmail.com,
advanilk@sansad.nic.in,
alokkumar.up@nic.in,
arpolice@rediffmail.com,
bjpandaman1990@rediffmail.com,
bjphqo@gmail.com,
bk.gupta@nic.in,
chairman.customer@sbi.co.in,
chief.advisor@telangana.gov.in,
chiefminister@karnataka.gov.in,
chiefminister@kerala.gov.in,
cm@maharashtra.gov.in,
cm@mp.nic.in,
cm_nagaland@yahoo.com,
cmcell@tn.gov.in,
cmo@nic.in,
cmsect-jk@nic.in,
contact@amitshah.co.in,
contact@hindujagruti.org,
contact@yogiadityanath.in,
contactus@rss.org,
cs-manipur@nic.in,
cs-mizoram@nic.in,
cs@hry.nic.in,
dcp-vigilance-dl@nic.in,
dg.prisons@kerala.gov.in,
dgofpoliceorissa@sify.com,
dgp-bih@nic.in,
dgp-chattisgarh@yahoo.co,
dgp-gs@gujarat.gov.in,
dgp-mnp@nic.in,
dgp-rj@nic.in,
dgp.punjab.police@punjab.gov.in,
dgp@and.nic.in,
dgp@appolice.gov.in,
dgp@assampolice.com,
dgp@keralapolice.gov.in,
dgp@up.nic.in,
dgpmp@mppolice.gov.in,
dgpms.mumbai@mahapolice.gov.in,
dgptripura@yahoo.co.in,
dirhq-cbdt@nic.in,
eam@mea.gov.in,
gandhim@nic.in,
gandhim@sansad.nic.in,
gaur_piyush@rediffmail.com,
goagp@rediffmail.com,
governor@rajbhavangoa.org,
governor@rbi.org.in,
indiaportal@gov.in,
info@vhp.org,
jharkhandabvp@gmail.com,
jkpolice@nic.in,
jp.kurian@sansad.nic.in,
jp.nadda@sansad.nic.in,
jse@nic.in,
keralaprisons@gov.in,
kk.rao@gov.in,
lk.advani@sansad.nic.in,
manoharpaaricar@yahoo.co.in,
meghpol@hotmail.com,
mizopol@rediffmail.com,
mpofficebhopal@gmail.com,
nahmad@jharkhandpolice.gov.in,
neera.bali@nic.in,
nirmal_chouhan@hotmail.com,
nirmla_chowhan@hotmail.com,
nitin.gadkari@nic.in,
nsab.nscs@nic.in,
office@wgs-cet.in,
officelka@gmail.com,
p.chhabra@nic.in,
padma.ravi@nic.in,
piyush@bjp.org,
pnath@nic.in,
police-chd@nic.in,
pscm@hry.nic.in,
rajev.c@sansad.nic.in,
ramvilas.paswan@sansad.nic.in,
rc.miz@gmail.com,
s.kalyanaraman@nic.in,
s_mahajan@nic.in,
sachin.rane@schneider-electric.com,
secy.president@rb.nic.in,
secysw-bih@nic.in,
sikphq@hotmail.com,
speakerloksabha@sansand.nic.in,
subbarayan@nic.in,
supremecourt@hub.nic.in,
uma.bharati@sansad.nic.in,
vaibhav.dange@nic.in,
vanlal@nic.in,
vashishth.suresh@nic.in,
vasundhararajeofficial@gmail.com,
websitemhaweb@nic.in,
Yogi.Adityanath@sansad.nic.in
Thanks
Sent to trump and putin.
DeleteMailed to-
amitshah.mp@sansad.nic.in
contact@amitshah.co.in
amitabh.kant@nic.in
rmo@mod.nic.in
38ashokroad@gmail.com
alokmittal.nia@gov.in
prakash.j@sansad.nic.in
proiaf.dprmod@nic.in
pronavy.dprmod@nic.in
webmaster.indianarmy@nic.in
ravis@sansad.nic.in
minister.hrd@gov.in
supremecourt@nic.in
swamy39@gmail.com
ombirlakota@gmail.com
vch-niti@nic.in
narendramodi1234@gmail.com
info@nibindia.in
mohan.pai@manipalglobal.com
pstolg.delhi@nic.in
secylaw-dla@nic.in
secy-jus@gov.in
Done captain. Posted on Facebook and WhatsApp
Deletepranaam captain,
Deletehttps://twitter.com/prashantjani777/status/1233370429655584768
https://twitter.com/prashantjani777/status/1233371065902125062
https://twitter.com/prashantjani777/status/1233371235402252288
@ kannan bhai and readers,
DeleteI see that the list of emails have arun Jaitley and sushma swaraj addresses. Kindly omit them as they are no longer in this world.
WILL USA OR EUROPE ALLOW AN ARTERY ROAD OF ITS CAPITAL TO BE BLOCKED BY MUSLIM WOMEN WITH SMALL BABIES?
ReplyDeleteBHARATMATA WILL NOT SURVIVE THIS DECADE IF WE DO NOT CLEANSE THE ILLEGAL COLLEGIUM JUDICIARY OF TRAITORS IN FOREIGN PAYROLL.
WE HAVE A LAW MINISTER BIHARI KAYASTHA PRASAD WHO WAS IN NAXAL OUTFIT PUCL IN 1976... PUCL MEMBERS WERE IN THE PAYROLL OF JEW ROTHSCHILD INCLUDING ITS FOUNDER BIHARI KAYASTHA JP WHO WAS ALSO A CIA SPOOK..
WHY IS PRASAD BEING SPONSORED BY MODI? THIS FELLOW HAS DONE NOTHING FOR 6 YEARS --LIKE KAYASTHA PRAKASH JAVEDEKAR..
KAYASTHAS ARE A TRAITOR CLAN CREATED BY JEW ROTHSCHILD..
http://ajitvadakayil.blogspot.com/2019/07/we-never-heard-words-kayastha-and.html
All the anti-CAA protests are based on lies,and these lies are supported by illegal collegium judiciary and benami media to misguide the nation.Amit shah and modi dont have the guts to tackle this menace.No nation will allow such a blockade that too when it is based on utter lies.crucial portfolios for the interests and security of nation are given to two jokers ravi shankar prasad and prakash javedkar,who never performed their duties as ministers.
Deletehttps://timesofindia.indiatimes.com/india/bernie-sanders-slams-donald-trump-for-being-non-committal-on-riots/articleshow/74350563.cms
ReplyDeleteBERNIE SANDERS IS ROTHSCHILD AGENT AND A COMMIE JEW....
HIS CAMPAIGN MANAGER IS FAIZ SHAKIR A CRYPTO JEW PAKISTANI ....
FAIZ SHAKIR IS A MEMBER OF COMMIE JEW ORGANISATION THE AMERICAN CIVIL LIBERTIES UNION (ACLU) FOUNDED BY JEW ROTHSCHILD IN 1920…
FAIZ SHAKIR IS A CRYPTO JEW JUST LIKE LONDON MAYOR SADIQ KHAN…
capt ajit vadakayil
..
PUT ABOVE COMMENT IN WEBSITES OF-
DeleteTRUMP
PUTIN
INDIAN AMBASSADOR TO USA/ RUSSIA
US / RUSSIAN AMBASSADOR TO INDIA
EXTERNAL AFFAIRS MINISTER/ MINISTRY
PRESIDENT OF NICARAGUA
AMBASSADOR TO / FROM NICARAGUA
PMO
PM MODI
NSA
AJIT DOVAL
RAW
IB CHIEF
IB OFFICERS
CBI
NIA
ED
AMIT SHAH
HOME MINISTRY
DEFENCE MINISTER/ MINISTRY
ALL 3 ARMED FORCE CHIEFS-- PLUS TOP CDS CHIEF
ALL DGPs OF INDIA
ALL IGs OF INDIA
ALL STATE HIGH COURT CHIEF JUSTICES
CJI BOBDE
SUPREME COURT JUDGES/ LAWYERS
ATTORNEY GENERAL
LAW MINISTER/ MINISTRY CENTRE AND STATES
ALL CMs OF INDIA
ALL STATE GOVERNORS
I&B MINISTER/ MINISTRY
LT GOVERNOR DELHI
MOHANDAS PAI
PGURUS
SWAMY
RAJIV MALHOTRA
DAVID FRAWLEY
STEPHEN KNAPP
WILLIAM DALRYMPLE
KONRAED ELST
FRANCOIS GAUTIER
NITI AYOG
AMITABH KANT
PRESIDENT OF INDIA
VP OF INDIA
SPEAKER LOK SABHA
SPEAKER RAJYA SABHA
THAMBI SUNDAR PICHAI
SATYA NADELLA
CEO OF WIKIPEDIA
QUORA CEO ANGELO D ADAMS
QUORA MODERATION TEAM
KURT OF QUORA
GAUTAM SHEWAKRAMANI
SPREAD ON SOCIAL MEDIA
SPREAD MESSAGE VIA WHATS APP
Sent to trump and putin.
DeleteMailed to-
narendramodi1234@gmail.com
amitshah.mp@sansad.nic.in
contact@amitshah.co.in
prakash.j@sansad.nic.in
ravis@sansad.nic.in
38ashokroad@gmail.com
rmo@mod.nic.in
info.nia@gov.in
eam@mea.gov.in
info@nibindia.in
webmaster.indianarmy@nic.in
amitabh.kant@nic.in
vch-niti@gov.in
pstolg.delhi@nic.in
swamy39@gmail.com
https://twitter.com/shree1082002/status/1233328657524903937
Deletehttps://twitter.com/Sashwatdharma/status/1233327386197778434 - US, Rus pres
Deletehttps://twitter.com/Sashwatdharma/status/1233328567817138176 - Nicaragua pres, pmo, pm
Emails sent to adds given by Charishma.
Dear Capt Ajit sir,
DeleteTwitter message sent... https://twitter.com/IwerePm/status/1233352859678175232/photo/1
pranaam captain,
Deletehttps://twitter.com/prashantjani777/status/1233386987328917504
https://twitter.com/prashantjani777/status/1233387419002638336
https://twitter.com/prashantjani777/status/1233387545142202368
Namaste Captain,
DeleteE-Mail sent to the email addresses provided by Charishma .
Thanks,
Hemanth Kumar K
ReplyDeleteXXXXXXXX
Thu, Feb 27, 11:46 PM (9 hours ago)
to me
Why is Amartya Sen always spewing venom against Narendra Modi sitting in US?
So, here is the answer.... Read it carefully.
When UPA government inaugurated Nalanda University in Bihar in 2007, Amartya Sen was made the first Chancellor of the university. A very important feature of his appointment was that he had all powers in the name of "autonomy".... So much so that he did not even have to provide the account of money spent on anything to the government....
.
Imagine a public servant spending any amount of taxpayers' money and yet exempt from any kind of accountability.... Not only that, he was withdrawing a salary of ₹ 5 lakh per month - a university chancellor of a government university drawing a salary more than any other public servant. Apart from that, he had unlimited foreign trips allowances on taxpayers money by the virtue of being Nalanda University Chancellor.
.
The story doesn't end here. During the 7 years (2007-2014), Amartya Sen spent ₹ 2730 CRORE on a university which still was not fully functional.... Yes...a whopping ₹ 2730 CRORE.....
Since it was by law (made by UPA) exempt from any kind of accountability, we can never know what happened to that money and yet it will remain legal.
.
Now, coming to appointments....
Even appointments made by Amartya Sen were exempt from any kind of accountability.
So who did he appoint ?
The first 4 faculties were :
1. Dr. Upinder Singh
2. Anjana Sharma
3. Nayanjot Lahiri
4. Gopa Sabharwal.
.
Who were they ??
Dr. Upinder Singh is the Daughter of former PM Manmohan Singh.
The other 3 are close associates/friends of Dr. Upinder Singh.
.
Amartya Sen then appointed 2 more "GUEST" faculties-
1. Daman Singh
2. Amrit Singh
.
Who are they ?
Middle and youngest DAUGHTER of ex-PM Manmohan Singh.
.
What's unique about the appointment of Daman Singh and Amrit Singh is that they REMAINED in the USA all along 7 years... But were drawing a huge salary as a guest faculty. What salary they were withdrawing, only God knows.... The reason, again, is that Nalanda University had been made exempt from any kind of accountability to government.
.
So, the summary....
1. The University had hardly one building.
2. It had just 7 faculty members & a few guest faculties (who NEVER came) - all relatives/friends of Manmohan Singh/Amartya Sen.
3. There were hardly a hundred students.
4. There was no expenses on costly reagents or equipments as no scientific research was going on.
5. Still, the expenses was ₹ 2730 CRORE
.
In short, Amartya Sen had access to unlimited government fund without any accountability.
.
When Modi came to know about what all was going in the name of university, he kicked this leech out of the university in 2015 and cancelled all the appointments he had made. Amartya Sen had splurged more than ₹ 2700 CRORE on himself and his associates. He lived in USA and was drawing 5 Lakh per month and enjoying all allowances from India's taxpayers' money without doing anything.
.
Just because someone is a Nobel laureate doesn't mean that he is totally clean or doesn't have any ulterior intention. Nobel prize or a big degree is no indication of people's nature.
Even Manmohan had a PhD degree. That didn't mean he was the best in governance. His government turned out to be the worst in India's independent history.
.
We can never take action against Amartya Sen or technically call him corrupt because he was merely following the "rules" and the rules had been made in such a way by the UPA Government that he had the powers to spend as much he wanted without being accountable. That's why he will remain protected and can never be dragged to court. This was a LEGALISED PLUNDER of ₹ 2730 Crore by Amartya Sen.
Source: https://twitter.com/bhartijainTOI/status/1122408575731507200?s=17
IT IS A LIE THAT NALANDA UNIVERSITY WAS A BUDDHIST UNIVERSITY..
DeleteTHERE WAS NOT A SINGLE BUDDHIST TEACHER OR STUDENT HERE..EVER
http://ajitvadakayil.blogspot.com/2019/06/deliberately-buried-truths-about-buddha.html
Captain,
ReplyDeleteInstead of forwarding grievances to concerned department and taking action,ambuj sharma many times closes the cases by saying FILE.It seems that he will do puja to these filed comments after closing the case.He is sticking to this pmo grievances department for a long time.
Date of Receipt
26/02/2020
Received By Ministry/Department
Prime Ministers Office
Grievance Description
Sir,
Kindly acknowledge the following comment from captain ajit vadakayil's blog-
https://www.sciencemag.org/news/2020/02/indian-scientists-decry-infuriating-scheme-study-benefits-cow-dung-urine-and-milk
WE KNOW THE INDIAN SCIENTISTS IN DEEP STATE PAYROLL.
THE VEDIC HUMPED COW GIVES US PRICELESS A2 MILK, URINE HAVING GOLD COLLOIDS AND DUNG.. ITS MEAT IS HARMLESS FOR THE BRAIN.. THE VEDIC HUMPED COW FARTS ONLY 5% OF METHANE COMPARED TO IS WESTERN COUNTERPART DUE TO AN EFFICIENT DIGESTIVE SYSTEM..
THE WESTERN HUMPLESS COW GIVES TOXIC A1NILK, TOXIC URINE, TOXIC DUNG, AND HARMFUL MEAT FOR THE BRAIN..
EVER SINCE DEEP STATE AGENT VERGHESE KURAIN SWICHED OUR VEDIC COWS WITH WORSE THAN PIGS WESTERN HUMPLESS COWS, , EVIL HOSHER PHARMA IS LAUGHING ALL THE WAY TO THE BANK..
WE HINDUS CARE ONLY FOR THE HUMPED COW.. INDIA MUST REVERT TO ORGANIC FARMING USING HUMPED COW DUNG..HUMUS LADEN TOP SOIL CAN HOLD WATER..
EXTERMINATE ALL HUMPLESS COWS -- WE DONT CARE..
http://ajitvadakayil.blogspot.com/2013/07/nutritious-a1-milk-of-vedic-cows-with.html
http://ajitvadakayil.blogspot.com/2013/02/gomutra-drinking-cows-urine-as-elexir.html
http://ajitvadakayil.blogspot.com/2013/12/shocking-legacy-of-mad-cow-disease-capt.html
capt ajit vadakayil
..
Current Status
Case closed
Date of Action
27/02/2020
Remarks
FILE
Officer Concerns To
Officer Name
Shri Ambuj Sharma
Officer Designation
Under Secretary (Public)
Contact Address
Public Wing 5th Floor, Rail Bhawan New Delhi
Email Address
ambuj.sharma38@nic.in
Contact Number
011-23386447
Name Of Complainant
ReplyDeleteVardhan Talera
Date of Receipt
27/02/2020
Received By Ministry/Department
Prime Ministers Office
Grievance Description
CAPT AJIT VADAKAYIL WARNS NSA AJIT DOVAL
YOUR JOB IS NOT TO RUSH AROUND DELHI, PLAYING TO THE GALLERY, PACIFYING VICTIMS OF VIOLENCE..
YOU JOB AS NSA IS TO FIND OUT THE CULPRITS WHO SET DELHI ON FIRE-- ALMOST ALL ARE OUTSIDERS AND ILLEGAL IMMIGRANTS ( MUSLIM ) FUNDED BY PAKISTANI ISI..
ANTI-CAA IS JUST A FRONT..THESE ARE DESH DROHI MUSLIMS --
MOST OF THEM ARE ILLEGAL IMMIGRANTS WHO HAVE THE SUPPORT OF DESH DROHI FOREIGN PAYROLL JUDICIARY AND THEIR LACKEYS THE BENAMI MEDIA..
THIS FELLOW AJIT DOVAL AT THE AGE OF 75 IS DYEING HIS HAIR AND TYING TO LOOK LIKE A BOLLYWOOD HERO.
Current Status
Case closed
Date of Action
28/02/2020
Reason
Others
Remarks
Suggestion/Feedback noted.
Officer Concerns To
Officer Name
Shri Ambuj Sharma
Officer Designation
Under Secretary (Public)
Contact Address
Public Wing 5th Floor, Rail Bhawan New Delhi
Email Address
ambuj.sharma38@nic.in
Contact Number
011-23386447
Your Registration Number is : PMOPG/E/2020/0101528
ReplyDeleteRegarding torture / murder of ib officer ankit sharma ...
https://twitter.com/Swamy39/status/1240268662889541632
ReplyDeleteTAMIL IYER SWAMY PLANS TO PUSH HIS AGAMA SHIT
YOU SHOULD HAVE SEEN THIS FELLOW SWAMY PLAYING TO THE GALLERY DURING THE FAKE VISHNU IDOL 40 YEARLY EXPOSITION AT VARADHARAJA TEMPLE
http://ajitvadakayil.blogspot.com/2019/06/ramanuja-varadharaja-perumal-temple.html
SWAMY HAS NEVER UTTERED THE WORD ROTHSCHILD.. BUT HE DOES PROPAGANDA FOR ROTHSCHILD FAKES..
https://www.thehansindia.com/news/national/kancheepuram-temple-opens-after-40-years--542084
ReplyDeletevikramadityaMarch 19, 2020 at 7:09 AM
Guruji,
It seems Babu's found a new anecdote for readers piling up their complaints.
Arey yeh toh ek blog hai bhaiyya constitution thodi hai ????
for registration number : MINHA/E/2019/05791
Grievance Concerns To
Name Of Complainant
Vikramaditya
Date of Receipt
17/08/2019
Received By Ministry/Department
Home Affairs
Grievance Description
https://timesofindia.indiatimes.com/city/mumbai/parliament-not-absolute-ruler-says-mark-tully/articleshow/70695073.cms
INDIAN COLLEGIUM JUDICIARY IS ILLEGAL..
READ ALL 8 PARTS OF THE POST BELOW--
http://ajitvadakayil.blogspot.com/2019/01/justice-be-damned-enforce-law-not-any.html
REVOKE THE PADMA BHUSHAN AWARD GIVEN TO MARK TULLY.. REVOKE HIS INDIAN VISA..
MARK TULLYs JEWISH ANCESTORS WERE ALL JEW ROTHSCHILDs OPIUM AGENTS ..I HAVE DONE ENOUGH RESEARCH AND I WILL POST A BLOG SOON..
MARK TULLYs GRANDFATHER WORKED UNDER GEORGE ORWELLs FATHER AT CHAMPARAN, EXPORTING OPIUM FROM INDIA TO CHINA ....
http://ajitvadakayil.blogspot.com/2019/07/how-gandhi-converted-opium-to-indigo-in.html
MARK TULLYs JEWISH GREAT GRANDFATHER WAS A VERY POWERFUL MAN.. DERIVING POWER FROM JEW ROTHSCHILD..
I KNOW MORE ABOUT MARK TULLYs ANCESTORS THAN TULLY BABY HIMSELF..
http://ajitvadakayil.blogspot.com/2010/12/dirty-secrets-of-boston-tea-party-capt.html
http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html
BBC KNEW IN ADVANCE THAT INDIRA GANDHI WOULD BE MURDERED.. THEY WERE THERE TO WITNESS IT LIVE..
MARK TULLY DID NOT GIVE INDIA THE TIME TO MOVE TO PLAN B.. HE ANNOUNCED THE INDIRA GANDHI MURDER ON BBC.. THIS IS SEDITION BY LAWS OF ANY NATION..
HIS BOOK FOUR FACES IS RIDICULOUS BULL. JESUS CHRIST NEVER EXISTED..
BIBLE / CHRISTIANITY/ JESUS / MOSES/ ABRAHAM/ GABRIEL/ NOAH/ ADAM ETC WAS COOKED UP BY JEWESS HELENA ( MOTHER OF ROMAN EMPEROR CONSTANTINE THE GREAT ) IN 325 AD, AT THE FIRST COUNCIL OF NICEA..
EVERY WORK ON MARK TULLY IS A LIE.. IT WAS DIFFICULT TO READ THROUGH HIS RIDICULOUS LIES..
MARK TULLYs WATERLOO WILL HAPPEN SOONER THAN LATER.. HE HAS BLED BHARATMATA ENOUGH..
MARK TULLY IS NOT A FRIEND OF INDIA SAYS CAPT AJIT VADAKAYIL..HE IS A WOLF IN SHEEPs CLOTHING.. NOT ANY MORE
capt ajit vadakayil
This grievance is not a suggestion or mere whitewash of formalities. NO MERRY GO ROUND (THIS IS NOT THIS DEPARTMENT S DIRTY WORK. statements will not go down well with public. Save the nation from enemies. Ignoring such complaints will have severe repercussions.
JAI BHARAT MATHA .
Current Status
Case closed
Date of Action
18/03/2020
Reason
Others
Remarks
Quoting from someone s blog does not constitute a grievance. This is personal view of blog writer.
Officer Concerns To
Officer Name
S K Shahi
Officer Designation
Joint Secretary
Contact Address
Email Address
vb.dubey@gov.in
Contact Number
23092722
https://timesofindia.indiatimes.com/entertainment/hindi/bollywood/news/sonam-kapoor-comes-to-kanika-kapoors-defence-faces-the-wrath-of-social-media-trolls/articleshow/74749791.cms
ReplyDeleteALL BOLLYWOOD JEWESSES PRETENDING TO BE KHATRI HINDUS ARE COMING TOGETHER..
SOMEBODY ASKED ME
ReplyDeleteCAPTAIN
THE WHITE IRISH ( SLAVES WHO WERE DRIVEN TO AMERICA BY DELIBERATE POTATO BLIGHT FAMINE ) WERE USED TO TORTURE AND KILL CHINESE ( SLAVES WHO WERE DRIVEN AWAY TO AMERICA BY DELIBERATE FAMINE BY TAIPING REBELLION ) ..
DID BLACK SLAVES KILL CHINESE ?
LISTEN
BLACK SOLDIERS WERE USED TO EXTERMINATE NATIVE INDIAN BUFFALO TO STARVE THEM..
JEW ROTHSCHILD WANTED HUNDREDS OF MILES ON EITHER SIDE OF THE GREAT TRANSCONTINENTAL RAILWAY TO BE CLEARED OF NATIVE INDIANS.. THE ONLY WAY WAS TO EXTERMINATE THEM USING SMALL POX BLANKETS AND ALSO SHOOTING DOWN THEIR STAPLE FOOD, THE BUFALLO WHICH ONCE ROAMED IN MILLIONS .
THE BUFFALO WAS EXTERMINATED TO 300-- FORCING STARVING INDIANS INTO RESERVATIONS..
ALL THIS IS KEPT A GREAT SECRET..
BOB MARLEY SANG A SONG ABOUT THE BLACK BUFFALO SOLDIER..
https://www.youtube.com/watch?v=eksV02us5DQ
THE GATLING GUN WAS DESIGNED BY THE AMERICAN INVENTOR DR. RICHARD J. GATLING IN 1861 TO EXTERMINATE BUFFALO ENMASSE..
TRANSCONTINENTAL RAILWAY WORK STARTED IN 1863 USING CHINESE SLAVES..
THE ILLEGAL EXTERMINATION OF BUFFALO BY JEW ROTHSCHILD WAS OFFICIALLY RATIFIED IN 1866, WHEN SIX ALL-BLACK CAVALRY AND INFANTRY REGIMENTS WERE CREATED AFTER CONGRESS PASSED THE ARMY ORGANIZATION ACT.
AT LEAST THREE YEARS BEFORE THE GREAT TRANSCONTINENTAL RAIL WORK STARTED BLACK BUFFALO SOLDIERS WERE SHOOTING DOWN THOUSANDS OF BUFFALO UNOFFICIALLY.
https://www.youtube.com/watch?v=t2SPLDfHo0A
TILL TODAY SCHOOL TEXT BOOKS DO NOT TEACH HOW THE WHITE MAN EXTERMINATED ENTIRE CONTINENTS OF LOCAL NATIVES USING SMALL POX BLANKETS GIVEN BY CHRISTIAN MISSIONARIES ( WHO ALREADY HAD SMALL POX AND WERE IMMUNE )..
http://ajitvadakayil.blogspot.com/2013/01/christopher-columbus-father-of-american.html
http://ajitvadakayil.blogspot.com/2012/07/fransisco-pizarro-hernan-cortez.html
WIKIPEDIA TOOK OUT A POST ON GOAN INQUISITION AFTER I POSTED A BLOG..
http://ajitvadakayil.blogspot.com/2016/09/portuguese-inquisition-is-goa-by-jesuit.html
WAIT TILL I POST ON "CHINESE SLAVERY EXHUMED"..
WATCH THIS SPACE..
capt ajit vadakayil
..
PUT ABOVE COMMENT IN WEBSITES OF-
DeleteTRUMP
PUTIN
BORIS JOHNSON
ANGELA MERKEL
MACRON
AMBASSADORS TO FROM USA/ RUSSIA/ UK / GERMANY FRANCE
PMO
PM MODI
ENTIRE BBC GANG
ENTIRE MEDIA OF INDIA
EXTERNAL AFFAIR MINISTER/ MINISTRY
AMIT SHAH
HOME MINISTRY
CJI BOBDE
SUPREME COURT JUDGES/ LAWYERS
ATTORNEY GENERAL
LAW MINISTER PRASAD / MINISTRY CENTRE AND STATES
CHIEF JUSTICES OF ALL STATE HIGH COURTS
I&B MINISTER / MINISTRY
NSA
AJIT DOVAL
RAW
IB
CBI
NIA
ED
DEFENCE MINISTER/ MINISTRY
ALL 3 ARMED FORCE CHIEFS -- PLUS TOP CDS CHIEF
ALL DGPs OF INDIA
ALL IGs OF INDIA
COLLECTORS OF MAJOR CITIES OF INDIA
ALL CMs OF INDIA
ALL STATE GOVERNORS
EVERY MP OF INDIA
EVERY MLA OF INDIA
NCERT
EDUCATION MINISTRY/ MINISTER- CENTRE AND STATES
NITI AYOG
AMITABH KANT
PRESIDENT OF INDIA
VP OF INDIA
SPEAKER LOK SABHA
SPEAKER RAJYA SABHA
RSS
AVBP
VHP
MOHAN BHAGWAT
RAM MADHAV
SOLI BABY
FALI BABY
KATJU BABY
SALVE BABY
MOHANDAS PAI
RAJEEV CHANDRASHEKHAR
PGURUS
SWAMY
RAJIV MALHOTRA
DAVID FRAWLEY
STEPHEN KNAPP
WILLIAM DALRYMPLE
KONRAED ELST
FRANCOIS GAUTIER
NALIN KOHLI
GVL NARASIMHA RAO
SAMBIT PATRA
ASHOK PANDIT
ANUPAM KHER
KANGANA RANAUT
VIVEK AGNIHOTRI
MEENAKSHI LEKHI
SMRITI IRANI
PRASOON JOSHI
SWAPAN DASGUPTA
MADHU KISHWAR
SUDHIR CHAUDHARY
GEN GD BAKSHI
RSN SINGH
ARNAB GOSWAMI
NAVIKA KUMAR
ANAND NARASIMHAN
UDDHAV THACKREY
RAJ THACKREY
SHAZIA ILMI
CHANDA MITRA
SRI SRI RAVISHANKAR
SADGURU JAGGI VASUDEV
BABA RAMDEV
THAMBI SUNDAR PICHAI
SATYA NADELLA
CEO OF WIKIPEDIA
QUORA CEO ANGELO D ADAMS
QUORA MODERATION TEAM
KURT OF QUORA
GAUTAM SHEWAKRAMANI
DAVID HATCHER CHILDRESS
SPREAD ON SOCIAL MEDIA..
SPREAD BY WHATS APP
ALSO--
CHINESE PRESIDENT XI
GLOBAL TIMES EDITOR HU XIJIN
AMBASSADOR TO FROM CHINA
https://ajitvadakayil.blogspot.com/2020/02/coronavirus-deaths-nano-gold-colloids.html
ReplyDeleteIN THE POST ABOVE , SCROLL DOWN TO THE END OF THE POST..
I HAVE PUT A VIDEO HOW "SABKA VISHWAS " MODI, SCREENS FOR CORONAVIRUS .
capt ajit vadakayil
..
PUT ABOVE COMMENT IN WEBSITES OF-
DeletePMO
PM MODI
ENTIRE BBC GANG
ENTIRE MEDIA OF INDIA
EXTERNAL AFFAIR MINISTER/ MINISTRY
AMIT SHAH
HOME MINISTRY
CJI BOBDE
SUPREME COURT JUDGES/ LAWYERS
ATTORNEY GENERAL
LAW MINISTER PRASAD / MINISTRY CENTRE AND STATES
CHIEF JUSTICES OF ALL STATE HIGH COURTS
I&B MINISTER / MINISTRY
NSA
AJIT DOVAL
RAW
IB
CBI
NIA
ED
DEFENCE MINISTER/ MINISTRY
ALL 3 ARMED FORCE CHIEFS -- PLUS TOP CDS CHIEF
ALL DGPs OF INDIA
ALL IGs OF INDIA
COLLECTORS OF MAJOR CITIES OF INDIA
ALL CMs OF INDIA
ALL STATE GOVERNORS
EVERY MP OF INDIA
EVERY MLA OF INDIA
NCERT
EDUCATION MINISTRY/ MINISTER- CENTRE AND STATES
NITI AYOG
AMITABH KANT
PRESIDENT OF INDIA
VP OF INDIA
SPEAKER LOK SABHA
SPEAKER RAJYA SABHA
RSS
AVBP
VHP
MOHAN BHAGWAT
RAM MADHAV
SOLI BABY
FALI BABY
KATJU BABY
SALVE BABY
MOHANDAS PAI
RAJEEV CHANDRASHEKHAR
PGURUS
SWAMY
RAJIV MALHOTRA
DAVID FRAWLEY
STEPHEN KNAPP
WILLIAM DALRYMPLE
KONRAED ELST
FRANCOIS GAUTIER
NALIN KOHLI
GVL NARASIMHA RAO
SAMBIT PATRA
ASHOK PANDIT
ANUPAM KHER
KANGANA RANAUT
VIVEK AGNIHOTRI
MEENAKSHI LEKHI
SMRITI IRANI
PRASOON JOSHI
SWAPAN DASGUPTA
MADHU KISHWAR
SUDHIR CHAUDHARY
GEN GD BAKSHI
RSN SINGH
ARNAB GOSWAMI
NAVIKA KUMAR
ANAND NARASIMHAN
UDDHAV THACKREY
RAJ THACKREY
SHAZIA ILMI
CHANDA MITRA
SRI SRI RAVISHANKAR
SADGURU JAGGI VASUDEV
BABA RAMDEV
BECAUSE OF CORONAVIRUS LOCKDOWN MANY ILLEGAL ROHINGYAS AND MUSLIM BANGLADESIS ARE ESCAPING FROM NORTH KERALA..
ReplyDeleteWHEN QUESTIONED THEY DECLARE THAT THEY ARE BIHARIS..
https://twitter.com/IndusSpirit/status/1241725460309925890
ReplyDeleteHINDU TAMILS HAVE WOKEN UP..
https://timesofindia.indiatimes.com/city/thane/gujarat-cops-beat-man-going-for-last-rites/articleshow/74856076.cms
ReplyDeleteTRAGEDY KINGS !
IT IS A WAR AGAINST CORONAVIRUS OUT THERE
This comment has been removed by the author.
ReplyDeleteHistory is always written by victors.
ReplyDeleteI don't think they will ever show themselves in a bad light. All plunders and pillages will be legal in the name of their queen or God or whoever else..It will always be reforming the savages and bringing heathens the gospel ...Suhel Seth types will later boast about it.
Until you exhume it of course!
Dear sir,
ReplyDeleteIs tetanus injection (vaccine) safe during pregnancy? Please help
https://indianexpress.com/article/opinion/columns/tavleen-singh-solicitor-general-sc-migrants-tavleen-singh-6434989/
ReplyDeleteJUST WHO IS TAVEELN SINGH? WHAT IS HER INHERENT WORTH?..
SHE SIRED A BASTARD NAMED AATISH TASEER FROM A PAKISTANI JEW SALMAN TASEER ..
PEA BRAINED WOMAN TAVLEEN SINGH DOES NOT KNOW THAT TUSHAR MEHTA AS SOLICITOR GENERAL DERIVES POWERS FROM THE INDIAN PRESIDENT WHO HAS BEEN GRATED EXTREME SUBJECTIVE POWERS BY THE INDIAN CONSTITUTION...
WHY IS TUSHAR MEHTA CONTROVERSIAL?.
ON THE 28TH OF MAY 2020, HE RIGHTFULLY CALLED DETRACTORS OF THE GOVERNMENT 'PROPHETS OF DOOM' AS THEY REPRESENTED LABOUR AND SOCIAL ORGANISATIONS IN A SUO MOTU HEARING BY THE SUPREME COURT OF INDIA ON THE PLIGHT OF MIGRANT WORKERS DURING COVID-19 PANDEMIC LOCKDOWN IN INDIA ..
ILLEGAL COLLEGIUM JUDGES OF SUPREME COURT MUST NOT PLAY GOD ..
JUDICIARY IS IN CONTEMPT OF WE THE PEOPLE AND THE CONSTITUTION..'
'HEY MELORDS.. THERE IS NOTHING CALLED “CONTEMPT OF JUDGE”..
CONTEMPT OF COURT CAN BE USED ONLY INSIDE THE WEE COURT ROOM, WHEN THE COURT IS IN SESSION AND PROCEEDINGS ARE DELIBERATELY DISRUPTED ..
AND THAT TOO ONLY FOR A SPECIFIC CASE THE JUDGE IS RULING.. WITHIN THE PERIMETER OF THE SPECIFIC CASE BEING ARGUED ...TO PREVENT A BREAKDOWN OF THE SYSTEM-- WHERE SOMEONE DEFIES THE JUDGE AND UNDERMINES THE JUDGE’'S AUTHORITY REPEATEDLY . … NOT TO ASSUAGE THE MELORDs EGO..
IT CANNOT BE USED OUTSIDE THE COURT AND LENGTH AND BREADTH OF THE COUNTRY, UNDER LAND , UNDERWATER OR IN THE SKIES..
DEVELOPED NATIONS DON’T HAVE "CONTEMPT OF COURT " CLAUSE ANYMORE … …TRUTH CANNOT BE BRANDED AS VILIFICATION OR DEFAMATION ...... INDIA IS A DEMOCRACY..
BHARATMATA IS RACING TO BE THIS PLANETS NO 1 SUPERPOWER IN 13 YEARS --BEFORE THAT THE NEW WORLD ORDER WANTS INDIA TO IMPLODE USING AGENTS LIKE TAVLEEN SINGH..
WE ARE DONE WITH THE BOTTOM DREGS OF THE SCHOOL CEREBRAL BARREL BECOMING LAWYERS AND THEN BOTTOM DREGS OF THE LOSER LAWYER POOL BECOMING MELORD JUDGES, WHO ARE SO SQUEAMISH ( OUT OF INFERIORITY COMPLEX ) THAT THEY WILL SEND YOU TO JAIL FOR "CONTEMPT OF COURT " AT THE DROP OF A HAT.
JUDGES ARE IN CONTEMPT OF THE CONSTITUTION ITSELF ...
COLLEGIUM JUDICIARY IS NOT ALLOWED BY THE CONSTITUTION.. JUDGES CANNOT MAKE AND BREAK LAWS FOR THE WHOLE NATION VIA PIL ROUTE , FILED BY FOREIGN PAYROLL DESH DROHIS.
JUDGES HAVE BEEN IN CONTEMPT OF “WE THE PEOPLE”.. AN EXAMPLE IS 5 JUDGES LEGALISING ADULTERY FOR 1300 MILLION PEOPLE..IF THERE IS A REFERENDUM 99.99 % INDIANS WILL VOTE AGAINST THE RULING…
JUDGES ARE IN CONTEMPT OF THE CONSTITUTION ITSELF.. COLLEGIUM JUDICIARY IS NOT ALLOWED BY THE CONSTITUTION.. ..
THESE JUDGES ARE RESPONSIBLE FOR THE RED NAXAL CORRIDOR AND ETHNIC CLEANSING OF KASHMIRI HINDUS..
YOU CAN ABUSE THE ELECTED PM --BUT CANT POINT A TRUTHFUL FINGER AT A "LAWYER TURNED JUDGE "?.......
IT IS A DISGRACE THAT THE LAW MINISTER , ELECTED PM AND THE PRESIDENT ARE INDIFFERENT , CALLOUS AND COWARDLY ..
https://ajitvadakayil.blogspot.com/2020/01/we-people-are-done-with-illegal.html
Capt ajit vadakayil
..