Friday, February 28, 2020

WHAT ARTIFICIAL INTELLIGENCE CANNOT DO, a grim note to the top 100 intellectuals of this planet, Part 15 - Capt Ajit Vadakayil



THIS POST IS CONTINUED FROM PART 14, BELOW--








CAPT AJIT VADAKAYIL SAYS AI MUST MEAN “INTELLIGENCE AUGMENTATION” IN FUTURE ..

Let this be IA


OBJECTIVE AI CANNOT HAVE A VISION,
IT CANNOT PRIORITIZE,
IT CANNOT GLEAN CONTEXT,
IT CANNOT TELL THE MORAL OF A STORY,
IT CANNOT RECOGNIZE A JOKE, OR BE A JUDGE IN A JOKE CONTEST,
IT CANNOT DRIVE CHANGE,
IT CANNOT INNOVATE,
IT CANNOT DO ROOT CAUSE ANALYSIS,
IT CANNOT MULTI-TASK,
IT CANNOT DETECT SARCASM,
IT CANNOT DO DYNAMIC RISK ASSESSMENT,
IT IS UNABLE TO REFINE ITS OWN KNOWLEDGE INTO WISDOM,
IT IS BLIND TO SUBJECTIVITY,
IT CANNOT EVALUATE POTENTIAL,
IT CANNOT SELF-IMPROVE WITH EXPERIENCE,
IT CANNOT UNLEARN,
IT IS PRONE TO CATASTROPHIC FORGETTING,
IT DOES NOT UNDERSTAND THE BASICS OF CAUSE AND EFFECT,
IT CANNOT JUDGE SUBJECTIVELY TO VETO/ ABORT,
IT CANNOT FOSTER TEAMWORK DUE TO RESTRICTED SCOPE,
IT CANNOT MENTOR,
IT CANNOT BE CREATIVE,
IT CANNOT THINK FOR ITSELF,
IT CANNOT TEACH OR ANSWER STUDENTS' QUESTIONS,
IT CANNOT PATENT AN INVENTION,
IT CANNOT SEE THE BIG PICTURE,
IT CANNOT FIGURE OUT WHAT IS MORALLY WRONG,
IT CANNOT PROVIDE NATURAL JUSTICE,
IT CANNOT FORMULATE LAWS,
IT CANNOT FIGURE OUT WHAT GOES AGAINST HUMAN DIGNITY,
IT CAN BE FOOLED EASILY USING DECOYS WHICH CANNOT FOOL A CHILD,
IT CANNOT BE A SELF-STARTER,
IT CANNOT UNDERSTAND APT TIMING,
IT CANNOT FEEL,
IT CANNOT GET INSPIRED,
IT CANNOT USE PAIN AS FEEDBACK,
IT CANNOT GET EXCITED BY ANYTHING,
IT HAS NO SPONTANEITY TO MAKE THE BEST OUT OF A SITUATION,
IT CAN BE CONFOUNDED BY NEW SITUATIONS,
IT CANNOT FIGURE OUT GREY AREAS,
IT CANNOT GLEAN WORTH OR VALUE,
IT CANNOT UNDERSTAND TEAMWORK DYNAMICS,
IT HAS NO INTENTION,
IT HAS NO INTUITION,
IT HAS NO FREE WILL,
IT HAS NO DESIRE,
IT CANNOT SET A GOAL

IT CANNOT BE SUBJECTED TO THE LAWS OF KARMA

ON THE CONTRARY, IT CAN SPAWN FOUL AND RUTHLESS GLOBAL FRAUD ( CLIMATE CHANGE DUE TO CO2 ) WITH DELIBERATE BLACK BOX ALGORITHMS. THESE ARE JUST A FEW AMONG MORE THAN 60 CRITICAL INHERENT DEFICIENCIES.



HUMANS HAVE THINGS A COMPUTER CAN NEVER HAVE.. A SUBCONSCIOUS BRAIN LOBE, REM SLEEP WHICH BACKS UP BETWEEN RIGHT/ LEFT BRAIN LOBES AND FROM AAKASHA BANK, A GUT WHICH INTUITS, 30 TRILLION BODY CELLS WHICH HOLD MEMORY, A VAGUS NERVE, AN AMYGDALA, 73% WATER IN THE BRAIN FOR MEMORY, 10 BILLION MILES OF ORGANIC DNA MOBIUS WIRING, ETC.



SINGULARITY, MY ASS !





1
https://ajitvadakayil.blogspot.com/2019/08/what-artificial-intelligence-cannot-do.html
2
https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do.html
3
https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do_29.html
4
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do.html
5
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_4.html
6
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_25.html
7
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_88.html
8
https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_15.html
9
https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_94.html
10
https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do.html
11
https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_1.html
12
https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do.html
13
https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do_21.html
14
https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do_27.html
15
https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do_28.html
16
https://ajitvadakayil.blogspot.com/2020/03/what-artificial-intelligence-cannot-do.html



Ransomware attacks are happening in India and are not being reported. Cyber awareness training is crucial to detecting such attacks

In 2019, a British student, Zain Qaiser (24) from Barking, London, was jailed for more than six years at Kingston Crown Court for his ransomware attacks


A good defense for ISRO against ransomware is the use of whitelisting software that only allows specified programs to run on the organisation's computers and therefore blocks malware. If traveling, alert your IT department beforehand, especially if you’re going to be using public wireless Internet. 
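The whitelisting (allowlisting) idea is simple to sketch: compute a hash of each program and refuse to run anything not on an approved list. The file names and hash below are hypothetical stand-ins, purely to make the mechanism concrete:

```python
import hashlib

# Hypothetical allowlist: SHA-256 hashes of the only programs permitted to run.
# (The hash below is the well-known digest of an empty file, used as a stand-in.)
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path):
    """Return True only if the file's hash appears on the allowlist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES
```

A real deployment would use an OS-level enforcement tool (such as AppLocker on Windows) rather than a script, but the default-deny decision logic is the same.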

Make sure you use a trustworthy Virtual Private Network (VPN) when accessing public Wi-Fi.
A few months ago, US prosecutors were forced to drop 11 narcotics cases against six suspected drug dealers after crucial case files were lost in a ransomware infection

A few days ago, wool sales across Australia were halted after Talman, a major software supplier to the industry, was hit by a ransomware attack that encrypted its production databases. The attackers encrypted certain database files, rendering the whole system inoperative

The city of Del Rio in Texas was forced to revert to pen-and-paper systems after ransomware disabled its servers; and a lone hacker gained access to Capital One's secure network and more than 100 million customer accounts and credit card applications

Ransomware is software that locks down part or all of a computer or website until the owner pays a ransom, usually in difficult-to-trace cryptocurrency such as Bitcoin.

Ransomware attacks are typically carried out using a Trojan that is disguised as a legitimate file that the user is tricked into downloading or opening when it arrives as an email attachment. However, one high-profile example, the "WannaCry worm", travelled automatically between computers without user interaction

Ransomware is growing rapidly, not only among internet users but also in the IoT environment

Ransomware attacks are highly profitable and relatively simple for malicious actors to carry out.


Last year there were ransomware attacks on cities, hospitals, businesses and universities. Atlanta spent $2.6 million to restore its systems rather than pay the $52,000 ransom. Ransomware in 2019 cost the healthcare industry alone over $27 billion.


Ransomware is a form of malware in which rogue software code effectively holds a user's computer hostage until a "ransom" fee is paid. Ransomware often infiltrates a PC as a computer worm or Trojan horse that takes advantage of open security vulnerabilities.

It is possible for ransomware to spread over a network to your computer. It no longer infects just the mapped and hard drives of your computer system. Attacks nowadays can take the entire network down and result in business disruption.

Ransomware attacks will become increasingly common as long as there are vulnerable targets and insurance companies behind them. 

Particularly vulnerable are critical network infrastructure entities, like rural power generation and utility companies, small local telephone companies and other mission-critical infrastructure that possibly don’t have the cybersecurity protection that their big city peers might have.

New ransomware like CryptXXX not only encrypts all of your files, but also steals Bitcoins if they're present on your network, as well as stealing other sensitive data.

Ransomware is often spread through phishing emails that contain malicious attachments or through drive-by downloading. Drive-by downloading occurs when a user unknowingly visits an infected website and then malware is downloaded and installed without the user's knowledge.

Cyber criminals are launching ransomware attacks that specifically target industrial control systems (ICS). File-encrypting malware is being built to directly infect computer networks that control operations in manufacturing and utilities environments. Encrypted files are renamed with a random five-character file extension, while victims are presented with a ransom note with an email address to contact to negotiate a ransom to be paid in cryptocurrency.

In order to protect against ransomware attacks, it's recommended that ICS systems are segmented from the rest of the network, so even if a standard Windows machine is compromised, an attacker can't just move onto systems that control infrastructure. 

Organisations should also ensure that systems are regularly backed up and stored offline; and for ICS operations in particular, backups must include the last known good-configuration data to ensure a swift recovery.  


European law enforcement agency Europol's annual cybercrime report – the Internet Organised Crime Threat Assessment (IOCTA) – lists ransomware as the most widespread and financially damaging form of cyberattack.

Cyber criminals are becoming more efficient, picking and choosing their targets with the aim of causing the highest amount of damage possible to organisations in order to demand much higher ransoms. 

Ransom demands are often kept secret, but they now exceed one million euros. The Europol report warns there's a risk of cyber criminals deploying ransomware attacks as a means of pure sabotage.

The NotPetya attacks of 2017 showed how much damage can be done by a destructive cyberattack of this kind: in some cases it led to large companies having to almost entirely restore their networks from scratch, suffering large amounts of downtime and large financial costs as a result.

NotPetya looked like ransomware but the group behind it had no interest in receiving ransom payments, the motivation behind the attack was pure destruction. The target for this destruction was Ukraine, but the attack got out of control and spread around the world. 

A form of this ransomware attack emerged earlier this year. Named GermanWiper, the ransomware hit organisations across Germany with attacks that didn't encrypt files, but rewrote the files to destroy them.

Ultimately, it meant that even if a user paid the ransom, they wouldn't get their files back at all – unless they had offline back-ups.

Ransomware itself may have changed but the methods for distributing it have stayed the same over the last year: phishing emails and remote desktop protocols (RDPs) are the primary infection vectors of the malware.

Often, the attackers pushing ransomware are doing so with the aid of known vulnerabilities for which vendors have already issued security updates. Because of this, Europol stresses the importance of patching, especially when it comes to critical vulnerabilities.

The report notes that almost one million devices still haven't been patched against the powerful BlueKeep vulnerability, leaving networks open to attacks using the exploit.

The message from Europol is clear – ransomware and other cyberattacks won't be disappearing any time soon, especially if cyber criminals are able to take advantage of known vulnerabilities and old attacks.

In 2017, the WannaCry ransomware attack hit organizations in over 150 countries around the world, marking the beginning of a new era in cyberattack sophistication. Its success lay in its ability to move laterally through an organization in a matter of seconds while paralysing hard drives, and the incident went on to inspire multiple copycat attacks

Malware is malicious software, which - if able to run - can cause harm in many ways, including:--

causing a device to become locked or unusable
stealing, deleting or encrypting data
taking control of your devices to attack other organisations
obtaining credentials which allow access to your organisation's systems or services that you use
'mining' cryptocurrency
using services that may cost you money (e.g. premium rate phone calls).

Ransomware is a type of malware that prevents you from accessing your computer (or the data that is stored on it). The computer itself may become locked, or the data on it might be stolen, deleted or encrypted. Some ransomware will also try to spread to other machines on the network, such as the WannaCry malware that impacted the NHS in May 2017.

Occasionally malware is presented as ransomware, but after the ransom is paid the files are not decrypted. This is known as wiper malware. For these reasons, it's essential that you always have a recent offline backup of your most important files and data.

Make regular backups

The key action to take to mitigate ransomware is to ensure that you have up-to-date backups of important files; if so, you will be able to recover your data without having to pay a ransom.

Make regular backups of your most important files - it will be different for every organisation - and check that you know how to restore the files from the backup.

Ensure that a backup is kept separate from your network ('offline'), or in a cloud service designed for this purpose - our blog on offline backups in an online world provides useful additional advice for organisations.

Cloud syncing services (like Dropbox, OneDrive and SharePoint, or Google Drive) should not be used as your only backup. This is because they may automatically synchronise immediately after your files have been 'ransomwared', and then you'll lose your synchronised copies as well.

Make sure the device containing your backup (such as an external hard drive or a USB stick) is not permanently connected to your network and that you ideally have multiple copies. An attacker may choose to launch a ransomware attack when they know that the storage containing the backups is connected.
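As a minimal sketch of the backup discipline described above (directory names here are hypothetical), writing a timestamped archive per run keeps multiple generations instead of overwriting a single copy:

```python
import shutil, time, pathlib

def make_backup(src_dir, dest_dir):
    """Create a timestamped zip archive of src_dir inside dest_dir."""
    # dest_dir should live on storage that is disconnected once the
    # backup completes; keeping old archives gives you multiple copies.
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    # make_archive returns the full path of the archive it created
    return shutil.make_archive(str(dest / ("backup-" + stamp)), "zip", src_dir)
```

The important part is operational rather than the code: the destination must actually go offline after each run, and restores should be rehearsed before they are needed.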

Prevent malware from being delivered to devices

You can reduce the likelihood of malicious content reaching your network through a combination of:-

filtering to only allow file types you would expect to receive
blocking websites that are known to be malicious
actively inspecting content
using signatures to block known malicious code

These are typically done by network services rather than users' devices. Examples include:---

mail filtering (in combination with spam filtering) which can block malicious emails and remove executable attachments
intercepting proxies, which block known-malicious websites
internet security gateways, which can inspect content in certain protocols (including some encrypted protocols) for known malware
safe browsing lists within your web browsers which can prevent access to sites known to be hosting malicious content
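The "filtering to only allow file types you would expect to receive" idea can be sketched as a default-deny check on attachment names. The extension lists below are illustrative, not a complete policy:

```python
# Illustrative policy: block executable extensions, allow a small set of
# expected document types, and default-deny everything else.
BLOCKED_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".ps1", ".jar"}
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".xlsx", ".png", ".jpg", ".txt"}

def attachment_allowed(filename):
    name = filename.lower()
    # "invoice.pdf.exe"-style double extensions: judge by the final suffix
    suffix = "." + name.rsplit(".", 1)[-1] if "." in name else ""
    if suffix in BLOCKED_EXTENSIONS:
        return False
    return suffix in ALLOWED_EXTENSIONS
```

Note the default-deny stance: a file with no extension, or an unexpected one, is rejected rather than waved through.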

Public sector organisations are encouraged to subscribe to the NCSC Protective DNS service; this will prevent users from reaching known malicious sites.

Some ransomware attacks are deployed by attackers who have gained access to networks through remote access software like RDP. You should prevent attackers from being able to brute-force access to your networks through this (or similar) software by either:----

authenticating using Multi-Factor Authentication (MFA)
ensuring users have first connected through a VPN that meets our recommendations.

Prevent malware from running on devices

A 'defence in depth' approach assumes that malware will reach your devices. You should therefore take steps to prevent malware from running. The steps required will vary for each device type and OS, but in general you should look to use device-level security features such as:---

Centrally manage enterprise devices in order to either:-

• only permit applications trusted by the enterprise to run on devices using technologies including AppLocker, or

• only permit the running of applications from trusted app stores (or other trusted locations)

Consider whether enterprise antivirus or anti-malware products are necessary, and keep the software (and its definition files) up to date.

Provide security education and awareness training to your people, for example NCSC's Top Tips For Staff.

Disable or constrain macros and scripting environments, which means:--

• disabling (or constraining) scripting environments (e.g. PowerShell)

• disabling autorun for mounted media (and preventing the use of removable media if it is not needed)

• protecting your systems from malicious Microsoft Office macros

In addition, attackers can force their code to execute by exploiting vulnerabilities in the device. Prevent this by keeping devices well-configured and up to date.  


It is recommended that you:--

install security updates as soon as they become available in order to fix exploitable bugs in your products. The NCSC has produced guidance on how to manage vulnerabilities within your organisation
enable automatic updates for operating systems, applications, and firmware if you can
use the latest versions of operating systems and applications to take advantage of the latest security features
configure host-based and network firewalls, disallowing inbound connections by default

The NCSC's End User Devices Security Guidance provides advice on how to achieve this across a variety of platforms.

Limit the impact of infection and enable rapid response

If put in place, the following steps will ensure your incident responders can help your organisation to recover quickly.

Help prevent malware spreading across your organisation by following NCSC guidance on preventing lateral movement. This will help because attackers aim to move across machines on the network. This might include targeting authentication credentials or perhaps abusing built-in tools.

Use two-factor authentication (also known as 2FA) to authenticate users so that if malware steals credentials they can't be reused.

Ensure obsolete platforms (OS and apps) are properly segregated from the rest of the network 

Regularly review and remove user permissions that are no longer required, to limit malware's ability to spread. Malware can only spread to places on your network that infected users' accounts have access to.

System Administrators should avoid using their administrator accounts for email and web browsing, to avoid malware being able to run with their high levels of system privilege.

Architect your network so that management interfaces are minimally exposed (our blog post on protecting management interfaces may help).

Practice good asset management, including keeping track of which versions of software are installed on your devices so that you can target security updates quickly if you need to.

Keep your infrastructure patched, just as you keep your devices patched and prioritise devices performing a security-related function on your network (such as firewalls), and anything on your network boundary.

Develop an incident response plan and exercise it.

Steps to take if your organisation is already infected

If your organisation has already been infected with malware, these steps may help limit the impact of the infection.

Immediately disconnect the infected computers, laptops or tablets from all network connections, whether wired, wireless or mobile phone based.
Consider whether turning off your Wi-Fi and disabling any core network connections (including switches) might be necessary in a very serious case.
Reset credentials including passwords (especially for administrators) - but verify that you are not locking yourself out of systems that are needed for recovery.
Safely wipe the infected devices and reinstall the operating system.
Before you restore from a backup, verify that it is free from malware and ransomware. You should only restore from a backup if you are very confident that the backup is clean.
Connect devices to a clean network in order to download, install and update the operating system and all other software.
Install, update, and run antivirus software.
Reconnect to your network.
Monitor network traffic and run antivirus scans to identify if any infection remains.


Note: Files encrypted by most ransomware have no way of being decrypted by anyone other than the attacker. 

Don't waste your time or money on services that promise to do it. In some cases, security professionals have produced tools that can decrypt files due to weaknesses in the malware (which may be able to recover some data), but you should take precautions before running unknown tools on your devices.


2020 will likely see an increase in mobile threats, with hackers taking advantage of unsecured public Wi-Fi networks to tap into users’ web sessions and steal identity data and log-ins. 

Hackers will be ramping up attacks that target users’ smartphones and tablets, using malware hidden in ordinary-looking apps that users download unwittingly – just the first half of 2019 saw a 50% increase in mobile banking malware compared with 2018, for example, with users losing payment data, credentials and money to cyber attackers. 

Even public charging points can be stocked with malware by attackers. In 2020, the devices we use most frequently will become an increasingly popular attack surface.




Phishing will get even more sophisticated. Most cyber attacks today generally begin with a successful phishing attack. According to Microsoft, the number of inbound phishing emails more than doubled in 2019.   

In 2020, you can expect to see attackers continue to use targeted spear phishing attacks in big numbers, as open source intelligence (OSINT) tools like Maltego and the vast amount of personal data available on sites like LinkedIn and Facebook enable attackers to create ever more convincing phishing emails.

Worms make a comeback. Worms have always been popular because they self-replicate, helping hackers spread attacks without the need for user interaction. Following the drastic warning of the WannaCry ransomware cryptoworm attacks of 2017, which caused billions of dollars of damage, industry should have become a lot more vigilant.

However, 2020 presents a new playing field for worms in the shape of BlueKeep, a Microsoft flaw that affects computers and server operating systems and can let attackers remotely run malware or ransomware on vulnerable computers.

AI and ML. 2020 will also see attackers increasingly use artificial intelligence (AI) and machine learning (ML) to scale up attacks past general human ability to recognize or respond to them. AI and ML enable malicious programs to learn to attack things by themselves and make cyber attacks quicker and easier to carry out.

Using AI, an attacker can carry out multiple and repeated attacks on a network by programming a few lines of code to perform most of the work.  Hackers are turning to AI and using it to weaponize malware and attacks to counter the advancements made in cybersecurity solutions

The usual golden rules apply. IT and OT cybersecurity should be assessed at board level and managed as part of your critical business strategy and your organization’s corporate risk management. Don’t get carried away spending on new tech and shiny boxes; focus on finding a balance between spending on response and training as well as pre-emptive defense and detection.

Companies should work to combine ML with statistical analysis to predict attacks. ML and analytics can help you uncover cyber attackers’ underlying attack patterns, thereby enabling an AI system to predict attackers’ next moves, evaluate where a subsequent attack is most likely to occur and even determine which threat actors are the most likely originators.
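A statistical building block for this kind of prediction is simple anomaly detection: flag time buckets whose event counts sit far above the historical mean. This sketch uses a plain z-score; real systems layer ML models on top of such signals:

```python
import statistics

def anomalies(counts, threshold=3.0):
    """Return indices of buckets more than `threshold` standard
    deviations above the mean event count."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # guard against zero spread
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]
```

For example, hourly failed-login counts of [5, 6, 5, 7, 6, 5, 60] would flag the final hour at a threshold of 2.0.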

Never assume, always prepare

Never assume you will not be the victim of a big data breach or major hack – your company will always need to have managed threat management and intelligence in place as well as detection and response systems and services. Be careful out there.





NATURAL LANGUAGE PROCESSING:   A specific ML approach that helps computers understand,  interpret, and manipulate human language. It does this by breaking down language into shorter pieces and  discovering how the pieces fit together to create meaning.   Natural language processing enables commonly  used services such as Google Translate and chatbots.

NLP is not Neuro Linguistic Programming, a work of charlatans.
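The "breaking down language into shorter pieces" step can be made concrete with a toy tokenizer; the regular expression here is a simplification of what production NLP pipelines do:

```python
import re

def tokenize(text):
    """Split raw text into lowercase word tokens - the usual first step
    of an NLP pipeline."""
    return re.findall(r"[a-z0-9']+", text.lower())

def bigrams(tokens):
    """Adjacent token pairs - a crude way of seeing how pieces fit together."""
    return list(zip(tokens, tokens[1:]))
```

tokenize("Google Translate uses NLP!") yields ['google', 'translate', 'uses', 'nlp'], and the bigrams over those tokens begin to capture word order.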




Text mining (also referred to as text analytics) is an artificial intelligence (AI) technology that uses natural language processing (NLP) to transform the free (unstructured) text in documents and databases into normalized, structured data suitable for analysis or to drive machine learning (ML) algorithms. Text mining is the process of examining large collections of documents to discover new information or help answer specific research questions. 

Text mining identifies facts, relationships and assertions that would otherwise remain buried in the mass of textual big data. Once extracted, this information is converted into a structured form that can be further analyzed, or presented directly using clustered HTML tables, mind maps, charts, etc. Text mining employs a variety of methodologies to process the text, one of the most important of these being Natural Language Processing (NLP).
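The first move of text mining - turning free text into structured, countable data - can be sketched with a term-frequency table (the sample documents are invented):

```python
import re
from collections import Counter

def term_frequencies(documents):
    """Reduce a collection of free-text documents to structured
    (term, count) data."""
    counts = Counter()
    for doc in documents:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts

docs = ["Ransomware encrypts files.", "Backups defeat ransomware."]
# term_frequencies(docs).most_common(1) -> [('ransomware', 2)]
```

Real text-mining stacks add stemming, stop-word removal and entity extraction on top of this counting step.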

Cognitive Computing – describes platforms that combine artificial intelligence with all other aspects of cognitive cycle – like perception, reasoning and optimizing decisions – that collectively approaches human-level cognitive dynamics. They encompass disciplines such as machine learning, natural language processing (NLP) and computer vision among others.

The ultimate objective of NLP is to read, decipher, understand, and make sense of the human languages in a manner that is valuable

NLP is the ability to extract or generate meaning and intent from text in a readable, stylistically natural, and grammatically correct form.
  
Natural Language Understanding (NLU) – a sub-field of natural language processing (NLP).  It  deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem.  

Its application involves automated reasoning,  machine translation,  question answering,  news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis.
Natural Language Understanding (NLU) is the comprehension by computers of the structure and meaning of human language (e.g., English, Spanish, Japanese), allowing users to interact with the computer using natural sentences. 

In other words, NLU is Artificial Intelligence that uses computer software to interpret text and any type of unstructured data. NLU can digest a text, translate it into computer language and produce an output in a language that humans can understand.

NLU and natural language processing (NLP) are often confused, but they are different parts of the same process of natural language elaboration. Indeed, NLU is a component of NLP. More precisely, it is a subset of the understanding and comprehension part of natural language processing.

Natural language understanding interprets the meaning that the user communicates and classifies it into proper intents. For example, it is relatively easy for humans who speak the same language to understand each other, although mispronunciations, choice of vocabulary or phrasings may complicate this. 

NLU is responsible for this task of distinguishing what is meant by applying a range of processes such as text categorization, content analysis and sentiment analysis, which enables the machine to handle different inputs.
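Intent classification can be sketched with a keyword-overlap toy. The intents and trigger words below are hypothetical, and real NLU systems use trained models rather than word lists:

```python
# Hypothetical intents and trigger words, purely to make the idea concrete.
INTENTS = {
    "check_balance": {"balance", "account", "funds"},
    "transfer_money": {"transfer", "send", "pay"},
    "greeting": {"hello", "hi", "morning"},
}

def classify_intent(utterance):
    """Pick the intent whose trigger words overlap the utterance most,
    or None when nothing matches."""
    words = set(utterance.lower().replace("?", "").replace(".", "").split())
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    return best if INTENTS[best] & words else None
```

So "What is my account balance?" maps to check_balance, while an utterance containing no trigger words returns None.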

On the other hand, natural language processing is an umbrella term to explain the whole process of turning unstructured data into structured data. NLP helps technology to engage in communication using natural human language. As a result, we now have the opportunity to establish a conversation with virtual technology in order to accomplish tasks and answer questions.



Natural language generation (NLG) is the process of artificial intelligence interpreting data and presenting or displaying the data in a digestible, easily understood manner. These tools are used when processing large data sets, structured or unstructured, to create business actions based on the data.

It involves using databases to derive semantic intentions and convert them into human language. Natural-language generation (NLG) is a software process that transforms structured data into natural language. It can be used to produce long form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. 

It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out by a text-to-speech system.
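The simplest form of NLG is template-based: take one structured record and realise it as a sentence. The field names here are hypothetical; production systems add content selection and aggregation on top:

```python
def generate_report(record):
    """Realise one structured record as a human-readable sentence
    (template-based NLG)."""
    direction = "rose" if record["change_pct"] >= 0 else "fell"
    return (f"{record['region']} sales {direction} "
            f"{abs(record['change_pct']):.1f}% to {record['total']:,} units.")

# generate_report({"region": "Northern", "change_pct": 4.3, "total": 12500})
# -> 'Northern sales rose 4.3% to 12,500 units.'
```

This is the same move a business-reporting NLG tool makes at scale, just without the linguistic machinery that varies the phrasing.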


Automated NLG can be compared to the process humans use when they turn ideas into writing or speech.

It can be described in mathematical terms, or modeled in a computer for psychological research. NLG systems can also be compared to translators of artificial computer languages, such as decompilers or transpilers, which also produce human-readable code generated from an intermediate representation. 

Human languages tend to be considerably more complex and allow for much more ambiguity and variety of expression than programming languages, which makes NLG more challenging.

Turing Natural Language Generation (T-NLG) is a 17 billion parameter language model by Microsoft that outperforms the state of the art on many downstream NLP tasks.  T-NLG is a Transformer-based generative language model, which means it can generate words to complete open-ended textual tasks. 

In addition to completing an unfinished sentence, it can generate direct answers to questions and summaries of input documents. Generative models like T-NLG are important for NLP tasks since the goal is to respond as directly, accurately, and fluently as humans can in any situation. With T-NLG we can naturally summarize or answer questions about a personal document or email thread.



NLG addresses a different problem set than NLP/NLU, as the focus is on giving voice to data. Specifically, NLG is meant to extract actionable insights from vast amounts of data. We don’t normally think about voice-enabling data, but the output is similar to what text-to-speech produces. That is, the content is converted to a different format that in many cases is easier to consume, and makes the underlying content more valuable.

Whereas NLP/NLU works with unstructured data – voice – NLG works with data, which is highly structured. Rather than focus on the challenges of removing ambiguity from language and understanding intent, NLG applies AI to create narratives around the data, which can then be consumed either in text or voice form. NLG platforms can provide workers with a personal virtual analyst to manage various forms of data that otherwise require human effort.

NLG is tailor-made to help workers get the most from the data that touches almost every aspect of their jobs.


Scalability of AI -- NLG can process vast amounts of data, so the tools are there to generate new and richer insights that a manual effort could easily miss. Add to that the speed of AI, where the benefit is the ability to generate reports faster than humans can. Not only that, but reports can be dynamically updated on the fly, such as when tallying real-time data for a survey poll.

AI is a branch of computer science that aims to produce intelligent machines that have human characteristics such as learning, perception, recognition, planning, problem solving and reasoning.

Artificial intelligence technologies are utilized in pattern recognition systems that understand human speech and handwriting, and in machine translation systems.

AI studies human intelligence and aims at embedding intelligent behavior, learning, and adaptation capabilities into machines. It has capacities for self-learning. This is the process of simulating human intelligence in machines, and it includes different human skills like problem solving, reasoning, knowledge gaining, learning, planning, manipulation, and perception.

AI deals with intelligent behaviour, learning, and adaptation in machines. It is designed to exhibit features of human intelligence, including, for example, being able to understand questions via natural language (human speech), solve complex problems and present reasoning, and output answers using natural language.

Machine learning is the best tool so far to analyze, understand and identify a pattern in the data. Machine learning gives machines the ability to learn automatically by feeding them tons of data and allowing them to improve through experience. Thus, machine learning is the practice of getting machines to solve problems by gaining the ability to think.

The distinctive feature of AI is that the machine can go beyond its code and "learn" new things and thus outgrow its original programming.
.
Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning system.

Until about 2013, if you wanted to make a software system that could, say, recognise a cat in a photo, you would write logical steps. You’d make something that looked for edges in an image, and an eye detector, and a texture analyser for fur, and try to count legs, and so on, and you’d bolt them all together... and it would never really work. Conceptually, this is rather like trying to make a mechanical horse - it’s possible in theory, but in practice the complexity is too great for us to be able to describe. You end up with hundreds or thousands of hand-written rules without getting a working model.

With machine learning, we don’t use hand-written rules to recognise X or Y. Instead, we take a thousand examples of X and a thousand examples of Y, and we get the computer to build a model based on statistical analysis of those examples. Then we can give that model a new data point and it says, with a given degree of accuracy, whether it fits example set X or example set Y. Machine learning uses data to generate a model, rather than a human being writing the model. This produces startlingly good results, particularly for recognition or pattern-finding problems, and this is the reason why the whole tech industry is being remade around machine learning.
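The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration -- not a real production model: a nearest-centroid classifier that "learns" the two classes purely from example data, with no hand-written recognition rules.

```python
# A minimal sketch: instead of hand-written rules, build a statistical model
# from labelled examples and use it to classify a new data point.
# Toy data: class "X" values cluster near 1.0, class "Y" values near 10.0.

def train_centroids(examples):
    """Compute one centroid (mean) per class from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, value):
    """Assign a new point to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

examples = [(0.9, "X"), (1.1, "X"), (1.3, "X"),
            (9.8, "Y"), (10.1, "Y"), (10.4, "Y")]
model = train_centroids(examples)
print(classify(model, 1.2))   # a new data point near the 'X' examples
```

The human never wrote a rule saying what "X" looks like; the model was generated from the examples, which is the whole point of the paragraph above.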

Machine Learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.

Machine learning systems are not interchangeable, even in a narrow application like image recognition. You have to tune the structure of the system, sometimes just by trial and error, to be good at spotting the particular features in the data that you’re interested in, until you get to the desired degree of accuracy.

People aren’t doing the statistical analysis directly anymore - it’s being done by machines that generate models of great complexity and size, which are not straightforward to analyse.

One of the main ideas behind machine learning is that the computer can be trained to automate tasks that would be exhaustive or impossible for a human being. The clear break from traditional analysis is that machine learning can take decisions with minimal human intervention.

The idea behind machine learning is that the machine can learn without human intervention. The machine needs to find a way to learn how to solve a task given the data.   Machine Learning systems can learn on their own, but only by recognizing patterns in large datasets and making decisions based on similar situations.  Machine Learning is dependent on large amounts of data to be able to predict outcomes.

If there are few or no structured inputs to extract patterns, Machine Learning systems can’t solve a new problem that has no apparent relation to its prior knowledge.




Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions.
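The contrast above can be shown with a toy sketch. Both functions below are hypothetical illustrations on 1-D data: the supervised one learns a decision threshold from labelled examples, while the unsupervised one must discover the two groups from the raw values alone.

```python
# Supervised learning maps inputs to known labels; unsupervised learning must
# identify the pattern (here, two clusters) in an unlabelled stream of inputs.

def supervised_threshold(pairs):
    """Learn a decision threshold from labelled (value, label) examples."""
    lows = [v for v, lbl in pairs if lbl == "low"]
    highs = [v for v, lbl in pairs if lbl == "high"]
    return (max(lows) + min(highs)) / 2

def unsupervised_split(values):
    """Find a natural break: split at the largest gap between sorted values."""
    s = sorted(values)
    _, i = max((s[i + 1] - s[i], i) for i in range(len(s) - 1))
    return (s[i] + s[i + 1]) / 2

labelled = [(1, "low"), (2, "low"), (9, "high"), (10, "high")]
unlabelled = [1, 2, 9, 10]
print(supervised_threshold(labelled))   # learned from the labels
print(unsupervised_split(unlabelled))   # discovered from structure alone
```

Here both approaches land on the same split, but only the supervised version knew in advance what the two classes were called.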

Artificial intelligence is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by people. Using processes like neural network technology and machine learning, AI attaches an intelligent or smart capacity to machines. Modern healthcare, telecom, management, and financial industries of today all employ AI for many of their business processes.

The AI market is expected to become a $200 billion industry by 2025. Chatbots will power 88% of customer service by 2020.

Within AI and next to ML there are the fields of robotics, speech recognition, computer vision, etc., which are key building blocks towards enabling machine intelligence.

Government agencies such as public safety and utilities have a particular need for machine learning since they have multiple sources of data that can be mined for insights.

Given the complexity of financial fraud, and the speed at which cybercriminals adapt, a combination of supervised and unsupervised machine learning methods is needed to create a model with sufficient predictive capability and accuracy.

Machine learning models operate with tens of thousands of parameters and are far more effective in finding subtle correlations in data, which may be hidden for an expert system, or a human reviewer.

The biggest problem with machine learning systems is that we ourselves don't quite understand everything they're supposedly learning, nor are we certain they're learning everything they should or could be. We've created systems that draw mostly, though never entirely, correct inferences from ordinary data, by way of logic that is by no means obvious.

The whole point of machine learning is to infer the relationships between objects when, unlike the tides, it isn't already clear to human beings what those relationships are. Machine learning is put to use when linear regression or best-fit curves are insufficient -- when math can't explain the relationship.

But perhaps that should have been our first clue: If no mathematical correlation exists, then shouldn't any other kind of relationship we can extrapolate be naturally weaker? Does a relationship exist, for instance, between a certain tech journalist with a goatee and any recorded inferences from suspected, goatee-wearing watch-list terrorists? And if there does exist such a relationship, should it?
.
With an AI that can examine thousands of scans in a minute, the “dull drudgery” is left to machines, and the doctors are freed to concentrate on the parts of the job that require more complex, subtle, experience-based judgment of the best treatments and the needs of the patient.

Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms. It’s easy to miss the subtle difference with interpretability, but consider it like this: interpretability is about being able to discern the mechanics without necessarily knowing why. Explainability is being able to quite literally explain what is happening.
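A toy sketch makes the distinction concrete. The tiny linear model below (with made-up, hypothetical weights) is interpretable -- you can read its mechanics directly -- and it is also explainable, because every prediction can be decomposed into the terms that produced it.

```python
# A one-parameter linear model: output = w * x + b.
# Interpretability: the mechanics are visible (one weight, one bias).
# Explainability: each prediction can be broken into its contributing terms.

w, b = 2.0, 1.0                       # learned parameters (illustrative values)

def predict(x):
    return w * x + b

def explain(x):
    """Decompose a prediction into its contributing terms."""
    return {"input_contribution": w * x, "baseline": b, "prediction": predict(x)}

print(explain(3.0))
```

A deep network with millions of weights offers neither property for free, which is why the distinction above matters.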


Data gathering. All machine learning models require data as inputs. In today’s increasingly digitized world, data can be derived from various sources including user interactions on a website, collections of photo images and sensor recordings.

Data preparation. Data collected are rarely in a usable state as-is. Data often need to be cleaned, transformed and checked for errors before they are ready to be fed into a model.

Data preparation is the process of cleaning and transforming raw data prior to processing and analysis. It is an important step prior to processing and often involves reformatting data, making corrections to data and the combining of data sets to enrich data.

Data preparation is the act of manipulating (or pre-processing) raw data (which may come from disparate data sources) into a form that can readily and accurately be analysed, e.g. for business purposes.

Data preparation is the first step in data analytics projects and can include many discrete tasks such as loading data or data ingestion, data fusion, data cleaning, data augmentation, and data delivery.

The issues to be dealt with fall into two main categories:--

systematic errors involving large numbers of data records, probably because they have come from different sources;

individual errors affecting small numbers of data records, probably due to errors in the original data entry.

Data Preparation involves checking or logging the data in; checking the data for accuracy; entering the data into the computer; transforming the data, and developing and documenting a database structure that integrates the various measures.

There are a wide variety of ways to enter the data into the computer for analysis. Probably the easiest is to just type the data in directly. In order to assure a high level of data accuracy, the analyst should use a procedure called double entry. In this procedure you enter the data once.

Then, you use a special program that allows you to enter the data a second time and checks each second entry against the first. If there is a discrepancy, the program notifies the user and allows the user to determine the correct entry. This double entry procedure significantly reduces entry errors.
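The double-entry procedure above can be sketched as a small comparison routine. This is a hypothetical illustration of the idea, not any particular data-entry product: the two passes are compared position by position, and mismatches are surfaced for the user to resolve.

```python
# Compare two data-entry passes over the same records and flag every
# position where they disagree, so a human can decide the correct value.

def double_entry_check(first_pass, second_pass):
    """Return (index, first_value, second_value) for every mismatch."""
    discrepancies = []
    for i, (a, b) in enumerate(zip(first_pass, second_pass)):
        if a != b:
            discrepancies.append((i, a, b))
    return discrepancies

entry_1 = ["42", "17", "88", "105"]
entry_2 = ["42", "71", "88", "105"]   # "17" was mistyped as "71" this time
print(double_entry_check(entry_1, entry_2))   # [(1, '17', '71')]
```

A transposition error like "17" vs "71" is exactly the kind of slip a single pass would never catch, which is why the second entry pays for itself.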

Data preparation is a pre-processing step that involves cleansing, transforming, and consolidating data. In other words, it is a process that involves connecting to one or many different data sources, cleaning dirty data, reformatting or restructuring data, and finally merging this data to be consumed for analysis. More often than not, this is the most time consuming step of the entire analysis life cycle and the speed and efficiency of the data prep process directly impacts the time it takes to discover insights.

Cleaning up messy data involves tasks such as:--

Merging: Combine/enrich relevant data from different datasets into a new dataset
Appending: Combine two smaller (but similar) datasets into a larger dataset
Filtering: Rule-based narrowing of a larger dataset into a smaller dataset
Deduping: Remove duplicates based on specific criteria as defined
Cleansing: Edit or replace values, i.e. some records had “F” as gender while others had “Female”; alter to have “Female” for all records or set NULL values to a default value
Transforming: Convert missing values or derive a new column from existing column(s)
Aggregating: Roll up data to have summarized data for analysis
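Two of the tasks above -- cleansing inconsistent values and deduping -- can be sketched on plain dictionary records. The field names and values here are hypothetical, chosen to mirror the "F" vs "Female" example in the list.

```python
# Cleansing: normalise inconsistent values ('F' vs 'Female', NULLs to a default).
# Deduping: remove duplicate records based on a chosen key.

def cleanse_gender(records):
    """Map 'F'/'M' and missing values to a single canonical representation."""
    mapping = {"F": "Female", "M": "Male", None: "Unknown"}
    for r in records:
        r["gender"] = mapping.get(r["gender"], r["gender"])
    return records

def dedupe(records, key):
    """Remove duplicates, keeping the first record seen for each key value."""
    seen, result = set(), []
    for r in records:
        if r[key] not in seen:
            seen.add(r[key])
            result.append(r)
    return result

rows = [{"id": 1, "gender": "F"},
        {"id": 2, "gender": "Female"},
        {"id": 1, "gender": "F"},        # duplicate id
        {"id": 3, "gender": None}]
clean = dedupe(cleanse_gender(rows), key="id")
print([r["gender"] for r in clean])   # ['Female', 'Female', 'Unknown']
```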

Sampling & Partitioning: This involves breaking down the entire dataset into a smaller set of sample data to reduce the size of the training data. These samples are then used for training, testing, and validating the model. It is important to ensure that the sample set includes data covering various scenarios to ensure the model is trained accordingly and not end up with a biased or inaccurate model.
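The partitioning step can be sketched as follows. The 70/15/15 split ratios below are illustrative, not a recommendation; shuffling before splitting is what helps each subset cover the various scenarios mentioned above.

```python
# Shuffle the dataset, then carve it into training, validation, and test
# subsets. A fixed seed keeps the split reproducible between runs.
import random

def partition(dataset, train=0.7, val=0.15, seed=0):
    """Shuffle and split a dataset into (train, validation, test) lists."""
    data = list(dataset)
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * train)
    n_val = int(len(data) * val)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train_set, val_set, test_set = partition(range(100))
print(len(train_set), len(val_set), len(test_set))   # 70 15 15
```

For imbalanced data a plain random shuffle may not be enough, and a stratified split (sampling each class separately) is the usual remedy.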

Data becomes a precious resource only if it is cleansed, well-labeled, annotated, and prepared. Once the data goes through various stages of fitness tests it finally becomes qualified for further processing. The processing could take several forms - data ingested into BI tools or a CRM database, algorithms developed for analytical models, data management tools and so on.

More than 80% of a data scientist’s time is spent on preparing the data.

Data scientists should ideally be spending more of their time interacting with data, advanced analytics, training and evaluating the model, and deploy to production.

Only 20% of the time is left for the core modelling work. In order to overcome time constraints, organizations need to reduce the time taken (which varies depending on the complexity of the project) on cleansing, augmenting, labeling and enriching the data by leveraging expert solutions for data engineering, labeling, and preparation.

This brings us to the concept of "garbage in, garbage out" - the quality of the output is determined by the quality of the input.

The Data Preparation Process—
Here’s a quick brief of the data preparation process specific to machine learning models:--

Data extraction is the first stage of the data workflow: the retrieval of data from unstructured sources like web pages, PDF documents, spool files, emails, etc. The process of extracting information from the web is termed web scraping.

Data profiling is the process of reviewing existing data to improve the quality and bring structure through a format. This helps in assessing the quality and coherence to particular standards. Most machine learning models fail to work when the datasets are imbalanced and not well-profiled.

Data cleansing ensures data is clean, comprehensive and error-free, with accurate information. It helps in detecting outliers, not only in text and numeric fields but also irrelevant pixels in images. You can eliminate bias and obsolete information to ensure your data is clean.

Data transformation is transforming data to make it homogeneous. Data like addresses, names, and other field types are represented in different formats and data transformation helps in standardizing and normalizing this.

Data anonymization is the process of removing or encrypting personal information from the datasets to protect privacy.

Data augmentation is used to diversify the data available for your training models. Techniques such as cropping and padding create additional training examples for neural networks without collecting new information.

Data sampling identifies representative subsets of large datasets to analyze and manipulate data.

Feature engineering is a major determinant of whether a machine learning model turns out good or bad. To improve model accuracy you would combine datasets and consolidate them into one.

Data Labeling - an essential and integral part of data preparation. Labeling is simply assigning tags to a set of unlabeled data to make it more identifiable for predictive analysis.

These labels indicate whether the animal in the photo is a dog or a fox ..

Labeling helps the machine learning model to guess and predict a piece of unlabeled data as a result of feeding the model with millions of labeled data.

Poor data and good models are a bad combination that can ruin the efficiency and performance of the model you intend to build.

There are plenty of data preparation solutions available that can help you save time and achieve efficiency. Though there are self-service data preparation tools on the market, managed services have a slight edge over them, considering the scalability of the in-house infrastructure, leveraging vast data collection from disparate sources, compliance with various data norms and guidelines, and getting expert assistance as and when needed.


Data preprocessing is a data mining technique that involves transforming raw data into an understandable format. Real-world data is often incomplete, inconsistent, and/or lacking in certain behaviors or trends, and is likely to contain many errors.

Steps Of data preprocessing:--
Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies.
Data integration: using multiple databases, data cubes, or files.
Data transformation: normalization and aggregation.
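The first and third steps above can be sketched on a single toy column. Both functions are hypothetical illustrations: one fills missing values with the column mean, the other min-max normalises the result into the [0, 1] range.

```python
# Data cleaning: fill in missing values with the mean of the observed ones.
# Data transformation: min-max normalisation into [0, 1].

def fill_missing(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def normalize(values):
    """Min-max scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

column = [10.0, None, 30.0, 20.0]
filled = fill_missing(column)        # None -> mean of 10, 30, 20 = 20.0
print(normalize(filled))             # [0.0, 0.5, 1.0, 0.5]
```

Mean-filling is the simplest imputation strategy; real pipelines often use medians, per-group means, or model-based imputation instead.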

Data Generalization is the process of creating successive layers of summary data in an evaluational database. It is a process of zooming out to get a broader view of a problem, trend or situation. It is also known as rolling-up data.

The training data is used to make sure the machine recognizes patterns in the data, the cross-validation data is used to ensure better accuracy and efficiency of the algorithm used to train the machine, and the test data is used to see how well the machine can predict new answers based on its training.

Dataset is split into training and testing sets. The training dataset is used to build and train the model while the testing dataset, which is kept separate, is used to evaluate how well the model performs. It is important to assess the model on data it has not seen before in order to ensure that it has indeed learned something about the underlying structure of the data rather than simply “memorized” the training data.

Training data's output is available to model whereas testing data is the unseen data for which predictions have to be made.

In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data.  Such algorithms work by making data-driven predictions or decisions, through building a mathematical model from input data.

The data used to build the final model usually comes from multiple datasets. In particular, three data sets are commonly used in different stages of the creation of the model.

The model is initially fit on a training dataset,  that is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a neural net or a naive Bayes classifier) is trained on the training dataset using a supervised learning method (e.g. gradient descent or stochastic gradient descent). 

In practice, the training dataset often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), which is commonly denoted as the target (or label). The current model is run with the training dataset and produces a result, which is then compared with the target, for each input vector in the training dataset. 

Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.
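The run-compare-adjust loop described above can be sketched with the simplest possible model. This is an illustrative toy, assuming a one-parameter model y = w * x trained by gradient descent; the training pairs are made-up values generated by the rule y = 3x.

```python
# For each training pair: run the current model, compare the prediction with
# the target, and adjust the parameter in the direction that reduces the error.

def train(pairs, lr=0.01, epochs=200):
    w = 0.0                                  # initial parameter
    for _ in range(epochs):
        for x, target in pairs:
            prediction = w * x               # run the model on the input
            error = prediction - target      # compare result with the target
            w -= lr * error * x              # adjust the parameter
    return w

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # generated by y = 3x
w = train(data)
print(round(w, 3))   # converges close to 3.0
```

A neural network does exactly this, just with millions of parameters adjusted at once via backpropagation instead of a single weight.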

Successively, the fitted model is used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset provides an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g. the number of hidden units in a neural network).

Validation datasets can be used for regularization by early stopping: stop training when the error on the validation dataset increases, as this is a sign of overfitting to the training dataset. This simple procedure is complicated in practice by the fact that the validation dataset's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when overfitting has truly begun. 
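One simple ad-hoc rule of the kind mentioned above is "patience": stop once the validation error has failed to improve for a fixed number of consecutive checks. The error curve below is hypothetical, shaped to show the classic fall-then-rise of overfitting.

```python
# Early stopping: halt training once the validation error stops improving
# for `patience` consecutive epochs.

def early_stop(val_errors, patience=2):
    """Return the epoch at which training should stop."""
    best, since_best = float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, since_best = err, 0       # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch                # error kept rising: overfitting
    return len(val_errors) - 1

# Validation error per epoch: falls, then rises.
errors = [0.9, 0.6, 0.4, 0.35, 0.37, 0.41, 0.48]
print(early_stop(errors))   # stops at epoch 5
```

The `patience` parameter is exactly the kind of ad-hoc tolerance for fluctuating validation error that the paragraph above describes.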

Finally, the test dataset is a dataset used to provide an unbiased evaluation of a final model fit on the training dataset. If the data in the test dataset has never been used in training (for example in cross-validation), the test dataset is also called a holdout dataset.


 Fit and train models. This is the step where various types of ML models such as regression models, random forests and neural networks are built and applied to the training data. Models are iterated on by making small adjustments to their parameters in order to improve their performance with the goal of generating the most accurate predictions possible.

Evaluate model on the test dataset. The top performing model is used on the testing data to get a sense of how the model will perform on real world data it’s never seen before. Based on the results, further refinement and tuning of the model may be needed.

Make predictions! Once the model is finalized, it can begin to be used to answer the question it was designed for.

If the available data shows more women as homemakers and more men as computer programmers, and the mathematical model the machine is building actually takes that distribution into account, you end up with distributions you don't want to promote.

A 50/50 distribution between males and females reflects the desired result in the absence of any other information, but the distribution in the data has a higher proportion of women associated with one class than men.

The desired value judgments must be added in or results filtered by an external process, because the math doesn't tell the machine learning algorithm how to differentiate.
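One way to add that value judgment externally is to reweight the data. The sketch below is a hypothetical illustration: it computes per-class weights so that an imbalanced dataset behaves as if the classes were split 50/50 -- the target distribution itself is a human choice the math cannot supply.

```python
# Compute per-class weights so each class's weighted share equals a chosen
# target share (here 0.5, i.e. a 50/50 distribution).

def balancing_weights(labels, target=0.5):
    """Weight each class so its weighted share matches the target share."""
    counts = {}
    for lbl in labels:
        counts[lbl] = counts.get(lbl, 0) + 1
    total = len(labels)
    return {lbl: target / (n / total) for lbl, n in counts.items()}

# Hypothetical imbalanced data: 80 records of class "A", 20 of class "B".
labels = ["A"] * 80 + ["B"] * 20
weights = balancing_weights(labels)
print(weights)   # class B is upweighted, class A downweighted
```

After weighting, 80 x 0.625 = 50 and 20 x 2.5 = 50: the two classes now contribute equally to whatever the model learns.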

AI, machine learning and deep learning can produce dangerous results if unchecked by extrapolating outdated mores to predict the future. The result would be the perpetuation of unjust perceptions of the past. Any responsible AI technology must be aware of these limitations and take the steps to avoid them.


One concept that could compromise ML is known as poisoning the well.

This is where malicious actors take advantage of the machine learning process and taint the data pool from which these systems learn how to identify malicious code.

By inserting fraudulent code into the process, attackers can cause a system to generate false positives, undermining its intended functionality.
.
Google’s Android operating system is an open book. Its Play Store is a neutral platform, making it easy for tech vendors and app developers to post wares for download. However, some malicious actors have exploited the platform’s open structure through malware

Google announced that it has removed around 600 apps from the Play Store as a part of a protracted effort to clamp down on groups that have violated the company’s disruptive ad policy. The purged apps collectively account for over 4.5 billion downloads.

Once a computer is infected by malware, criminals can hurt consumers and enterprises in many ways.

Deepfakes can be leveraged in targeted malicious attacks to extort or manipulate perception and truth.

Microsoft was able to successfully implement ML (built into Windows Defense AV) to detect and mitigate Emotet malware. Emotet is a mature threat that is well known for its polymorphic capabilities, making it next to impossible to detect the next variant in the campaign using signature-based strategies.

Detection of Emotet using AI was achieved through modeling in a decision tree, related to probabilities, weighted components and calculations performed by the tool. It also included real-time cloud machine learning across the Windows Defender complex ML models. This enabled deep learning and real-time protection within an AI solution set to successfully detect and block Emotet.

AI must be adopted and implemented in a well-considered, deliberate fashion with an initial emphasis on manual execution and consistently capable outcomes. Once this is accomplished organizations can then implement components of orchestration and automation towards long-term AI goals.

The use of ML/AI today is best performed to streamline operations in a big data world that’s constantly changing. Shoddy implementation is a far greater threat than actual AI-empowered malware today.

As AI becomes a greater presence in the cybersecurity landscape, how organizations position and defend will separate survivors from victims. This is especially true for organizations that embrace the need to transform, leveraging AI to help wade through big data, contextualized modeling and decisions that need to be made to operationalize future security

Major banks throughout Asia were hit by hackers deploying a malicious code, named Silence Malware, on the banks’ networks; Facebook was the victim of an attack that exposed 540 million records about Facebook users and published them on Amazon's cloud computing service.

Security researchers found four billion records from 1.2 billion people on an unsecured Elasticsearch server; the UK’s Labour Party reported it was hit with two DDoS cyber-attacks in the run up to the country’s general election. 2019 was also the year of deepfakes, with 2% of deepfake videos on YouTube featuring corporate figures.

CYBERSECURITY SOLUTIONS THAT RELY ON ML USE DATA FROM PRIOR CYBER-ATTACKS TO RESPOND TO NEWER BUT SOMEWHAT SIMILAR RISKS.. FALSE FLAG ATTACKS POISON THE SYSTEM

In this way, an AI system powered by ML can leverage what it knows and understands about past attacks and threats to identify other attacks in the same vein or style.

Because hackers are consistently building upon older threats – including new abilities or tweaking previously used samples to build out a malware family – utilizing AI and ML systems to look out for and provide notification of emerging attacks could be beneficial to stemming the tide of zero-day threats.

In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

By the everyday usage definition of the phrase, all computer systems are reasoning systems in that they all automate some type of logic or decision. In typical use in the Information Technology field however, the phrase is usually reserved for systems that perform more complex kinds of reasoning.

For example, it is not used for systems that do fairly straightforward types of reasoning, such as calculating a sales tax or customer discount, but for systems that make logical inferences about a medical diagnosis or mathematical theorem.
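A reasoning system in this more complex sense can be sketched as a tiny forward-chaining engine: it applies if-then rules to known facts until no new conclusions can be drawn. The medical rules below are purely illustrative, not real diagnostic criteria.

```python
# A minimal forward-chaining inference engine: repeatedly fire rules of the
# form (premises -> conclusion) against the known facts until a fixed point.

def forward_chain(facts, rules):
    """Derive every conclusion reachable from the facts via the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)        # new inference made
                changed = True
    return facts

rules = [({"fever", "cough"}, "suspect_flu"),
         ({"suspect_flu", "short_of_breath"}, "order_chest_xray")]
conclusions = forward_chain({"fever", "cough", "short_of_breath"}, rules)
print(sorted(conclusions))
```

Note how the second rule fires only because the first one produced "suspect_flu" - a chain of inference, which is what separates this from a sales-tax calculation.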

Reasoning systems come in two modes: interactive and batch processing. Interactive systems interface with the user to ask clarifying questions or otherwise allow the user to guide the reasoning process. Batch systems take in all the available information at once and generate the best answer possible without user feedback or guidance.

Reasoning systems have a wide field of application that includes scheduling, business rule processing, problem solving, complex event processing, intrusion detection, predictive analytics, robotics, computer vision, and natural language processing.

Conventional, data-crunching artificial intelligence, which is the foundation of deep learning, isn’t enough on its own; the human-like reasoning of symbolic artificial intelligence is fascinating, but on its own, it isn’t enough either.

The unique hybrid combination of the two — numeric data analytics techniques that include statistical analysis, modeling, and machine learning, plus the explainability (and transparency) of symbolic artificial intelligence — is now termed “cognitive AI.”

A cognitive computer or system learns at scale, reasons with purpose and interacts with humans naturally. Rather than being explicitly programmed, these systems learn and reason from their interactions with human beings and their experiences with their environment. Cognitive computing overlaps with Artificial Intelligence and involves similar technologies to power cognitive applications.

Using cognitive computing systems helps in making better human decisions at work. Some of the applications of cognitive computing include speech recognition, sentiment analysis, face detection, risk assessment, and fraud detection.


Cognitive computing systems synthesize data from various information sources while weighing context and conflicting evidence to suggest suitable answers. To achieve this, cognitive systems include self-learning technologies using data mining, pattern recognition, and natural language processing (NLP) to understand the way the human brain works.

Key Attributes--
Adaptive: Cognitive systems must be flexible enough to understand the changes in the information. Also, the systems must be able to digest dynamic data in real-time and make adjustments as the data and environment change.

Interactive: Human-computer interaction (HCI) is a critical component in cognitive systems. Users must be able to interact with cognitive machines and define their needs as those needs change. The technologies must also be able to interact with other processors, devices and cloud platforms.

Iterative and stateful: These systems must also be able to identify problems by asking questions or pulling in additional data if the problem statement is incomplete. The systems do this by maintaining information about similar situations that have previously occurred.

Contextual: Cognitive systems must understand, identify and mine contextual data, such as syntax, time, location, domain, requirements, a specific user’s profile, tasks or goals. They may draw on multiple sources of information, including structured and unstructured data and visual, auditory or sensor data.

AI augments human thinking to solve complex problems. Cognitive computing focuses on mimicking human behavior and reasoning to solve complex problems. Cognitive computing tries to replicate how humans would solve problems, while AI seeks to create new ways to solve problems that can potentially be better than humans.

Cognitive computing is a subset of AI and although the underlying purpose for both these technologies is to simplify tasks, the difference lies in the way they approach tasks. AI is used to augment human thinking and solve complex problems

Cognitive computing describes technologies that are based on the scientific principles behind artificial intelligence and signal processing, encompassing machine self-learning, human-computer interaction, natural language processing, data mining and more.


Cognitive computing combines the power of computational processing with the power of human thinking. The cognitive data scientist has the ability to use, customize or develop cognitive approaches to solve problems or improve results.


PROFORMA:  Cognitive Computing            // Artificial Intelligence

Cognitive Computing focuses on mimicking human behavior and reasoning to solve complex problems.   //AI augments human thinking to solve complex problems. It focuses on providing accurate results.

CC simulates human thought processes to find solutions to complex problems.       //AI finds patterns to learn or reveal hidden information and find solutions.

CC simply supplement information for humans to make decisions.// AI is responsible for making decisions on their own minimizing the role of humans.

CC is mostly used in sectors like customer service, health care, industries, etc.        // AI is mostly used in finance, security, healthcare, retail, manufacturing, etc.


An advanced application of cognitive modeling is the creation of cognitive machines, which are AI programs that approximate some areas of human cognition.



Applications of Cognitive AI--
Smart IoT: This includes connecting and optimizing devices, data and the IoT.  But assuming we get more sensors and devices, the real key is what’s going to connect them.

AI-Enabled Cybersecurity: We can fight cyber-attacks with the use of data security encryption and enhanced situational awareness powered by AI. This will provide document, data, and network locking using smart distributed data secured by an AI key.

Content AI: A solution powered by cognitive intelligence continuously learns and reasons and can simultaneously integrate location, time of day, user habits, semantic intensity, intent, sentiment, social media, contextual awareness, and other personal attributes

Cognitive Analytics in Healthcare: The technology implements human-like reasoning software functions that perform deductive, inductive and abductive analysis for life sciences applications.

Intent-Based NLP: Cognitive intelligence can help a business become more analytical in their approach to management and decision making. This will work as the next step from machine learning and the future applications of AI will incline towards using this for performing logical reasoning and analysis.

What are the business benefits of cognitive computing?

Improved data collection and interpretation: Cognitive computing applications analyze patterns and apply machine learning to replicate human capabilities such as deduction, learning, perception and reasoning. Both structured and unstructured data can be collected from diverse sources, and in-depth cognitive analytics are applied to interpret the data. 

That information can then be used to improve visibility into internal processes, how your products and services are being received, what your customers’ preferences are and how best to build their loyalty.

Troubleshooting and error detection: By applying cognitive concepts to a robust technological environment, cognitive computing can help you more quickly and accurately identify issues in business processes and uncover opportunities for solutions.

More informed decision-making: Through its data collection and analysis capabilities, cognitive computing allows for more informed, strategic decision-making and business intelligence. This can lead to more efficient business processes, smarter financial decisions, and overall improved efficiency and cost savings.

Improved customer retention: Cognitive computing sets the stage for a more helpful, informed customer-to-technology experience, improving customer interactions. Its ability to interact with and understand and learn from humans greatly improves overall customer retention and satisfaction.

Cognitive computing is a platform that represents a new era of computing based on its ability to interact in natural language, process vast amounts of disparate forms of big data and learn from each interaction. The cognitive era is a merger of the immense strengths of computers with the current capabilities of their human operators.



Cognitive analytics is considered a cutting-edge framework that converses in natural language and helps experts make better decisions by understanding the complexities of big data. In the present situation, most of the data obtained is unstructured, for example images, videos, natural language and symbols.

Cognitive analytics, with the assistance of various technologies such as natural language processing, machine learning and automated reasoning, translates unstructured data to sense, infer and predict the best solution.

Cognitive analytics approaches information in a unique way, revealing patterns, connections and juxtapositions of unexpected insights. These processes will help bring out possibilities and will be managed by autonomous, self-learning platforms. The self-learning pattern of the cognitive enterprise will pave the way for leveraging multifaceted technologies such as the Internet of Things.



The Internet of Things (IoT) is a system of interrelated computing devices and mechanical and digital machines that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.


The definition of the Internet of things has evolved due to the convergence of multiple technologies, real-time analytics, machine learning, commodity sensors, and embedded systems. Traditional fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation), and others all contribute to enabling the Internet of things. 



In the consumer market, IoT technology is most synonymous with products pertaining to the concept of the "smart home", covering devices and appliances (such as lighting fixtures, thermostats, home security systems and cameras, and other home appliances) that support one or more common ecosystems, and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers.

Cognitive analytics is a field of analytics that tries to mimic the human brain by drawing inferences from existing data and patterns, drawing conclusions based on existing knowledge bases, and then inserting these back into the knowledge base for future inferences - a self-learning feedback loop.
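That self-learning feedback loop can be sketched in a few lines. This is a toy illustration with invented facts, not a real cognitive-analytics engine: each pass draws new inferences from the knowledge base and inserts them back until nothing new appears.

```python
# Facts are (subject, relation, object) triples. The starting facts and
# the single inference rule are made up for illustration.
knowledge_base = {("cat", "is_a", "animal"), ("animal", "drinks", "water")}

def infer_once(kb):
    """One pass of a transitive rule: if X is_a Y and Y drinks Z, then X drinks Z."""
    new_facts = set()
    for x, r1, y in kb:
        if r1 != "is_a":
            continue
        for y2, r2, z in kb:
            if y2 == y and r2 == "drinks":
                new_facts.add((x, "drinks", z))
    return new_facts

# Feedback loop: inferred facts go back into the knowledge base,
# and inference repeats until the base stops growing.
while True:
    new = infer_once(knowledge_base) - knowledge_base
    if not new:
        break
    knowledge_base |= new
```

After the loop, the base contains the inferred fact that cats drink water, even though it was never stated directly.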

When you harness the power of analytics, automation, and artificial intelligence (AI), you can uncover hidden relationships from vast amounts of data. Implementing the right strategy and technology will balance speed, cost, and quality to deliver measurable business value. 

Deep Reasoning is the field of enabling machines to understand implicit relationships between different things. For example, consider the following: “all animals drink water. Cats are animals”. Here, the implicit relationship is that all cats drink water, but that was never explicitly stated. 

Turns out humans are really good at this kind of relational reasoning and understanding how different things relate to one another, but it doesn’t come so easily to computers which operate on strict, explicit rules. Deep reasoning allows AI to understand abstract relationships between different ‘things’.

A “relation network” module can easily be plugged into a deep learning model to give it relational reasoning capabilities.

This was done using three networks:---
A Long-Short-Term-Memory (LSTM) network for processing the question
A convolutional neural network (CNN) for processing the image
A Relation Network (RN) to understand how the different objects relate to each other.
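The core idea of the Relation Network component can be sketched without any deep learning machinery: a relation function g is applied to every pair of object representations and the results are summed, so the output depends on pairwise relations rather than on object order. In the real model g is a learned neural network; here it is a fixed toy distance function on invented scene coordinates.

```python
from itertools import permutations

def relation_network(objects, g):
    """Aggregate pairwise relations: RN(O) = sum over i != j of g(o_i, o_j)."""
    return sum(g(a, b) for a, b in permutations(objects, 2))

# Toy objects: (x, y) positions of items in a scene.
scene = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]

# Toy relation: Euclidean distance between a pair of objects.
def g(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

total_relatedness = relation_network(scene, g)
```

Because the relations are summed over all pairs, reordering the objects leaves the output unchanged, which is exactly the property that lets the module reason about "things" irrespective of how they were listed.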

Key technology components were at the core of the wildly successful NASA Mars Rover’s mission. Alone and 150 million miles from Earth, the rover was able to successfully adapt to conditions without direct instruction. After a dust storm, it taught itself to rotate its solar panels and shake off accumulated dust blocking essential solar ray absorption. Then it taught itself to correlate sensory evidence with mission objectives to build the first practical weather model of another planet.

Artificial general intelligence (AGI) is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AI system could find a solution.

While the benefits of AGI are obvious, its eventual transition to ASI may create machines that humans may struggle to control. Humans will gain as long as their natural intelligence controls the intelligence of machines.


  1. SOCIAL MEDIA MINING IS THE PROCESS OF OBTAINING BIG DATA FROM USER-GENERATED CONTENT ON SOCIAL MEDIA SITES AND MOBILE APPS IN ORDER TO EXTRACT PATTERNS, FORM CONCLUSIONS ABOUT USERS, AND ACT UPON THE INFORMATION, OFTEN FOR THE PURPOSE OF MAKING MONEY BY SELLING IT TO MALICIOUS SPY AGENCIES.

    THE FACEBOOK–CAMBRIDGE ANALYTICA DATA SCANDAL WAS A MAJOR POLITICAL SCANDAL IN EARLY 2018 WHEN IT WAS REVEALED THAT CAMBRIDGE ANALYTICA HAD HARVESTED THE PERSONAL DATA OF MILLIONS OF PEOPLE'S FACEBOOK PROFILES WITHOUT THEIR CONSENT..

    FACEBOOK REGULARLY COLLECTS IMMENSE VOLUMES OF DATA ON ITS USERS BY TRAWLING THROUGH ADS THAT WERE CLICKED, UPDATES ON PROFILES, SHOWS WATCHED, AND HOLIDAYS, FILING ALL THAT DATA AWAY IN ORDER TO “HELP ADVERTISERS REACH PEOPLE…INTERESTED IN THEIR PRODUCTS, SERVICES, AND CAUSES”.

    COMPANIES LIKE GOOGLE SELL DATA ALL THE TIME

    SOCIAL MEDIA MINING REQUIRES HUMAN DATA ANALYSTS AND AUTOMATED SOFTWARE PROGRAMS TO SIFT THROUGH MASSIVE AMOUNTS OF RAW SOCIAL MEDIA DATA IN ORDER TO DISCERN PATTERNS AND TRENDS RELATING TO SOCIAL MEDIA USAGE, ONLINE BEHAVIOURS, PERSONAL CHOICES , SOLIDARITY PREFERENCES, SHARING OF CONTENT, CONNECTIONS BETWEEN INDIVIDUALS, ONLINE BUYING BEHAVIOUR, AND MORE.
    THESE PATTERNS AND TRENDS ARE OF INTEREST TO KOSHER COMPANIES, HOSTILE GOVERNMENTS AND NOT-FOR-PROFIT ORGANIZATIONS, AS THESE ORGANIZATIONS CAN USE THESE PATTERNS AND TRENDS TO DESIGN THEIR STRATEGIES OR INTRODUCE NEW PROGRAMS, NEW PRODUCTS, PROCESSES OR SERVICES.

    SOCIAL MEDIA MINING USES A RANGE OF BASIC CONCEPTS FROM COMPUTER SCIENCE, DATA MINING, MACHINE LEARNING AND STATISTICS. FACEBOOK/ TWITTER MINERS DEVELOP ALGORITHMS SUITABLE FOR INVESTIGATING MASSIVE FILES OF SOCIAL MEDIA DATA.

    SOCIAL MEDIA MINING IS BASED ON THEORIES AND METHODOLOGIES FROM SOCIAL NETWORK ANALYSIS, NETWORK SCIENCE, SOCIOLOGY, ETHNOGRAPHY, OPTIMIZATION AND MATHEMATICS. IT ENCOMPASSES THE TOOLS TO FORMALLY REPRESENT, MEASURE AND MODEL MEANINGFUL PATTERNS FROM LARGE-SCALE SOCIAL MEDIA DATA

    WEB DATA MINING CAN LEND INFORMATION OUT TO OTHER BUSINESSES FOR MONEY. USERS ARE UNAWARE OF HOW THE INFORMATION COLLECTED ABOUT THEM IS BEING USED. ... HENCE, IT IS UNETHICAL TO USE WEB DATA MINING AS PEOPLE'S PRIVACY IS VIOLATED..

    SOCIAL MEDIA ALSO PROVIDES A PERFECT SANDBOX FOR ETHNOGRAPHIES AND STUDYING PEOPLE FOR RELIGIOUS CONVERSIONS OR EVEN HOMOSEXUALITY / DRUG SELLER GROUPS TO HIJACK THEM..

    IN USA SOCIAL MEDIA SITES INCLUDING YOUTUBE AND FACEBOOK PLAYED A SIGNIFICANT ROLE IN RAISING FUNDS AND GETTING CANDIDATES’ MESSAGES TO VOTERS, AND THE DATABASE COLLECTED FROM FACEBOOK ALLOWED CAMPAIGNS TO “IDENTIFY POSSIBLE SWING VOTERS AND CRAFT MESSAGES MORE LIKELY TO RESONATE.”

    PRISM IS A CODE NAME FOR A PROGRAM UNDER WHICH THE UNITED STATES NATIONAL SECURITY AGENCY (NSA) COLLECTS INTERNET COMMUNICATIONS FROM VARIOUS U.S. INTERNET COMPANIES..

    PRISM IS A TOOL USED BY THE US NATIONAL SECURITY AGENCY (NSA) TO COLLECT PRIVATE ELECTRONIC DATA BELONGING TO USERS OF MAJOR INTERNET SERVICES LIKE GMAIL, FACEBOOK, TWITTER ETC

    EDWARD SNOWDEN, AN INTELLIGENCE CONTRACTOR FORMERLY EMPLOYED BY THE NSA/ CIA CONFESSED RESPONSIBILITY FOR LEAKING THE PRISM DOCUMENTS.

    NSA PROGRAMS COLLECT TWO KINDS OF DATA: METADATA AND CONTENT. METADATA IS THE SENSITIVE BYPRODUCT OF COMMUNICATIONS, SUCH AS PHONE RECORDS THAT REVEAL THE PARTICIPANTS, TIMES, AND DURATIONS OF CALLS; THE COMMUNICATIONS COLLECTED BY PRISM INCLUDE THE CONTENTS OF EMAILS, CHATS, VOIP CALLS, CLOUD-STORED FILES, AND MORE.


      “Facebook’s internal purpose, whether they state it publicly or not, is to compile perfect records of private lives to the maximum extent of their capability, and then exploit that for their own corporate enrichment. And damn the consequences,” ----------“The more Google knows about you, the more Facebook knows about you, the more they are able ... to create permanent records of private lives, the more influence and power they have over us”--------- “There is no good reason why Google should be able to read your email. There is no good reason why Google should know the messages that you’re sending to your friend. Facebook shouldn’t be able to see what you’re saying when you’re writing to your mother.” --------- “There is a class led by Mark Zuckerberg that is moving toward the maximization of technological power and influence that can be applied to society because they believe they can profit by it or, rightly or wrongly, they can better use the influence that their systems provide to direct the world into a better direction. ... And then you have this other fork in the road where there are people ... who go, ‘The advance of technology is inevitable and technology can do very good things for the world, but we need to understand that there must be limits on how that technological power and influence can be applied.’” --------- Edward Snowden.

      SNOWDEN ALSO POINTED OUT THAT THE FOURTH AMENDMENT — WHICH PROTECTS CITIZENS FROM SEARCHES UNLESS LAW ENFORCEMENT HAS A WARRANT OR PROBABLE CAUSE — ONLY APPLIES TO GOVERNMENT, NOT TO COMPANIES. SO WHILE THE FBI MIGHT NEED A WARRANT TO PROBE YOUR INBOX, THERE’S NO CONSTITUTIONAL BARRIER TO A COMPANY LIKE FACEBOOK SEARCHING AND RETRIEVING PEOPLE’S PRIVATE INFORMATION WITHOUT A JUDGE’S APPROVAL.

      RECENTLY, COMPANIES SUCH AS FACEBOOK, GOOGLE, AND AMAZON HAVE COME UNDER FIRE BY REGULATORS FOR THEIR PERCEIVED NEGATIVE EFFECTS ON SOCIETY — FROM ALLEGED MONOPOLISTIC PRACTICES TO DATA BREACHES.

      Capt ajit vadakayil
      ..


With machine learning, developers can’t be sure what discriminatory features the system might learn. Absent market forces, unless companies are compelled to be transparent about the development and use of opaque technology in domains where fairness matters, it’s not going to happen.

Machine learning enables systems to learn patterns from data and subsequently improve from experience. It is an interdisciplinary field that includes information theory, control theory, statistics, and computer science.

As it gathers and sorts more information, machine learning constantly gets better at identifying types and forms of data with little or no hard coded rules. For example, through pattern recognition, machine learning will increase the accuracy of identifying specific objects or images.

Multi-Task learning is a sub-field of Machine Learning that aims to solve multiple different tasks at the same time, by taking advantage of the similarities between different tasks. 

AI CANNOT MATCH HUMAN REASONING

At a high level, Machine Learning is the ability to adapt to new data independently and through iterations.  Applications learn from previous computations and transactions and use “pattern recognition” to produce reliable and informed results. 

To better understand the uses of Machine Learning, consider some instances where Machine Learning is applied: the self-driving Google car; cyber fraud detection; and, online recommendation engines from Facebook, Netflix, and Amazon. Machines can enable all of these things by filtering useful pieces of information and piecing them together based on patterns to get accurate results.


Typical results from Machine Learning applications, whether we notice them or not, include web search results, real-time ads on web pages and mobile devices, email spam filtering, network intrusion detection, and pattern and image recognition. All of these are by-products of using Machine Learning to analyze massive volumes of data.
.
Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. 

The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
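This "search for the program that matches the examples" paradigm can be illustrated on a deliberately tiny scale. The pass/fail examples and the threshold search space below are invented for illustration; real systems search vastly larger program spaces.

```python
# Input-output examples: exam scores and the expected verdict.
examples = [(12, "fail"), (35, "fail"), (52, "pass"), (80, "pass")]

def make_program(threshold):
    """A candidate 'program': pass if the score reaches the threshold."""
    return lambda score: "pass" if score >= threshold else "fail"

def errors(program):
    """How many examples does this candidate program get wrong?"""
    return sum(1 for x, expected in examples if program(x) != expected)

# Search the space of all threshold programs from 0 to 100 for the one
# that most closely reproduces the expected outputs.
best_threshold = min(range(101), key=lambda t: errors(make_program(t)))
learned = make_program(best_threshold)
```

The engineer never specified the rule; supplying enough examples let the search identify a program that reproduces them.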

For example, if you give a machine learning program many photos of pregnancy ultrasounds together with labels identifying the gender, it is likely to learn to identify the gender from new ultrasounds in the future. ML programs compare different pieces of information to find common patterns and come up with correct results.
.
Overall, ML is a learning process, which the machine can achieve on its own without being explicitly programmed to do. Machine learning involves computer learning from experience.

AI involves a computer executing a task a human could do. Machine learning involves the computer learning from its experience and making decisions based on the information. While the two approaches are different, they are often used together to achieve many goals in different industries.

While there is nothing inherently wrong with Machine Learning techniques, the main caveat for a successful Machine Learning outcome is sufficient and representative historical data for the machine to learn from.

For example, if an AI model is learning to recognize chairs and has only seen standard dining chairs that have four legs, the model may believe that chairs are defined only by four legs. This means if the model is shown, say, a desk chair that has one pillar, it will not recognize it as a chair.

Machine learning is predicated on learning from data, so having the right quantity and quality is essential. Security leaders should ask questions about their data sources to optimize their machine learning deployments.

Machine learning is all about assigning a task and letting a computer decide the most efficient way to do it. Because computers don’t truly understand the task, it’s easy to end up with a computer “learning” how to solve a different problem from the one you wanted.

With machine learning, a computer isn’t programmed to perform a specific task. Instead, it’s fed data and evaluated on its performance at the task.

An elementary example of machine learning is image recognition. Let’s say we want to train a computer program to identify photos that have a dog in them. We can give a computer millions of images, some of which have dogs in them and some don’t. The images are labeled whether they have a dog in them or not. The computer program “trains” itself to recognize what dogs look like based on that data set.
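A toy version of this training idea, with made-up numeric features standing in for real images (actual image recognition would learn from pixels with a neural network): a nearest-centroid classifier "trains" itself purely from the labeled examples.

```python
# Each "image" is reduced to two invented features; labels say whether
# a dog is present. The classifier learns one centroid per label.
labeled_images = [
    ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog"), ((0.7, 0.7), "dog"),
    ((0.1, 0.2), "no_dog"), ((0.2, 0.1), "no_dog"), ((0.3, 0.3), "no_dog"),
]

def train(data):
    """Compute the mean feature vector (centroid) of each label."""
    centroids = {}
    for label in {lbl for _, lbl in data}:
        points = [f for f, lbl in data if lbl == label]
        centroids[label] = tuple(sum(c) / len(points) for c in zip(*points))
    return centroids

def predict(centroids, features):
    """Assign the label whose centroid is nearest to the features."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

model = train(labeled_images)
```

No rule for "what a dog looks like" was ever coded; the decision boundary falls out of the labeled data.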

The interpretability of a machine learning model is essential for gaining insight into model behavior. Understanding the rationale behind the model's predictions would certainly help users decide when to trust or not to trust their predictions which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model.

Error in Machine Learning is the difference in the expected output and the predicted output of the model. It is a measure of how well the model performs over a given set of data.

For binary classification problems, there are two primary types of errors. Type 1 errors (false positives) and Type 2 errors (false negatives). It's often possible through model selection and tuning to increase one while decreasing the other, and often one must choose which error type is more acceptable.
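Counting the two error types directly, using the definitions above (the labels are invented for illustration):

```python
def error_counts(y_true, y_pred):
    """Return (Type 1, Type 2) error counts for binary labels 0/1."""
    type1 = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    type2 = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    return type1, type2

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]

fp, fn = error_counts(y_true, y_pred)
```

Tuning a decision threshold typically trades one count against the other, which is why a practitioner must decide which error type is more acceptable for the task.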


In supervised learning applications in machine learning and statistical learning theory, generalization error (also known as the out-of-sample error) is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data. 

Because learning algorithms are evaluated on finite samples, the evaluation of a learning algorithm may be sensitive to sampling error. As a result, measurements of prediction error on the current data may not provide much information about predictive ability on new data. 

Generalization error can be minimized by avoiding overfitting in the learning algorithm. The performance of a machine learning algorithm is measured by plots of the generalization error values through the learning process, which are called learning curves.






Error (statistical error) describes the difference between a value obtained from a data collection process and the 'true' value for the population. The greater the error, the less representative the data are of the population. Data can be affected by two types of error: sampling error and non-sampling error.


In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation".

There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency.  

Because missing data can create problems for analyzing data, imputation is seen as a way to avoid pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. 

Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data
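A minimal sketch of the simplest form of imputation: each missing value (represented here as None) is replaced with the mean of the observed values, so no case has to be discarded. More careful methods such as regression or multiple imputation exist; this shows only the basic idea on invented data.

```python
def mean_impute(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

# A column of ages with two missing entries.
ages = [25, None, 35, 40, None]
imputed = mean_impute(ages)
```

After imputation the data set is complete and can be analysed with standard techniques, at the cost of slightly understating the true variability of the column.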

Feature Scaling or Standardization: It is a step of Data Pre Processing which is applied to independent variables or features of data. It basically helps to normalise the data within a particular range. Sometimes, it also helps in speeding up the calculations in an algorithm
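Standardization can be shown in a few lines: each value is rescaled to zero mean and unit variance, so features measured on very different scales contribute comparably. The salary figures are invented.

```python
def standardize(values):
    """Z-score scaling: subtract the mean, divide by the standard deviation."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = variance ** 0.5
    return [(v - mean) / std for v in values]

salaries = [30_000.0, 50_000.0, 70_000.0]
scaled = standardize(salaries)
```

The scaled column is centred on zero, which is what speeds up gradient-based training algorithms in practice.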

In machine learning, a hyperparameter is a parameter whose value is set before the learning process begins. ... Given these hyperparameters, the training algorithm learns the parameters from the data
By contrast, the values of other parameters are derived via training.

Hyperparameters can be classified as model hyperparameters, that cannot be inferred while fitting the machine to the training set because they refer to the model selection task, or algorithm hyperparameters, that in principle have no influence on the performance of the model but affect the speed and quality of the learning process. An example of the first type is the topology and size of a neural network. An example of the second type is learning rate or mini-batch size.
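The distinction can be made concrete with a tiny gradient-descent sketch: the learning rate and epoch count are hyperparameters fixed before training begins, while the weight w is a parameter the training algorithm derives from the data. The data set is made up and follows y = 2x.

```python
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

learning_rate = 0.01   # hyperparameter: chosen by the engineer, not learned
epochs = 500           # hyperparameter: also fixed in advance

w = 0.0                # parameter: learned from the data
for _ in range(epochs):
    # Gradient of mean squared error for the model y_hat = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad
```

Changing the learning rate changes how fast (or whether) w converges, but w itself is never set by hand, which is exactly the parameter/hyperparameter split described above.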

Model selection is the process of choosing between different machine learning approaches - e.g. SVM, logistic regression, etc - or choosing between different hyperparameters or sets of features for the same machine learning approach - e.g. deciding between the polynomial degrees/complexities for linear regression..  

Model selection is the task of selecting a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection.

Model selection is the process of choosing one among many candidate models for a predictive modeling problem.

There may be many competing concerns when performing model selection beyond model performance, such as complexity, maintainability, and available resources.

The two main classes of model selection techniques are probabilistic measures and resampling methods.

A “good enough” model may refer to many things and is specific to your project, such as:--

A model that meets the requirements and constraints of project stakeholders.
A model that is sufficiently skillful given the time and resources available.
A model that is skillful as compared to naive models.
A model that is skillful relative to other tested models.
A model that is skillful relative to the state-of-the-art.

Some algorithms require specialized data preparation in order to best expose the structure of the problem to the learning algorithm. Therefore, we must go one step further and consider model selection as the process of selecting among model development pipelines.

Each pipeline may take in the same raw training dataset and outputs a model that can be evaluated in the same manner but may require different or overlapping computational steps, such as:--

Data filtering.
Data transformation.
Feature selection.
Feature engineering.

Random errors most often result from limitations in the equipment or techniques used to make a measurement.

Noise is a distortion in data that is unwanted by the perceiver of the data. Noisy data is data with a large amount of additional meaningless information in it, called noise. This includes data corruption, and the term is often used as a synonym for corrupt data. It also includes any data that a user system cannot understand or interpret correctly.

Noisy data is data that is corrupted, or distorted, or has a low Signal-to-Noise Ratio. Improper procedures (or improperly-documented procedures) to subtract out the noise in data can lead to a false sense of accuracy or false conclusions.

Data = true signal + noise

Noise often causes the algorithms to miss out patterns in the data.
Data filtering is the task of reducing the content of noise or errors from measured process data. It is an important task because measurement noise masks the important features in the data and limits their usefulness in practice.


Many systems, for example, cannot use unstructured text. Noisy data can adversely affect the results of any data analysis and skew conclusions if not handled properly. Statistical analysis is sometimes used to weed the noise out of noisy data

Any data that has been received, stored, or changed in such a manner that it cannot be read or used by the program that originally created it can be described as noisy. Noisy data unnecessarily increases the amount of storage space required and can also adversely affect the results of any data mining analysis.

Benefits of identifying & treating noise in data:---
enables the DS algorithm to train faster.
reduces the complexity of a model and makes it easier to interpret
improves the accuracy of a model if the right subset is chosen
reduces overfitting

A common method for the removal of noise is optimal linear filtering; some algorithms in this family are Wiener filtering, Kalman filtering and the spectral subtraction technique. ... All of the above techniques show an increased Signal-to-Noise Ratio (SNR) after processing, as seen in the simulation results.
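Wiener and Kalman filtering are too involved for a short sketch, but the simplest linear filter, a moving average, shows the principle: averaging over a window suppresses zero-mean noise while keeping the slow-moving signal. The measurement values below are invented.

```python
def moving_average(signal, window=3):
    """Smooth a signal by averaging each point with its neighbours."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Noisy measurements of a constant true signal of 5.0.
noisy = [5.4, 4.7, 5.2, 4.8, 5.3, 4.6, 5.1]
smoothed = moving_average(noisy)
```

The smoothed series sits visibly closer to the true value of 5.0 than the raw measurements, which is the sense in which filtering raises the signal-to-noise ratio.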

Even though many differences exist between AI and ML, they are closely connected.  AI and ML are often viewed as the body and the brain. The body collects information, the brain processes it. The same is with AI, which accumulates information while ML processes it.

When people use these two terms interchangeably, they fail to have a deeper understanding of the concepts while intuitively understanding how closely related they are.

We understand cause and effect. All ML can do is follow the pattern; when the pattern changes, it is helpless.

The data that is obtained from the real world is not ideal or noise-free. It contains a lot of noise, which needs to be filtered out before applying the Machine Learning Algorithms.

Underfitting shows up in both main task types.

Classification: many classes will be misclassified in the training set as well as the validation set. On visualizing the data, it would be apparent that a more complex model would classify more of them correctly.

Regression: the final “best fit” line will fail to fit the data points effectively. On visualization, it would be clear that a more complex curve could fit the data better.

Remedies for underfitting include:

Train a more complex model: a model with more capacity can capture patterns a simpler one misses.
Obtain more features: If the data set lacks enough features to get a clear inference, then Feature Engineering or collecting more features will help fit the data better.

Decrease Regularization: Regularization is the process that helps Generalize the model by avoiding overfitting. However, if the model is learning less or underfitting, then it is better to decrease or completely remove Regularization techniques so that the model can learn better.

New Model Architecture: Finally, if none of the above approaches work, then a new model can be used, which may provide better results.

Remove features: As a contrast to the steps to avoid underfitting, if the number of features is too many, then the model tends to overfit. Hence, reducing the number of unnecessary or irrelevant features often leads to a better and more generalized model. Deep Learning models are usually not affected by this. 

There are several methods to calculate error in Machine Learning. One of the most commonly used is the Loss/Cost Function; a standard example is the Mean Squared Error (MSE), the average of the squared differences between the predicted and expected outputs.

The necessity of minimizing errors: the higher the error, the worse the model performs. Hence, the error of a model's predictions can be considered a performance measure: the lower the error of a model, the better it performs.

In addition to that, a model judges its own performance and trains itself based on the error created between its own output and the expected output. The primary target of the model is to minimize the error so as to get the best parameters that would fit the data perfectly. 
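The Mean Squared Error mentioned above, computed directly (the predictions are made up for illustration): MSE = (1/n) * Σ(y_i − ŷ_i)².

```python
def mean_squared_error(expected, predicted):
    """Average of squared differences between expected and predicted values."""
    n = len(expected)
    return sum((y - p) ** 2 for y, p in zip(expected, predicted)) / n

expected  = [3.0, 5.0, 7.0]
good_pred = [2.9, 5.1, 7.2]   # close to the expected outputs
bad_pred  = [1.0, 9.0, 2.0]   # far from the expected outputs
```

A training loop uses exactly this kind of number as its feedback: it adjusts parameters in whichever direction makes the loss smaller.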

An electronic health record (EHR) is a digital version of a patient chart, an inclusive snapshot of the patient’s medical history. It contains input from all the practitioners that are involved in the client’s care, offering a comprehensive view of the client’s health and treatment history.
EHRs are real-time, patient-centered records that make information available instantly and securely to authorized users. ... They allow access to evidence-based tools that providers can use to make decisions about a patient's care.
Purpose of EHR?
health information and data.
result management.
order management.
decision support.
electronic communication and connectivity.
patient support.
administrative processes and reporting.
reporting and population health.

An electronic recordkeeping system (ERKS), also known as an electronic records management system in some jurisdictions, is an information/computer system with the necessary records management capabilities designed to electronically collect, organise, classify and control the creation, storage, retrieval, distribution and disposal of records.

EMRs are not designed to be shared outside the individual practice.

Using Electronic Health Records (EHRs), all the history of patient’s medical data can be recorded and stored appropriately in a centralized system. In olden days and even now in some healthcare organizations health records of the patients are saved on paper. However, technical advancements have changed this scenario and the health records can be saved in digital formats.

This enhances the quality of medical services and care provided to the patients.
Healthcare organizations like hospitals, clinics and even individual medical practices have been taking advantage of EHRs in several ways. They are very useful not only to the practitioners but also to the patients. 
Top advantages of Electronic Health Records
• Keeping a record of up-to-date and accurate data about the patients in digital format
• Ensuring fast access to the medical data of patients for efficient and coordinated care
• Sharing the electronic medical data in a safe and secure manner with patients or other hospital staff
• Helping proper documentation of all the patient data
• Decreasing paperwork and setting the hospital staff free for other important tasks
• Enabling more effective diagnosis of patients with reduced medical errors and safer operations; thus improving the health of patients
• Boosting interaction and communication between patients and care providers
• Ensuring that the patient data remains safe and secure
• Enabling the patients to review their medical history, take necessary precautions and be alert about their treatments.
• Helping the care providers to enhance their productivity and efficiency while meeting their business goals
• Integrating patient data from different resources for better clinical decision-making

Electronic Health Records ensure that the processes become faster because the entries are not done manually. EHRs increase transparency so that doctors can immediately access previous health issues of the patients and provide better treatment.
Providing remote treatments is possible with EHRs as they can be easily paired with the healthcare management system, health apps or smartphones. This forms a powerful connection, and important patient data can be accessed from anywhere. As a result, doctors can follow up with patients and keep in touch. This facility is extremely helpful in case of an emergency. One more benefit that EHRs provide is improved inventory management of drugs: recorded data can alert the concerned person to increase the drug supply in a timely manner.
Electronic health records are designed to be shared with other providers and authorized users may instantly access a patient’s EHR from across different healthcare providers.
As a rule, EHRs contain the following data:--
Patient’s demographic, billing, and insurance information;
Physical history and physicians’ orders;
Medication allergy information;
Nursing assessments, notes, and graphics of vital signs;
Laboratory and radiology results;
Trending labs, vital signs, results, and activities pages for easy reference
Links to important clinical information and support
Reports for quality and safety personnel
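As a rough illustration of how such a record might be organized in software, here is a minimal sketch in Python; the field names are hypothetical and do not follow any real EHR standard such as HL7 FHIR.

```python
from dataclasses import dataclass, field

@dataclass
class EHRRecord:
    # Illustrative fields only -- not a real EHR schema (e.g. HL7 FHIR).
    patient_id: str
    demographics: dict = field(default_factory=dict)   # name, billing, insurance
    allergies: list = field(default_factory=list)      # medication allergies
    vital_signs: list = field(default_factory=list)    # nursing assessments
    lab_results: list = field(default_factory=list)    # laboratory / radiology
    notes: list = field(default_factory=list)          # physicians' orders, notes

record = EHRRecord(patient_id="P-001")
record.allergies.append("penicillin")
record.vital_signs.append({"bp": "120/80", "hr": 72})
```

Because the record is a single structured object, it can be serialized and shared between authorized providers, which is exactly the property that distinguishes an EHR from a practice-local EMR.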

An electronic medical record (EMR) is a digital version of a patient’s chart used by a single practice: a physician, nurse practitioner, specialist, dentist, surgeon or clinic. In essence, it is a digitized version of the chart that healthcare facilities previously used to keep track of treatments, medications, changes in condition, etc. These medical documents are private and confidential and are not usually shared outside the medical practice where they originated.
Electronic medical records make it easier to track data over time and to monitor the client’s health more reliably, which leads to better long-term care.

Elements of EMRs:--
EMRs usually contain the following information about the client:
Medical history, physicals, notes by providers, and consults from other physicians
Medications and allergies, including immunization history
Alerts to the office and the patients for preventative tests and/or procedures, e.g. lab tests or follow-up colonoscopies

An electronic personal health record (PHR) provides an electronic record of the client’s health-related information and is managed by the client. It is a universally accessible and comprehensible tool for managing health information, promoting health maintenance, and assisting with chronic disease management. 

A PHR may contain information from multiple sources such as physicians, home monitoring devices, wearables, and other data furnished by the client. With PHRs, each client can view and control their medical data in a secure setting and share it with other parties.

However, a PHR is not a legal record unless so defined and is subject to various legal limitations. Besides, though PHRs can provide important insights and give a fuller view of the client’s health and lifestyle, their inaccuracy and lack of structure limit their use in clinical and medical studies.

Digital medical records may offer significant advantages both to patients and healthcare providers:--
Medical errors are reduced and healthcare is improved thanks to accurate and up-to-date information;
Patient charts are more complete and clear — without the need to decipher illegible scribbles;
Information sharing reduces duplicate testing;
Improved information access makes prescribing medication safer and more reliable;
Promoting patient participation can encourage healthier lifestyles;
More complete information improves diagnostics;
Facilitating communication between the practitioner and client;
Enabling secure sharing of client’s medical information among multiple providers;
Increasing administrative efficiency in scheduling, billing, and collections, resulting in lower business-related costs for the organization
Improving EHR/EMR design and handling requires mapping complaints to specific EHR/EMR features and design decisions, which is not always a straightforward process. 

Over the last year, more informatics researchers and software vendors have turned their attention to EHR/EMR systems, and more of them have started to rely on AI to give deeper insights into the design and handling of the electronic records. 

The free structure of clinical notes is notoriously difficult to read and categorize with straightforward algorithms. AI and natural language processing, however, can handle the heterogeneity of unstructured or semi-structured data, making them a useful part of EHRs.

Flatiron Health (owned by Roche) uses AI to help its human “abstractors” recognize key terms and uncover insights from unstructured provider documents. Amazon Web Services recently announced a cloud-based service that uses AI to extract and index data from clinical notes.

As healthcare costs grow and new methods are tested, home devices such as glucometers or blood pressure cuffs that automatically measure and send the results to the EHR are gaining momentum. 

Moreover, data streams from the Internet of Things, including home monitors, wearables, and bedside medical devices, can auto-populate notes and provide data for predictive analytics. Some companies have even more advanced devices, such as the smart t-shirts of Hexoskin, which can measure several cardiovascular metrics and are being used in clinical studies and at-home disease monitoring.

Besides, electronic patient-reported outcomes and personal health records are also being leveraged more and more as providers emphasize the importance of patient-centered care and self disease management; all of these data sources are most useful when they can be integrated into the existing EHR.

Hexoskin is an open data smart shirt for monitoring EKG, heart rate, heart rate variability, breathing rate, breathing volume, actigraphy and other activity measurements like step counting and cadence. 

Hexoskin launched in 2013 the first washable smart shirts that capture cardiac, respiratory, and activity body metrics.

As you sleep at night, you cycle through periods of REM and non-REM sleep. Non-REM sleep occurs in three stages, and then you will enter REM sleep.

Non-REM Sleep

According to WebMD, the three phases of non-REM sleep are:--

Phase 1: As you first drift off to sleep you are entering phase 1 of non-REM sleep. You are relaxed, but may stir or awake easily for about five to ten minutes.
Phase 2: Phase 2 prepares your body for deep sleep. Your heart rate and body temperature will lower as you begin to sleep lightly.
Phase 3: Deep sleep begins in phase 3, and you will not be easily woken up as your body works to repair tissue and bones and strengthen your immune system.

REM Sleep

REM stands for Rapid Eye Movement. During this cycle of your sleep, your eyes will move and dart quickly beneath your eyelids. During REM sleep, your brain activity increases, your pulse quickens, and you have dreams. REM sleep first takes place after you’ve been sleeping for around 90 minutes. The first cycle usually lasts about 10 minutes, and each cycle time will increase to as long as one hour in the last phase before you awake.

The importance of REM sleep, in particular, is attributed to the fact that during this phase of sleep, your brain exercises important neural connections which are key to mental and overall well-being and health.

REM Sleep Behavior Disorder is a sleep disorder that causes you to physically act out vivid dreams through erratic and violent arm and leg movements. This disorder can come about suddenly, and impact your sleep several times a night.

During REM sleep, your body usually remains motionless, but the symptoms of REM Sleep Behavior 
Disorder include:--

Movements such as flailing, kicking, or punching in response to especially vivid or frightening dreams.
Noises such as yelling, talking, or crying while you are sleeping.

Ability to vividly remember the dream you were experiencing if you are woken up.


AI vs ML
The key difference between the two concepts involve---

•           Goal – The goal of AI is to increase the chances of success. Meanwhile, ML’s aim is to improve accuracy, without caring about success.
•           Nature – AI is a computer program doing smart work. ML is the way for the computer program to learn from experience.
•           Future – The future goal of AI is to simulate intelligence for solving highly complex problems. ML’s goal is to keep learning from data to maximize performance.
•           Approach – AI involves decision-making. ML allows the computer to learn new things from the available information.

•           Solutions – AI looks for the optimal solution. ML looks for a solution, whether or not it is optimal.


Reasoning is the process of using existing knowledge to draw conclusions, make predictions, or construct explanations.

The following are a few major types of reasoning.:---
Deductive Reasoning.
Inductive Reasoning.
Abductive Reasoning.
Backward Induction.
Critical Thinking.
Counterfactual Thinking.
Intuition.

In theoretical research, deduction, induction and abduction are also known as modes of argumentation:

Deduction: finding data to support or test an existing argument;
Induction: moving from data to an argument or explanation;
Abduction: supplying a warrant or licence that enables us to move from data to argument.

The difference between abductive reasoning and inductive reasoning is a subtle one; both use evidence to form guesses that are likely, but not guaranteed, to be true. However, abductive reasoning looks for cause-and-effect relationships, while induction seeks to determine general rules

Abductive reasoning is a form of logical inference which starts with an observation or set of observations and then seeks the simplest and most likely explanation for them. It is a process of research in which the logic of discovery is given more importance than the logic of justification. It is more a process than a conclusion/outcome in hypothesis formation.

Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning, the premises do not guarantee the conclusion.

Example:- 
Implication: The cricket ground is wet if it is raining.
Axiom: The cricket ground is wet.
Conclusion: It is raining.
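The wet-ground example can be sketched as a tiny program; the rule set here is invented for illustration. Given an observation, abduction returns every cause that would explain it, which is why the conclusion “it is raining” is only plausible, not guaranteed:

```python
# Hypothetical cause -> effects rules, for illustration only.
rules = {
    "raining": ["ground_wet"],
    "sprinkler_on": ["ground_wet"],
    "sunny": ["ground_dry"],
}

def abduce(observation, rules):
    # Return every candidate cause whose effects include the observation.
    return sorted(cause for cause, effects in rules.items()
                  if observation in effects)

print(abduce("ground_wet", rules))  # ['raining', 'sprinkler_on']
```

Both causes explain the wet ground equally well, so extra evidence (or a simplicity criterion) is needed to prefer one over the other.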


Abductive reasoning typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation for the set. It yields the kind of daily decision-making that does its best with the information at hand, which is often incomplete. In this respect it runs in the opposite direction to deductive reasoning.

Inductive reasoning makes broad generalizations from specific observations. Basically, there is data, then conclusions are drawn from the data. A classic example of inductive logic is: "The coin I pulled from the bag is a penny. A second coin from the bag is a penny. Therefore, all the coins in the bag are pennies."

The inductive approach is common but less effective when compared to the deductive approach. In this approach you research the topic first, and then an argument emerges based on your research. This is also called reactive writing.

Inductive reasoning begins with observations that are specific and limited in scope, and proceeds to a generalized conclusion that is likely, but not certain, in light of accumulated evidence. You could say that inductive reasoning moves from the specific to the general.

Inductive reasoning arrives at a conclusion using limited sets of facts by the process of generalization. It starts with the series of specific facts or data and reaches to a general statement or conclusion.


Inductive reasoning is a type of propositional logic, which is also known as cause-effect reasoning or bottom-up reasoning.   We use historical data or various premises to generate a generic rule, for which premises support the conclusion. Premises provide probable supports to the conclusion, so the truth of premises does not guarantee the truth of the conclusion.

Example:--

Premise: All of the pigeons we have seen in the zoo are white. 
Conclusion: Therefore, we can expect all the pigeons to be white.
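The pigeon example can be mimicked in a few lines; the sample data are made up. The generalization holds only for the sample, so a single non-white pigeon would overturn it:

```python
def induce(observations):
    # Generalize from a limited sample: the conclusion is probable, not certain.
    colors = {color for _, color in observations}
    if len(colors) == 1:
        return f"All pigeons are {next(iter(colors))}"
    return "No single generalization holds"

sample = [("pigeon-1", "white"), ("pigeon-2", "white"), ("pigeon-3", "white")]
print(induce(sample))                           # All pigeons are white
print(induce(sample + [("pigeon-4", "grey")]))  # No single generalization holds
```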


Inductive research works the other way, from specific observations to broader generalizations and theories. It is sometimes informally called bottom-up research. Conclusions are merely likely on the basis of the premises, so this type of research commonly involves some degree of uncertainty. Observations are typically used for it, inferring ‘effects’ from ‘causes’. There are usually two variables in the research, X and Y.


Inductive research starts with findings which are very limited and specific in scope, and then continues to an outcome or conclusion which is generalized but not certain in the light of the collected data. In the inductive approach the premises support the result or conclusion but do not ensure it. The conclusion is therefore known as a hypothesis.

Deductive reasoning is when you move from a general statement to a more specific statement through a logical thought process. One of the best-known examples of deductive reasoning comes from Aristotle: All men are mortal. Socrates is a man. Therefore, Socrates is mortal.
Deductive reasoning starts with the assertion of a general rule and proceeds from there to a guaranteed specific conclusion.   It is about deducing new information from logically related known information. It is the form of valid reasoning, which means the argument's conclusion must be true when the premises are true.

Deductive reasoning is a type of propositional logic in AI, and it requires various rules and facts. It is sometimes referred to as top-down reasoning, and contradictory to inductive reasoning. The truth of the premises guarantees the truth of the conclusion.


Deductive reasoning uses given information, premises or accepted general rules to reach a proven conclusion. On the other hand, inductive logic or reasoning involves making generalizations based upon behavior observed in specific cases.

Example:--
Premise-1: All humans eat veggies.
Premise-2: Suresh is a human.

Conclusion: Suresh eats veggies.
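The veggies syllogism can be run mechanically by forward chaining, where the truth of the premises guarantees the conclusion. The encoding of facts and rules as (predicate, subject) pairs is a toy convention for this sketch, not a real logic library:

```python
def deduce(facts, rules):
    # Forward chaining: apply "if P(x) then Q(x)" rules until nothing new derives.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

facts = {("human", "suresh")}
rules = [("human", "eats_veggies")]
print(("eats_veggies", "suresh") in deduce(facts, rules))  # True
```

Note that adding more facts can only ever enlarge the derived set, which is why plain deduction like this is monotonic.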


Deductive research goes from theories to data. The task is to find theories which are well known or generally defined and then apply them to a specific phenomenon. The collected data will then either confirm or refute the theories you have tried to apply; both outcomes are valuable.

Quantitative research is most commonly linked or associated with Deductive approach.
In this approach you highlight or outline an argument based on your existing knowledge about the selected subject or topic, then research on it to fill in the gaps. This is also called active reading.


The best way to distinguish between induction and abduction is this: both are ampliative, meaning that the conclusion goes beyond what is (logically) contained in the premises (which is why they are non-necessary inferences), but in abduction there is an implicit or explicit appeal to explanatory considerations.

Both induction and abduction are ampliative, which means that the conclusion goes beyond what is contained in the premises. But in abduction there is an implicit or explicit appeal to explanatory considerations, whereas in induction there is only an appeal to observed frequencies or statistics. (“Only” is emphasized because there may also be such an appeal in abduction.)

Induction and deduction -- In writing, argument is used in an attempt to convince the reader of the truth or falsity of some proposal or thesis. Two of the methods used are induction and deduction. Induction is a process of reasoning (arguing) which infers a general conclusion based on individual cases; deduction infers a specific conclusion from general premises.

Three methods of reasoning are the deductive, inductive, and abductive approaches: with deduction the conclusion is guaranteed, with induction it is merely probable, and with abduction it is taken as the best available explanation.

Analytics – encompasses the discovery, interpretation, and communication of meaningful patterns in data. It relies on the simultaneous application of statistics, computer programming and operations research to quantify performance and is particularly valuable in areas with large amounts of recorded information. The goal of this exercise is to guide decision-making based on the business context. The analytics flow comprises descriptive, diagnostic, predictive analytics and eventually prescriptive steps.

In monotonic reasoning, once a conclusion is drawn, it will remain the same even if we add further information to the existing information in our knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.

Monotonic learning is when an agent may not learn knowledge that contradicts what it already knows; it will not replace a statement with its negation. Thus, the knowledge base may only grow with new facts, in a monotonic fashion.

The advantages of monotonic learning are:--
1.greatly simplified truth-maintenance 
2.greater choice in learning strategies

To solve monotonic problems, we can derive the valid conclusion from the available facts only, and it will not be affected by new facts.

Monotonic reasoning is not useful for the real-time systems, as in real time, facts get changed, so we cannot use monotonic reasoning. It is  used in conventional reasoning systems, and a logic-based system is monotonic.

Any theorem proving is an example of monotonic reasoning.

Example:-- 
Earth revolves around the Sun.
It is a true fact, and it cannot be changed even if we add another sentence in knowledge base like, "The moon revolves around the earth" Or "Earth is not round," etc.

Advantages of Monotonic Reasoning:--
In monotonic reasoning, each old proof will always remain valid.
If we deduce some facts from available facts, then it will remain valid for always.

Disadvantages of Monotonic Reasoning:--
We cannot represent the real world scenarios using Monotonic reasoning.
Hypothesis knowledge cannot be expressed with monotonic reasoning, which means facts should be true.

Since we can only derive conclusions from the old proofs, new knowledge from the real world cannot be added.

The term “non-monotonic logic”  covers a family of formal frameworks devised to capture and represent defeasible inference. Reasoners draw conclusions defeasibly when they reserve the right to retract them in the light of further information.  Defeasible reasoning is dynamic in that it allows for a retraction of inferences.

A non-monotonic logic is a formal logic whose consequence relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences (cf. defeasible reasoning), i.e., a kind of inference in which reasoners draw tentative conclusions, enabling reasoners to retract their conclusion(s) based on further evidence.   

Most studied formal logics have a monotonic consequence relation, meaning that adding a formula to a theory never produces a reduction of its set of consequences. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. 

A monotonic logic cannot handle various reasoning tasks such as reasoning by default (consequences may be derived only because of lack of evidence of the contrary), abductive reasoning (consequences are only deduced as most likely explanations), some important approaches to reasoning about knowledge (the ignorance of a consequence must be retracted when the consequence becomes known), and similarly, belief revision (new knowledge may contradict old beliefs).

In Non-monotonic reasoning, some conclusions may be invalidated if we add some more information to our knowledge base.

Logic will be said as non-monotonic if some conclusions can be invalidated by adding more knowledge into our knowledge base.

Non-monotonic reasoning deals with incomplete and uncertain models.

"Human perception of various things in daily life" is a general example of non-monotonic reasoning.

Example: Suppose the knowledge base contains the following knowledge:

Birds can fly
Penguins cannot fly
Pitty is a bird
So from the above sentences, we can conclude that Pitty can fly.

However, if we add another sentence to the knowledge base, "Pitty is a penguin", it concludes "Pitty cannot fly", invalidating the above conclusion.
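The Pitty example is essentially default reasoning, and can be sketched as follows; the encoding of facts as a set of strings is invented for illustration. Note how adding a fact retracts the earlier conclusion, which is exactly what monotonic logic forbids:

```python
def can_fly(facts):
    # Default rule: birds fly -- unless the more specific penguin fact defeats it.
    if "penguin" in facts:
        return False
    return "bird" in facts

facts = {"bird"}          # Pitty is a bird
print(can_fly(facts))     # True
facts.add("penguin")      # new knowledge: Pitty is a penguin
print(can_fly(facts))     # False -- the old conclusion is invalidated
```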

Advantages of Non-monotonic reasoning:--
For real-world systems such as Robot navigation, we can use non-monotonic reasoning.
In Non-monotonic reasoning, we can choose probabilistic facts or can make assumptions.

Disadvantages of Non-monotonic Reasoning:--
In non-monotonic reasoning, the old facts may be invalidated by adding new sentences.
It cannot be used for theorem proving.

Abductive reasoning is the process of deriving the most likely explanations of the known facts. An abductive logic should not be monotonic because the most likely explanations are not necessarily correct. 

For example, the most likely explanation for seeing wet grass is that it rained; however, this explanation has to be retracted when learning that the real cause of the grass being wet was a sprinkler. Since the old explanation (it rained) is retracted because of the addition of a piece of knowledge (a sprinkler was active), any logic that models explanations is non-monotonic.

If a logic includes formulae that mean that something is not known, this logic should not be monotonic. Indeed, learning something that was previously not known leads to the removal of the formula specifying that this piece of knowledge is not known. This second change (a removal caused by an addition) violates the condition of monotonicity. A logic for reasoning about knowledge is the autoepistemic logic.

Common Sense Reasoning--
Common sense reasoning is an informal form of reasoning which can be gained through experience. It simulates the human ability to make presumptions about events which occur every day.

It relies on good judgment rather than exact logic and operates on heuristic knowledge and heuristic rules.

Example:--- 
One person can be at one place at a time.
If I put my hand in a fire, then it will burn.

The above two statements are the examples of common sense reasoning which a human mind can easily understand and assume.

The hypothetico-deductive model or method is a proposed description of the scientific method. 

According to it, scientific inquiry proceeds by formulating a hypothesis in a form that can be falsifiable, using a test on observable data where the outcome is not yet known. A test outcome that could have and does run contrary to predictions of the hypothesis is taken as a falsification of the hypothesis. 

A test outcome that could have, but does not run contrary to the hypothesis corroborates the theory. It is then proposed to compare the explanatory value of competing hypotheses by testing how stringently they are corroborated by their predictions.




###### SUBJECT--- SUPEREME COURT ALLOWS CRYPTO CURRENCY TRADING #########

WHY IS SUPREME COURT PLAYING GOD?...

WE THE PEOPLE ASK CJI BOBDE...

WHY IS CHANDRACHUD AND NARIMAN LOVED BY THE JEWISH DEEP STATE ?..

WE KNOW THE TRAITOR JUDGES IN FOREIGN PAYROLL..

WHY DOES PM MODI AND LAW MINISTER PRASAD GIVE SO MUCH OF LEEWAY TO THESE TWO "LIBERAL JUDGES" WHO KICKED BHARATMATA INTO THE KOSHER ADULTERY/ HOMOSEXUAL MANDI?...

WE THE PEOPLE DO NOT WANT CHANDRACUD AS INDIAs CJI.. THE DEEP STATE HAS KEPT HIM IN POLE POSITION.. WE KNOW WHY !!
SUPREME COURT HAS NO POWERS TO INTERFERE IN GOVTS FINANCIAL DECISIONS ..

DESH DROHI INDIANS ARE USING BITCOIN FOR MONEY LAUNDERING , PEDOPHILIA AND FUNDING TERRORISTS AND RUNNING DRUGS..

BITCOIN IS NOT DECENTRALIZED. … KOSHER MINERS ARE THE DE FACTO CENTRAL AUTHORITY IN BITCOIN EXCHANGES….. THEY’RE ACTUALLY CENTRALLY CONTROLLED BY A MAFIA POOL OF KOSHER MINERS, WHO CAN BASICALLY REWRITE BLOCKCHAIN HISTORY AT WILL. …
IN 10 YEARS OF ITS EXISTENCE ALL THAT BLOCKCHAIN HAS ACHIEVED IS TO REGULARIZE GRABBED LAND IN ISRAEL AND GEORGIA BY WHITE JEWS. 

CHANDRABABU NAIDU TRIED TO USE BITCOIN/ BLOCKCHAIN TO REGULARIZE GRABBED LAND AT AMARAVATI….
OUR PM AND LAW MINISTER ARE USELESS ..

OUR JUDICIARY IS THE WORST ON THE PLANET… TRAITORS WHO CREATED THE NAXAL RED CORRIDOR AND CAUSED ETHNIC CLEANSING OF KASHMIRI PANDITS ..

SINCE 1947, RBI HAS BEEN CONTROLLED BY JEW ROTHSCHILD.. ARE WE A BANANA REPUBLIC?...

https://ajitvadakayil.blogspot.com/2020/01/we-people-are-done-with-illegal.html

READ ALL 18 PARTS OF THE POST BELOW

http://ajitvadakayil.blogspot.com/2018/04/blockchain-bitcoin-must-be-kicked-out.html

WE WATCH..

capt ajit vadakayil
..'

  1. PUT ABOVE COMMENT IN WEBSITES OF--
    RBI
    RBI GOVERNOR
    FINANCE MINISTER/ MINISTER
    TRUMP
    PUTIN
    ABASSADORS TO AND FROM ABOVE NATIONS.
    UN CHIEF
    CHANDRACHUD
    NARIMAN
    I&B MINISTER / MINISTRY
    NCERT
    EDUCATION MINISTRY/ MINISTER
    PMO
    PM MODI
    NSA
    AJIT DOVAL
    RAW
    IB CHIEF
    IB OFFICERS
    CBI
    NIA
    ED
    AMIT SHAH
    HOME MINISTRY
    DEFENCE MINISTER/ MINISTRY
    ALL 3 ARMED FORCE CHIEFS-- PLUS TOP CDS CHIEF
    ALL DGPs OF INDIA
    ALL IGs OF INDIA
    ALL STATE HIGH COURT CHIEF JUSTICES
    CJI BOBDE
    SUPREME COURT JUDGES/ LAWYERS
    ATTORNEY GENERAL
    LAW MINISTER PRASAD / MINISTRY CENTRE AND STATES
    ALL CMs OF INDIA
    ALL STATE GOVERNORS
    MOHANDAS PAI
    RAJEEV CHANDRASHEKHAR
    PGURUS
    SWAMY
    RAJIV MALHOTRA
    DAVID FRAWLEY
    STEPHEN KNAPP
    WILLIAM DALRYMPLE
    KONRAED ELST
    FRANCOIS GAUTIER
    NITI AYOG
    AMITABH KANT
    PRESIDENT OF INDIA
    VP OF INDIA
    SPEAKER LOK SABHA
    SPEAKER RAJYA SABHA
    THAMBI SUNDAR PICHAI
    SATYA NADELLA
    CEO OF WIKIPEDIA
    QUORA CEO ANGELO D ADAMS
    QUORA MODERATION TEAM
    KURT OF QUORA
    GAUTAM SHEWAKRAMANI
    ALL INDIAN THINK TANKS
    CHETAN BHAGAT
    PAVAN VARMA
    RAMACHANDRA GUHA
    RSS
    AVBP
    VHP
    MOHAN BHAGWAT
    RAM MADHAV
    SOLI BABY
    FALI BABY
    KATJU BABY
    SALVE BABY
    ANGREZ KA AULAAD- SUHEL SETH
    NALIN KOHLI
    GVL NARASIMHA RAO
    SAMBIT PATRA
    VIVEK OBEROI
    GAUTAM GAMBHIR
    ASHOK PANDIT
    ANUPAM KHER
    KANGANA RANAUT
    VIVEK AGNIHOTRI
    KIRON KHER
    MEENAKSHI LEKHI
    SMRITI IRANI
    PRASOON JOSHI
    MADHUR BHANDARKAR
    SWAPAN DASGUPTA
    SONAL MANSINGH
    MADHU KISHWAR
    SUDHIR CHAUDHARY
    GEN GD BAKSHI
    RSN SINGH
    E SREEDHARAN
    MOHANLAL
    SURESH GOPI
    CHANDAN MITRA
    THE QUINT
    THE SCROLL
    THE WIRE
    THE PRINT
    MK VENU
    MADHU TREHAN
    RAJDEEP SARDESAI
    PAAGALIKA GHOSE
    NAVIKA KUMAR
    ANAND NARASIMHAN
    SRINIVASAN JAIN
    SONAL MEHROTRA KAPOOR
    VIKRAM CHANDRA
    NIDHI RAZDAN
    FAYE DSOUZA
    ZAKKA JACOB
    RAVISH KUMAR
    PRANNOY JAMES ROY
    AROON PURIE
    VINEET JAIN
    RAGHAV BAHL
    SEEMA CHISTI
    DILEEP PADGOANKAR
    VIR SANGHVI
    KARAN THAPAR
    SHEKHAR GUPTA
    ARUDHATI ROY
    SHOBHAA DE
    JULIO RIBEIRO
    ADVANI
    MURLI MNOHAR JOSHI
    KAMALAHASSAN
    PRAKASH KARAT
    BRINDA KARAT
    SITARAM YECHURY
    D RAJA
    ANNIE RAJA
    SUMEET CHOPRA
    DINESH VARSHNEY
    PINARAYI VIYAYAN
    KODIYERI BALAKRISHNAN
    JOHN BRITTAS
    THOMAS ISAAC ( KERALA FINANCE MINISTER)
    SIDHARTH VARADARAJAN
    NANDINI SUNDAR
    SHEHLA RASHID
    ROMILA THAPAR
    IRFAN HABIB
    NIVEDITA MENON
    AYESHA KIDWAI
    SWARA BHASKAR
    ADMIRAL RAMDAS
    KAVITA RAMDAS
    LALITA RAMDAS
    JOHN DAYAL
    KANCHA ILAIH
    TEESTA SETALVAD
    JEAN DREZE
    JAVED AKTHAR
    SHABANA AZMI
    KUNHALIKKUTTY
    ASADDUDIN OWAISI
    FAZAL GHAFOOR ( MES )
    FATHER CEDRIC PRAKASH
    ANNA VETTICKAD
    DEEPIKA PADUKONE
    ARURAG KASHYAP
    RAHUL GANDHI
    SONIA GANDHI
    PRIYANKA VADRA
    SANJAY HEDGE
    KAPILSIBAL
    ABHI SEX MAANGTHA SINVI
    DIG VIJAY SINGH
    AK ANTONY
    TEHSIN POONAWAALA
    SANJAY JHA
    AATISH TASEER
    MANI SHANGARAN AIYYERAN
    STALIN
    ZAINAB SIKANDER
    RANA AYYUB
    BARKHA DUTT
    SHEHLA RASHID
    TAREK FATAH
    UDDHAV THACKREY
    RAJ THACKREY
    KARAN THAPAR
    ASHUTOSH
    KAVITA KRISHNAN
    JAIRAM RAMESH
    SHASHI THAROOR
    JEAN DREZE
    BELA BHATIA
    FARAH NAQVI
    KIRAN MAJUMDAR SHAW
    RAHUL BAJAJ
    HARDH MANDER
    ARUNA ROY
    UMAR KHALID
    MANISH TEWARI
    PRIYANKA CHATURVEDI
    RAJIV SHUKLA
    SANJAY NIRUPAM
    PAVAN KHERA
    RANDEEP SURJEWALA
    DEREK O BRIEN
    ADHIR RANJAN CHOWDHURY
    MJ AKBAR
    ARUN SHOURIE
    SHAZIA ILMI
    CHANDA MITRA
    MANISH SISODIA
    ASHISH KHETAN
    SHATRUGHAN SINHA
    RAGHAV CHADDHA
    ATISHI MARLENA
    YOGENDRA YADAV
    MUKESH AMBANI
    RATA TATA
    ANAND MAHINDRA
    KUMARAMANGALAMBIRLA
    LAXMI MNARAYAN MITTAL
    AZIM PREMJI
    KAANIYA MURTHY
    RAHUL BAJAJ
    RAJAN RAHEJA
    NAVEEN JINDAL
    GOPICHAND HINDUJA
    DILIP SHANGHVI
    GAUTAM ADANI
    SRI SRI RAVISHANKAR
    SADGURU JAGGI VASUDEV
    MATA AMRITANANDA MAYI
    BABA RAMDEV

    SPREAD ON SOCIAL MEDIA

    SPREAD MESSAGE VIA WHATS APP

    ALL MUST PARTICIPATE

    ASK RBI, RBI GOVERNOR, FINANCE MINISTER, PMO, PM MODI, LAW MINISTER FOR AN ACK..  WITH THIS RULING INDIA WILL NOT BE ABLE TO BECOME THIS PLANETs NO 1 SUPERPOWER IN 13 YEARS ... MODI WILL RUN AWAY TO HIS SAFE HOUSE IN ISRAEL ..
Ransomware is a form of malware that, when downloaded to a device, scrambles or deletes all data until a ransom is paid to restore it.

In 2020 a new organization will be hit by a ransomware attack every 13.2  seconds.  It has the potential to cripple networks and cause catastrophic harm to infrastructure. 


Ransomware begins with malicious software being downloaded onto an endpoint device, like a desktop computer, laptop or smartphone.  This usually happens because of user error and ignorance of security risks.

One common method of distributing malware is through phishing attacks. This involves an attacker attaching an infected document or URL to an email, while disguising it as being legitimate to trick users into opening it, which will install the malware on their device.

Another popular method of spreading ransomware is the ‘trojan horse’ style: disguising ransomware as legitimate software online, and then infecting devices after users install that software.

Ransomware typically works very quickly. In seconds, the malicious software will take over critical processes on the device and search for files to encrypt, meaning all of the data within them is scrambled. The ransomware will likely delete any files it cannot encrypt.

The ransomware will then infect any other hard-drives or USB devices connected to the infected host machine. Any new devices or files added to the infected device will also be encrypted after this point. Then, the virus will begin sending out signals to all of the other devices on the network, to attempt to infect them as well.

This whole process happens extremely quickly, and in just a few minutes the device will display a ransom note: a ‘cyber blackmail’ message telling users that their files are locked, and that if a payment is not made, they will be deleted.


The traitor judges of India have made crypto currency legal in india..


One of the most famous examples of ransomware is the WannaCry ransomware attack. WannaCry was a piece of malware that infected over 242,000 computers across 155 countries within a single day.

It encrypted all files it found on a device and demanded that users pay $320 worth of bitcoin to restore them.


Payments are demanded in bitcoin because this payment method is difficult to trace, and there is often a countdown, which pressures companies to act quickly in paying the attackers.

There are different types of ransomware. Some threaten to release the encrypted data to the public, which may be damaging to companies who need to protect customer or business data. 

Ransomware can be hugely damaging to businesses, causing loss of productivity and often financial losses.  

Most obviously there is the loss of files and data, which may represent hundreds of hours of work, or customer data that is critical to the smooth running of your organization. 

If ISRO or DRDO is affected we know the grave implications.. 

Our 75 year old brain dead NSA Ajit Doval must be put out to pasture…


By targeting people with phishing attacks, attackers can bypass traditional security technologies with ransomware. Email is a weak point in many businesses’ security infrastructure, and hackers exploit this by using phishing emails to trick users into opening malicious files and attachments. With trojan horse viruses, hackers likewise target human error, causing users to inadvertently download malicious files.


CAN WE EVER EXPECT ILLITERATE CHAIWAALA MODI TO TALK ABOUT RANSOMWARE INSTEAD OF GANDHI AND BR AMBEDKAR IN HIS PATHETIC NAY BULLSHIT MANN KE BAATH ( MEANT FOR MILKING VOTES ? )

As ransomware is commonly delivered through email, email security is crucial to stopping it. Secure email gateway technologies filter email communications with URL defences and attachment sandboxing to identify threats and block them from being delivered to users. This can stop ransomware from arriving on endpoint devices and block users from inadvertently installing it.
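As a rough illustration of one check a secure email gateway performs, here is a minimal sketch of attachment filtering; the extension list and function name are illustrative assumptions, not any particular gateway's API:

```python
# Illustrative sketch: reject attachments whose file types are commonly
# used to deliver ransomware. Real gateways combine checks like this with
# sandboxing and URL rewriting.

# Extensions frequently abused by ransomware droppers (illustrative list)
BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".jar", ".ps1"}

def is_attachment_allowed(filename: str) -> bool:
    """Return False if the attachment's extension is on the block list."""
    name = filename.lower().strip()
    # Catches double extensions like "invoice.pdf.exe" because only the
    # final suffix decides the verdict
    for ext in BLOCKED_EXTENSIONS:
        if name.endswith(ext):
            return False
    return True

print(is_attachment_allowed("report.pdf"))       # True
print(is_attachment_allowed("invoice.pdf.exe"))  # False
```

A real gateway would inspect file content (magic bytes), not just the name, since attackers can rename payloads.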

If a homosexual  gets an “I LOVE YOU” message by email attachment , he is bound to open it.

Ransomware is also commonly delivered through phishing. Secure email gateways can block phishing attacks, but there are also post-delivery protection technologies, which use machine learning and AI algorithms to detect phishing attacks and display warning banners within emails to alert users that a message may be suspicious. This helps users avoid phishing emails that may carry a ransomware attack.

DNS web filtering solutions stop users from visiting dangerous websites and downloading malicious files. This helps to block viruses that spread ransomware from being downloaded from the internet, including trojan horse viruses that disguise malware as legitimate business software. DNS filtering – or Domain Name System filtering, to give it its full title – is a technique of blocking access to certain websites, webpages, or IP addresses. DNS is what allows easy-to-remember domain names to be used – such as Wikipedia.com – rather than typing in hard-to-remember IP addresses – such as 198.35.26.96. DNS maps domain names to IP addresses.

DNS filters can also block malicious third-party adverts. Web filters should be configured to aggressively block threats and to stop users from visiting dangerous or unknown domains. Isolation can also be an important tool to stop ransomware downloads. Isolation technologies remove threats from users entirely by running browsing activity on secure servers and displaying a safe render to users. This helps prevent ransomware because any malicious software executes in the secure container and never reaches the user's device. The main benefit of isolation is that it doesn't impact the user's experience, delivering high security efficacy with a seamless browsing experience.

When a domain is purchased from a domain registrar and hosted, it is assigned a unique IP address that allows the site to be located. When you attempt to access a website, a DNS query is performed. Your DNS server looks up the IP address of the domain/webpage, which allows a connection to be made between the browser and the server where the website is hosted. The webpage is then loaded.

So how does DNS web filtering work? With DNS filtering in place, rather than the DNS server simply returning the IP address if the website exists, the request is subjected to certain controls. DNS blocking occurs if a particular webpage or IP address is known to be malicious via blacklists, or is determined to be potentially malicious by the web filter. Instead of being connected to the website they were attempting to access, the user is directed to a local IP address that displays a block page explaining why the site cannot be accessed.
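The blocking flow described above can be sketched as a toy resolver; the blacklist entries, the lookup table, and the block-page address below are made-up examples for illustration:

```python
# Toy model of DNS filtering: a known-bad domain resolves to a local
# "block page" address instead of its real IP.

BLOCK_PAGE_IP = "10.0.0.1"   # local address serving the "site blocked" page
BLACKLIST = {"malware-site.example", "phish-login.example"}
DNS_TABLE = {
    "wikipedia.com": "198.35.26.96",
    "malware-site.example": "203.0.113.66",  # real IP, never returned
}

def filtered_resolve(domain: str) -> str:
    """Resolve a domain, redirecting blacklisted names to the block page."""
    if domain in BLACKLIST:
        return BLOCK_PAGE_IP   # user sees an explanation page, not the site
    return DNS_TABLE.get(domain, "NXDOMAIN")

print(filtered_resolve("wikipedia.com"))         # 198.35.26.96
print(filtered_resolve("malware-site.example"))  # 10.0.0.1
```

In production this logic lives in the resolver operated by the ISP or filtering provider, not on the client.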

This control can be applied at the router level, via your ISP, or through a third party – a web filtering service provider. In the latter case, the user – a business, for instance – points their DNS to the service provider, which maintains a blacklist of malicious webpages/IP addresses. If a site is known to be malicious, access to it is blocked.

Since the service provider also categorizes webpages, the DNS filter can be used to block access to certain categories of webpages – pornography, child pornography, file sharing websites, gambling, and gaming sites, for instance. Provided a business creates an acceptable usage policy (AUP) and sets that policy with the service provider, the AUP will be enforced. Since DNS filtering is low-latency, there will be next to no delay in accessing safe websites that do not breach an organization's acceptable internet usage policies.
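Category-based AUP enforcement can be sketched the same way; the category tags and the blocked-category set below are illustrative assumptions, not a real provider's taxonomy:

```python
# Sketch of category-based filtering on top of a blacklist: the provider
# tags each domain with a category, and the customer's acceptable usage
# policy (AUP) says which categories to block.

DOMAIN_CATEGORIES = {
    "casino.example": "gambling",
    "filedump.example": "file-sharing",
    "news.example": "news",
}

AUP_BLOCKED_CATEGORIES = {"gambling", "file-sharing", "pornography"}

def allowed_by_aup(domain: str) -> bool:
    """Allow a domain unless its category is blocked by the AUP."""
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    return category not in AUP_BLOCKED_CATEGORIES

print(allowed_by_aup("news.example"))    # True
print(allowed_by_aup("casino.example"))  # False
```

Whether uncategorized domains are allowed or blocked by default is itself a policy decision; stricter deployments block them.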

Will a DNS Filter Block All Malicious Websites?
Unfortunately, no DNS filtering solution will block all malicious websites, as in order to do so, a webpage must first be determined to be malicious. If a cybercriminal sets up a brand-new phishing webpage, there will be a delay between the page being created and it being checked and added to a blocklist. However, a DNS web filter will block the majority of malicious websites.

Can DNS Filtering be Bypassed?
The short answer is yes. Proxy servers and anonymizer sites could be used to mask traffic and bypass the DNS filter unless the chosen solution also blocks access to these anonymizer sites. An end user could also manually change their DNS settings locally unless they have been locked down. Determined individuals may be able to find a way to bypass DNS filtering, but for most end users, a DNS filter will block any attempt to access forbidden or harmful website content.

No single cybersecurity solution will allow you to block 100% of malicious websites or all NSFW websites, but DNS filtering should certainly be part of your cybersecurity defences as it will allow the majority of malicious sites and malware to be blocked.

Security awareness training solutions typically also provide phishing simulation technologies. Admins can create customized simulated phishing emails and send them to employees to test how effectively they detect attacks. Phishing simulation is an ideal way to gauge security efficacy across the organization, and a useful tool for identifying users who need more security training to help stop the spread of ransomware.

If a ransomware attack succeeds and your data is compromised, the best way to protect your organization is to be able to restore the data you need quickly and minimize the downtime. The best way to protect data is to ensure that it is backed up in multiple places, including in your main storage area, on local disks, and in a cloud continuity service. In the event of a ransomware attack, backing up data means you will be able to mitigate the loss of any encrypted files and regain functionality of systems.
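The backup principle above can be sketched as a small script that copies a file to a second location and verifies the copy with a checksum before trusting it; the paths shown are examples only:

```python
# Minimal sketch of verified backup: copy a file and confirm the copy is
# byte-identical via SHA-256, so a restore after a ransomware incident
# comes from a copy known to be intact.

import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_file(src: Path, backup_dir: Path) -> bool:
    """Copy src into backup_dir; return True only if the copy verifies."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)          # preserves metadata as well as contents
    return sha256_of(src) == sha256_of(dest)

# Example usage (paths are hypothetical):
# ok = backup_file(Path("customer_data.db"), Path("/mnt/local_backup"))
```

A real scheme would also keep an offline or cloud copy that ransomware on the network cannot reach, in line with the multiple-location advice above.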

The best cloud data backup and recovery platforms will allow businesses to recover data after a disaster, will be available anytime, and will integrate easily with existing cloud applications and endpoint devices, on a secure and stable global cloud infrastructure. Cloud data backup and recovery is an important tool for remediating ransomware attacks.


I WAS ONCE ASKED TO FLY TO EUROPE TO MEET THE REAL OWNER OF THE SHIPPING COMPANY ON WHOSE SHIP I WAS TO WORK.

AS A CAPTAIN YOU RARELY GET TO TALK TO THE SHIPOWNER..

THE OWNER, A YOUNG BLOKE, TOOK ME FOR DINNER AND THEN, WHILE WE WERE SIPPING PREMIUM WINE, ASKED ME IF THERE IS ANY WAY TO STOP EMBEZZLEMENT IN HIS COMPANY..

HIS COMPANY AND SHIP HAD ALL WHITES..  I WAS AN OUTSIDER .. A BROWN MAN..

HE SAID THAT HE WAS TOLD ONLY CAPT AJIT VADAKAYIL CAN PUT HIS FINGER AT THE RIGHT PLACE AND CONNECT DOTS .

SO I TOLD HIM..

“YOUR COMPANY HAS A SOFTWARE—A BLANKET ONE ENCOMPASSING ALL SHIP AND SHORE OPERATIONS —  WHICH IS “DELIBERATELY “ KEPT TOO COMPLICATED..

THE BIGGEST DRAWBACK IS THAT YOU DON'T GET A BIRDs EYE VIEW OF THE SITUATION..  YOU ARE FORCED INTO TUNNEL VISION MODE WITH NARROW SPECTRUM POP UP WINDOWS.   NO NORMAL BRAIN CAN COPE WITH THIS LOAD.. YOU ARE ARM TWISTED TO TRUST THE SOFTWARE..

UNLESS YOU ARE EXTREMELY BRIGHT, YOU CANT FIGURE OUT HOW MONEY IS BEING BLED..

PEOPLE IN YOUR OFFICE ARE STEALING LEFT, RIGHT AND CENTRE..   THE TECHNICAL SUPERINTENDENTS ARE IN LEAGUE WITH SHIP STAFF TO MAKE FALSE REQUISITIONS , SIGN FAKE INVOICES  ( FOR SHORT SUPPLY/ SERVICES  )  

THIS SHIPOWNER NEARLY CHOKED .. AND HE BEGGED ME TO GIVE SOLUTIONS TO MITIGATE


I DID !


SOMEBODY CALLED ME UP AND ASKED ME..

CAPTAIN—

WHO IS MUHAMMAD IBN MUSA AL-KHWARIZMI WHOM MODERN HISTORIANS ARE CALLING THE “FATHER OF COMPUTER SCIENCE” AND THE “FATHER OF ALGORITHMS”??.

LISTEN –

ARAB MUHAMMAD IBN MUSA AL-KHWARIZMI WAS A BRAIN DEAD FELLOW WHOSE ENTIRE WORK WAS SOLD TO HIM, TRANSLATED INTO ARABIC, BY THE CALICUT KING FOR GOLD.

THE CALICUT KING MADE HIS MONEY BY NOT ONLY SELLING SPICES –BUT KNOWLEDGE TOO.

THE MAMANKAM FEST HELD AT TIRUNAVAYA, KERALA, BY THE CALICUT KING EVERY 12 YEARS WAS AN OCCASION WHERE KNOWLEDGE WAS SOLD FOR GOLD.

http://ajitvadakayil.blogspot.com/2019/10/perumal-title-of-calicut-thiyya-kings.html

EVERY ANCIENT GREEK SCHOLAR ( PYTHAGORAS/ PLATO/ SOCRATES ETC ) EXCEPT ARISTOTLE STUDIED AT KODUNGALLUR UNIVERSITY.. THE KERALA SCHOOL OF MATH WAS PART OF IT.

OUR ANCIENT BOOKS ON KNOWLEDGE DID NOT HAVE THE AUTHORs NAME AFFIXED ON THE COVER AS WE CONSIDERED BOOKS AS THE WORK OF SOULS , WHO WOULD BE BORN IN ANOTHER WOMANs WOMB AFTER DEATH.

THE GREEKS TOOK ADVANTAGE OF THIS , STOLE KNOWLEDGE FROM KERALA / INDIA AND PATENTED IT IN THEIR OWN NAMES, WITH HALF BAKED UNDERSTANDING .

WHEN THE KING OF CALICUT CAME TO KNOW THIS, HE BLACKBALLED GREEKS FROM KODUNGALLUR UNIVERSITY .. AND SUDDENLY ANCIENT GREEK KNOWLEDGE DRIED UP LIKE WATER IN THE HOT DESERT SANDS.

LATER THE CALICUT KING SOLD TRANSLATED INTO ARABIC KNOWLEDGE TO BRAIN DEAD ARABS LIKE MUHAMMAD IBN MUSA AL-KHWARIZMI FOR GOLD..

THESE ARAB MIDDLE MEN SOLD KNOWLEDGE ( LIKE MIDDLEMEN FOR SPICES) TO WHITE MEN FOR A PREMIUM.

FIBONACCI TOOK HIS ARABIC WORKS TO ITALY FROM BEJAYA , ALGERIA.

http://ajitvadakayil.blogspot.com/2010/12/perfect-six-pack-capt-ajit-vadakayil.html

EVERY VESTIGE OF ARAB KNOWLEDGE IN THE MIDDLE AGES WAS SOLD IN TRANSLATED ARABIC BY KODUNGALLUR UNIVERSITY FOR GOLD..

FROM 800 AD TO 1450 AD KODUNGALLUR UNIVERSITY OWNED BY THE CALICUT KING EARNED HUGE AMOUNT OF GOLD FOR SELLING READY MADE TRANSLATED KNOWLEDGE ..

THIS IS TIPU SULTANs GOLD, WHICH HE STOLE FROM NORTH KERALA TEMPLE VAULTS.. ROTHSCHILD BECAME THE RICHEST MAN ON THIS PLANET BY STEALING TIPU SULTANs GOLD IN 1799 AD.

http://ajitvadakayil.blogspot.com/2011/10/tipu-sultan-unmasked-capt-ajit.html

WHEN TIPU SULTAN WAS BLASTING TEMPLE VAULTS, LESS THAN 1% OF THE GOLD WAS SECRETLY TRANSFERRED TO SOUTH KERALA ( TRADITIONAL ENEMIES ) OF THE CALICUT KING. LIKE HOW SADDAM HUSSAIN FLEW HIS FIGHTER JETS TO ENEMY IRAN .

THIS IS THE GOLD WHICH WAS UNEARTHED FROM PADMANABHASWAMY TEMPLE..

http://ajitvadakayil.blogspot.com/2013/01/mansa-musa-king-of-mali-and-sri.html

ALGORITHMS ARE SHORTCUTS PEOPLE USE TO TELL COMPUTERS WHAT TO DO. AT ITS MOST BASIC, AN ALGORITHM SIMPLY TELLS A COMPUTER WHAT TO DO NEXT WITH AN “AND,” “OR,” OR “NOT” STATEMENT.

THE ALGORITHM IS BASICALLY A CODE DEVELOPED TO CARRY OUT A SPECIFIC PROCESS. ALGORITHMS ARE SETS OF RULES, INITIALLY SET BY HUMANS, FOR COMPUTER PROGRAMS TO FOLLOW.

A PROGRAMMING ALGORITHM IS A COMPUTER PROCEDURE THAT IS A LOT LIKE A RECIPE (CALLED A PROCEDURE) AND TELLS YOUR COMPUTER PRECISELY WHAT STEPS TO TAKE TO SOLVE A PROBLEM OR REACH A GOAL.
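The recipe analogy can be made concrete with one of the oldest recorded algorithms, Euclid's method for the greatest common divisor, which appears below; this is a standard textbook formulation, shown here purely as an illustration:

```python
# A classic example of an algorithm as a precise step-by-step recipe:
# Euclid's method for the greatest common divisor of two integers.

def gcd(a: int, b: int) -> int:
    """Repeat 'replace (a, b) with (b, a mod b)' until b is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

Each step is unambiguous and the loop is guaranteed to terminate, which is exactly what makes it an algorithm rather than a vague instruction.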

THERE IS NO ARTIFICIAL INTELLIGENCE WITHOUT ALGORITHMS. ALGORITHMS ARE, IN PART, OUR OPINIONS EMBEDDED IN CODE.

ALGORITHMS ARE AS OLD AS DANAVA CIVILIZATION ITSELF – THIEF GREEK EUCLID’S ALGORITHM BEING ONE OF THE FIRST EXAMPLES DATING BACK SOME 2300 YEARS

EUCLID JUST PATENTED MATH HE LEARNT IN THE KERALA SCHOOL OF MATH IN HIS OWN NAME.. EUCLID IS A THIEF LIKE PYTHAGORAS WHO LEARNT IN THE KERALA SCHOOL OF MATH.

http://ajitvadakayil.blogspot.com/2011/01/isaac-newton-calculus-thief-capt-ajit.html

ALGEBRA IS DERIVED FROM AL-JABR, ONE OF THE TWO OPERATIONS AL-KHWARIZMI USED TO SOLVE QUADRATIC EQUATIONS.

ALGORISM AND ALGORITHM STEM FROM ALGORITMI, THE LATIN FORM OF HIS NAME.


CONTINUED TO 2--

  1. CONTINUED FROM 1-

    BRAIN DEAD CUNT AL-KHWARIZMI DEVELOPED THE CONCEPT OF THE ALGORITHM IN MATHEMATICS -WHICH IS A REASON FOR HIS BEING CALLED THE GRANDFATHER OF COMPUTER SCIENCE ( SIC ).. THEY SAY THAT THE WORD “ALGORITHM” IS ACTUALLY DERIVED FROM A LATINIZED VERSION OF AL-KHWARIZMI’S NAME BRAAAYYYYYYY.

    ALGORITMI DE NUMERO INDORUM IN ENGLISH AL-KHWARIZMI ON THE HINDU ART OF RECKONING GAVE RISE TO THE WORD ALGORITHM DERIVING FROM HIS NAME IN THE TITLE. THE WORK DESCRIBES THE HINDU PLACE-VALUE SYSTEM OF NUMERALS BASED ON 1, 2, 3, 4, 5, 6, 7, 8, 9, AND 0. THE FIRST USE OF ZERO AS A PLACE HOLDER IN POSITIONAL BASE NOTATION WAS DUE TO AL-KHWARIZMI IN THIS WORK.

    ANOTHER IMPORTANT WORK BY AL-KHWARIZMI WAS HIS WORK SINDHIND ZIJ ON ASTRONOMY. THE WORK, DESCRIBED IN DETAIL IN , IS BASED ON INDIAN ASTRONOMICAL WORKS..

    THE MAIN TOPICS COVERED BY AL-KHWARIZMI IN THE SINDHIND ZIJ ARE CALENDARS; CALCULATING TRUE POSITIONS OF THE SUN, MOON AND PLANETS, TABLES OF SINES AND TANGENTS; SPHERICAL ASTRONOMY; ASTROLOGICAL TABLES; PARALLAX AND ECLIPSE CALCULATIONS; AND VISIBILITY OF THE MOON. A RELATED MANUSCRIPT, ATTRIBUTED TO AL-KHWARIZMI, ON SPHERICAL TRIGONOMETRY IS DISCUSSED..

    PTOLEMY’ ENTIRE WORKS ARE LIFTED FROM KODUNGALLUR UNIVERSITY KERALA OWNED BY THE CALICUT KING. AL-KHWARIZMI'S TABLES WERE CAST ON PTOLEMY’S TABLES.

    AL-KHWARIZMI WROTE ON THE ASTROLABE AND SUNDIALS ,WHICH ARE HINDU INSTRUMENTS

    THERE IS A STATUE OF MUHAMMAD IBN MUSA AL-KHWARIZMI HOLDING UP AN ASTROLABE IN FRONT OF THE FACULTY OF MATHEMATICS OF AMIRKABIR UNIVERSITY OF TECHNOLOGY IN TEHRAN . HE GOT AN ASTROLABE INSTRUMENT, AND ITS MANUAL ( BOTH CONSTRUCTION AND OPERATION ) TRANSLATED INTO ARABIC, FOR GOLD .. HIS ASTROLABE INSTRUMENT HAD PLATES FOR MECCA/ ISTANBUL/ ALEXANDRIA.

    ASTROLABE BRASS INSTRUMENTS WERE SOLD BY KODUNGALLUR UNIVERSITY PROFESSORS AT THE LIBRARY OF CORDOBA IN SPAIN..

    THESE SIMPLE BRASS DEEP SEA NAVIGATION INSTRUMENTS WERE PRODUCED MUCH BEFORE THE COMPLICATED ANTIKYTHERA AUTOMATIC ( PERPETUAL MOTION ) MECHANISM..

    THE DEEP SEA NAVIGATING SHIPS OF QUEEN DIDO , A KERALA THIYYA PRINCESS WHO TAUGHT AT THE UNIVERSITY OF ALEXANDRIA IN 1600 BC ( ON DEPUTATION FROM KODUNGALLUR UNIVERSITY ) CARRIED THESE INSTRUMENTS..

    http://ajitvadakayil.blogspot.com/2019/05/the-ancient-7000-year-old-shakti.html

    THE ASTROLABE IS AN ELABORATE INCLINOMETER, HISTORICALLY USED BY ASTRONOMERS AND NAVIGATORS TO MEASURE THE ALTITUDE OF A CELESTIAL BODY ABOVE THE HORIZON, DAY OR NIGHT.

    IT CAN BE USED TO IDENTIFY STARS OR PLANETS, TO DETERMINE LOCAL LATITUDE GIVEN LOCAL TIME (AND VICE VERSA), TO SURVEY, OR TO TRIANGULATE. ASTROLABE WAS CALLED SITARA YANTRA..

    http://ajitvadakayil.blogspot.com/2019/09/onam-our-only-link-to-planets-oldest.html

    AN ASTROLABE (SOLD IN CORDOBA SPAIN ) EXCAVATED FROM THE WRECK SITE OF A PORTUGUESE ARMADA SHIP WAS CERTIFIED AS THE OLDEST IN THE WORLD. A SHIP'S BELL -- DATED 1498 -- RECOVERED FROM THE SAME WRECK SITE WAS ALSO CERTIFIED AS THE OLDEST IN THE WORLD.

    DONT EVER THINK THAT VASCO DA GAMA AND COLUMBUS NAVIGATED ON WESTERN TECHNOLOGY.. THEY USED ANCIENT DEEP SEA NAVIGATING INSTRUMENTS OF ANCIENT KERALA THIYYA NAVIGATORS..

    DIOPHANTUS STUDIED IN KODUNGALLUR UNIVERSITY. HE IS THE AUTHOR OF A SERIES OF BOOKS CALLED ARITHMETICA, ALL LIFTED FROM KERALA SCHOOL OF MATH.

    THIEF DIOPHANTUS WAS THE FIRST GREEK MATHEMATICIAN WHO RECOGNIZED FRACTIONS AS NUMBERS; THUS HE ALLOWED POSITIVE RATIONAL NUMBERS FOR THE COEFFICIENTS AND SOLUTIONS.

    IN MODERN USE, DIOPHANTINE EQUATIONS ARE USUALLY ALGEBRAIC EQUATIONS WITH INTEGER COEFFICIENTS, FOR WHICH INTEGER SOLUTIONS ARE SOUGHT. DIOPHANTUS WAS A BRAIN DEAD FELLOW WHO STOLE HIS ALGEBRA FROM THE KERALA SCHOOL OF MATH.

    MEDIOCRE BRAIN JEW ALBERT EINSTEIN WAS A THIEF… HE STOLE FROM PART TWO ( BRAHMANAS ) AND PART THREE ( ARANYAKAS ) OF THE VEDAS..

    http://ajitvadakayil.blogspot.com/2018/11/albert-einstein-was-thief-plagiarist.html

    LIES WONT WORK.. A BROWN BLOGGER IS IN TOWN !

    Capt ajit vadakayil
    ..



  1. JEW GEORGE SOROS IS A WEE AGENT OF JEW ROTHSCHILD WHO RULED INDIA...

    JEW GEORGE SOROS IS BEING USED BY THE JEWISH DEEP STATE TO MAKE INDIA IMPLODE FROM WITHIN..

    THE WHITE JEW KNOWS THAN IN 13 YEARS INDIA WILL BE THIS PLANETs NO 1 SUPERPOWER AND IT PLANS TO MAKE INDIA IMPLODE FROM WITHIN..

    HARSH MANDER WHO TRIGGERED THE DELHI ANTI-CAA MUSLIM RIGHTS RIOTS IS AN AGENT OF JEW SOROS WHO HAS DONATED ONE BILLION USD TO FIGHT HINDUS AND CREATE DISCORD IN INDIA....

    https://www.opensocietyfoundations.org/who-we-are/boards/human-rights-initiative-advisory-board

    HARDH MANDAR IS A BOARD MEMBER OF OPEN SOCIETY FOUNDATIONS -- A JEWISH DEEP STATE ORGANISATION LED BY GEORGE SOROS ..

    https://www.opindia.com/2020/01/george-soros-1-billion-dollar-fight-nationalists-pm-modi-usa-china-russia/

    ONE BILLION USD DONATION BY GEORGE SOROS TO CREATE ANTI-HINDU SENTIMENTS IN INDIA IS THE TIP OF THE ICEBERG.. AS GEORGE SOROS HAS ALREADY DONATED 32 BILLION TO OPEN SOCIETY FOUNDATIONS..

    https://www.opensocietyfoundations.org/george-soros

    WE ASK AJIT DOVAL.. AS NSA WHAT IS YOUR JOB?.. IS IT TO GO AROUND CONSOLING MUSLIMS AFTER CAA DELHI RIOTS ?..

    MANY INDIAN JOURNALISTS , COLLEGIUM JUDGES , PROFESSORS OF SOCIAL SCIENCES IN ELITE INDIAN COLLEGES ARE IN DEEP STATE PAYROLL.

    WHY IS HARDH MANDAR NOT IN JAIL ?..

    WHY HAS JUDICIARY LEGALIZED BITCOIN WHICH IS USED TO FUND ISLAMIC MERCENARIES IN KASHMIR AND DESH DROHIS IN INDIA?...

    HARSH MANDER WHO IS SPONSORED BY LIBERAL INDIAN JUDGES IS THE CHAIRMAN OF GEORGE SOROS’S OPEN SOCIETY FOUNDATION’S HUMAN RIGHTS INITIATIVE ADVISORY BOARD..

    WE WANT THIS FOREIGN FUNDED DESH DROHI ORG KARWAN E MOHABBAT TO BE PROFILED..

    MODI IS NAIVE TO BELIEVE THAT JEWS WHO SPONSORED HIM WITH A SIKH TURBAN IN 1976, IS ON HIS SIDE.. WE ASK MODI TO WORK FOR BHARATMATA NOT HIS JEWISH MASTERS ..

    WE WATCH..

    https://ajitvadakayil.blogspot.com/2020/01/we-people-are-done-with-illegal.html

    capt ajit vadakayil
    ..
    1. PUT ABOVE COMMENT IN WEBSITES OF--
      HARSH MANDAR
      EXTERNAL MINISTER/ MINISTRY
      TRUMP
      PUTIN
      AMBASSADORS TO FROM USA/ RUSSIA
      PMO
      PM MODI
      AJIT DOVAL
      RAW
      IB
      NIA
      ED
      CBI
      AMIT SHAH
      HOME MINISTRY
      DEFENCE MINISTER/ MINISTRY
      ALL 3 ARMED FORCE CHIEFS
      CDS
      FINANCE MINISTER/ MINISTRY
      DAVID FRAWLEY
      STEPHEN KNAPP
      WILLIAM DALRYMPLE
      KONRAED ELST
      FRANCOIS GAUTIER
      CJI BOBDE
      ATTORNEY GENERAL
      ALL SUPREME COURT JUDGES
      ALL SUPREME COURT LAWYERS
      LAW MINISTER/ MINISTRY CENTRE AND STATE
      ALL HIGH COURT CHIEF JUSTICES
      ALL MPs OF INDIA
      ALL MLAs OF INDIA
      CMs OF ALL INDIAN STATES
      DGPs OF ALL STATES
      GOVERNORS OF ALL STATES
      PRESIDENT OF INDIA
      VP OF INDIA
      SPEAKER LOK SABHA
      SPEAKER RAJYA SABHA
      NITI AYOG
      AMITABH KANT
      NCERT
      EDUCATION MINISTER/ MINISTRY
      NALIN KOHLI
      GVL NARASIMHA RAO
      SAMBIT PATRA
      VIVEK OBEROI
      GAUTAM GAMBHIR
      ASHOK PANDIT
      ANUPAM KHER
      KANGANA RANAUT
      VIVEK AGNIHOTRI
      KIRON KHER
      MEENAKSHI LEKHI
      SMRITI IRANI
      PRASOON JOSHI
      MADHUR BHANDARKAR
      SWAPAN DASGUPTA
      SONAL MANSINGH
      MADHU KISHWAR
      SUDHIR CHAUDHARY
      GEN GD BAKSHI
      RSN SINGH
      E SREEDHARAN
      MOHANLAL
      SURESH GOPI
      CHANDAN MITRA
      RAHUL EASWWAR
      TOM VADAKKAN
      PC GEORGE MLA
      SRIDHARAN PILLAI
      PARASARAN
      SAI DEEPAK
      VIDYASAGAR GURUMURTHY
      RAJEEV CHANDRASHEKAR
      MOHANDAS PAI
      CLOSET COMMIE ARNAB GOSWMI
      RAJDEEP SARDESAI
      BARKHA DUTT
      NAVIKA KUMAR
      ZAKKA JACOB
      ANAND NARASIMHAN
      FAYE DSOUZA
      SHEKHAR GUPTA
      PRANNOY JAMES ROY
      AROON PURIE
      VINEET JAIN
      RAGHAV BAHL
      SIDHARTH VARADARAJAN
      N RAM
      KAMALAHASSAN
      ALPESH THAKORE
      CHANDRASEKHAR OF BHIM ARMY
      TEESTA SETALVAD
      KAVITA KRISHNAN
      SHUBHA MUDGAL
      NALINI SINGH
      DILIP CHERIAN
      SUMEET CHOPRA
      DINESH VARSHNEY
      VC OF JNU
      VC OF DU/ JU/ TISS
      DEAN OF FTII
      KANCHA ILAIH
      BRINDA KARAT
      PRAKASH KARAT
      SITARAM YECHURY
      MANI SHANKAR AIYERAN
      ROMILA THAPAR
      IRFAN HABIB
      NIVEDITA MENON
      AYESHA KIDWAI
      DANIEL RAJA
      KARAN THAPAR
      SHOBHAA DE
      ARUNDHATI ROY
      SHASHI THAROOR
      RANA AYYUB
      THAMBI SUNDAR PICHAI
      SATYA NADELLA
      CEO OF WIKIPEDIA
      QUORA CEO ANGELO D ADAMS
      QUORA MODERATION TEAM
      KURT OF QUORA
      GAUTAM SHEWAKRAMANI
      ALL INDIAN THINK TANKS
      SPREAD ON SOCIAL MEDIA

JEW ROTHSCHILD WHO COOLED UP PALI SPEAKING / PIG EATING GAUTAMA BUDDHA AND PALI SCRIPT FOR THIS BURMESE DIALECT , CONVERTED—

SHRADDHA TO SADDA
CHAKRA TO CHAKKA,
DHARMA TO DHAMMA,
KARMA TO KAMMA,
SUTRA TO SUTTA ,
SATVA TO SATTA ,
PUTRA TO PUTTA ,
VASTU TO VATTU ETC..

SO ROMILA BABY , LET US ALL DANCE TO HUMMA ( HARAMI ).

https://www.youtube.com/watch?v=IhdUyiK-TTI

capt ajit vadakayil
..

SEND THIS COMMENT TO ROMILA THAPAR




BELOW:  EUROPE AND SCANDINAVIA HAVE CHRISTIAN POPULATION.. BUT THEIR RULERS ARE CRYPTO JEWS INSTALLED BY THE JEWISH DEEP STATE.. 

SEAMLESS BOUNDARIES AND SINGLE EURO CURRENCY WAS A ROTHSCHILD CONSPIRACY..

PIR ALI MUSLIM TAGORE WAS ROTHSCHILDs AGENT FOR "SEAMLESS BOUNDARIES" AND "HEAVEN OF FREEDOM"..







THIS POST IS NOW CONTINUED TO PART 16, BELOW--


https://ajitvadakayil.blogspot.com/2020/03/what-artificial-intelligence-cannot-do.html






CAPT AJIT VADAKAYIL
..