THIS POST IS CONTINUED FROM PART 11, BELOW—
ARTIFICIAL INTELLIGENCE GIVES US THE BULLSHIT OF GLOBAL WARMING AND CLIMATE CHANGE …
BEFORE PARIS COP 21 WAS CALLED, I WAS THE ONLY PERSON ON THE INTERNET WHO WROTE THAT CARBON DIOXIDE IS A GOOD GAS. EVERYBODY KNOWS THIS..
DONALD TRUMP READ IT AND TOOK A DECISION..
JEWESS GRETA THUNBERG WAS REJECTED AT DAVOS .
IT IS OBVIOUS THAT A PERCEPTIVE SHIP CAPTAIN WHO COMMANDED SHIPS ALLAROUND THE PLANET F0R 30 YEARS KNOWS MORE THAN THIS 16 YEAR OLD MENTAL MIDGET..
CAPT AJIT VADAKAYIL SAYS AI MUST MEAN “INTELLIGENCE AUGMENTATION” IN FUTURE ..
Let this be IA
OBJECTIVE AI CANNOT HAVE A VISION,
IT CANNOT PRIORITIZE,
IT CANNOT GLEAN CONTEXT,
IT CANNOT TELL THE MORAL OF A STORY,
IT CANNOT RECOGNIZE A JOKE, OR BE A JUDGE IN A JOKE CONTEST,
IT CANNOT DRIVE CHANGE,
IT CANNOT INNOVATE,
IT CANNOT DO ROOT CAUSE ANALYSIS ,
IT CANNOT MULTI-TASK,
IT CANNOT DETECT SARCASM,
IT CANNOT DO DYNAMIC RISK ASSESSMENT ,
IT IS UNABLE TO REFINE OWN KNOWLEDGE TO WISDOM,
IT IS BLIND TO SUBJECTIVITY,
IT CANNOT EVALUATE POTENTIAL,
IT CANNOT SELF IMPROVE WITH EXPERIENCE,
IT CANNOT UNLEARN
IT IS PRONE TO CATASTROPHIC FORGETTING
IT DOES NOT UNDERSTAND BASICS OF CAUSE AND EFFECT,
IT CANNOT JUDGE SUBJECTIVELY TO VETO/ ABORT,
IT CANNOT FOSTER TEAMWORK DUE TO RESTRICTED SCOPE,
IT CANNOT MENTOR,
IT CANNOT BE CREATIVE,
IT CANNOT THINK FOR ITSELF,
IT CANNOT TEACH OR ANSWER STUDENTS' QUESTIONS,
IT CANNOT PATENT AN INVENTION,
IT CANNOT SEE THE BIG PICTURE ,
IT CANNOT FIGURE OUT WHAT IS MORALLY WRONG,
IT CANNOT PROVIDE NATURAL JUSTICE,
IT CANNOT FORMULATE LAWS
IT CANNOT FIGURE OUT WHAT GOES AGAINST HUMAN DIGNITY
IT CAN BE FOOLED EASILY USING DECOYS WHICH CANT FOOL A CHILD,
IT CANNOT BE A SELF STARTER,
IT CANNOT UNDERSTAND APT TIMING,
IT CANNOT FEEL
IT CANNOT GET INSPIRED
IT CANNOT USE PAIN AS FEEDBACK,
IT CANNOT GET EXCITED BY ANYTHING
IT HAS NO SPONTANEITY TO MAKE THE BEST OUT OF A SITUATION
IT CAN BE CONFOUNDED BY NEW SITUATIONS
IT CANNOT FIGURE OUT GREY AREAS,
IT CANNOT GLEAN WORTH OR VALUE
IT CANNOT UNDERSTAND TEAMWORK DYNAMICS
IT HAS NO INTENTION
IT HAS NO INTUITION,
IT HAS NO FREE WILL
IT HAS NO DESIRE
IT CANNOT SET A GOAL
IT CANNOT BE SUBJECTED TO THE LAWS OF KARMA
ON THE CONTRARY, IT CAN SPAWN FOUL AND RUTHLESS GLOBAL FRAUD (CLIMATE CHANGE DUE TO CO2) WITH DELIBERATE BLACK BOX ALGORITHMS. THESE ARE JUST A FEW AMONG MORE THAN 60 CRITICAL INHERENT DEFICIENCIES.
HUMANS HAVE THINGS A COMPUTER CAN NEVER HAVE: A SUBCONSCIOUS BRAIN LOBE, REM SLEEP WHICH BACKS UP BETWEEN RIGHT/LEFT BRAIN LOBES AND FROM THE AAKASHA BANK, A GUT WHICH INTUITS, 30 TRILLION BODY CELLS WHICH HOLD MEMORY, A VAGUS NERVE, AN AMYGDALA, 73% WATER IN THE BRAIN FOR MEMORY, 10 BILLION MILES OF ORGANIC DNA MOBIUS WIRING, ETC.
SINGULARITY , MY ASS !
Spoofing is a disruptive algorithmic trading activity employed by traders to outpace other market participants and to manipulate markets.
Spoofers feign interest in trading futures, stocks and other products in financial markets, creating an illusion of demand and supply for the traded asset.
In an order driven market, spoofers post a relatively large number of limit orders on one side of the limit order book to make other market participants believe that there is pressure to sell (limit orders are posted on the offer side of the book) or to buy (limit orders are posted on the bid side of the book) the asset
In spoofing, the algorithm places orders between the bid and ask prices with the intention of cancelling them before they are executed. The aim of this scheme is to create a false sentiment about the security.
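To make the mechanic above concrete, here is a minimal toy sketch (not real market code, and not any exchange's actual matching logic) of how large one-sided limit orders skew the order-book imbalance that other participants see, and how cancelling them restores it:

```python
# Toy illustration: how one-sided limit orders can skew the visible
# order-book imbalance other participants use to gauge buying pressure.

def book_imbalance(bids, asks):
    """Imbalance in [-1, 1]: positive values suggest buying pressure."""
    bid_vol = sum(size for _, size in bids)
    ask_vol = sum(size for _, size in asks)
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

# A roughly balanced book: (price, size) pairs.
bids = [(99.9, 100), (99.8, 120)]
asks = [(100.1, 110), (100.2, 115)]
print(round(book_imbalance(bids, asks), 3))  # -0.011, roughly neutral

# The spoofer posts a large bid it never intends to let execute ...
spoof_order = (99.7, 2000)
bids.append(spoof_order)
print(round(book_imbalance(bids, asks), 3))  # 0.816, strong apparent buy pressure

# ... and cancels it before it can fill.
bids.remove(spoof_order)
print(round(book_imbalance(bids, asks), 3))  # -0.011, back to neutral
```

The order sizes and prices are invented; the point is only that a single large resting order can dominate a naive volume-based signal.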
Where an algorithm is designed to use simple buy and sell transactions at rapid speeds to manipulate prices, such intent tests would find no liability.
For example, an algorithm may be designed to enter into legitimate transactions 90% of the time and “spoof” the other 10% of the time — that is, place orders only to rapidly withdraw them.
In such a case, it will be difficult to demonstrate that the algorithm was designed to engage in spoofing, particularly when the designer of the algorithm can point to hundreds of thousands of legitimate transactions on a motion for summary judgment.
A trading AI that, for example, frequently places and then cancels orders may lead the trader using it to suspect that it is spoofing; in that case, the trader may be considered willfully blind to the spoofing if he does not then monitor or limit the AI’s conduct.
Computers are programmed to trade in a micro-second once they detect certain triggering quantitative data. Obviously, this is how high frequency traders have come to dominate the market.
High frequency trading (“HFT”) algorithms, which were used by financial firms to trade securities and commodities in fractions of seconds, were some of the first algorithms to expose the potential problems with the intent tests.
HFTs are for the most part based on hardcoded rules that allow computer systems to react faster than any human being. The central strategy for many of these HFTs is to identify an inefficiency in the market and to trade it away before anyone else does.
The same speed that allows these algorithms to exploit market inefficiencies also allows them to engage in conduct that may be unethical or border on being unlawful.
For example, an HFT algorithm can be used to beat other orders to market by fractions of a second, allowing the algorithm to (a) determine that someone was seeking to buy a security at a certain price, (b) buy the security before the other person does, and (c) sell it to them at a higher price.
They may also be used to engage in conduct such as “spoofing,” where the algorithm places phantom orders on markets, only to withdraw them once the market has moved in a desired direction.
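The (a)-(b)-(c) pattern above boils down to simple arithmetic: buy ahead of a slower participant and resell to them at a markup. A hypothetical sketch (all names, prices, and the markup are invented for illustration):

```python
# Hypothetical sketch of the latency-advantage pattern described above:
# detect a resting buy interest, buy the asset first, resell it higher.

def front_run(detected_buy_price, fast_fill_price, markup=0.01):
    """Profit per share if a faster trader buys at fast_fill_price and
    resells at detected_buy_price + markup. Purely illustrative."""
    resale_price = detected_buy_price + markup
    return resale_price - fast_fill_price

# A slower participant is willing to buy at 50.00; the fast algorithm
# lifts the current offer at 49.99 and resells at 50.01.
profit = front_run(detected_buy_price=50.00, fast_fill_price=49.99)
print(round(profit, 2))  # 0.02 per share, earned in fractions of a second
```

Tiny per-share profits like this only matter because the cycle repeats thousands of times per day.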
"Pump and dump" (P&D) is a form of securities fraud that involves artificially inflating the price of an owned stock through false and misleading positive statements, in order to sell the cheaply purchased stock at a higher price.
Once the operators of the scheme "dump" (sell) their overvalued shares, the price falls and investors lose their money. This is most common with small cap cryptocurrencies and very small corporations, i.e. "microcaps"
There are a few warning signs that you should be aware of that may help you spot a potential scam.
The investment on offer may be a scam if the person selling to you:--
does not have an Australian financial services (AFS) licence, or says they don’t need one
rings you repeatedly and tries to keep you on the phone, or emails you a lot to keep you engaged
says you need to make a quick decision or you’ll miss out on the deal
claims to be a professional broker or portfolio manager and sounds professional, but does not have an AFS licence
uses a name or claims to be associated with a reputable organisation (e.g. NASDAQ, Bloomberg) to gain credibility
offers you a glossy prospectus or brochures, professional-looking share certificates or receipts, but their prospectus is not registered with ASIC.
Boiler rooms scams, using the pump and dump technique, also tend to focus on micro and small cap stocks. This is because their value is generally easier to manipulate. Information on these types of stocks is often hard to find, which can make it difficult for investors to thoroughly research the company and refute the claims of fraudsters.
While fraudsters in the past relied on cold calls, the Internet now offers a cheaper and easier way of reaching large numbers of potential investors through spam email, bad data, social media, and false information
The Anti-Spoofing Statute
Spoofing, which Congress criminalized in 2010 as part of the Dodd-Frank Wall Street Reform and Consumer Protection Act, is an illegal shortcut through the risky volatility of these markets. Title 7 of the United States Code, § 6c(a)(5)(C), makes it unlawful for "any person to engage in any trading, practice, or conduct on or subject to the rules of a registered entity that ... is, is of the character of, or is commonly known to the trade as, 'spoofing' (bidding or offering with the intent to cancel the bid or offer before execution)." Spoofing is illegal because it "can be employed to artificially move the market price of a stock or commodity up and down, instead of taking advantage of natural market events."
A spoofing trader can create artificial supply and demand by "placing large and small orders on opposite sides of the market. The small order is placed at a desired price, which is either above or below the current market price, depending on whether the trader wants to buy or sell. If the trader wants to buy, the price on the small batch will be lower than the market price; if the trader wants to sell, the price on the small batch will be higher. Large orders are then placed on the opposite side of the market at prices designed to shift the market toward the price at which the small order was listed."
"In short, the trader shifts the market downward through the illusion of downward market movement resulting from a surplus of supply. Importantly, the large, market-shifting orders that he places to create this illusion are ones that he never intends to execute; if they were executed, our unscrupulous trader would risk extremely large amounts of money by selling at suboptimal prices”
United States v. Coscia
United States v. Coscia, the first spoofing case ever brought, is instructive. Michael Coscia was a longtime commodities futures trader at a high-frequency trading firm. Coscia enlisted the help of a computer programmer to design two computer programs that would work in 17 different CME Group markets and three different European futures markets. Coscia's high-frequency trading strategy allowed him to enter and cancel large-volume orders in a matter of milliseconds, which created an artificial supply and demand. Accordingly, Coscia was able to purchase contracts at lower prices or sell contracts at higher prices by artificially pumping and then deflating the market by placing and cancelling orders. Within milliseconds of achieving the desired market effect, Coscia cancelled the large orders. The computer programs detected when conditions were ripest and operated through a system of trade orders and quote orders, and the entire series of transactions would take place in a matter of milliseconds.
In August of 2011, Coscia employed this strategy with various futures commodities, including gold, soybean meal, soybean oil, high-grade copper, Euro FX and Pounds FX currency futures. In one instance involving copper futures, Coscia risked up to $50 million by placing large orders, which drove the price up. The buy orders created the illusion of market movement in copper futures and Coscia sold the contracts at his desired price point. Coscia then placed large volume orders to sell the contracts, which created downward momentum on copper future prices by fostering the appearance of abundant supply. This allowed Coscia to buy his small orders at the artificially deflated price. Coscia then immediately cancelled the large orders. Coscia's whole process was repeated tens of thousands of times resulting in over 450,000 large contract orders. This all took place in two-thirds of one second, earning Coscia profits of over $1.4 million.
What makes spoofing criminal is the trader's intent when placing the order. "Prosecutors can charge only a person whom they believe a jury will find possessed the requisite specific intent to cancel orders at the time they were placed." "Legitimate good-faith cancellation of partially filled orders" does not violate the anti-spoofing statute. In Coscia's case, he would be guilty of spoofing if he knowingly entered trades with the present intent to cancel them prior to execution. This is harder to prove than one might think and can lead to competing inferences. Coscia argued that his trades were "concededly conditional" and that his trading strategy was "virtually identical to other durational or contingent orders routinely permitted by exchange trading interfaces." As Coscia pointed out, cancelled trades are common in the high-frequency trading world where 98% of orders are cancelled before execution.
A Chicago jury disagreed and convicted Coscia. The U.S. Court of Appeals for the Seventh Circuit affirmed the conviction and found that Coscia had the requisite criminal intent. The Court found that Coscia's conduct was spoofing based on several key pieces of evidence introduced at trial: (1) Coscia was responsible for 96 percent of all cancellations on the European exchange during the two months when he used the computer program; (2) on the Chicago Mercantile Exchange 35.61% of Coscia's small orders were filled, but only 0.08% of his large orders were filled; (3) the designer of the computer program offered devastating testimony that the computer programs were made to avoid large orders being filled and that "quote orders" were used to "pump" the market; (4) only 0.57% of Coscia's large orders were on the market for more than a single second, but 65% of his small orders were open on the market for more than a second; and (5) Coscia's order-to-trade ratio was 1,592%, while that ratio for other market participants was between only 91% and 264%, which meant that Coscia's average order was far greater than his average trade. The Seventh Circuit found that while "no single piece of evidence necessarily established spoofing," the proof taken together allowed jurors to determine that Coscia entered his orders with the intent to cancel them before execution.
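The statistics the Seventh Circuit relied on are straightforward to compute from order data. A minimal sketch (the order counts below are invented round numbers chosen to reproduce the percentages cited, not Coscia's actual order records):

```python
# Sketch of the summary statistics cited in Coscia: fill rates for small
# vs. large orders, and an order-to-trade ratio. All inputs are invented.

def fill_rate(filled, placed):
    """Percentage of placed orders that were actually filled."""
    return 100.0 * filled / placed

def order_to_trade_ratio(total_order_volume, total_traded_volume):
    """Average order size relative to average trade size, as a percentage."""
    return 100.0 * total_order_volume / total_traded_volume

small = {"placed": 10_000, "filled": 3_561}
large = {"placed": 10_000, "filled": 8}

print(round(fill_rate(small["filled"], small["placed"]), 2))  # 35.61: small orders fill often
print(round(fill_rate(large["filled"], large["placed"]), 2))  # 0.08: large orders almost never fill

# A ratio far above peers means orders dwarf executed trades.
print(order_to_trade_ratio(total_order_volume=15_920, total_traded_volume=1_000))  # 1592.0
```

A wide gap between small-order and large-order fill rates is exactly the circumstantial pattern the jury treated as evidence of intent to cancel.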
Coscia argued that the anti-spoofing statute was impermissibly vague and attempted to equate his conduct with various legal trading strategies such as "stop-loss orders" (an order to sell when a certain price is reached), "fill-or-kill orders" (an order that must be filled immediately or it's fully cancelled), "partial-fill" orders (a pre-programmed order that cancels the balance of an order once a portion is filled), "Good-til-date orders" (orders that cancel with a defined period of time), "ping orders" (small orders used to detect trading activity), or "iceberg" or hidden quantity orders" (orders designed to obscure the underlying supply or demand). The Court, unpersuaded by these arguments, found that those trading strategies were markedly different from what Coscia did because they are designed to be executed under certain conditions whereas Coscia's large orders "were designed to evade execution" altogether.
Despite mounting a vigorous defense, Coscia was found to have "designed a system that used large orders to inflate or deflate prices, while also structuring that system to avoid the filling of large orders." Coscia essentially made, pumped, and then dumped the market in milliseconds, profiting enormously in a matter of seconds. While the evidence of Coscia's spoofing was circumstantial, it was also clear, and the U.S. Supreme Court denied his petition for certiorari. However, there may be closer calls in the near future. The second spoofing trial, involving a Swiss trader in the District of Connecticut, resulted in six of the seven counts being dismissed for lack of venue and an acquittal on the remaining charge. This past spring, federal prosecutors in Chicago dismissed spoofing charges against a software developer after his trial ended in a hung jury. The software developer’s $24,200 program was alleged to have aided a trader with a $40 million spoofing scheme of S&P 500 futures that caused a “flash crash” on the equity markets in May of 2010.
There may be more challenges in the future as high-frequency trading and technology continue to evolve. The introduction and use of artificial intelligence to effectuate high-frequency trading could obfuscate spoofing's intent element and allow for a more persuasive defense about an individual's intent to cancel trades before execution.
Some forms of intent require an examination of a decisionmaker or actor’s basis for conduct. The AI in that section placed trading orders, consummating some of them and rapidly withdrawing or changing others. It is possible that the algorithm was engaging in spoofing. The evidence, however, will likely be equivocal.
The designer of the algorithm will have tens of thousands of legitimate transactions to point to for every dozen or so withdrawn orders. The Black Box Problem ensures that there is no way to determine what the AI’s particular strategy is.
Unlike the first generation of algorithms discussed earlier, there will not be instructions somewhere in the AI’s programming that are designed to engage in spoofing, so there will not be testimony from the AI’s designer to that effect, as there was in Coscia.
Flash crash is a very rapid, deep, and volatile fall in security prices occurring within an extremely short time period. A flash crash frequently stems from trades executed by black-box trading, combined with high-frequency trading, whose speed and interconnectedness can result in the loss and recovery of billions of dollars in a matter of minutes and seconds.
May 6, 2010, is a date that should live in infamy. On that day, the US stock market suffered a trillion-dollar collapse..
In just 15 minutes from 2:45 pm, the Dow Jones plunged by almost 9 percent. Had it stuck, it would have been among the famous index’s biggest declines in a single day. Hundreds of billions of dollars were wiped off the face value of famous companies in the S&P 500. Stocks in some companies were trading for a single cent.
Almost as soon as the sleepless Wall Street traders had time to adjust to the nightmare scenario unfolding in front of them, the market began to rebound. Within another fifteen minutes, almost all of the losses had been recovered.
It was one of those moments where you can only marvel at the absurdity of what it is to be a modern human. The numbers on the Dow Jones index are just numbers; the value they represent is, more than ever before, based on perceptions: stories we tell each other about companies and the economy.
A flash crash, like the one that occurred on May 6, 2010, is exacerbated as computer trading programs react to aberrations in the market, such as heavy selling in one or many securities, and automatically begin selling large volumes at an incredibly rapid pace to avoid losses.
In a separate flash crash in January 2019, what started as a statement from Apple citing a weak Chinese economy caused investors and traders to sell out of currencies that were viewed as risky. The result was a sell-off in the Australian dollar, since Australia is a key trading partner of China.
Navinder Sarao, the London trader whose spoofing was blamed for contributing to the 2010 flash crash, and his company, Nav Sarao Futures Limited, allegedly made more than $40 million in profit from trading between 2009 and 2015.
Algorithmic trading (also called automated trading, black-box trading, or algo-trading) uses a computer program that follows a defined set of instructions (an algorithm) to place a trade. The trade, in theory, can generate profits at a speed and frequency that is impossible for a human trader.
The defined sets of instructions are based on timing, price, quantity, or any mathematical model. Apart from profit opportunities for the trader, algo-trading renders markets more liquid and trading more systematic by ruling out the impact of human emotions on trading activities.
Algorithmic trading is a method of executing orders using automated pre-programmed trading instructions accounting for variables such as time, price, and volume to send small slices of the order (child orders) out to the market over time. They were developed so that traders do not need to constantly watch a stock and repeatedly send those slices out manually.
Popular "algos" include Percentage of Volume, Pegged, VWAP, TWAP, Implementation shortfall, Target close. In the twenty-first century, algorithmic trading has been gaining traction with both retail and institutional traders.
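The child-order idea behind execution algos like TWAP can be sketched in a few lines. This is a deliberately minimal scheduler (real implementations also randomize timing and size to avoid being detected and gamed):

```python
# Minimal sketch of TWAP-style order slicing: split a parent order into
# near-equal child orders to be sent at regular intervals.

def twap_schedule(total_shares, n_slices):
    """Split total_shares into n_slices near-equal child orders."""
    base, remainder = divmod(total_shares, n_slices)
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]

children = twap_schedule(total_shares=10_000, n_slices=6)
print(children)  # [1667, 1667, 1667, 1667, 1666, 1666]
assert sum(children) == 10_000  # no shares lost to rounding
```

VWAP works the same way, except slice sizes follow the market's historical volume profile rather than being equal.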
Algo-trading provides the following benefits:---
Trades are executed at the best possible prices.
Trade order placement is instant and accurate (there is a high chance of execution at the desired levels).
Trades are timed correctly and instantly to avoid significant price changes.
Reduced transaction costs.
Simultaneous automated checks on multiple market conditions.
Reduced risk of manual errors when placing trades.
Algo-trading can be backtested using available historical and real-time data to see if it is a viable trading strategy.
Reduced the possibility of mistakes by human traders based on emotional and psychological factors.
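The backtesting point above can be sketched in a few lines: replay a trading rule over historical prices and measure what it would have returned. This toy example uses an invented moving-average rule and made-up prices; real backtests must also account for costs, slippage, and lookahead bias:

```python
# Toy backtest sketch: hold the asset whenever the latest known price is
# above its short moving average; otherwise stay in cash.

def backtest(prices, window=3):
    """Return the cumulative return of the rule over the price series."""
    equity = 1.0
    for i in range(window, len(prices)):
        sma = sum(prices[i - window:i]) / window   # uses only past data
        if prices[i - 1] > sma:                    # signal known before day i
            equity *= prices[i] / prices[i - 1]    # earn that day's return
    return equity

growth = backtest([100, 101, 103, 102, 105, 107])
print(round(100 * (growth - 1), 2), "% cumulative return")
```

If the rule loses money on history, that is cheap information; if it makes money, the result still needs out-of-sample validation before it can be trusted.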
Most algo-trading today is high-frequency trading (HFT), which attempts to capitalize on placing a large number of orders at rapid speeds across multiple markets and multiple decision parameters based on preprogrammed instructions.
Algo-trading is used in many forms of trading and investment activities including:---
Mid- to long-term investors or buy-side firms—pension funds, mutual funds, insurance companies—use algo-trading to purchase stocks in large quantities when they do not want to influence stock prices with discrete, large-volume investments.
Short-term traders and sell-side participants—market makers (such as brokerage houses), speculators, and arbitrageurs—benefit from automated trade execution; in addition, algo-trading aids in creating sufficient liquidity for sellers in the market.
Systematic traders—trend followers, hedge funds, or pairs traders (a market-neutral trading strategy that matches a long position with a short position in a pair of highly correlated instruments such as two stocks, exchange-traded funds (ETFs) or currencies)—find it much more efficient to program their trading rules and let the program trade automatically.
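The pairs-trading strategy mentioned above can be sketched with a z-score on the price spread between the two instruments. Thresholds, prices, and the signal labels below are invented for illustration:

```python
# Sketch of market-neutral pairs trading: trade the spread between two
# correlated instruments when it deviates far from its historical mean.

from statistics import mean, stdev

def spread_zscore(prices_a, prices_b):
    """Z-score of the latest spread versus its own history."""
    spreads = [a - b for a, b in zip(prices_a, prices_b)]
    hist, latest = spreads[:-1], spreads[-1]
    return (latest - mean(hist)) / stdev(hist)

def pairs_signal(z, entry=2.0):
    if z > entry:
        return "short A / long B"   # spread unusually wide
    if z < -entry:
        return "long A / short B"   # spread unusually narrow
    return "no trade"

# A suddenly runs ahead of B, widening the spread.
z = spread_zscore([10, 10.1, 10.2, 10.1, 12], [10, 10, 10.1, 10.2, 10.1])
print(pairs_signal(z))  # short A / long B
```

The bet is not on market direction but on the spread reverting to its mean, which is what makes the strategy market-neutral.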
Algorithmic trading provides a more systematic approach to active trading than methods based on trader intuition or instinct.
It is widely used by investment banks, pension funds, mutual funds, and hedge funds because these institutional traders need to execute large orders in markets that cannot support all of the size at once.
Quantitative trading consists of trading strategies based on quantitative analysis, which rely on mathematical computations and number crunching to identify trading opportunities. Price and volume are two of the more common data inputs used in quantitative analysis as the main inputs to mathematical models.
Many quant trading strategies exist, ranging from arbitrage to high-frequency trading.
Quant traders develop trading strategies such as algorithmic trading and high-frequency trading based on quantitative analysis, which relies on mathematical computations to identify optimal trading opportunities, and enhance existing machine learning and AI models for price, volume, and risk management strategies.
Spoofing is the act of placing orders to give the impression of wanting to buy or sell shares, without ever having the intention of letting the order execute, in order to temporarily manipulate the market and buy or sell shares at a more favorable price. This is done by creating limit orders outside the current bid or ask price to change the price reported to other market participants. The trader can subsequently place trades based on the artificial change in price, then cancel the limit orders before they are executed.
Suppose a trader desires to sell shares of a company with a current bid of $20 and a current ask of $20.20. The trader would place a buy order at $20.10, still some distance from the ask so it will not be executed, and the $20.10 bid is reported as the National Best Bid and Offer best bid price.
The trader then executes a market order for the sale of the shares they wished to sell. Because the best bid price is the investor’s artificial bid, a market maker fills the sale order at $20.10, allowing for a $0.10 higher sale price per share. The trader subsequently cancels the limit order for the purchase they never intended to complete.
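The worked example above can be replayed numerically. This is a toy model of the reported best bid, not real exchange or NBBO logic:

```python
# Numeric replay of the example above: bid 20.00 / ask 20.20; a spoof
# buy limit at 20.10 becomes the reported best bid, the trader sells
# into it at 20.10 instead of 20.00, then cancels the spoof order.

bid, ask = 20.00, 20.20
spoof_bid = 20.10                  # inside the spread, never meant to fill

best_bid = max(bid, spoof_bid)     # reported as the new best bid
sale_price = best_bid              # the market sell fills at the best bid
improvement = sale_price - bid     # gain over the honest best bid

print(f"sold at {sale_price:.2f}, {improvement:.2f}/share above the honest bid")
```

The entire $0.10 per share comes from moving the reported quote, not from any real change in supply or demand.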
Broadly defined, high-frequency trading (aka, “black box” trading) refers to automated, electronic systems that often use complex algorithms (strings of coded instructions for computers) to buy and sell much faster and at much greater scale than any human could do (though, ultimately, people oversee these systems)..
According to Nasdaq, there are two types: execution trading (when an order is executed via a computerized algorithm designed to get the best possible price), and a second type that seeks “small trading opportunities.”
HFT comprises about half of overall U.S. equity trading, according to several estimates, and it’s been around that level for a few years now. Billions of shares change hands each day on major U.S. exchanges, including NYSE, Nasdaq, and Cboe.
How “fast” is fast? Blink, and you’ll miss it. Today’s increasingly powerful computers can execute thousands, if not millions, of transactions in seconds, and HFT is often measured in milliseconds (thousandths of a second) or microseconds (millionths of a second).
For perspective, a blink of your eye takes about 400 milliseconds, or four-tenths of a second. In high-frequency trading, “we’re talking about unfathomably small amounts of time.”
“High-frequency” is often in the eye of the beholder. In a primitive example of high-frequency trading, European traders in the 1600s used carrier pigeons to relay price information ahead of competitors, according to market historians. The advent of telegraph service in the 1800s and early 1900s further accelerated the flow of market information.
Rothschild used a pigeon to fool and screw the John Bulls after the Battle of Waterloo- by a BACK SWING.
The image or notion of algorithmic traders as predators fleecing the average investor still lingers. Certain episodes, such as the “flash crash” of May 2010, or, more recently, the U.S. market’s sharp swings in December 2018, often raise questions in financial media about whether algos exacerbate volatility.
And indeed, regulators such as the U.S. Securities and Exchange Commission have in recent years fined some high-frequency traders for price manipulation or other fraudulent trading.
How is high-frequency trading beneficial to the markets?
Such traders contribute vital liquidity to markets, helping narrow bid-ask spreads and bringing buyers and sellers together efficiently. Ultimately, this can help bring down costs for investors.
Investment firms need to stay one step ahead in order to be the first to recognize trends and take advantage of opportunities.
To stay competitive, they are looking to employ the most advanced tools to enhance performance. Algorithmic trading is a growing trend filling this need: algorithmic “buy” and “sell” orders account for 70% of US equity market volume. Previously, only large investment firms and hedge funds were able to utilize these advanced mathematical models.
There are, however, two very distinguishable types of algorithmic trading. The first is high-frequency trading (HFT). The advantage of this form of algotrading is being quicker than the rest of the market, but it can only be utilized by a select group of traders, and it has extensive consequences that affect the entire market.
The system is not “smart”, nor does it provide any real, valuable insight for investors, as it only blindly follows short-term trends. The ethics of this form of algotrading are also publicly debated. The second form of algotrading, utilized by the I Know First self-learning algorithm, is called quantitative trading. This form provides valuable market insight to retail and professional traders alike, which is used in conjunction with traditional forms of analysis.
Algorithmic traders benefit from this “second opinion” in their decision making process by verifying their own analysis or discovering new market opportunities while still maintaining complete control of their portfolio.
The main goal of high-frequency trading is to extract many small returns over very short periods of time. These systems do the actual trading for investment firms with real-time intelligence and can trade in milliseconds. HFT technological costs are enormous, and there is high competition amongst firms.
Typically an HFT system operates online, processing tremendous amounts of real-time transaction data. In order to extract these small returns, the algorithm must be able to execute very quickly, which places high demands on CPU (central processing unit) performance and memory handling. The algorithm must therefore minimize the amount of data used for decision-making.
The most common type of algorithm used is called “one-pass.” As the title suggests, this algorithm reads each new piece of information only once and then discards it. Such algorithms usually operate with aggregated data covering roughly the last five minutes of history to create a one-minute projection into the future. Based on this projection, the algorithm makes decisions on quotes and trades.
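The “one-pass” idea above, seeing each tick exactly once and keeping only a running aggregate, can be sketched with a streaming average. An exponentially weighted average stands in here for whatever five-minute aggregate a real system maintains; the class name and alpha value are invented:

```python
# Sketch of a one-pass computation: each tick is folded into a running
# aggregate and then discarded, so no tick history is stored.

class OnePassAverage:
    def __init__(self, alpha=0.1):
        self.alpha = alpha     # weight given to each new observation
        self.value = None      # running estimate; the only state kept

    def update(self, price):
        if self.value is None:
            self.value = price
        else:
            self.value += self.alpha * (price - self.value)
        return self.value

avg = OnePassAverage(alpha=0.5)
for tick in [100.0, 102.0, 101.0]:
    avg.update(tick)           # each tick is read once, then thrown away
print(round(avg.value, 2))     # 101.0
```

Because memory use is constant regardless of how many ticks arrive, this style of computation suits the CPU and memory constraints described above.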
As competition in HFT increases, the performance of each HFT algorithm becomes more decisive to the effectiveness of the system. This performance is highly dependent on infrastructure, which represents a large proportion of the expenses related to high-frequency trading. Literally every meter of cable matters, and the algorithm may lose its speed advantage at any moment, significantly threatening its ability to make a profit.
The reason speed is imperative is that HFT works by placing and quickly canceling orders to find the price buyers and sellers are ready to trade at. This price-volume information is used to recognize developing trends. At the end of the day, they liquidate positions. Since these algorithms are so effective at moving capital, they have become quite controversial because retail investors are not able to compete.
These algorithms can simultaneously process volumes of information at a rate no human can process, giving investment firms a huge advantage. This allows HFT traders to have the first choice in trades, a form of scalping. All in all, these algorithms add an extra element of volatility and have been at least partially responsible for market crashes in the past. In a crisis, these HFT algorithms liquidate positions in seconds, causing huge imbalances and price swings.
This risk is currently present. One notable example is the May 6, 2010 flash crash. As US stocks traded down most of the day due to the debt crisis in Greece, the market dropped 600 points at 2:42 pm, on top of the 300 points it was already down, for an almost 1,000-point loss on the day. By 3:07 pm the market had regained most of the 600-point drop.
Even though the algorithms were able to correct most of the damage, the flash crash left a scar on the market. In 17 of the 25 months since then, investors withdrew a net $137 billion from the U.S. stock market, according to Lipper. Since a level playing field is needed to give everyone an equal chance, several European countries and Canada are curtailing or banning HFT due to concerns about volatility and fairness.
The second form of algorithmic trading is known as quantitative trading or longer term trading. These algorithms analyze the structure and the trends in the market, find predictable patterns, and investors trade upon these machine-derived forecasts. This form of trading is very suitable for most investors, retail or professional. The I Know First predictive algorithm belongs to the second form of algorithmic trading.
While we cannot speak for every algorithm meant to predict the market, the I Know First market prediction system is based on artificial intelligence (AI) and machine learning (ML), and utilizes elements of artificial neural networks and genetic algorithms. Machine learning lends an innate acumen to its comprehension of market dynamics and behavior. The algorithm has a built-in general mathematical framework that generates and verifies statistical hypotheses about stock price development.
Machine learning tools such as artificial neural networks make this prediction system self-learning, and consistently determined to become more precise. This framework is used to generate initial testing models over a test sample of data.
The goal of this phase is to validate the accuracy of the algorithm as well as to fine-tune the fitness function, which represents the actual goal of the algorithm expressed as a mathematical function. When the algorithm finds the global minimum of the fitness function attached to one of the models generated, it fulfills its goal.
From a mathematical perspective, finding a global minimum is a very complex task that carries the risk of finding a local minimum instead: a point that appears to be the minimum from the perspective of its surroundings, even though lower points exist elsewhere. The situation is illustrated in the figure above.
To increase the chance of finding the global minimum, we need to combine multiple searching procedures. Once the algorithm proves its ability to generate valid results over the sample data, we can apply it to real data. With every run the algorithm improves its predictive ability, as it generates new models and verifies them against the fitness function, providing better and better results.
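The multi-start idea can be sketched in a few lines of Python. This is a generic illustration, not the I Know First code: the fitness function, step sizes and search ranges are all invented for the example.

```python
import math
import random

def fitness(x):
    # A multimodal curve: many local minima, global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_search(f, x0, step=0.01, iters=5000):
    # Greedy downhill walk from one starting point; it can get stuck
    # in a local minimum, exactly the risk described above.
    x = x0
    for _ in range(iters):
        best = min((x - step, x + step), key=f)
        if f(best) >= f(x):
            break
        x = best
    return x

def multi_start_minimize(f, starts=100, lo=-5.0, hi=5.0, seed=42):
    # Combining multiple searching procedures: run the local search from
    # many random starting points and keep the best result found.
    rng = random.Random(seed)
    candidates = [local_search(f, rng.uniform(lo, hi)) for _ in range(starts)]
    return min(candidates, key=f)

x_best = multi_start_minimize(fitness)
```

A single downhill walk from a random start usually ends in one of the many local dips; restarting from many points and keeping the best result is the simplest way to raise the odds of landing in the global basin.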
Algorithmic Trading With The I Know First Market Prediction System
Many investors hear about algorithmic trading in the news but are not sure how they can use algorithms to their advantage. There are many successful strategies that can be easily applied with the I Know First algorithm that are discussed in detail here. In general however, algorithms can help protect investors from human factors such as their own biases, psychological pressures, the omnipresent market risk, and fluctuating volatility.
The I Know First algorithm has two indicators that guide investors toward better financial decisions. The first is the signal, which gives the direction and relative “scale” of the predicted movement of an asset. The second, called the predictability indicator, gives the level of confidence the algorithm has in that prediction, based on the past performance of the corresponding predictor. Both parameters are important, and as a general rule, regardless of the type of forecast, the higher both are the better. It is recommended to consider both the signal strength and the predictability.
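As a rough sketch of how the two indicators might be combined, here is a toy ranking in Python. The field names, numbers and the product heuristic are illustrative assumptions only, not the actual I Know First data format or methodology.

```python
# Hypothetical forecast records; "signal" and "predictability" mirror the
# two indicators described above (field names and values are invented).
forecasts = [
    {"ticker": "AAA", "signal": 12.5, "predictability": 0.31},
    {"ticker": "BBB", "signal": 48.2, "predictability": 0.12},
    {"ticker": "CCC", "signal": 30.1, "predictability": 0.45},
]

def rank_forecasts(items):
    # "The higher both are the better": one simple way to honour that
    # rule of thumb is to sort by the product of the two indicators.
    return sorted(items, key=lambda f: f["signal"] * f["predictability"],
                  reverse=True)

best = rank_forecasts(forecasts)[0]["ticker"]
```

Note how the product penalises a strong signal with weak predictability (BBB) in favour of a moderate signal the predictor has historically been right about (CCC).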
Investors stand to benefit from using the I Know First advanced algorithmic system because algotrading allows for objective valuation of assets and quantitative forecasts of future trends over six different time horizons, all at a very low expense for daily predictions.
Investors also have the option to receive up to twenty top recommendations of stocks, currencies, ETFs, commodities and world indices, and even customized predictions can be added to any forecast. Utilizing the immense number of selections available, an investor can design a well-diversified portfolio with scalable predictions to support each decision for every component.
Algorithms play a role in virtually every technology we come into contact with, from our favorite mobile applications to the search engines and social media networks we use daily. So it is natural that we as investors in this hi-tech world utilize algorithms to help us invest better.
Even in the 21st century, some are skeptical about whether an algorithm can really make a difference to a portfolio's return. Well, the I Know First algorithm returned 60.66% in 2013, beating the S&P 500 by over 30%. These results keep improving as the algorithm learns from its successes and failures. In the skeptics' defense, not all algorithms are created equal, but the I Know First self-learning algorithm is unique and cannot be whipped up from published formulas.
The I Know First self-learning algorithm is a predictive model based on artificial intelligence, machine learning, incorporating elements of artificial neural networks and genetic algorithms.
HFT algorithms typically involve two-sided order placements (buy-low and sell-high) in an attempt to benefit from bid-ask spreads. HFT algorithms also try to “sense” any pending large-size orders by sending multiple small-sized orders and analyzing the patterns and time taken in trade execution.
High-frequency trading (HFT) is an automated trading strategy that uses decision making algorithms, supercomputing power, and low-latency trading technology to exploit market pricing inefficiencies for profit. HFT strategies require investors to trade in high volumes and are most profitable in volatile markets, making HFT a convenient scapegoat for market instability.
Anomaly detection (or outlier detection) is the identification of rare items, events or observations, called outliers, which raise suspicions by differing significantly from the majority of the data and by not conforming to expected behavior.
Anomaly detection is applicable in a variety of domains, such as intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, and detecting ecosystem disturbances. It is often used in preprocessing to remove anomalous data from the dataset.
Anomaly detection is the technique of identifying rare events or observations which raise suspicions by being statistically different from the rest of the observations. Such “anomalous” behaviour typically translates into some kind of problem, such as credit card fraud, a failing machine in a server room, or a cyber attack.
Conventionally, businesses use a fixed set of thresholds, marking any metric that crosses its threshold as an anomaly.
Anomalies can be broadly divided into three categories –
Point Anomaly: A tuple in a dataset is said to be a Point Anomaly if it is far off from the rest of the data.
Contextual Anomaly: An observation is a Contextual Anomaly if it is an anomaly because of the context of the observation.
Collective Anomaly: A set of data instances that are individually normal but anomalous when they occur together.
Anomaly detection can be done in the following ways –
Supervised Anomaly Detection: This method requires a labeled dataset containing both normal and anomalous samples to construct a predictive model to classify future data points. The most commonly used algorithms for this purpose are supervised Neural Networks, Support Vector Machines, the K-Nearest Neighbors classifier, etc.
Unsupervised Anomaly Detection: This method does not require any training data. It instead makes two assumptions about the data: only a small percentage of the data is anomalous, and any anomaly is statistically different from the normal samples. Based on these assumptions, the data is clustered using a similarity measure, and the data points far away from any cluster are considered to be anomalies.
Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set by looking for the instances that fit the remainder of the data least, under the assumption that the majority of the instances in the data set are normal.
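A minimal Python sketch of the unsupervised approach, under the two assumptions just listed. The data and the multiplier k are invented for the example; real systems use proper clustering and richer similarity measures.

```python
import math
import statistics

def unsupervised_anomalies(points, k=5.0):
    # Assumption 1: only a small share of the points are anomalous, so
    # the coordinate-wise median is a robust centre of the "normal" mass.
    cx = statistics.median(p[0] for p in points)
    cy = statistics.median(p[1] for p in points)
    # Assumption 2: anomalies sit statistically far from that centre.
    dists = [math.hypot(p[0] - cx, p[1] - cy) for p in points]
    cutoff = k * statistics.median(dists)
    return [p for p, d in zip(points, dists) if d > cutoff]

data = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 0.95), (9.0, 9.0)]
outliers = unsupervised_anomalies(data)
```

No label ever says which point is bad; the lone distant point is flagged purely because it fits the rest of the data least.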
Since detecting anomalies is a fairly generic task, a number of different machine learning algorithms have been created to tailor the process to specific use cases.
Here are a few common types:--
Detecting suspicious activity in a time series, for example a log file. Here, the dimension of time plays a huge role in the data analysis to determine what is considered a deviation from normal patterns.
Detecting credit card fraud based on a feed of transactions in a labeled dataset of historical frauds. In this type of supervised learning problem, we can train a classifier to classify a transaction as anomalous or fraudulent given that we have a historical dataset of known transactions, authentic and fraudulent.
Detecting a rare and unique combination of a real estate asset’s attributes — for instance, an apartment building from a certain vintage year and a rare unit mix. At Skyline AI, we use these kinds of anomalies to capture interesting rent growth correlations and track down interesting properties for investment.
An anomaly is an extremely rare episode, hard to assign to a specific class, and hard to predict. It is an unexpected event, unclassifiable with current knowledge. It's one of the hardest use cases to crack in data science because:
The current knowledge is not enough to define a class. More often than not, no examples are available in the data to describe the anomaly.
So, the problem of anomaly detection can be easily summarized as looking for an unexpected, abnormal event of which we know nothing and for which we have no data examples. As hopeless as this may seem, it is not an uncommon use case.
Fraudulent transactions, for example, rarely happen and often occur in an unexpected modality
Expensive mechanical pieces in IoT will break at some point without much indication on how they will break
A new arrhythmic heart beat with an unrecognizable shape sometimes shows up in ECG tracks
A cybersecurity threat might appear and not be easily recognized because it has never been seen before
Anomaly detection is a monitoring mechanism, in which a system keeps an eye on important key metrics of the business, and alerts users whenever there is a deviation from normal behavior.
The threshold approach, however, is reactive in nature: by the time businesses recognize a threshold violation, the damage caused has already amplified many-fold. What is needed is a system that constantly monitors data streams for anomalous behavior and alerts users in real time to facilitate timely action.
Anomaly detection algorithms are capable of analyzing huge volumes of historical data to establish a ‘normal’ range, and of raising red flags when outliers deviate from that tolerable range.
A good anomaly detection system should be able to perform the following tasks:--
Identifying the signal type and selecting an appropriate model
Identifying and scoring anomalies
Finding root causes by correlating the identified anomalies
Obtaining feedback from users to check the quality of the anomaly detection
Re-training the model with new data
Anomalies are identified whenever a particular metric moves beyond the specified threshold. However, it is important to quantify the magnitude of the deviation, in order to prioritize which anomaly should be investigated or solved first. In the scoring phase, each anomaly is scored by the magnitude of its deviation from the median, or by how long the deviated metric stays away from normal behavior. The larger the deviation, the higher the score.
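A minimal sketch of the scoring phase in Python, covering only the magnitude-of-deviation variant (the duration-based variant is omitted, and the latency numbers are invented):

```python
import statistics

def anomaly_scores(values):
    # Score = deviation from the median, expressed in units of the median
    # absolute deviation (MAD). The larger the deviation, the higher the
    # score. The "or 1e-9" guards against a zero MAD on constant data.
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [abs(v - med) / mad for v in values]

# A metric stream with one spike at the end:
latency_ms = [100, 102, 99, 101, 100, 250]
scores = anomaly_scores(latency_ms)
```

Median and MAD are used instead of mean and standard deviation so that the spike being scored does not drag the notion of "normal" toward itself.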
Anomaly detection systems are usually designed around tight bounds to highlight deviations quickly, but in the process these systems sometimes raise many false alarms. In fact, false positives are known to be one of the most prevalent issues in the area of anomaly detection.
One cannot underrate the flexibility that needs to be given to the end user to change the status of a data point from anomaly to normal. After receiving this feedback, models need to be updated or retrained to keep the identified false positives from recurring.
The system needs to re-train on new data continuously, to adapt to newer trends. It is possible that the pattern itself changes due to a change in the operating environment, rather than anomalous behavior. However, the mechanism needs balance: updating the model too frequently consumes an excessive amount of computational resources, while updating it too rarely lets the model drift away from the actual trend.
Overall, anomaly detection has gained importance in recent years, due to the exponential growth of available data and the absence of impactful mechanisms to use this data. Anomaly detection systems are well suited to identifying significant deviations while ignoring the unimportant noise in the ocean of data, enabling businesses with the right alarms and insights at the right time.
AI and ML have made it a bit easier to detect the proliferation of malware and to identify, early in the lifecycle, whether a file or resource is showing signs of belligerent behaviour. This level of automation has been made possible by pattern detection, behaviour-based anomaly detection and the advanced use of heuristics – all based on machine-learned solutions – to keep the intruders out.
The simplest approach to start with, and maybe the best, is using static rules.
The idea is to identify a list of known anomalies and then write rules to detect them. Rule identification is done by a domain expert, by pattern mining techniques, or by a combination of both. Unsupervised machine learning algorithms, by contrast, learn what normal looks like and then apply a statistical test to determine whether a specific data point is an anomaly.
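A toy static-rule detector might look like the following; the rule names, fields and thresholds are illustrative stand-ins for what a domain expert would actually write.

```python
# Static-rule anomaly detection: known-bad patterns written down as
# explicit, hand-authored rules (all names and thresholds invented).
RULES = [
    ("amount_over_10k", lambda tx: tx["amount"] > 10_000),
    ("foreign_midnight", lambda tx: tx["country"] != "US" and tx["hour"] < 5),
]

def apply_rules(tx):
    # Return the names of every rule the transaction trips.
    return [name for name, rule in RULES if rule(tx)]

alerts = apply_rules({"amount": 12_000, "country": "US", "hour": 14})
```

The limitation is plain from the code: a rule list can only ever catch the anomalies someone already thought to write down, which is exactly why the learned-normal approaches above complement it.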
A system based on unsupervised anomaly detection is able to detect any type of anomaly, including ones which have never been seen before.
By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. A statistical anomaly occurs when something falls out of the normal range for one group, but not as a result of being in that group.
In the database sense, anomalies are problems that can occur in poorly planned, un-normalised databases where all the data is stored in one table (a flat-file database). An insertion anomaly, for example, arises when the nature of the database makes it impossible to add a required piece of data unless another, unavailable piece of data is also added.
In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular unsupervised methods) will fail on such data, unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro clusters formed by these patterns.
Anomaly detection is one AI approach in particular that could help banks identify fraudulent transactions and transfers. With predictive analytics, banks could both detect fraud and score transactions by risk level based on a wider range of customer data.
Teradata is an AI firm selling fraud detection solutions to banks. They claim their machine learning platform can enhance banking fraud detection by helping their data analytics software recognize potential fraud cases while avoiding flagging acceptable deviations from the norm. In other cases, these deviations may be flagged and end up as false positives that give the system feedback to “learn” from its mistakes.
They were able to:--
Reduce their false positives by 60%, a figure expected to reach 80% as the machine learning model continued to learn.
Increase detection of real fraud by 50%.
Refocus their time and resources toward actual cases of fraud and toward identifying new fraud methods.
Machine learning models for fraud detection can also be used to develop predictive and prescriptive analytics software. Predictive analytics offers a distinct method of fraud detection by analyzing data with a pre-trained algorithm to score a transaction on its fraud riskiness.
Prescriptive analytics takes the predictions made from the correlations of a predictive analytics engine and uses it to provide recommendations for what to do once fraud is detected.
Both predictive and prescriptive analytics software require the same data and training to implement. Banking data experts or data scientists employed by the client bank will need to label a high volume of transactions as either fraudulent or legitimate, and then run all of them through the machine learning model. This allows the model to learn to recognize the fraud methods used in the fraudulent transactions.
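As a hedged sketch of that workflow, here is a tiny pure-Python stand-in for the model: a nearest-centroid classifier trained on labelled transactions. Real systems use far richer features and far stronger models; the features, labels and numbers here are invented.

```python
import math

def train_centroids(transactions, labels):
    # "Training": average the feature vectors of each class from the
    # labelled historical transactions.
    centroids = {}
    for x, y in zip(transactions, labels):
        sums, n = centroids.setdefault(y, ([0.0] * len(x), 0))
        centroids[y] = ([s + v for s, v in zip(sums, x)], n + 1)
    return {y: [s / n for s in sums] for y, (sums, n) in centroids.items()}

def classify(x, centroids):
    # Label a new transaction by its nearest class centroid.
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))

# Toy features: (amount in $1000s, transactions in the last hour).
X = [(0.1, 1), (0.2, 1), (0.3, 2), (9.0, 8), (8.5, 9)]
y = ["legit", "legit", "legit", "fraud", "fraud"]
model = train_centroids(X, y)
label = classify((7.9, 7), model)
```

The essential point survives the simplification: the labelled history is what teaches the model the shape of fraud, so the labelling effort described above is the expensive part, not the classification itself.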
Defenses Against Data Poisoning--
Similar to evasion attacks, when it comes to defenses, we’re in a hard spot. Methods exist but none guarantee robustness in 100% of cases.
The most common type of defense is outlier detection, also known as “data sanitization” and “anomaly detection”. The idea is simple: when poisoning a machine learning system, the attacker is by definition injecting something into the training pool that is very different from what it should include, and hence we should be able to detect that.
The challenge is quantifying “very”. Sometimes the injected poison is indeed from a different data distribution and can be easily isolated.
An interesting twist on anomaly detection is micromodels. The micromodels defense was proposed for cleaning the training data of network intrusion detectors. The defense trains classifiers on non-overlapping epochs of the training set (the micromodels) and evaluates them on the training set.
Through a majority vote of the micromodels, training instances are marked as either safe or suspicious. The intuition is that attacks have a relatively short duration and could only affect a few micromodels at a time.
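The micromodels idea can be sketched in Python. Here each micromodel is just a mean-plus-3-sigma normal range learned from its own epoch, a deliberate simplification of the classifiers used in the actual defense, and the data is invented.

```python
import statistics

def train_micromodel(epoch):
    # Each micromodel learns a normal range (mean +/- 3 sigma) from its
    # own non-overlapping slice ("epoch") of the training set.
    mu = statistics.mean(epoch)
    sigma = statistics.stdev(epoch)
    return lambda v: abs(v - mu) > 3 * sigma  # True = "suspicious"

def sanitize(training_data, epoch_size=5):
    epochs = [training_data[i:i + epoch_size]
              for i in range(0, len(training_data), epoch_size)]
    models = [train_micromodel(e) for e in epochs if len(e) >= 2]
    # Majority vote over the full training set: keep an instance only if
    # most micromodels consider it normal. A short-lived attack taints
    # few epochs, so the untainted majority votes the poison out.
    return [v for v in training_data
            if sum(m(v) for m in models) <= len(models) // 2]

# Poisoned values (50, 52) concentrated in the second epoch:
data = [10, 11, 9, 10, 10, 10, 11, 50, 52, 9, 10, 9, 11, 10, 10]
clean = sanitize(data)
```

Note that the poisoned epoch's own micromodel considers 50 and 52 normal; it is outvoted by the two clean epochs, which is exactly the intuition stated above.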
Anomaly detection has been used in predictive analytics now for several years. By analyzing the data, this function pinpoints activities outside the normal operations and expectations, whether those activities were good or bad.
By building upon the foundation of anomaly detection, contribution analysis provides you with the context in which an activity occurred. It will investigate the anomaly and analyze the data, which affords you with the actionable areas on which to focus your efforts. Scientists from Google’s health-tech subsidiary have pioneered innovative ways of creating revolutionary healthcare insights through artificial intelligence prediction algorithms.
WHAT IS "DEATH BY ALGORITHM"?
IT IS THE BLATANT MISUSE OF ARTIFICIAL INTELLIGENCE .. PALESTINIANS HAVE BEEN AT THE RECEIVING END ..
On the night of Balakot the Indian Air Force shot down its own helicopter .. the blame was squarely put on automation gone awry..
Deliberate economic crashes are now blamed on AI algorithms.. A “flash crash” occurred in 2010, during which the market went into freefall for five traumatic minutes, then righted itself over another five – for no apparent reason.
But what is an algorithm? In fact, the usage has changed in interesting ways since the rise of the internet – and search engines in particular – in the mid-1990s. At root, an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing.
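That "if/then/else" logic, written out as runnable code (the rule and its threshold are invented for the illustration):

```python
def route_packet(load):
    # The classical rule from the paragraph above:
    # if a happens, then do b; if not, then do c.
    if load > 0.8:         # if a ...
        return "reroute"   # then b
    else:                  # if not ...
        return "forward"   # then c

action = route_packet(0.95)
```

Everything grander in this post, trading engines, anomaly detectors, targeting systems, is ultimately stacked out of small rules of exactly this shape.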
When AI automation is first tried out, it is first used on denizens of third world nations.. In that aircraft humans could not veto the automation and take the controls back to manual..
We must ban lethal autonomous weapons systems (LAWS) – using AI technology to replace the human with an algorithm that makes the decision of when to shoot or whom to shoot.
Millions of innocent women and children were killed from the air in Libya/ Iraq/ Syria/ Yemen.. the UN turns a Nelson's eye..
As early as 2007, Noel Sharkey published a dire warning in The Guardian titled “Robot Wars Are a Reality.” An expert in artificial intelligence and robotics, Sharkey expressed concern about the use of battlefield robots: electronic soldiers that act independently of any human control.
Sharkey argued that we “are sleepwalking into a brave new world where robots decide who and when to kill.”
War is no longer fought primarily on the battlefield. Developments in computing, cyber warfare and artificial intelligence have changed the way nations and non-state actors engage in hostilities.
Technologies which enable point-and-shoot (or click) destruction are growing exponentially year by year, fuelled by a digital revolution that has ricocheted across the globe. Nations have realized that today’s conflicts are waged with 1s and 0s, and that algorithms can be trusted allies in the never-ending war. It is now essential for the world to grasp that robot wars are no longer just the fictive imaginings of science fiction..
Can robots and drones be programmed to comply with international human rights law? Since computers are susceptible to viruses and hacking, it is also feasible to assume that the systems which control killer robots could be overtaken by state and non-state actors.
The 18th of March 2018, was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead.
Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary.
But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.
At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would.
Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?
Only when some embedded software experts spent 20 months digging into the code were they able to prove the family’s case, revealing a twisted mass of what programmers call “spaghetti code”, full of algorithms that jostled and fought, generating anomalous, unpredictable output.
The autonomous cars currently being tested may contain 100m lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates. How do we avoid clashes in such a fluid code milieu, not least when the algorithms may also have to defend themselves from hackers?
In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand..
At core, computer programs are bundles of such algorithms. Recipes for treating data. On the micro level, nothing could be simpler. If computers appear to be performing magic, it’s because they are fast, not intelligent.
Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”). This has revolutionized areas of medicine, science, transport, communication, making it easy to understand the utopian view of computing that held sway for many years.
If we tend to discuss algorithms in almost biblical terms, as independent entities with lives of their own, it’s because we have been encouraged to think of them in this way. Corporations like Facebook and Google have sold and defended their algorithms on the promise of objectivity, an ability to weigh a set of conditions with mathematical detachment and an absence of fuzzy emotion. No wonder such algorithmic decision-making has spread to the granting of loans/ bail/ benefits/ college places/ job interviews and almost anything requiring choice.
Far from eradicating human biases, algorithms could magnify and entrench them. After all, software is written by overwhelmingly affluent white men – and it will inevitably reflect their assumptions.. Bias doesn’t require malice to become harm, and unlike a human being, we can’t easily ask an algorithmic gatekeeper to explain its decision..
Big companies like Google should be made to submit to “algorithmic audits” of any systems directly affecting the public, a sensible idea that the tech industry will fight tooth and nail, because algorithms are what the companies sell; the last thing they will volunteer is transparency.
We might call these algorithms “dumb”, in the sense that they do their jobs according to parameters defined by humans. The quality of the result depends on the thought and skill (and the human biases, as with Palestinians and Roma gypsies) with which they were programmed.
At the other end of the spectrum is the more or less distant dream of human-like artificial general intelligence, or AGI. A properly intelligent machine would be able to question the quality of its own calculations, based on something like our own intuition (which we might think of as a broad accumulation of experience and knowledge).
To put this into perspective, Google’s DeepMind division has been lauded for creating a program capable of mastering arcade games, starting with nothing more than an instruction to aim for the highest possible score.
Currently, the use of multiple UAVs in drone swarms is garnering huge interest from the research community, leading to the exploration of topics such as UAV cooperation, multi-drone autonomous navigation, etc.
Researchers have been working on UAV pursuit-evasion along two main approaches. They propose the use of vision-based deep learning object detection and reinforcement learning for detecting and tracking a UAV (target or leader) by another UAV (tracker or follower).
The proposed framework uses vision data captured by a UAV and deep learning to detect and follow another UAV. The algorithm is divided into two parts, the detection of the target UAV and the control of UAV navigation (follower UAV). The deep reinforcement learning approach uses a deep convolutional neural network (CNN) to extract the target pose based on the previous pose and the current frame.
The network works like a Q-learning algorithm. The output is a probabilistic distribution between a set of possible actions such as translation, resizing (e.g., when a target is moving away), or stopping. For each frame from the captured sequence, the algorithm iterates over its predictions until the network predicts a “stop” when the target is within the desired position.
Q-learning is a model-free reinforcement learning algorithm. The goal of Q-learning is to learn a policy which tells an agent what action to take under what circumstances. It does not require a model of the environment (hence “model-free”), and it can handle problems with stochastic transitions and rewards without requiring adaptations.
The Q-learning algorithm involves an agent, a set of states and a set of actions per state. It uses Q-values, plus randomness at some rate, to decide which action to take.
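A tabular Q-learning agent on a toy corridor task, as a generic illustration of the algorithm just described (the environment, reward and parameters are invented for the sketch, not taken from the UAV papers):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    # Tabular Q-learning on a tiny corridor: the agent starts in state 0
    # and earns a reward of 1 for reaching the rightmost (terminal) state.
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: random action at some rate, otherwise greedy.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Model-free update: uses only the observed (s, a, r, s2),
            # never a model of the environment's dynamics.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(4)]
```

After training, reading the greedy action out of the Q-table gives the policy; the "stop/translate/resize" actions of the tracking network above play the same role as "left/right" here, just over a much larger state space.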
The second approach uses deep learning object detection for the detection and tracking of a UAV in a pursuit-evasion scenario. The deep object detection approach uses images captured by a UAV to detect and follow another UAV. This approach uses historical detection data from a set of image sequences and inputs this data to a SAP algorithm in order to locate the area with a high probability UAV presence.
The proposed framework uses images captured by a UAV and a deep learning network to detect and follow another UAV in a pursuit-evasion scenario. The position of the detected target UAV (detected bounding box) is sent to a high-level controller that decides on the controls to send to the follower UAV to keep the target close to the centre of its image frame.
In this work, researchers aim to develop an architecture capable of tracking moving targets using predictions over time from a sequence of previously captured frames. The proposed algorithm tracks the target and moves a bounding box according to each movement prediction. The architecture is based on an ADNet (action-decision network).
This work presents two approaches for UAV pursuit-evasion. The obtained results show that the proposed approach is effective in tracking moving objects in complex outdoor scenarios.
SAP improved the detection of distant UAVs. The obtained results demonstrate the efficiency of the proposed algorithms and show that both approaches are promising for a UAV pursuit-evasion scenario.
Actors in our criminal justice system increasingly rely on computer algorithms to help them predict how dangerous certain people and certain physical locations are. These predictive algorithms have spawned controversies because their operations are often opaque and some algorithms use biased data.
Yet these same types of predictive algorithms inevitably will migrate into the national security sphere, as the military tries to predict who and where its enemies are. Because military operations face fewer legal strictures and more limited oversight than criminal justice processes do, the military might expect – and hope – that its use of predictive algorithms will remain both unfettered and unseen.
At an innovation conference just outside of Silicon Valley, one of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.
When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.”
IMAGINE THE FRENCH QUEEN WAS BEHEADED FOR SOMETHING SHE DID NOT SAY . SHE WAS FIXED BY ROTHSCHILDs MEDIA - “ IF PEOPLE DON’T HAVE BREAD THEY CAN EAT CAKE”.
GOOGLE SANK MY BLOGPOSTS USING ALGORITHMS AFTER MY BLOG SERIES SUPPORTING TRUMP.. HILLARY WAS ROTHSCHILDs CANDIDATE
Read all 11 parts.
THIS POST IS NOW CONTINUED TO PART 13 , BELOW--
CAPT AJIT VADAKAYIL