Decoding Artificial Intelligence with Prof. Ashwin Rao

Prof. Ashwin Rao

Artificial Intelligence (AI) is one of the most widely discussed technologies today, and it will heavily influence how businesses transform. These are edited excerpts from an interview with Prof. Ashwin Rao, an Adjunct Professor at Stanford University and Vice President of AI at Target, conducted by Amit Paranjape, Chairman of the IT and ITES Committee at MCCIA, as part of MCCIA’s YouTube series, ‘Decoding Artificial Intelligence with Industry Leaders.’ This is a freewheeling chat about Prof. Rao’s academic work, his experiences in quantitative finance and algorithmic trading at Morgan Stanley and Goldman Sachs, how deep learning and AI are helping in price determination, and the future of AI.

Prof. Rao did his engineering in computer science (CS) at IIT Bombay. He, however, did not much enjoy the courses in the CS program and did not get good grades. On the other hand, he was influenced by theoretical CS and decided to venture into the part of CS that had a mathematical foundation. This led him to do a Ph.D. in algebraic geometry at the University of Southern California. Towards the end of the Ph.D., his story took a twist when he decided not to pursue a career in academia because the math was tough. In 1998, the tech industry was reluctant to hire theoreticians, so he went to Wall Street and worked for Goldman Sachs and Morgan Stanley for 14 years. In 2012, he started his own company, which was acquired three years later. Subsequently, he took a year off work and joined Target in 2016. In 2018, he joined Stanford as an Adjunct Professor of Applied Mathematics.

He begins by explaining the implementation of ML in the retail industry. He says that many retailers are using machine learning to solve their traditional business problems. Demand forecasting is a statistical estimation problem, and deep learning is used to create good demand estimates. Inventory and pricing are classical stochastic control problems, and reinforcement learning is used there. That said, instead of reinforcement learning, simple dynamic programming algorithms or even simpler approaches like the newsvendor formula often suffice. The key is to learn the business details well and to capture them in one’s mathematical model.
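The newsvendor formula he mentions balances the cost of understocking (lost margin) against overstocking (unsold units). A minimal sketch, assuming normally distributed demand; the cost and demand numbers below are invented for illustration and are not from the interview:

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, unit_cost, price, salvage=0.0):
    """Classical newsvendor order quantity for demand ~ Normal(mu, sigma).

    The optimal quantity is the critical-ratio quantile of the demand
    distribution, where the critical ratio weighs the per-unit cost of
    understocking (lost margin) against overstocking (loss on leftovers).
    """
    underage = price - unit_cost    # margin lost per unit of unmet demand
    overage = unit_cost - salvage   # loss per leftover unit
    critical_ratio = underage / (underage + overage)
    return mu + sigma * NormalDist().inv_cdf(critical_ratio)

# Hypothetical example: mean demand 100, std dev 20, buy at 6, sell at 10,
# salvage leftovers at 2. Under/overage costs are equal here, so the
# critical ratio is 0.5 and the optimal order is simply the mean demand.
q = newsvendor_quantity(mu=100, sigma=20, unit_cost=6, price=10, salvage=2)
```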

He then speaks about how the new toolkits in AI, deep learning, and hardware revolutions have transformed bottom-line business metrics. Though the retail industry has always collected data, it has only recently started to leverage that data. Hardware and data are the two things that have facilitated machine learning, and growing competition in the retail industry has given ML a big push. ML is giving retailers better insights into the behavior of their consumers, and it can also be used in digital businesses to build recommendation systems.

While talking about the applications of data science in the finance industry, he says that a lot of research is being done in ML to predict the movements of financial assets, to place optimal orders on a trading book, for optimal asset allocation, and for derivatives pricing. However, it is a challenge to make ML work in financial markets because we are still in the early stages of ML and there are many nuances involved in the markets. He thinks there is a lot of potential in using ML in the area of personal finance. In 5–10 years, we will have apps that make all our financial decisions: which credit card gives the best rewards, what kind of mortgage to get, how to save optimally for retirement, and how to balance risk and reward in one’s trading portfolio.

When asked about the algorithmic trading aspect, he says that there has been a transition to precision trading. Heuristics and human-programmed decision rules dominated for a long time; these then gave way to traditional statistical methods such as regression. From statistics, the field stepped into the world of ML. Explainable ML is an emerging area.
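As a sketch of the regression step in that progression, here is a toy least-squares fit of synthetic "returns" on two invented predictive features; everything here (the features, the coefficients, the noise level) is made up for illustration and is not from the interview:

```python
import numpy as np

# Illustrative only: regress a synthetic next-period return on two
# hypothetical signals (e.g. a momentum feature and a volume feature).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))          # 500 observations, 2 features
true_beta = np.array([0.5, -0.3])      # made-up "true" signal weights
y = X @ true_beta + rng.normal(scale=0.1, size=500)  # returns + noise

# Ordinary least squares; lstsq solves min ||X @ beta - y|| robustly.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough observations relative to the noise, the fitted coefficients recover the weights used to generate the data, which is the basic promise (and the basic caveat, in noisy real markets) of regression-based trading signals.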

He also speaks about Generative Pre-trained Transformer 3 (GPT-3), which was created by OpenAI and is the largest language model ever made. It is a model that can follow linguistic structure and generate content that appears to have been written by a human being. It is a different experience for most developers because the model can understand context. However, there are huge biases in our data which GPT-3 has picked up. It exhibits racial and political biases and sometimes says inappropriate things, and hence it is not fully ready for mainstream consumption yet. He hopes that the next generation of this model will overcome these biases.

Prof. Rao also elaborates on his experience at Stanford. He teaches a course called Reinforcement Learning for Finance, which deals with optimal decision making. He created this course when he joined Stanford in 2018.
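The optimal decision making such a course covers is typically framed as a Markov decision process. A minimal sketch of value iteration on a toy two-state MDP; the states, actions, rewards, and transition probabilities below are invented for illustration, not taken from his course:

```python
# Toy MDP: transitions[s][a] is a list of (probability, next_state, reward).
# All numbers here are made up purely to demonstrate the algorithm.
states = [0, 1]
actions = [0, 1]
gamma = 0.9  # discount factor

transitions = {
    0: {0: [(1.0, 0, 1.0)],                    # stay in 0, small reward
        1: [(0.8, 1, 0.0), (0.2, 0, 1.0)]},    # risky move toward state 1
    1: {0: [(1.0, 1, 2.0)],                    # stay in 1, larger reward
        1: [(1.0, 0, 0.0)]},                   # go back to 0, no reward
}

# Value iteration: repeatedly apply the Bellman optimality backup
#   V(s) <- max_a sum_{s'} p(s'|s,a) * (r + gamma * V(s'))
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a])
            for a in actions
        )
        for s in states
    }
```

Here the backup converges to the optimal values: state 1 earns reward 2 forever, so V(1) = 2 / (1 - 0.9) = 20, and state 0 does best by moving toward state 1.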

He advises people who are interested in the field of AI and ML to invest a lot in the foundational topics. They must get good at linear algebra, probability theory, and optimization. On the CS side, distributed computing, database systems, and Python are very important.

He concludes by saying that we are at the stage of augmented AI and not autonomous AI which means that AI should be looked at as a technology that will help humans do things better and faster rather than replace humans any time soon. He believes that the power of AI in the next decade will come from an effective blend of human and machine intelligence.

(You can watch the complete interview at https://www.youtube.com/watch?v=GfaJCMEjQyQ )
