Makkuva Ashok Vardhan

PhD Student
Department of Electrical & Computer Engineering
University of Illinois at Urbana-Champaign (UIUC)

Email: makkuva2@illinois.edu
Contact: Room 130, Coordinated Science Laboratory (CSL), Urbana

About me

I am currently a sixth-year PhD student in the ECE Department at UIUC, advised by Prof. Pramod Viswanath. I also collaborate closely with Prof. Sewoong Oh and Prof. Sreeram Kannan.

Research interests

My research is broadly driven by two complementary themes: developing principled theoretical tools and frameworks to study and understand practical applications, and, in turn, using these applied problems as a lens to better understand theory.

Works along the former theme:

  • Learning in Gated Neural Networks: Inspired by the tremendous success of gated neural networks such as LSTMs, GRUs, and attention networks in natural language processing, we designed the first consistent and efficient learning algorithms for the Mixture-of-Experts architecture, which lies at the heart of these gated models (a toy sketch of the architecture appears after this list): Efficient Algorithms for Mixture-of-Experts, Learning in Gated Neural Networks.

  • Optimal transport: In recent years, we have also seen many interesting developments at the intersection of optimal transport and machine learning. An important problem in this area is to learn an optimal transport map (under a suitable metric) between two probability distributions given only samples from them. For the Wasserstein-2 metric, our work on Optimal transport mapping via Input-Convex-Neural-Networks provides a clean mathematical framework and an algorithm to learn optimal transport maps that is robust to initialization and can also learn discontinuous transport maps, which are quite common in practice (such as a map transporting a Gaussian to MNIST); a minimal sketch of the input-convex network idea is also given after this list.
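
For readers unfamiliar with the Mixture-of-Experts architecture mentioned above, here is a toy PyTorch sketch of a single mixture-of-experts layer. It is purely illustrative (linear experts and a linear softmax gate, with made-up sizes), not the setting analyzed in the papers.

    import torch
    import torch.nn as nn

    class MixtureOfExperts(nn.Module):
        """Toy mixture-of-experts layer: a softmax gate decides how to weight
        the experts' outputs for each input. Sizes and names are illustrative."""
        def __init__(self, dim_in, dim_out, num_experts=4):
            super().__init__()
            self.gate = nn.Linear(dim_in, num_experts)  # gating network
            self.experts = nn.ModuleList(
                [nn.Linear(dim_in, dim_out) for _ in range(num_experts)]
            )

        def forward(self, x):
            weights = torch.softmax(self.gate(x), dim=-1)                 # (batch, num_experts)
            outputs = torch.stack([e(x) for e in self.experts], dim=-1)   # (batch, dim_out, num_experts)
            return (outputs * weights.unsqueeze(1)).sum(dim=-1)           # gate-weighted combination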
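And here is a minimal PyTorch sketch of the input-convex neural network idea; the class, layer sizes, and helper function are illustrative assumptions rather than the implementation from the paper. Convexity of the potential in x comes from non-negative z-to-z weights and a convex, non-decreasing activation; by Brenier's theorem, the gradient of a convex potential gives the Wasserstein-2 transport map.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ICNN(nn.Module):
        """Minimal input-convex neural network: f(x) is convex in x provided the
        z -> z weights are non-negative and the activation (softplus) is convex
        and non-decreasing."""
        def __init__(self, dim, hidden=64, num_layers=3):
            super().__init__()
            self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(num_layers)])
            self.Wz = nn.ModuleList(
                [nn.Linear(hidden, hidden, bias=False) for _ in range(num_layers - 1)]
            )
            self.out = nn.Linear(hidden, 1, bias=False)

        def forward(self, x):
            z = F.softplus(self.Wx[0](x))
            for Wx, Wz in zip(self.Wx[1:], self.Wz):
                # clamping keeps the z -> z weights non-negative, preserving convexity in x
                z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
            return F.linear(z, self.out.weight.clamp(min=0))  # scalar convex potential f(x)

    def transport_map(f, x):
        """Brenier: the W2-optimal map is the gradient of a convex potential."""
        x = x.clone().requires_grad_(True)
        return torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]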

Deep Code: Along the latter theme, we are currently working on building new state-of-the-art communication codes parametrized by neural networks. The motivation behind this project is twofold:

  • Designing new state-of-the-art codes has largely been driven by human ingenuity, so progress is sporadic. Our goal is to accelerate this hard process with the help of neural networks.

  • While the landmark coding schemes such as Convolutional codes, Turbo codes, LDPC codes, and Polar codes are already very good in the AWGN setting, they are not fully robust when the channel changes. We would therefore like to learn new codes in a data-driven manner that are inherently robust to such changes while matching these codes on the AWGN channel; a toy end-to-end training sketch follows this list.
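
To make the data-driven idea concrete, here is a toy end-to-end sketch in PyTorch: a neural encoder maps message bits to channel symbols, noise is added by a simulated AWGN channel, and a neural decoder is trained to recover the bits. All names, sizes, and hyperparameters are illustrative assumptions and do not reflect the actual Deep Code architecture.

    import torch
    import torch.nn as nn

    class NeuralCode(nn.Module):
        """Toy autoencoder code: map k message bits to n real channel uses,
        pass them through an AWGN channel, and decode."""
        def __init__(self, k=4, n=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
            self.decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))

        def forward(self, bits, noise_std):
            x = self.encoder(bits)
            # power-normalize the codeword, then add white Gaussian noise
            x = (x - x.mean()) / (x.std() + 1e-8)
            y = x + noise_std * torch.randn_like(x)
            return self.decoder(y)  # logits for the transmitted bits

    model = NeuralCode()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(1000):
        bits = torch.randint(0, 2, (256, 4)).float()
        logits = model(bits, noise_std=0.5)
        loss = loss_fn(logits, bits)
        opt.zero_grad(); loss.backward(); opt.step()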

Background

I graduated from the Indian Institute of Technology, Bombay (IIT Bombay) in 2015 with a B.Tech. (Honors) in Electrical Engineering and a minor in Mathematics.

I have been extremely fortunate to work with Prof. Vivek Borkar on my Bachelor's thesis at IIT Bombay. I have also had the pleasure of working with Prof. Yihong Wu on my Master's thesis at UIUC (2015-2017).

Prior to coming to the US, I interned with Morgan Stanley's Strats & Modeling Division in Mumbai in Summer 2014 and received a job offer as a Securities Analyst from Goldman Sachs, Bangalore, in 2014.

Updates