Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

This post will show up by default. To disable publishing of future-dated posts, edit _config.yml and set future: false.
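
A minimal sketch of the relevant setting, assuming a standard Jekyll site (the comment is mine, not part of the template):

    # _config.yml
    # When false, Jekyll skips posts whose date is in the future at build time.
    future: false

To preview future-dated posts locally without touching the config, Jekyll also accepts a --future flag (e.g. jekyll serve --future).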

Blog Post number 4

less than 1 minute read

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Portfolio

Publications

Classification vs regression in overparameterized regimes: Does the loss function matter?

We characterize, through matching upper and lower bounds, the generalization error, in terms of 0-1 classification loss, of solutions that minimize the L2 norm of the feature weights in the overparameterized regime, including the (feature-space) margin-maximizing support vector machine (SVM). We uncover empirical and theoretical evidence for a discrepancy between the performance of classification and regression. In particular, we show that there exists a regime of moderate overparameterization in which the mean squared error (in regression) converges to the null risk, yet the classification error decays to zero as the number of samples increases. We also discuss the ramifications for the susceptibility of such solutions to adversarial perturbations.
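
For concreteness, here is a sketch of the two solutions the abstract refers to, written for a generic overparameterized linear model (the notation is mine and not necessarily the paper's): given features X in R^{n x d} with d > n and labels y in {-1, +1}^n,

    \hat{\theta}_{\mathrm{LS}}  = \arg\min_{\theta} \|\theta\|_2 \quad \text{subject to } X\theta = y
                                = X^{\top}(XX^{\top})^{-1} y,
    \hat{\theta}_{\mathrm{SVM}} = \arg\min_{\theta} \|\theta\|_2 \quad \text{subject to } y_i x_i^{\top}\theta \ge 1 \text{ for all } i.

Regression evaluates the first solution under the mean squared error \mathbb{E}[(x^{\top}\hat{\theta} - y)^2], while classification evaluates the sign of such a solution under the 0-1 loss \mathbb{P}(\mathrm{sign}(x^{\top}\hat{\theta}) \ne y); the abstract's point is that these two error measures can behave very differently in the same moderately overparameterized regime.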

Towards Sample-Efficient Overparameterized Meta-Learning


Talks

Teaching