About me
A. Narang, R. Sinha, A. Siththaranjan, F. Yang (2020). "Data Poisoning for Linear Models." To be submitted to ICML.
This is a page not in the main menu.
This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
We characterize, through matching upper and lower bounds, the generalization error in terms of 0-1 classification loss of solutions associated with minimizing the L2 norm of feature weights in the overparameterized regime, including the (feature space) margin maximizing support vector machine (SVM). We uncover empirical and theoretical evidence for a discrepancy in the performance of classification vs regression. In particular, we show that there exists a regime of moderate overparameterization in which the mean-squared-error (in regression) would diverge to the null risk, but the classification error decays to 0 as the number of samples increases. We also discuss ramifications for the susceptibility of such solutions to adversarial perturbations.
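The gap between regression and classification risk described above can be illustrated numerically. The following is a minimal simulation sketch, not the paper's actual experimental setup: it assumes a simple Gaussian features model with the signal concentrated in one direction, fits the minimum-L2-norm interpolator of the labels via the pseudoinverse (which, for separable data, is related to but not identical to the max-margin SVM), and then evaluates both the 0-1 classification error and the mean-squared error on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 50, 2000            # heavily overparameterized regime: d >> n
theta_star = np.zeros(d)
theta_star[0] = 1.0        # ground-truth signal in a single direction

X = rng.standard_normal((n, d))
y = np.sign(X @ theta_star)          # noiseless binary labels

# Minimum-L2-norm solution interpolating the labels: theta_hat = X^+ y.
theta_hat = np.linalg.pinv(X) @ y

# Evaluate on fresh test data.
X_test = rng.standard_normal((2000, d))
y_test = np.sign(X_test @ theta_star)

clf_err = np.mean(np.sign(X_test @ theta_hat) != y_test)   # 0-1 loss
mse = np.mean((X_test @ theta_hat - y_test) ** 2)          # regression risk

print(f"classification error: {clf_err:.3f}, test MSE: {mse:.3f}")
```

Sweeping n (or d) in this sketch lets one observe the qualitative phenomenon from the abstract: in moderately overparameterized settings the squared error can stay large (approaching the null risk) while the sign of the prediction, and hence the classification error, behaves much better.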