How can we explain the predictions of a black-box model? Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. With the rapid adoption of machine learning systems in sensitive applications, there is an increasing need to make such models explainable. In "Understanding Black-box Predictions via Influence Functions" (Pang Wei Koh and Percy Liang, ICML'17: Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1885-1894; https://dl.acm.org/doi/10.5555/3305381.3305576), influence functions, a classic technique from robust statistics, are used to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. Influence functions approximate, to first order, the effect of removing (or upweighting) a sample in the training set on the model's parameters and predictions. The authors show that even on non-convex and non-differentiable models, where the theory breaks down, approximations to influence functions can still provide valuable information.

The paper is also covered as a reading in a course on neural net training dynamics. For one thing, the study of optimization is often prescriptive, starting with information about the optimization problem and a well-defined goal such as fast convergence in a particular norm, and figuring out a plan that's guaranteed to achieve it. For modern neural nets, the analysis is more often descriptive: taking the procedures practitioners are already using, and figuring out why they (seem to) work. Besides just getting your networks to train better, another important reason to study neural net training dynamics is that many of our modern architectures are themselves powerful enough to do optimization; the canonical example in machine learning is hyperparameter optimization (for more details, see "Delta-STN: Efficient Bilevel Optimization of Neural Networks using Structured Response Jacobians"). Metrics give a local notion of distance on a manifold, and this leads to an important optimization tool called the natural gradient. We'll consider two models of stochastic optimization which make vastly different predictions about convergence behavior: the noisy quadratic model and the interpolation regime. This is a tentative schedule, which will likely change as the course goes on. Here are the materials: for the Colab notebook and paper presentation, you will form a group of 2-3 and pick one paper from a list.

An accompanying implementation offers two modes of computation to calculate the influences, since the number of influence calculations could potentially run into the tens of thousands. The datasets for the experiments can also be found at the Codalab link.

To scale up influence functions to modern machine learning settings, the authors develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products; such products can be computed exactly without ever forming the Hessian (see Pearlmutter, "Fast exact multiplication by the Hessian").
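As a toy illustration of what "only gradients and Hessian-vector products" means, here is a minimal sketch (our own, not the paper's code) that forms an HVP from a gradient oracle alone, using central finite differences on a small logistic loss; autodiff frameworks compute the same product exactly via Pearlmutter's method.

```python
# Hessian-vector products from a gradient oracle alone, via central finite differences.
import numpy as np

def grad_loss(theta, X, y):
    # gradient of (1/n) sum_i log(1 + exp(-y_i * theta^T x_i)), with y_i in {-1, +1}
    coefs = -y / (1.0 + np.exp(y * (X @ theta)))
    return X.T @ coefs / len(y)

def hvp(theta, v, X, y, eps=1e-5):
    # H v  ~=  (grad(theta + eps*v) - grad(theta - eps*v)) / (2*eps)
    return (grad_loss(theta + eps * v, X, y) - grad_loss(theta - eps * v, X, y)) / (2 * eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.where(rng.normal(size=100) > 0, 1.0, -1.0)
theta, v = rng.normal(size=5), rng.normal(size=5)
print(hvp(theta, v, X, y))    # a length-5 vector, computed without ever forming H
```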
On the course side: lectures will be delivered synchronously via Zoom, and recorded for asynchronous viewing by enrolled students. This class is about developing the conceptual tools to understand what happens when a neural net trains.

Influence functions also help you to debug the results of your deep learning model: for a given test sample, the algorithm calculates the influence functions for all training images, identifying the most helpful and the most harmful ones. Caching the per-training-point gradients pays off if you have a fast SSD, lots of free storage space, and want to calculate the influences on the prediction outcomes of an entire dataset or even more than 1000 test samples. If you have questions, please contact Pang Wei Koh (pangwei@cs.stanford.edu).

On linear models and convolutional neural networks, the authors demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.

The method works as follows (the paper was an ICML 2017 best paper, by Pang Wei Koh and Percy Liang of Stanford). Given a test point $x_{test}$ with label $y_{test}$, we want to know which training points most influenced the model's prediction on it. The training set consists of $n$ points $z_1, \dots, z_n$ with $z_i = (x_i, y_i)$, and $L(z, \theta)$ is the loss of a point $z$ at parameters $\theta$. Empirical risk minimization gives

$$\hat{\theta} = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} L(z_i, \theta).$$

Upweighting a training point $z$ by a small $\epsilon$ gives the perturbed solution

$$\hat{\theta}_{\epsilon,z} = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} L(z_i, \theta) + \epsilon L(z, \theta).$$

The influence of this upweighting on the parameters is

$$\mathcal{I}_{up,params}(z) = \frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\Big|_{\epsilon=0} = -H_{\hat{\theta}}^{-1}\nabla_{\theta} L(z, \hat{\theta}),$$

where $H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2} L(z_i, \hat{\theta})$ is the Hessian of the training objective. By the chain rule, the influence of upweighting $z$ on the loss at the test point is

$$\mathcal{I}_{up,loss}(z, z_{test}) = \frac{dL(z_{test}, \hat{\theta}_{\epsilon,z})}{d\epsilon}\Big|_{\epsilon=0} = \nabla_{\theta} L(z_{test}, \hat{\theta})^{T}\,\mathcal{I}_{up,params}(z) = -\nabla_{\theta} L(z_{test}, \hat{\theta})^{T} H_{\hat{\theta}}^{-1}\nabla_{\theta} L(z, \hat{\theta}).$$

For logistic regression with $p(y \mid x) = \sigma(y\,\theta^{T}x)$, where $\sigma$ is the sigmoid and $y \in \{-1, +1\}$, this reduces to

$$\mathcal{I}_{up,loss}(z, z_{test}) = -y_{test}\, y \cdot \sigma(-y_{test}\theta^{T}x_{test}) \cdot \sigma(-y\,\theta^{T}x) \cdot x_{test}^{T} H_{\hat{\theta}}^{-1} x.$$

The factor $\sigma(-y\,\theta^{T}x)$ is large exactly when the training point has high loss, so outliers and hard examples carry more influence, and the term $x_{test}^{T} H_{\hat{\theta}}^{-1} x$ measures the similarity between the test and training inputs in the geometry induced by the inverse Hessian.

Computing $H_{\hat{\theta}}^{-1}$ exactly costs $O(np^2 + p^3)$ for $n$ training points and $p$ parameters, and the result is needed for every training point $z_i$, so the paper uses two tricks based on Hessian-vector products (HVPs): conjugate gradients and stochastic estimation. Both first compute $s_{test} = H_{\hat{\theta}}^{-1}\nabla_{\theta} L(z_{test}, \hat{\theta})$ and then obtain $\mathcal{I}_{up,loss}(z, z_{test}) = -s_{test} \cdot \nabla_{\theta} L(z, \hat{\theta})$ with a single dot product per training point. Conjugate gradients uses the identity $H_{\hat{\theta}}^{-1} v = \arg\min_{t} \frac{1}{2}t^{T} H_{\hat{\theta}} t - v^{T} t$ and needs only HVPs, each costing $O(np)$. The stochastic estimator truncates the Neumann series $H^{-1} = \sum_{i=0}^{\infty}(I - H)^{i}$ (valid after scaling so that the eigenvalues of $H$ lie in $(0, 2)$): the partial sums $S_j = \sum_{i=0}^{j-1}(I-H)^{i} = \bigl(I - (I-H)^{j}\bigr)H^{-1}$ satisfy $\lim_{j \to \infty} S_j = H^{-1}$, and at each step a single training point $z_i$ is sampled so that $\nabla_{\theta}^{2} L(z_i, \hat{\theta})$ serves as an unbiased estimate of $H$; the final estimate of $s_{test}$ is $S_j \cdot \nabla_{\theta} L(z_{test}, \hat{\theta})$ (following Agarwal, Bullins, and Hazan, "Second-order stochastic optimization in linear time").
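Before turning to those approximations, here is a minimal sketch (our own code with hypothetical helper names, not the authors' implementation) of the exact computation for a small L2-damped logistic regression, where the Hessian can be formed and solved directly.

```python
# Exact influence for a small logistic regression (y in {-1, +1}); helper names are ours.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def grad_point(theta, x, y):
    # gradient of the per-example loss log(1 + exp(-y * theta^T x))
    return -y * sigmoid(-y * (x @ theta)) * x

def hessian(theta, X, y, damp=1e-2):
    # H = (1/n) sum_i sigma(theta^T x_i)(1 - sigma(theta^T x_i)) x_i x_i^T, plus small damping
    p = sigmoid(X @ theta)
    return (X * (p * (1 - p))[:, None]).T @ X / len(y) + damp * np.eye(X.shape[1])

def influence_up_loss(theta, X, y, x_test, y_test, idx):
    # I_up,loss(z_idx, z_test) = - grad L(z_test)^T  H^{-1}  grad L(z_idx)
    g_test = grad_point(theta, x_test, y_test)
    g_train = grad_point(theta, X[idx], y[idx])
    return -g_test @ np.linalg.solve(hessian(theta, X, y), g_train)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + 0.5 * rng.normal(size=200) > 0, 1.0, -1.0)
theta = np.linalg.lstsq(X, y, rcond=None)[0]   # crude stand-in for the trained minimizer
print(influence_up_loss(theta, X, y, X[0], y[0], idx=3))
```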
These approximations hold up well empirically. On MNIST, where the model is small enough to retrain with individual points removed, the influence-function predictions closely track the actual changes in test loss. On ImageNet, comparing Inception-v3 (Szegedy et al., "Rethinking the Inception architecture for computer vision") with an RBF SVM trained with a smooth hinge loss on a dog-vs-fish task shows that the two models spread influence very differently: the Inception network picked up on the distinctive characteristics of the fish, while the SVM drew on a very different set of training images. Influence functions also enable training-set attacks, in which visually-indistinguishable perturbations of influential training images flip the model's predictions on target test points. Finally, a point's influence on its own loss, $\mathcal{I}_{up,loss}(z_i, z_i)$, flags likely label errors: ranking training points by self-influence and manually checking roughly the top 10% surfaces mislabeled examples far faster than checking points at random.

Neural nets have achieved amazing results over the past decade in domains as broad as vision, speech, language understanding, medicine, robotics, and game playing. Sometimes we explicitly build optimization into the architecture, as in MAML or Deep Equilibrium Models. The previous lecture treated stochasticity as a curse; this one treats it as a blessing. Students are encouraged to attend class each week. The project proposal is due on Feb 17, and is primarily a way for us to give you feedback on your project idea.

Idea: use influence functions to observe how the training samples influence the prediction on a test sample. In practice the recipe is: work with the Hessian $H_{\hat{\theta}}=\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2} L(z_i,\hat{\theta})$ only implicitly; compute $s_{test}=H^{-1}_{\hat{\theta}}\nabla_{\theta} L(z_{test},\hat{\theta})$ with conjugate gradients or the stochastic estimate $S_j \cdot \nabla_{\theta} L(z_{test},\hat{\theta})$; then evaluate $\mathcal{I}_{up,loss}(z,z_{test})=-s_{test} \cdot \nabla_{\theta}L(z,\hat{\theta})$ for every training point $z$. The per-point gradient grad_z depends only on the training point, so it can be computed once and reused, once in the first approximation of s_test and once when it is combined with the s_test vector. Often we also want to identify an influential group of training samples for a particular test prediction, not just single points. The reference implementation can be found here: link. Dependencies: NumPy/SciPy/scikit-learn/Pandas. The output is a dict/json containing the influences calculated for all training data, along with the prediction outcomes of the processed test samples; config is a dict which contains the parameters used to calculate the influences. The precision of the output can be adjusted by using more iterations and/or more recursions when approximating the influence, together with the initial value of the Hessian estimate used during the s_test calculation. A recorded talk, "Understanding Black-box Predictions via Influence Functions" by Pang Wei Koh and Percy Liang, is available from TechTalksTV on Vimeo.
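Those "iterations" and "recursions" correspond to a truncated Neumann series. Below is a minimal sketch of that estimator under the assumption that the scaled Hessian's eigenvalues lie in (0, 2); in the real algorithm the fixed matrix would be replaced by per-example Hessian estimates plus a damping term, but a fixed SPD matrix makes convergence easy to check against a direct solve.

```python
# Truncated Neumann-series estimate of s_test = H^{-1} v, using only HVPs.
import numpy as np

def neumann_inverse_hvp(hvp, v, num_iters=200, scale=10.0):
    # With H' = H / scale, iterate h <- v + (I - H') h, which converges to H'^{-1} v
    # whenever the eigenvalues of H' lie in (0, 2); dividing by `scale` undoes the scaling.
    h = v.copy()
    for _ in range(num_iters):
        h = v + h - hvp(h) / scale
    return h / scale

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
H = A @ A.T / 5 + np.eye(5)                 # fixed SPD stand-in for the training Hessian
g_test = rng.normal(size=5)                 # stand-in for grad L(z_test)
s_est = neumann_inverse_hvp(lambda u: H @ u, g_test)
print(np.allclose(s_est, np.linalg.solve(H, g_test), atol=1e-3))   # True for this toy setup
```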
The released code replicates the experiments from the paper "Understanding Black-box Predictions via Influence Functions." It can be used as long as you have a supervised learning problem, and you can import it as a package once it is in your environment. "Helpful" is a list of numbers, which are the IDs of the training data samples ranked most helpful for the given test prediction, with a corresponding list for the most harmful samples.

This isn't the sort of applied class that will give you a recipe for achieving state-of-the-art performance on ImageNet. Why neural nets generalize despite their enormous capacity is intimately tied to the dynamics of training, and in order to have any hope of understanding the solutions a network comes up with, we need to understand the problems. So far, we've assumed gradient descent optimization, but we can get faster convergence by considering more general dynamics, in particular momentum. All information about attending virtual lectures, tutorials, and office hours will be sent to enrolled students through Quercus.
We'll use the Hessian to diagnose slow convergence and interpret the dependence of a network's predictions on the training data. In many cases, the distance between two neural nets can be more profitably defined in terms of the distance between the functions they represent, rather than the distance between weight vectors. Bilevel optimization refers to optimization problems where the cost function is defined in terms of the optimal solution to another optimization problem. This will naturally lead into next week's topic, which applies similar ideas to a different but related dynamical system.

In the implementation, keeping the grad_z vectors around only makes sense if they can be loaded faster than they can be recomputed. Once $s_{test}$ is available for a test point, the influence of every training point is a single dot product with its (possibly cached) gradient, as sketched below.
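This sketch of the reuse pattern uses made-up array shapes; sign conventions differ between writeups, and here a negative $\mathcal{I}_{up,loss}$ is read as helpful (upweighting that point would lower the test loss).

```python
# Sketch: once s_test is known, influences over the whole training set are dot products.
import numpy as np

def influences_for_test_point(s_test, train_grads):
    # train_grads: (n_train, n_params), one gradient per training point (grad_z)
    # I_up,loss(z_i, z_test) = - s_test . grad_z_i
    return -train_grads @ s_test

rng = np.random.default_rng(0)
train_grads = rng.normal(size=(1000, 20))   # stand-in for cached grad_z vectors
s_test = rng.normal(size=20)                # stand-in for H^{-1} grad L(z_test)
infl = influences_for_test_point(s_test, train_grads)
order = np.argsort(infl)
helpful = order[:5]          # most negative: upweighting these would lower the test loss
harmful = order[-5:][::-1]   # most positive: upweighting these would raise the test loss
print(helpful, harmful)
```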
For toy functions and simple architectures (e.g. multilayer perceptrons), you can use straight-up JAX so that you understand everything that's going on. Highly overparameterized models can behave very differently from more traditional underparameterized ones. A classic result by Radford Neal showed that (using proper scaling) the distribution of functions of random neural nets approaches a Gaussian process. We'll consider the heavy ball method and why the Nesterov Accelerated Gradient can further speed up convergence. As part of the marking scheme, the problem set will give you a chance to practice the content of the first three lectures, and will be due on Feb 10.

Follow-up work extends the analysis from single training points to groups; see "On second-order group influence functions for black-box predictions" and "On the accuracy of influence functions for measuring group effects."

The config is divided into parameters affecting the calculation and parameters affecting everything else. Visualised, the output can look like this: the test image on the top left is the test image for which the influences were calculated. Here, we used CIFAR-10 as the dataset and the model was ResNet-110; the next figure shows the same but for a different model, DenseNet-100/12. Thus, we can see that different models learn more from different images. You can also easily find mislabeled images in your dataset by ranking training points by their self-influence $\mathcal{I}_{up,loss}(z_i, z_i)$, as in the sketch below.
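A rough, self-contained sketch of that self-influence check (our own toy setup on synthetic data with injected label noise, not the package's API):

```python
# Toy self-influence check: flag the training points the model "memorizes" the most.
import numpy as np

def self_influences(theta, X, y, damp=1e-2):
    # I_up,loss(z_i, z_i) = - g_i^T H^{-1} g_i   (non-positive for a positive-definite H)
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))                        # sigma(theta^T x_i)
    H = (X * (p * (1 - p))[:, None]).T @ X / len(y) + damp * np.eye(X.shape[1])
    grads = (-y / (1.0 + np.exp(y * (X @ theta))))[:, None] * X   # per-point grad_z
    Hinv = np.linalg.inv(H)
    return -np.einsum('ij,jk,ik->i', grads, Hinv, grads)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=500) > 0, 1.0, -1.0)
y[:25] = -y[:25]                                  # inject label noise into 5% of the points
theta = np.linalg.lstsq(X, y, rcond=None)[0]      # crude stand-in for trained weights
scores = self_influences(theta, X, y)
suspects = np.argsort(scores)[:50]                # largest-magnitude (most negative) self-influence
# Fraction of flagged points that really were flipped; typically well above the 5% base rate here.
print(np.mean(suspects < 25))
```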
