{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "\n", "# %matplotlib inline\n", "import numpy as np\n", "# import matplotlib.pyplot as plt\n", "from sklearn.cross_validation import train_test_split\n", "from sklearn.ensemble import RandomForestClassifier\n", "from sklearn import datasets\n", "from sklearn.calibration import CalibratedClassifierCV\n", "np.set_printoptions(formatter={'float': lambda x: \"{0:0.3f}\".format(x)})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Classification with Refusals: Controlling the Error Rate\n", "\n", "We want to control the error rate for a classification task to a prespecified error rate $\\epsilon$ by refusing to make predictions on some inputs. In the following, we will use two different approaches to this problem:\n", "\n", "**1. Probability calibration:** We will calibrate our classifier by using existing probability calibration tools (sigmoid and isotonic) in the scikit-learn package and decide to refuse or predict based on the calibrated probability estimates.\n", "\n", "**2. Conjugate prediction:** Calibrate a threshold instead of probability distributions: we discover an appropriate threshold value (which we call the \"acceptance threshold\") on calibration data and refuse to predict on the test data when multiple labels are above the threshold. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Mathematical Description\n", "\n", "Given the training data $(X_1,Y_1), \\ldots, (X_n,Y_n)$ $\\sim$ $\\mathcal{D}^n$ for some fixed but unknown $\\mathcal{D}$ and tolerance level $\\epsilon \\in (0,1)$ train a _selective_ classifier $C:\\mathcal{X}\\rightarrow \\mathcal{Y} \\cup \\{refuse\\}$, such that:\n", "\\begin{eqnarray} P\\left(C(X) \\neq Y ~ | ~ C(X) \\neq refuse \\right) & \\leq & \\epsilon \\end{eqnarray}\n", "where $(X,Y) \\sim \\mathcal{D}$ and independent from the training data. Intuitively, our error on the data items on which we guess is less than $\\epsilon$." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# CHOOSE A TARGET ERROR RATE\n", "epsilon = 0.01\n", "\n", "# LOAD THE DATA\n", "# MNIST DATA SET (MULTI-LABEL)\n", "digits = datasets.fetch_mldata('mnist-original')\n", "X, y = digits.data, digits.target\n", "y = y.astype(int)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# SPLIT THE DATA TRAIN:TEST 3:1\n", "train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.75)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Probability Calibration:\n", "\n", "In this section we first approach the problem by employing probability calibration methods from Scikit-Learn. For detailed information about these methods we refer to [the documentation of the calibration module](http://scikit-learn.org/stable/modules/calibration.html) and the references therein. \n", "\n", "Most ML algorithms compute probability estimates as well as the simple point predictions, i.e. they estimate $\\hat{P}\\left(Y~|~X\\right)$ and then make the prediction $\\hat{Y} = argmax_{y}~\\hat{P}\\left(y~|~X\\right)$. \n", "In scikit-learn these probability estimates can be obtained by predict_proba method for many of the classifiers. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Intuitive idea of calibration:\n", "* Calibration is needed because the machine learning tools give inaccurate estimates of probabilities (usually systematically under- or over-confident) so Scikit-Learn uses calibration to correct for those inaccuracies. We will show later that these corrections aren't always, well, correct.\n", "\n", "* Probability calibration is typically implemented as learning an appropriate monotonic transformation of the estimated probabilities from an hold-out set (\"calibration set \" from now on). The goal is finding the transformation such that in the calibration set among the points with transformed probabilities $x$, roughly fraction $x$ of them are correctly labelled. \n", "\n", "In this demo, we will use two of the most common calibration techniques Platt's regression and isotonic regression. The main difference between those two is while Platt's regression assumes the transformation we try to learn has a sigmoid shape (i.e, $f(x) = (1+e^{ax+b})^{-1}$) and infers the parameters of the sigmoid ($a$ and $b$); isotonic regression assumes a non-parametric transformation, in particular it fits a piecewise constant function. \n", "\n", "In the following, we first split the training data as 2:1 as the core training and calibration sets, train a random forest classifier on the core training set, and calibrate that classifier using the calibration set.\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n", " max_depth=None, max_features='auto', max_leaf_nodes=None,\n", " min_samples_leaf=10, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=-1,\n", " oob_score=False, random_state=None, verbose=0,\n", " warm_start=False)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# SPLIT THE TRAINING DATA AS THE CORE TRAINING AND CALIBRATION SETS (2:1)\n", "core_X, cal_X, core_y, cal_y = train_test_split(train_X, train_y, train_size=0.66)\n", "n_core, n_features = core_X.shape\n", "\n", "# TRAIN YOUR CLASSIFIER ON THE CORE SET\n", "classifier = RandomForestClassifier(n_estimators=100, min_samples_leaf=10, oob_score=False, n_jobs=-1)\n", "classifier.fit(core_X, core_y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Up to here all three methods (isotonic, sigmoid, and our method) do the same thing. Below we use sigmoid and isotonic (the current Scikit-Learn methods)." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "CalibratedClassifierCV(base_estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n", " max_depth=None, max_features='auto', max_leaf_nodes=None,\n", " min_samples_leaf=10, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=-1,\n", " oob_score=False, random_state=None, verbose=0,\n", " warm_start=False),\n", " cv='prefit', method='isotonic')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# CALIBRATE THE CLASSIFIER [Do not need if using our method]\n", "classifier_sigmoid = CalibratedClassifierCV(base_estimator = classifier, method = 'sigmoid', cv = 'prefit')\n", "classifier_sigmoid.fit(cal_X,cal_y)\n", "classifier_isotonic = CalibratedClassifierCV(base_estimator = classifier, method = 'isotonic', cv = 'prefit')\n", "classifier_isotonic.fit(cal_X,cal_y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, predict the labels of the points in the test set while refusing if the classifier is not confident enough. To quantify that we followed two different strategies (for each of isotonic and sigmoid):\n", "* **Maximum Likelihood:** Refuse if $(1 - max_{y}~\\hat{P}\\left(y~|~X\\right)) > \\epsilon$. Intuitively, refuse if not sure enough about any $y$ value.\n", "* **Tolerance Sets:** Refuse if $\\hat{P}\\left(y~|~X\\right) > \\epsilon$ for more than one label. Intuitively, refuse if discarded $y$ values have too high a probability.\n", "\n", "Note that these two methods are equivalent for the binary classification, but for larger label sets the maximum likelihood based method is more conservative than the tolerance set approach." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# UNDERLYING ALGORITM (e.g. 
{ "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# THE UNDERLYING ALGORITHM (e.g. RANDOM FORESTS) PREDICTS THE LABELS FOR THE TEST SET\n", "pred = classifier.predict(test_X)\n", "err = pred != test_y\n", "\n", "# COMPUTE THE CALIBRATED PROBABILITY ESTIMATES ONCE PER CALIBRATION METHOD\n", "proba_sigmoid = classifier_sigmoid.predict_proba(test_X)\n", "proba_isotonic = classifier_isotonic.predict_proba(test_X)\n", "\n", "# NOW CHOOSE THE PREDICTIONS TO REFUSE, USING THE ABOVE TWO STRATEGIES.\n", "\n", "# PROBABILITY CALIBRATION (Maximum Likelihood):\n", "# REFUSE IF NO LABEL HAS CONFIDENCE AT LEAST 1-epsilon\n", "ref_sigmoid = (proba_sigmoid >= 1-epsilon).sum(axis=1) == 0\n", "ref_isotonic = (proba_isotonic >= 1-epsilon).sum(axis=1) == 0\n", "\n", "# PROBABILITY CALIBRATION (Tolerance Sets):\n", "# REFUSE IF MULTIPLE (AT LEAST TWO) LABELS HAVE CONFIDENCE MORE THAN epsilon\n", "ref_sigmoid2 = (proba_sigmoid > epsilon).sum(axis=1) >= 2\n", "ref_isotonic2 = (proba_isotonic > epsilon).sum(axis=1) >= 2" ] },
{ "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "MNIST\n", "Dimensions: (70000, 784) \n", "Number of Labels: 10 \n", "Error rate on all predictions: 0.0497\n", "\n", "PLATT'S REGRESSION (Maximum Likelihood)\n", "Error Rate on non-refused predictions: nan\n", "Refusal Rate: 1.0000\n", "\n", "PLATT'S REGRESSION (Tolerance Sets)\n", "Error Rate on non-refused predictions: 0.0021\n", "Refusal Rate: 0.3656\n", "\n", "ISOTONIC REGRESSION (Maximum Likelihood)\n", "Error Rate on non-refused predictions: 0.0011\n", "Refusal Rate: 0.4229\n", "\n", "ISOTONIC REGRESSION (Tolerance Sets)\n", "Error Rate on non-refused predictions: 0.0015\n", "Refusal Rate: 0.3873\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/dennisshasha/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:5: RuntimeWarning: invalid value encountered in double_scalars\n" ] } ], "source": [ "print \"MNIST\"\n", "print \"Dimensions:\", X.shape, \"\\nNumber of Labels:\", 10, \"\\nError rate on all predictions: {:0.4f}\".format(np.mean(err))\n", "print\n", "print \"PLATT'S REGRESSION (Maximum Likelihood)\"\n", "print \"Error Rate on non-refused predictions: {:0.4f}\".format((np.sum(err*(1-ref_sigmoid))+0.0)/np.sum(1-ref_sigmoid))\n", "print \"Refusal Rate: {:0.4f}\".format(np.mean(ref_sigmoid))\n", "print\n", "print \"PLATT'S REGRESSION (Tolerance Sets)\"\n", "print \"Error Rate on non-refused predictions: {:0.4f}\".format((np.sum(err*(1-ref_sigmoid2))+0.0)/np.sum(1-ref_sigmoid2))\n", "print \"Refusal Rate: {:0.4f}\".format(np.mean(ref_sigmoid2))\n", "print\n", "print \"ISOTONIC REGRESSION (Maximum Likelihood)\"\n", "print \"Error Rate on non-refused predictions: {:0.4f}\".format((np.sum(err*(1-ref_isotonic))+0.0)/np.sum(1-ref_isotonic))\n", "print \"Refusal Rate: {:0.4f}\".format(np.mean(ref_isotonic))\n", "print\n", "print \"ISOTONIC REGRESSION (Tolerance Sets)\"\n", "print \"Error Rate on non-refused predictions: {:0.4f}\".format((np.sum(err*(1-ref_isotonic2))+0.0)/np.sum(1-ref_isotonic2))\n", "print \"Refusal Rate: {:0.4f}\".format(np.mean(ref_isotonic2))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Observe that the achieved error rates are well below our target rate (1%) and, as a result, the refusal rates are unnecessarily high. In the extreme case, the maximum likelihood strategy with Platt's regression refuses every test point, so its selective error rate is undefined (hence the nan and the RuntimeWarning above)." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Conjugate Prediction\n", "\n", "Instead of trying to calibrate probabilities, we infer an acceptance threshold $\\alpha$ directly from the errors on the calibration set. That is:\n", "\n", "* **Calibration Step:** For each $\\alpha$, refuse data points according to the tolerance method; that is, refuse if $\\hat{P}\\left(y~|~X\\right) > \\alpha$ for more than one label. Operationally, we start from the maximum $\\alpha$ (=1, for which we refuse no data point, so our error rate equals the error rate on the whole calibration set) and decrease it until the error rate on the non-refused points in the calibration set falls below $\\epsilon$.\n", "\n", "We are looking for the maximum such $\\alpha$ because that yields the smallest number of refusals. A tiny worked example follows.\n" ] },
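{ "cell_type": "markdown", "metadata": {}, "source": [ "A tiny worked example with made-up numbers: suppose the calibration set has 5 points, the classifier errs on 2 of them, and $\\epsilon = 0.25$. At $\\alpha = 1$ nothing is refused and the conservatively corrected error estimate (see the +1 terms in the code below) is $(2+1)/(5+1) = 0.5 > \\epsilon$. Refusing the point with the largest second-highest probability, say 0.30, and supposing the classifier erred on it, gives $(1+1)/(4+1) = 0.4 > \\epsilon$; refusing the next point, with second-highest probability 0.22 and again an error, gives $(0+1)/(3+1) = 0.25 \\leq \\epsilon$, so we stop and set $\\alpha = 0.22$." ] },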
{ "cell_type": "code", "execution_count": 50, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def conj_calibrate(C, Xcal, ycal, eps):\n", "    # COMPUTE THE ACCEPTANCE THRESHOLD FOR CLASSIFIER C, CALIBRATION DATA (Xcal, ycal) AND TARGET RATE eps\n", "    # NOTE: WORKS ONLY FOR CLASSIFIERS THAT SUPPORT predict_proba()\n", "    A = C.predict_proba(Xcal)\n", "    pred_cal = C.predict(Xcal)\n", "    n_cal = Xcal.shape[0]\n", "\n", "    # COMPUTE THE POTENTIAL THRESHOLD POINTS\n", "    pt = np.empty(n_cal)\n", "    for i in range(n_cal):\n", "        pt[i] = np.sort(A[i,:])[-2]  # the second-highest probability\n", "\n", "    # FIND THE LARGEST THRESHOLD SATISFYING THE ERROR CONSTRAINT (FEWEST REFUSALS)\n", "    a = np.flipud(np.argsort(pt))\n", "    errors = np.sum(pred_cal != ycal) + 1  # the +1 terms make the error estimate conservative\n", "    predictions = n_cal + 1\n", "    threshold = 1\n", "    if (errors+0.0)/predictions > eps:\n", "        for i in a:  # As we lower the threshold, we refuse one more data point each time.\n", "            if pred_cal[i] != ycal[i]:\n", "                errors = errors - 1  # We got rid of one error\n", "            predictions = predictions - 1\n", "            if (errors+0.0)/predictions <= eps:\n", "                threshold = pt[i]\n", "                break\n", "    return threshold" ] },
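{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (illustration only, not required for the pipeline), we can inspect the acceptance threshold chosen for our classifier and target rate:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# ILLUSTRATION ONLY: INSPECT THE ACCEPTANCE THRESHOLD CHOSEN ON THE CALIBRATION SET\n", "th = conj_calibrate(classifier, cal_X, cal_y, epsilon)\n", "print(\"Acceptance threshold: {0:0.3f}\".format(th))" ] },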
\n" ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "collapsed": true, "nbpresent": { "id": "e08205d9-9e81-464d-a068-f70a2c73a41f" } }, "outputs": [], "source": [ "def conj_test(C,Xtest, threshold):\n", " # PREDICT LABELS OF THE POINTS IN Xtest AND CHOOSE THE POINTS TO REFUSE RELATIVE TO THE ACCEPTANCE THRESHOLD threshold\n", " test_A = C.predict_proba(Xtest)\n", " pred_test = C.predict(Xtest)\n", " refused = np.sum(test_A>threshold, axis=1)>=2\n", " return pred_test, refused" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "By combining these two steps:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "collapsed": false, "nbpresent": { "id": "4c43a165-eb79-474f-98fe-eede08f2e63f" } }, "outputs": [], "source": [ "def conj_calibrate_test(C,Xcal, ycal, Xtest, eps):\n", " # CALIBRATE CLASSIFIER C ON (Xcal, ycal) WITH TARGET RATE eps AND PREDICT THE CORRESPONDING LABELS FOR Xtest\n", " th = conj_calibrate(C,Xcal, ycal, eps)\n", " return conj_test(C,Xtest, th)" ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "collapsed": false, "nbpresent": { "id": "ccda2bb3-e8b3-48d5-b021-d50c3ddd93c3" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "MNIST\n", "Dimensions: (70000, 784) \n", "Number of Labels: 10 \n", "Error rate on all predictions: 0.0497\n", "\n", "CONJUGATE PREDICTION\n", "Selective Error Rate: 0.0106\n", "Refusal Rate: 0.1865\n" ] } ], "source": [ "# PREDICT THE TEST LABELS\n", "pred, ref = conj_calibrate_test(classifier, cal_X,cal_y,test_X,epsilon)\n", "err = pred != test_y\n", "\n", "print\"MNIST\"\n", "print \"Dimensions: \" ,X.shape, \"\\nNumber of Labels: \",10, \"\\nError rate on all predictions: {:0.4f}\".format(np.mean(err))\n", "print\n", "print \"CONJUGATE PREDICTION\" \n", "print \"Selective Error Rate: {:0.4f}\".format((np.sum(err*(1-ref))+0.0)/np.sum(1-ref))\n", "print \"Refusal Rate: {:0.4f}\".format(np.mean(ref))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Discussion\n", "* The basic idea is based on _Conformal Prediction Framework_ and extends to \n", " * Arbitrary score functions\n", " * Online prediction (stochastic)\n", " * ...\n", "* PAC-type guarantees for conditioned probability of error.\n", "* Paper is under review, please reach one of the authors.\n", "* Ongoing/Future Work:\n", " * Focus on the online prediction framework \n", " * Asymptotic error guarantees on adversarial/non-stochastic setups\n", " * Efficient and reliable predictors for changing environments, i.e. concept drift" ] } ], "metadata": { "anaconda-cloud": {}, "celltoolbar": "Raw Cell Format", "kernelspec": { "display_name": "Python [conda root]", "language": "python", "name": "conda-root-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" }, "nbpresent": { "slides": {}, "themes": {} } }, "nbformat": 4, "nbformat_minor": 1 }