# pROC 1.9.1

After nearly two years since the previous release, pROC 1.9.1 is finally available on CRAN. Here is a list of the main changes:

- `subset` and `na.action` arguments are now handled properly in `roc.formula`. This means you can now do something like this:

  ```r
  data(aSAH)
  roc(outcome ~ s100b, data = aSAH, subset = (gender == "Male"))
  roc(outcome ~ s100b, data = aSAH, subset = (gender == "Female"))
  ```

  Thanks to Terry Therneau for the report.
- Added policies to handle the case where a ROC curve has multiple "best" thresholds in `ci.coords`. The following policies are available:
  - "stop" will abort the processing and throw an error (with `stop`). This is the default.
  - "omit" will ignore the sample (as in `NULL`). This can lead to a reduced effective number of usable samples in the final statistic.
  - "random" will select one of the thresholds at random.

  ```r
  data(aSAH)
  ci.coords(aSAH$outcome, aSAH$s100b, x = "best", input = "threshold",
            ret = c("specificity", "ppv", "tp"), best.policy = "random")
  ```

  Thanks to Nicola Toschi for the report.
- Support `xlim` and `ylim` gracefully in `plot.roc`.
- Improved validation of input class, `levels` and `direction`; a message can be printed when auto-detecting, use the `quiet` argument to turn it off.
- Removed extraneous `name` attribute on the `p.value` (thanks Paweł Kleka for the report).
- Faster DeLong algorithm (code contributed by Stefan Siegert). The code is based on the algorithm by Xu Sun and Weichao Xu (2014), which has an O(N log N) complexity instead of O(N^2).

The DeLong algorithm is now always faster than bootstrapping, even in the previous edge case of a ROC curve with a large number of samples and few thresholds, where bootstrapping used to be faster. Here is a quick example with 200000 data points:

```r
library(pROC)
n <- 200000
a <- as.numeric(cut(rnorm(n), c(-Inf, -1, 0, 1, Inf)))
b <- round(runif(n))
r <- roc(b, a, algorithm = 3)

# With bootstrap
> system.time(var(r, method = "b", progress = "none"))
   user  system elapsed
 25.896   0.136  26.027
# With the old DeLong algorithm
> system.time(var(r, method = "d"))
   user  system elapsed
 47.352   0.008  47.353
# With the new DeLong algorithm
> system.time(var(r, method = "d"))
   user  system elapsed
  0.016   0.008   0.023
```
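As a small illustration of the improved `xlim`/`ylim` handling in `plot.roc`, here is a minimal sketch (using the aSAH dataset shipped with pROC; the exact rendering depends on your graphics device):

```r
library(pROC)
data(aSAH)
r <- roc(aSAH$outcome, aSAH$s100b)
# xlim and ylim are now passed through to the plot gracefully;
# the specificity axis conventionally runs from 1 to 0
plot(r, xlim = c(1, 0), ylim = c(0, 1))
```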

## Obtaining the update

To update your installation, simply type:

```r
install.packages("pROC")
```

## References

Xu Sun and Weichao Xu (2014) "Fast Implementation of DeLong's Algorithm for Comparing the Areas Under Correlated Receiver Operating Characteristic Curves". *IEEE Signal Processing Letters*, **21**, 1389–1393. DOI: 10.1109/LSP.2014.2337313.

Xavier Robin

Published Monday, February 6, 2017 09:08 CET

Permalink: /blog/2017/02/06/proc-1.9.1

Tags:
pROC

Comments: 0

# pROC 1.8 is coming with some potential backward-incompatible changes in the namespace

The last significant update of pROC, 1.7, was released a year ago, followed by some minor bug-fix updates. In the meantime, the policies of the CRAN repository have evolved and now require a significant update of pROC.

Specifically, S3 methods in pROC have always been exported, which means that you could call `auc.roc` or `roc.formula` directly. This is not allowed any longer, and methods must now be registered as such with `S3method()` calls in the `NAMESPACE` file. The upcoming version of pROC (1.8) will therefore feature a major cleanup of the namespace.

In practice, this could potentially break some of your code. Specifically, direct calls to S3 methods will not work any longer. For instance, the following is incorrect:

```r
rocobj <- roc(...)
smooth.roc(rocobj)
```

Although not documented, it used to work, but that will no longer be the case. Instead, you should call the generic function, which will dispatch to the proper method:

```r
smooth(rocobj)
```

Other examples include:

```r
# Incorrect:
auc.roc(rocobj)
# Correct:
auc(rocobj)

# Incorrect:
var.roc(rocobj)
# Correct:
var(rocobj)
```

Please make sure you replace any call to a method with the generic. If in doubt, consult the *Usage* section of pROC's manual.

Xavier Robin

Published Monday, February 23, 2015 23:13 CET

Permalink: /blog/2015/02/23/proc-1.8-is-coming-with-some-potential-backward-incompatible-changes-in-the-namespace

Tags:
pROC

Comments: 0

# pROC 1.7.3 bugfix release

pROC 1.7.3 was pushed to the CRAN a few minutes ago. It is a bug-fix release that solves two issues with smoothing, the first of which is a significant numeric issue:

- Fixed the AUC of binormal-smoothed ROC curves being off by 100^2 (thanks Bao-Li Chang for the report)
- Fixed the print of logcondens-smoothed ROC curves

It should be available for update from CRAN in a few hours / days, depending on your operating system.

Xavier Robin

Published Thursday, June 12, 2014 20:34 CEST

Permalink: /blog/2014/06/12/proc-1.7.3

Tags:
pROC

Comments: 0

# pROC 1.7.2

pROC 1.7.2 was published this morning. It is a bug-fix release that primarily solves various issues with `coords` and `ci.coords`. It also warns when computing confidence intervals or ROC tests of a ROC curve with AUC == 1 (the CI will always be 1–1 and the p-value 0), as this can potentially be misleading.

- Fixed bug where `ci.coords` with `x="best"` would fail if one or more resampled ROC curves had multiple "best" thresholds (thanks Berend Terluin for the report)
- Fixed bug in `ci.coords`: passing more than one value in `x` now works
- Fixed typo in documentation of the `direction` argument to `roc` (thanks Le Kang for the report)
- Added a warning when computing statistics of a ROC curve with AUC = 1
- Require the latest version of Rcpp to avoid weird errors (thanks Tom Liptrot for the report)
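For instance, passing several values in `x` to `ci.coords` in a single call can be sketched as follows (a sketch using the aSAH dataset shipped with pROC, not output from this release):

```r
library(pROC)
data(aSAH)
# Two thresholds at once; previously this required two separate calls
ci.coords(aSAH$outcome, aSAH$s100b, x = c(0.25, 0.5), input = "threshold",
          ret = "sensitivity")
```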

Xavier Robin

Published Sunday, April 6, 2014 08:49 CEST

Permalink: /blog/2014/04/06/proc-1.7.2

Tags:
pROC

Comments: 0

# pROC 1.7 released

pROC 1.7 was released. It provides additional speed improvements, with the DeLong calculations now implemented with Rcpp, improved behaviour with math operations, and various bug fixes. It is now possible to pass multiple predictors in a formula: a list of ROC curves is returned. In detail:

- Faster algorithm for the DeLong `roc.test`, `power.roc.test`, `ci.auc`, `var` and `cov` functions (no large matrix allocation)
- Math and operations are now handled correctly on `auc` and `ci` objects (see `?groupGeneric.pROC`)
- The `formula` for `roc.formula` can now contain several predictors, in which case a list of ROC curves is returned
- Fixed documentation of `ci.coords` with examples
- Fixed binormal AUC being computed with triangulation despite the claim in the documentation
- Fixed unstated requirement on Rcpp >= 0.10.5
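The multiple-predictor formula interface can be sketched as follows (a minimal example with the aSAH dataset shipped with pROC; the exact structure of the returned list may differ):

```r
library(pROC)
data(aSAH)
# Several predictors on the right-hand side: a list of ROC curves is returned
roclist <- roc(outcome ~ s100b + ndka + wfns, data = aSAH)
length(roclist)  # one ROC curve per predictor
```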

pROC 1.7.1 is a quick-fix release to get the package on CRAN.

- Close SOCK clusters on Windows with `parallel=TRUE`
- Fixed: really use algorithm 1 when microbenchmark fails

Xavier Robin

Published Thursday, February 20, 2014 21:48 CET

Permalink: /blog/2014/02/20/proc-1.7-released

Tags:
pROC

Comments: 0

# pROC 1.6.0.1 bugfix release

I just pushed pROC 1.6.0.1 to the CRAN, as version 1.6 was breaking the vignette of the Causata package with its sanity checks (thanks Kurt Hornik for the report). Those tests appeared to be too stringent in some cases (`matrix` inputs to `roc()` work OK), and yet failed to catch all possible errors by testing for `vector` predictors and responses, which can let some mistakes pass (for instance `list` inputs).

The erroneous checks were removed. Please keep in mind that pROC is designed to take *atomic vectors* as `predictor` and `response` inputs. Future versions of pROC may not accept other inputs as they currently do; however, this will be announced in advance.
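A minimal sketch of the intended input types, with plain atomic vectors:

```r
library(pROC)
# Atomic vectors for response and predictor: the supported input types
response <- c(0, 0, 0, 1, 1, 1)
predictor <- c(0.1, 0.4, 0.35, 0.8, 0.45, 0.9)
roc(response, predictor)
```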

The new version is already available on the CRAN. To update, type `update.packages()`, or `install.packages("pROC")` if you want to update pROC only.

Xavier Robin

Published Saturday, December 28, 2013 18:23 CET

Permalink: /blog/2013/12/28/proc-1.6.0.1-released

Tags:
pROC

Comments: 0

# pROC 1.6 released

Two years after the last major release 1.5, pROC 1.6 is finally available. It comes with several major enhancements:

- Power ROC tests
- Confidence intervals for arbitrary coordinates
- Speed enhancements
- Dropped S+ support
- Other changes

## Power ROC tests

This is probably the main feature of this version: power tests for ROC curves. It is now possible to compute sample size, power, significance level or minimum AUC with pROC.

```r
library(pROC)
data(aSAH)
roc1 <- roc(aSAH$outcome, aSAH$ndka)
roc2 <- roc(aSAH$outcome, aSAH$wfns)
power.roc.test(roc1, roc2, power = 0.9)
```

It is implemented with the methods proposed by Obuchowski and colleagues^{1, 2}, with the added possibility to use bootstrap or the DeLong^{3} method to compute variances and covariances. For more details and examples, see `?power.roc.test`.

As a side effect, a new `method="obuchowski"` has been implemented in the `cov` and `var` functions. More details in `?var.roc` and `?cov.roc`.

## Confidence intervals for arbitrary coordinates

It is now possible to compute confidence intervals of arbitrary coordinates, with a syntax very similar to that of the `coords` function.

```r
library(pROC)
data(aSAH)
ci.coords(aSAH$outcome, aSAH$s100b, x = "best")
# Or for much more information:
rets <- c("threshold", "specificity", "sensitivity", "accuracy", "tn", "tp",
          "fn", "fp", "npv", "ppv", "1-specificity", "1-sensitivity",
          "1-accuracy", "1-npv", "1-ppv")
ci.coords(aSAH$outcome, aSAH$wfns, x = 0.9, input = "sensitivity", ret = rets)
```

## Speed enhancements

- A faster implementation of the DeLong test was kindly contributed by Kazuki Yoshida. It is used in `roc.test`, `ci`, `var` and `cov`.
- Two new algorithms have been introduced to speed up ROC analysis, and specifically the computation of sensitivity and specificity. The same code as before is used by default (`algorithm=1`); it runs in O(T*N) (N = number of data points, T = number of thresholds of the curve), is well tested and safe. If speed is an issue for you, you may want to consider the following alternatives:
  - `algorithm=2` is a pure-R algorithm that runs in O(N) instead of O(T*N). It is typically faster when the number of thresholds of the ROC curve is above 1000, but slower otherwise.
  - `algorithm=3` is a C++ implementation of the standard algorithm of pROC, with a 3-5x speedup. It is typically the fastest for ROC curves with fewer than 3000-5000 thresholds.
  - The special value `0` means that the fastest algorithm for the specific dataset will be determined with the microbenchmark package, while `4` is a debug feature that tests all 3 algorithms and ensures they produce the same results.
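Selecting an algorithm explicitly can be sketched as follows (with the aSAH dataset shipped with pROC; timings will of course depend on your data and machine):

```r
library(pROC)
data(aSAH)
# Default, well-tested algorithm
r1 <- roc(aSAH$outcome, aSAH$s100b, algorithm = 1)
# C++ implementation of the same algorithm, typically 3-5x faster
r3 <- roc(aSAH$outcome, aSAH$s100b, algorithm = 3)
# All algorithms are meant to produce the same curve
identical(r1$sensitivities, r3$sensitivities)
```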

NOTE: because of this change, `roc` objects created with an earlier version will have to be re-created before they can be used in any bootstrap operation.

## Dropped S+ support

S+ support was dropped, due to diverging code bases and the apparent drop of S+ support by TIBCO. A version 1.5.9 will be released in the next few days on ExPASy with initial work on ROC tests. It will work only on 32-bit versions of S+ 8.2 for Windows.

## Other changes

- `coords` (and `ci.coords`) now accept a new `ret` value, `"1-accuracy"`
- `are.paired` now also checks for identical `levels`
- Fixed a warning generated in the examples
- Fixed several bugs related to `smooth.roc` curves
- Additional input data sanity checks
- Now requires R >= 2.13 (in fact since 1.5.1, thanks Emmanuel Curis for the report)
- Progress bars now default to text on Macs, where 'tcltk' seems broken (thanks Gerard Smits for the report)

As usual, you will find the new version on ExPASy (please allow a few days for the update to propagate there) and on the CRAN. To update, type `update.packages()`, or `install.packages("pROC")` if you want to update pROC only.

1. Nancy A. Obuchowski, Donna K. McClish (1997). "Sample size determination for diagnostic accuracy studies involving binormal ROC curve indices". *Statistics in Medicine*, **16**, 1529–1542. DOI: 10.1002/(SICI)1097-0258(19970715)16:13<1529::AID-SIM565>3.0.CO;2-H.
2. Nancy A. Obuchowski, Michael L. Lieber, Frank H. Wians Jr. (2004). "ROC Curves in Clinical Chemistry: Uses, Misuses, and Possible Solutions". *Clinical Chemistry*, **50**, 1118–1125. DOI: 10.1373/clinchem.2004.031823.
3. Elisabeth R. DeLong, David M. DeLong and Daniel L. Clarke-Pearson (1988). "Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach". *Biometrics*, **44**, 837–845.

Xavier Robin

Published Thursday, December 26, 2013 18:10 CET

Permalink: /blog/2013/12/26/proc-1.6-released

Tags:
pROC

Comments: 0

# Transcend class 10 vs. SanDisk Extreme Pro: a real-case scenario

I own a Pentax K-5 with Transcend Class 10 cards (2 × 16 GB + 1 × 64 GB) and I am mostly satisfied with them. However, I have been wondering if a better SD card (such as a SanDisk Extreme Pro 95 MB/s) would make a noticeable difference. I mean, it does on paper, in controlled tests. But is it really any better inside the camera? Isn't the limiting factor the camera itself? So I just bought an 8 GB Extreme Pro and tested the time necessary to write 10 pictures to the card in the camera and display the photo on the screen. This is a rather typical scenario: you shoot and wait to see the result. I also tested a slower SanDisk Class 4 card, just for fun.

## Test setup

The Pentax K-5 was fixed on a tripod, aiming at a white paper sheet and manually focused. The mode was set to M, 80 ISO, 1/80 s and F/2.8. The "all-manual" settings should eliminate most variations coming from refocusing or different exposure times. The drive mode was set to continuous shooting (Lo), taking about 1.5 images/second.

With each card, I repeated the following procedure 5 times. I put the card in the camera, turn it on, format the card, and press the trigger until 10 shots are taken. I then start the chronometer as soon as the last shot is taken (with the Android app Chronometer by REmaxer), wait until the last image appears on the screen and stop the chronometer as soon as possible. The resulting images (DNG + JPG) for each shot are about 25 MB, summing up to a total of 250 MB for each test. I should note that the cards were put in the camera in more or less random order, and changed each time (so I didn't do all the tests for a card in a row).

The following 5 cards were tested:

- SanDisk Extreme Pro 8GB Class 10 UHS-I
- SanDisk Ultra 4GB Class 4
- 2 Transcend 16GB Class 10
- Transcend 64GB Class 10

## Results

*Figure: time to display the last photo after the last of 10 shots, in seconds (lower is better).*

Unsurprisingly, the Class 4 card is the worst. I had to wait more than 13 seconds after the last shot before I could see anything displayed on the screen. That is more than 10 seconds longer than with the best cards. Clearly not a good choice.

Next thing, and still quite unsurprising, the SanDisk Extreme Pro was the fastest card, with an average of 3 seconds between the last shot and its display on screen. This is slightly better than the Transcend Class 10 cards.

The real surprise came from the Transcend cards. First, the 64 GB was significantly slower than the 16 GB ones, with about 7 seconds required to display the last photo versus only 3.5–4. Perhaps the controller isn't able to cope with all this space to allocate? Second, the two 16 GB cards performed quite differently, one being noticeably slower than the other. More precisely, it had a few "outlier" points where it would take up to 5 seconds to display the last shot. I repeated the test several more times and came to the same conclusion: only one of the cards displayed this behaviour. All the other cards had much more stable results.

## Conclusion

So, are the more expensive cards really better?

Well, the first thing we can conclude is that cheap Class 4 cards are clearly slower, at least the SanDisk Ultra. Since Transcend Class 10 cards are about the same price but much, much faster (even in the worst-case outlier scenario), the latter should be preferred.

Then, is it worth buying a SanDisk Extreme Pro that is 2.5–3 times more expensive (at the same capacity)? Well, it depends whether the 0.5–1 second gain really means something to you. It may be significant in the field, when the action is taking place *right now* and you quickly need to check that your photos are OK before continuing.

Several questions remain.

- Is the bad Transcend 16 GB an exception or is it a frequent issue? Is it because it's already more than 1.5 years old (and used pretty intensively)?
- Would I have different results if I tested a second Extreme Pro? And what about 16 GB ones?
- If I had an 8 GB Transcend, would it be faster than the 16 GB ones (and thus could compete with the Extreme Pro)?
- I bought the 64 GB card more than a year ago. Has Transcend improved their controller since?
- How would a 64 GB Extreme Pro perform? Would it have the same controller issues as the Transcend?

As you can see, my tests raise more questions than they answer. For now, I will keep the Extreme Pro in my K-5, with spare Transcends in the bag for when the 8 GB card is full (with > 150 photos per card, that shouldn't happen too often). I think this kind of setup is quite efficient: a small, fast card for everyday photos, and big, cheap ones available when more space is needed, at the cost of slightly slower shooting.

Xavier Robin

Published Friday, December 28, 2012 18:34 CET

Permalink: /blog/2012/12/28/transcend-class-10-vs-sandisk-extreme-pro-a-real-case-scenario

Tags:
Photo

Comments: 2

# Unifying a new Logitech mouse on Ubuntu 12.10

I just received my new Logitech Performance MX mouse. It is a wonderful mouse, one of the rare ones big enough to fit comfortably in my hand. Unfortunately, it comes with a *Unifying* receiver that is not well supported on Linux. Because I already have another Unifying Logitech mouse (a small portable M325), I wanted both to be associated with the same receiver (I don't want to swap receivers each time I change mice).

Rather than booting into Windows to pair both mice, I followed Tycho Andersen's instructions to pair new devices in Linux. I saved the code in a file, compiled it, and found that the receiver was on hidraw0 (I have a Logitech Unifying Device. Wireless PID:400a on hidraw1, but it is really the device named *Logitech USB Receiver* that you must select). I turned off the MX mouse, ran the program, quickly turned the mouse back on… and it works!

The next step is to set up the additional buttons (and there are quite a few of them).

Xavier Robin

Published Friday, December 28, 2012 14:06 CET

Permalink: /blog/2012/12/28/unifying-a-new-logitech-mouse-on-ubuntu-12.10

Tags:
Ubuntu

Comments: 0

# pROC 1.5.3 released

I just released a minor revision of pROC, version 1.5.3.

This version fixes the following bugs:

- The AUC specification was lost when `roc.test`, `cov` or `var` was passed an `auc` object.
- Incorrect computation of "accuracy" in `coords`.

As usual, you can find the new version on ExPASy and on the CRAN (please allow up to a few days before it is available for Windows). To update, type `update.packages()`, or `install.packages("pROC")` if you want to update pROC only.

Xavier Robin

Published Friday, August 31, 2012 11:48 CEST

Permalink: /blog/2012/08/31/proc-1.5.3-released

Tags:
pROC

Comments: 0