Thread starter: oliyiyi

[LaTeX edition] Casual posting thread

1041
oliyiyi posted on 2015-10-2 15:27:51
We study $k$-divisible partition structures, which are families of random set partitions whose block sizes are divisible by an integer $k=1,2,\ldots$. In this setting, exchangeability corresponds to the usual invariance under relabeling by arbitrary permutations; however, for $k>1$, the ordinary deletion maps.
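As a concrete illustration of the combinatorial object under study, the following sketch (hypothetical helper names; plain brute-force enumeration, not the paper's machinery) lists set partitions of {1, …, n} whose block sizes are all divisible by k:

```python
def partitions(elems):
    """Recursively enumerate all set partitions of a list of elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        # place `first` into each existing block in turn...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ...or open a new singleton block for it
        yield [[first]] + part

def k_divisible_partitions(n, k):
    """Partitions of {1, ..., n} whose block sizes are all divisible by k."""
    return [p for p in partitions(list(range(1, n + 1)))
            if all(len(block) % k == 0 for block in p)]

print(len(k_divisible_partitions(4, 2)))
```

For n = 4 and k = 2 the admissible partitions are the single block of size 4 plus the three pairings into two blocks of size 2; for k = 1 the count reduces to the ordinary Bell number.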

1042
oliyiyi posted on 2015-10-2 15:28:47
Supervised Hashing Using Graph Cuts and Boosted Decision Trees
To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that preserve label-based similarity in the binary Hamming space.
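To make the retrieval step concrete, here is a minimal sketch of ranking database items by Hamming distance to a query code (the code values and image names are made up; learning the codes with graph cuts and boosted decision trees, the paper's actual contribution, is not reproduced here):

```python
def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as integers."""
    return bin(a ^ b).count("1")

# hypothetical 8-bit codes for a query image and two database images
query = 0b10110100
db = {"img1": 0b10110110, "img2": 0b01001011}

# nearest-neighbour retrieval = sort by Hamming distance to the query
ranked = sorted(db, key=lambda name: hamming_distance(query, db[name]))
print(ranked)
```

The XOR-plus-popcount trick is what makes Hamming-space retrieval fast at scale: the distance is a handful of bit operations per database item.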

1043
oliyiyi posted on 2015-10-2 15:31:09
Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixels Measurement Models
This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model of depth measurement by ToF cameras that also accounts for depth-discontinuity artifacts due to the mixed-pixel effect. Such a model is exploited within both ML and MAP-MRF frameworks for ToF

1044
oliyiyi posted on 2015-10-2 15:36:59
Abstract
Papers evaluating measures of explained variation, or similar indices, almost invariably use independence from censoring as the most important criterion, and they almost always end up suggesting that some measures meet this criterion and some do not, usually concluding that the former are better than the latter. As a consequence, users are offered measures that cannot be used with time-dependent covariates and effects, not to mention extensions to repeated events or multi-state models. We explain in this paper that the aforementioned criterion is of no use in studying such measures, because it simply favors those that make an implicit assumption of a model being valid everywhere. Measures not making such an assumption are disqualified, even though they are better in every other respect. We show that if these, allegedly inferior, measures are allowed to make the same assumption, they are easily corrected to satisfy the ‘independent-from-censoring’ criterion. Even better, it is enough to make such an assumption only for times greater than the last observed failure time τ, which, in contrast with the ‘preferred’ measures, makes it possible to use all the modeling flexibility up to τ and assume whatever one wants after τ. As a consequence, we claim that some of the measures preferred as better in the existing reviews are in fact inferior.

1045
oliyiyi posted on 2015-10-2 15:38:48
1 Effect of censoring
New measures of explained variation for survival data have been proposed continually since 1982, when Frank Harrell adapted the concordance index for use with survival data [1]. From time to time, selected measures are compared, and some are declared better than others. Almost invariably (one exception being Schemper and Henderson [2]), the main criterion in such comparisons, and indeed even in papers proposing new measures, is bias under censoring [3-6].

This sounds reasonable, but, unfortunately, the effect of censoring on measures of explained variation is widely misunderstood. The issue is easily explained via an example. The left plot in Figure 1 shows two survival curves generated from an exponential model with a binary covariate. The data are censored after a certain time point τ (type I censoring). We fitted a Cox model to these data and calculated the Kent and O'Quigley measure [7] (denoted KOQ from here on), usually called a measure of information gain. The measure is regarded as unbiased under censoring. This means that we would obtain the same result if there were no censoring (in expectation; in samples we of course always see random variation), provided the model was correct after τ. And as we see in the right plot, where the data are complete, this is exactly what happens with the KOQ measure (its value is 0.173). Most other measures do not have this property, which seemingly leads to the conclusion that the KOQ measure and its derivatives are better than the others. What is forgotten here is that this property is a consequence of the fact that the measure is unbiased IF the model holds after τ, and that the measure inherently makes such an assumption. In other words, KOQ extrapolates the data beyond τ.
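The setup of this example can be reproduced with a small simulation. The sketch below (parameter values and function name are assumptions) generates exponential survival times with a binary covariate and applies type I censoring at τ; it illustrates the data-generating mechanism only, not the computation of the KOQ measure:

```python
import math
import random

random.seed(0)

def simulate(n, beta, tau):
    """Exponential survival with a binary covariate x and type I censoring.
    The hazard is exp(beta * x), so for beta > 0 the x = 1 group fails sooner.
    Returns tuples (x, observed_time, event), event=False meaning censored."""
    data = []
    for _ in range(n):
        x = random.randint(0, 1)
        t = random.expovariate(math.exp(beta * x))  # true event time
        observed = min(t, tau)                      # type I censoring at tau
        event = t <= tau                            # False => censored at tau
        data.append((x, observed, event))
    return data

data = simulate(2000, beta=1.0, tau=1.5)
censored = sum(1 for _, _, e in data if not e)
print(f"censored after tau: {censored / len(data):.2%}")
```

Every subject still at risk at τ is censored there, which is exactly why any measure computed on these data must either stop at τ or make an assumption about what the model does beyond it.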

1046
oliyiyi posted on 2015-10-2 15:39:26
Kent and O'Quigley (KOQ) measure on censored and non-censored data sets.

1047
oliyiyi posted on 2015-10-2 15:54:48
This paper presents, extends, and studies a model for repeated, overdispersed time-to-event outcomes, subject to censoring. Building upon work by Molenberghs, Verbeke, and Demétrio (2007) and Molenberghs et al. (2010), gamma and normal random effects are included in a Weibull model, to account for overdispersion and between-subject effects, respectively. Unlike in these earlier papers, censoring is allowed for, and two estimation methods are presented. The partial marginalization approach to full maximum likelihood of Molenberghs et al. (2010) is contrasted with pseudo-likelihood estimation. A limited simulation study is conducted to examine the relative merits of these estimation methods. The modeling framework is employed to analyze data on recurrent asthma attacks in children on the one hand and on survival in cancer patients on the other.
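The combined model can be sketched in formulas. The notation below is an assumption modeled on the general "combined model" form of Molenberghs et al., not a transcription of the paper: a Weibull outcome for subject i at occasion j, with a gamma frailty θ capturing overdispersion and normal random effects b capturing between-subject variability (censoring then enters through the usual survival-function contribution to the likelihood):

```latex
\begin{aligned}
  f(y_{ij}\mid \theta_{ij}, b_i)
    &= \lambda\rho\, y_{ij}^{\rho-1}\,\theta_{ij}\,
       e^{x_{ij}'\beta + z_{ij}'b_i}
       \exp\!\bigl(-\lambda\, y_{ij}^{\rho}\,\theta_{ij}\,
       e^{x_{ij}'\beta + z_{ij}'b_i}\bigr),\\[2pt]
  \theta_{ij} &\sim \mathrm{Gamma}(\alpha,\beta^{*}), \qquad
  b_i \sim N(0, D).
\end{aligned}
```

Setting the gamma frailty to a constant recovers a Weibull-normal frailty model, and dropping the normal effects recovers a Weibull-gamma (negative-binomial-type) overdispersion model, which is what makes the combined formulation attractive.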

1048
oliyiyi posted on 2015-10-2 15:55:25
This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
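As a rough companion to the abstract, here is a sketch of the familiar Wald-based closed-form sample size calculation on the log odds-ratio scale (function name, parameterization, and the default one-sided α and power are assumptions; the paper's score-test formula is not reproduced):

```python
import math
from statistics import NormalDist

def ss_noninferiority_or(p_c, or_alt, or_margin, alpha=0.025, power=0.9):
    """Approximate per-group sample size for a non-inferiority test on the
    odds ratio, using the Wald (log odds-ratio) variance approximation.
    p_c: anticipated event probability in the control group;
    or_alt: odds ratio under the alternative; or_margin: margin odds ratio."""
    odds_t = or_alt * p_c / (1 - p_c)   # treatment odds under the alternative
    p_t = odds_t / (1 + odds_t)         # treatment event probability
    z = NormalDist().inv_cdf
    # large-sample variance of the log odds-ratio estimate (per subject)
    var = 1 / (p_c * (1 - p_c)) + 1 / (p_t * (1 - p_t))
    effect = math.log(or_alt) - math.log(or_margin)
    return math.ceil((z(1 - alpha) + z(power)) ** 2 * var / effect ** 2)

# e.g. 50% control event rate, true OR = 1, margin OR = 0.5
print(ss_noninferiority_or(0.5, 1.0, 0.5))
```

Consistent with the abstract's observation, the required n blows up as the margin odds ratio approaches the alternative, since the denominator (log OR difference) shrinks toward zero.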

1049
oliyiyi posted on 2015-10-2 15:56:02
In this paper, we study the least squares (LS) estimator in a linear panel regression model with unknown number of factors appearing as interactive fixed effects. Assuming that the number of factors used in estimation is larger than the true number of factors in the data, we establish the limiting distribution of the LS estimator for the regression coefficients as the number of time periods and the number of cross-sectional units jointly go to infinity. The main result of the paper is that under certain assumptions, the limiting distribution of the LS estimator is independent of the number of factors used in the estimation as long as this number is not underestimated. The important practical implication of this result is that for inference on the regression coefficients, one does not necessarily need to estimate the number of interactive fixed effects consistently.
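In symbols, the setup discussed can be sketched as follows (notation assumed, following the standard interactive-fixed-effects literature rather than copied from the paper): the outcome for unit i at time t loads on unobserved common factors, and the LS estimator minimizes jointly over the coefficients, the loadings, and the factors,

```latex
Y_{it} = X_{it}'\beta + \lambda_i' f_t + e_{it},
\qquad
(\hat\beta, \hat\Lambda, \hat F)
  = \arg\min_{\beta,\,\Lambda,\,F}\;
    \sum_{i=1}^{N}\sum_{t=1}^{T}
    \bigl(Y_{it} - X_{it}'\beta - \lambda_i' f_t\bigr)^{2},
```

where the number of columns of F used in estimation may exceed the true number of factors. The paper's main result is that, under its assumptions, the limiting distribution of \(\hat\beta\) as \(N, T \to \infty\) jointly does not depend on this chosen number, as long as it is not smaller than the true one.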

1050
oliyiyi posted on 2015-10-3 07:26:51
In the aforementioned example, we used type I censoring to illustrate our point. In practice, we usually see random censoring, but the problem does not go away because of that. There is often a time of the last failure, and censoring occurs both before and after that time. Apart from increasing variation, censoring before the last observed failure time should not be a problem in estimating any quantities of interest, measures of explained variation included. But censoring after the last observed failure time can be a problem. For example, if a study ends at a certain pre-specified time τ, and there is a positive probability that times to event are greater than τ, then we are in a similar situation as with type I censoring. This is also always the case when, in simulations, we censor data using a uniform censoring distribution defined on an interval (0, τ). In this paper, we have such a τ in mind.
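The last point can be checked directly: under uniform censoring on (0, τ), every observed time, event or censored, falls at or below τ, so the sample carries no information about the survival distribution beyond τ. A minimal sketch (parameter values assumed):

```python
import random

random.seed(1)
tau = 2.0  # censoring times drawn from Uniform(0, tau)
obs = []
for _ in range(1000):
    t = random.expovariate(1.0)   # true event time; P(T > tau) = e^-2 > 0
    c = random.uniform(0.0, tau)  # random censoring time
    obs.append((min(t, c), t <= c))

# the last observed failure, and hence all usable information, lies below tau
last_failure = max(time for time, event in obs if event)
print(last_failure)
```

Even though the true event-time distribution has positive mass beyond τ, no observation ever reaches it, which is exactly the situation the text says must be handled by an explicit assumption about the model after τ.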
