
How to correctly select a sample from a huge dataset in machine learning

By Gianluca Malato, Data Scientist, fiction author and software developer.

Photo by Lukas from Pexels

In machine learning, we often need to train a model with a very large dataset of thousands or even millions of records. The larger a dataset, the greater its statistical significance and the more information it carries, but we rarely ask ourselves: is such a huge dataset really useful? Or could we reach a satisfying result with a smaller, much more manageable one? Selecting a reasonably small dataset that carries the right amount of information can save us real time and money.

Let’s try a simple thought experiment. Imagine that we are in a library and want to learn Dante Alighieri’s Divina Commedia word by word.

We have two options:

  • Grab the first edition we find and start studying from it
  • Grab as many editions as possible and study from all of them

The right answer is very clear. Why would we want to study from different books when just one of them is enough?

Machine learning works the same way. We have a model that learns something and, exactly like us, it needs time to do so. What we want is the minimum amount of information required to learn the phenomenon properly, without wasting our time. Redundant information carries no business value for us.

But how can we be sure that our edition isn’t corrupted or incomplete? We must perform some kind of high-level comparison with the population made up of the other editions. For example, we could check the number of canti and cantiche. If our book has three cantiche and each of them has 33 canti, maybe it’s complete and we can safely learn from it.

What we are doing is learning from a sample (the single Divina Commedia edition) and checking its statistical significance (the high-level comparison with the other books).

The exact same concept can be applied in machine learning. Instead of learning from a huge population of many records, we can take a sub-sample of it while keeping all its statistics intact.

Statistical framework

In order to work with a small, easy-to-handle dataset, we must be sure we don’t lose statistical significance with respect to the population. A dataset that is too small won’t carry enough information to learn from, while one that is too large can be time-consuming to analyze. So how can we find a good compromise between size and information?

Statistically speaking, we want our sample to preserve the probability distribution of the population at a reasonable significance level. In other words, if we look at the histogram of the sample, it must look the same as the histogram of the population.

There are many ways to accomplish this goal. The simplest is to take a random sub-sample with uniform probability and check whether it is significant or not. If it is reasonably significant, we keep it; if not, we take another sample and repeat the procedure until we reach a good significance level.
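A minimal sketch of that loop in R might look like the following. The helper sample_is_significant is hypothetical; it stands in for the per-column tests described in the next sections.

# Draw uniform random sub-samples of size n until one passes the
# significance check. 'sample_is_significant' is a hypothetical
# helper standing in for the per-column tests described below.
take_significant_sample = function(dataset, n, sample_is_significant) {
  repeat {
    candidate = dataset[sample(1:nrow(dataset), n), ]
    if (sample_is_significant(candidate, dataset)) {
      return(candidate)
    }
  }
}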

Multivariate vs. multiple univariate

If we have a dataset made of N variables, it can be binned into an N-variate histogram, and so can every sub-sample we take from it.

This operation, although academically correct, can be very difficult to perform in practice, especially if our dataset mixes numerical and categorical variables.

That’s why I prefer a simpler approach that usually introduces an acceptable approximation. We consider each variable independently of the others. If each of the single, univariate histograms of the sample columns is comparable with the corresponding histogram of the population columns, we can assume the sample is not biased.

The comparison between sample and population is then made this way:

  • Take one variable from the sample
  • Compare its probability distribution with the probability distribution of the same variable of the population
  • Repeat with all the variables

Some of you may think that we are neglecting the correlations between variables. In my opinion, that’s not entirely true if we select our sample uniformly. It is well known that a uniformly selected sub-sample will, for large sample sizes, reproduce the probability distribution of the original population. Powerful resampling methods like the bootstrap are built around this concept (see my previous article for more information).

Comparing sample and population

As I said before, for each variable we must compare its probability distribution on the sample with the probability distribution on the population.

The histograms of categorical variables can be compared using Pearson’s chi-square test, while the cumulative distribution functions of the numerical variables can be compared using the Kolmogorov-Smirnov test.

Both statistical tests work under the null hypothesis that the sample has the same distribution as the population. Since a sample is made of many columns and we want all of them to be significant, we reject the sample if the p-value of at least one of the tests falls below the usual 5% significance level. In other words, we want every column to pass the significance test in order to accept the sample as valid.
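As an illustration, here is how the two tests can be called in R for a single column. The names dataset (the population) and my_sample (the candidate sample) are assumptions that match the example in the next section; both are taken to have a numerical column x1 and a factor column x2.

# Kolmogorov-Smirnov test for a numerical column: the null hypothesis
# is that the sample and the population follow the same distribution.
ks.test(my_sample$x1, dataset$x1)$p.value

# Pearson's chi-square goodness-of-fit test for a categorical column:
# compare the sample's category counts with the population's proportions.
chisq.test(table(my_sample$x2), p = table(dataset$x2) / nrow(dataset))$p.value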

R Example

Let’s move from theory to practice. As usual, I’ll use an example in R language. What I’m going to show you is how the statistical tests can give us a warning when sampling is not done properly.

Data simulation

Let’s simulate some (huge) data. We’ll create a data frame with 1 million records and 2 columns. The first column has its first 500,000 values drawn from a normal distribution, while the remaining 500,000 are drawn from a uniform distribution. This variable is deliberately biased, and it will help me explain the concept of statistical significance later.

The other field is a factor variable made of the first 10 letters of the alphabet, uniformly distributed.

Here follows the code to create such a dataset.

set.seed(100)
N = 1e6

dataset = data.frame(
  # x1 variable has a bias. The first 500k values are taken
  # from a normal distribution, while the remaining 500k
  # are taken from a uniform distribution
  x1 = c(
    rnorm(N/2, 0, 1),
    runif(N/2, 0, 1)
  ),
  # Categorical variable made from the first 10 letters
  x2 = sample(LETTERS[1:10], N, replace = TRUE)
)

Create a sample and check its significance

Now we can try to create a sample of 10,000 records from the original dataset and check its significance.

Remember: numerical variables must be checked with the Kolmogorov-Smirnov test, while categorical variables (i.e. factors in R) need Pearson’s chi-square test.

For each test, we’ll store its p-value in a named list for the final check. If all the p-values are greater than 5%, we can say that the sample is not biased.
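Putting the pieces together, a minimal sketch of this check might look like the code below. It assumes the dataset data frame created above; the names my_sample and p_values are my own.

# Draw a candidate sample of 10,000 records
my_sample = dataset[sample(1:N, 10000), ]

# Store each column's p-value in a named list
p_values = list(
  # Kolmogorov-Smirnov test for the numerical variable
  x1 = ks.test(my_sample$x1, dataset$x1)$p.value,
  # Chi-square test for the categorical variable, against the
  # population's category proportions
  x2 = chisq.test(table(my_sample$x2), p = table(dataset$x2) / N)$p.value
)

# Accept the sample only if every p-value is greater than 5%
all(unlist(p_values) > 0.05)

Note that ks.test may warn about ties here, since the sample’s values also appear in the population; the p-value it returns is approximate but still usable for this kind of check.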

