MDL Clustering: Unsupervised Attribute Ranking, Discretization, and Clustering

Posted by oliyiyi on 2016-08-27 10:10:31

MDL Clustering is a free software suite for unsupervised attribute ranking, discretization, and clustering based on the Minimum Description Length principle and built on the Weka Data Mining platform.

By Zdravko Markov, Central Connecticut State University.

MDL Clustering is a free software suite for unsupervised attribute ranking, discretization, and clustering built on the Weka Data Mining platform. It implements the learning algorithms as Java classes compiled into a JAR file, which can be downloaded or run directly online, provided that the Java runtime environment is installed. Running it starts the Weka command-line interface (Simple CLI), which provides access to the new algorithms as well as to the full functionality of the Weka data mining software. The data should be provided in the Weka ARFF or CSV format.
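
For example, assuming the downloaded JAR is named MDL.jar (the file name is an assumption, not taken from the original article), the suite could be started or invoked from an ordinary shell roughly as follows; the second form runs one of the suite's classes directly, with a data file and a compression cutoff as arguments, mirroring the examples shown later in this post:

> java -jar MDL.jar
> java -cp MDL.jar MDLcluster data/reuters-3class.arff 280000

The first command opens the Weka Simple CLI described above (assuming the JAR's manifest launches it, as the article implies); the second works only if the suite's classes are visible on the classpath.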



The basic idea of the algorithms is to use the Minimum Description Length (MDL) principle to compute the encoding length (complexity) of a data split, which is then used as an evaluation measure to rank attributes, discretize numeric attributes, or cluster data. Below is a brief description of the algorithms; more details, evaluations, and comparisons can be found in recent papers and a book chapter [1, 2, 3].

  • Unsupervised attribute ranking (MDLranker.class). For each attribute, the data are split by its values, i.e., a clustering is created in which each cluster contains the instances that share the same value of that attribute. The MDL of this clustering is used as the evaluation measure for the attribute, with the best attribute being the one that minimizes the corresponding MDL score (an illustrative sketch of this idea follows the list).
  • Unsupervised discretization (MDLdiscretize.class). Numeric attributes are discretized by splitting the range of their values into two intervals, determined by the breakpoint that minimizes the MDL of the resulting data split.
  • Hierarchical clustering (MDLcluster.class). The algorithm starts with the data split produced by the attribute that minimizes MDL and recursively applies the same procedure to the resulting splits, generating a hierarchical clustering tree similar to a decision tree. Growth of the tree is controlled by a parameter that evaluates the information compression at each node, computed as the difference between the code length of the data at the current node and the MDL of the attribute that produces the data split at that node. If the compression falls below a specified cutoff value, the tree stops growing at that point and a leaf node is created.
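
To make the ranking idea concrete, here is a minimal, illustrative Java sketch (referenced in the first bullet above). The class and method names are hypothetical, and the entropy-based code length used below is only a simple stand-in for the actual MDL encoding implemented in MDLranker.class and described in [1]: the sketch splits the data on each attribute and prefers the attribute whose split yields the shortest total code for the remaining attribute values.

import java.util.*;

public class MdlRankSketch {
    // data[i][j] = nominal value of attribute j for instance i.
    static double splitCodeLength(String[][] data, int splitAttr) {
        // Group instance indices into clusters by their value of the split attribute.
        Map<String, List<Integer>> clusters = new HashMap<>();
        for (int i = 0; i < data.length; i++)
            clusters.computeIfAbsent(data[i][splitAttr], k -> new ArrayList<>()).add(i);

        double bits = 0.0;
        for (List<Integer> cluster : clusters.values()) {
            for (int a = 0; a < data[0].length; a++) {
                if (a == splitAttr) continue;
                // Count the values of attribute a inside this cluster ...
                Map<String, Integer> counts = new HashMap<>();
                for (int i : cluster) counts.merge(data[i][a], 1, Integer::sum);
                // ... and charge an entropy-based code: n * H(attribute | cluster) bits.
                for (int c : counts.values())
                    bits -= c * Math.log((double) c / cluster.size()) / Math.log(2);
            }
        }
        return bits;
    }

    // Rank attributes in ascending order of code length: the best attribute minimizes it.
    static Integer[] rank(String[][] data) {
        Integer[] attrs = new Integer[data[0].length];
        for (int a = 0; a < attrs.length; a++) attrs[a] = a;
        Arrays.sort(attrs, Comparator.comparingDouble(a -> splitCodeLength(data, a)));
        return attrs;
    }
}

The same kind of score can drive the discretization and clustering steps: try candidate breakpoints (or recursive splits) and keep the one with the smallest code length.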

The algorithms' computational complexity is linear in the number of data instances and quadratic in the total number of distinct attribute values in the data. However, it is substantially reduced by an efficient implementation using bit-level parallelism. This makes the suite suitable not only for processing a large number of instances, but also for a large number of attributes, which is typical in text and web mining.
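
As a hedged illustration of what bit-level parallelism can mean here (a sketch of the general technique, not the suite's actual code), each attribute-value can be stored as a bitset over the instances; the number of instances that share two attribute-values is then obtained by AND-ing two bitsets and counting the set bits, processing 64 instances per machine word:

import java.util.BitSet;

public class BitsetCountSketch {
    // Bit i is set iff instance i has the given value for the given attribute.
    static BitSet valueIndex(String[][] data, int attr, String value) {
        BitSet b = new BitSet(data.length);
        for (int i = 0; i < data.length; i++)
            if (value.equals(data[i][attr])) b.set(i);
        return b;
    }

    // Number of instances having both attribute-values: bitwise AND plus popcount.
    static int coOccurrences(BitSet a, BitSet b) {
        BitSet both = (BitSet) a.clone();   // clone so the cached indexes stay intact
        both.and(b);
        return both.cardinality();
    }
}

With such indexes, the per-split counts needed by the evaluation measure can be computed without rescanning the raw instances.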

In addition to the above algorithms, the suite includes a utility for creating data files from text or web documents and a lab project for web document clustering that illustrates the basic steps of document collection, creation of the vector space model, data preprocessing, clustering, and attribute selection (a minimal sketch of the vector-space step is given below).
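
The suite's own utility is not shown here, but the vector-space step it automates amounts to something like the following hedged sketch, which writes a binary term-presence ARFF file; the document names, tokenization, and attribute layout are illustrative choices, not the utility's actual behavior:

import java.io.PrintWriter;
import java.util.*;

public class DocsToArffSketch {
    public static void main(String[] args) throws Exception {
        // Toy document collection; a real run would read text or web documents.
        Map<String, String> docs = new LinkedHashMap<>();
        docs.put("d1", "money market rate");
        docs.put("d2", "trade deficit talks");

        // Vocabulary = union of lower-cased tokens across all documents.
        Set<String> vocab = new TreeSet<>();
        for (String text : docs.values())
            vocab.addAll(Arrays.asList(text.toLowerCase().split("\\W+")));

        // One binary {0,1} attribute per term, one data row per document.
        try (PrintWriter out = new PrintWriter("docs.arff")) {
            out.println("@relation documents");
            for (String term : vocab) out.println("@attribute " + term + " {0,1}");
            out.println("@data");
            for (String text : docs.values()) {
                Set<String> tokens = new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
                StringBuilder row = new StringBuilder();
                for (String term : vocab) row.append(tokens.contains(term) ? "1," : "0,");
                out.println(row.substring(0, row.length() - 1));
            }
        }
    }
}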

A set of popular benchmark data sets is provided for experiments and comparisons. Below are some examples of running the MDL clustering algorithm on a 3-class version of the Reuters data in the Weka command-line interface.

> java MDLcluster data/reuters-3class.arff 280000
trade=0 (584469.05)
  rate=0 (277543.73) [339,18,30] money
  rate=1 (259602.51) [168,0,177] interest
trade=1 (206999.39) [101,301,12] trade

The above clusters are explicitly described by attribute values, which makes the clustering models easy to understand and use. The tree has three clusters (exactly the number of classes) and identifies the two most important attributes in the data. Further, if we are interested in the internal structure of these clusters, we can lower the compression threshold and obtain the following tree, which reveals another important attribute (market) that splits the “interest” cluster (rate=1) in two.

> java MDLcluster data/reuters-3class.arff 250000
trade=0 (584469.05)
  rate=0 (277543.73)
    mln=0 (160192.77) [178,10,28] money
    mln=1 (129134.46) [161,8,2] money
  rate=1 (259602.51)
    market=0 (117196.81) [66,0,117] interest
    market=1 (107679.77) [102,0,60] money
trade=1 (206999.39) [101,301,12] trade

The clustering trees produced by MDLcluster in an unsupervised manner are very similar to the decision trees created by supervised learning algorithms. For example, the following tree is produced by Weka's J48 algorithm.

> java weka.classifiers.trees.J48 -t data/reuters-3class.arff -M 150
trade = 0
|   rate = 0: money (387.0/48.0)
|   rate = 1
|   |   market = 0: interest (183.0/66.0)
|   |   market = 1: money (162.0/60.0)
trade = 1: trade (414.0/113.0)

This similarity indicates that the MDL evaluation measure can identify the important attributes in the data without using class information. A more detailed study [1] shows that MDL-based unsupervised attribute ranking performs comparably to supervised ranking based on information gain (as used by decision tree learning algorithms). Another empirical study [2] shows that the MDL clustering algorithm compares favorably with k-means and EM on popular benchmark data and performs particularly well on binary and sparse data (e.g., text and web documents).

References

  • [1] Zdravko Markov. MDL-based Unsupervised Attribute Ranking. In Proceedings of the 26th International Florida Artificial Intelligence Research Society Conference (FLAIRS-26), St. Pete Beach, Florida, USA, May 22-24, 2013. AAAI Press, pp. 444-449. http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS13/paper/view/5845/6115
  • [2] Zdravko Markov. MDL-Based Hierarchical Clustering. In Proceedings of the IEEE 14th International Conference on Machine Learning and Applications (ICMLA 2015), Miami, Florida, USA, December 9-11, 2015, pp. 464-467.
  • [3] Zdravko Markov and Daniel T. Larose. MDL-Based Model and Feature Evaluation. Chapter 4 of Data Mining the Web: Uncovering Patterns in Web Content, Structure, and Usage. Wiley, April 2007, ISBN: 978-0-471-66655-4.

Bio: Zdravko Markov is a professor of Computer Science at Central Connecticut State University, where he teaches courses in Programming, Computer Architecture, Artificial Intelligence, Machine Learning, and Data and Web Mining. His research area is Artificial Intelligence with a focus on Machine Learning and Data and Web Mining, where he is developing software and project-based frameworks for teaching core AI topics. Dr. Markov has published 4 books and more than 60 research papers in conference proceedings and journals. His most recent book (co-authored with Daniel Larose) is "Data Mining the Web: Uncovering Patterns in Web Content, Structure, and Usage", published by Wiley in 2007.



Kamize posted on 2016-08-30 13:58:32 (from mobile), in reply to oliyiyi's post of 2016-08-27 10:10:
Thanks for sharing!
