Data mining is the process of discovering
patterns in large data sets involving methods
at the intersection of machine learning, statistics,
and database systems. Data mining is an interdisciplinary
subfield of computer science and statistics
with the overall goal of extracting information
(with intelligent methods) from a data set
and transforming the information into a comprehensible
structure for further use. Data mining is
the analysis step of the "knowledge discovery
in databases" process, or KDD. Aside from
the raw analysis step, it also involves database
and data management aspects, data pre-processing,
model and inference considerations, interestingness
metrics, complexity considerations, post-processing
of discovered structures, visualization, and
online updating. The difference between data
analysis and data mining is that data analysis
summarizes historical data, for example by
analyzing the effectiveness of a marketing
campaign; data mining, in contrast, uses
specific machine learning and statistical
models to predict the future and discover
patterns in the data.
The term "data mining"
is in fact a misnomer, because the goal is
the extraction of patterns and knowledge from
large amounts of data, not the extraction
(mining) of data itself. It also is a buzzword
and is frequently applied to any form of large-scale
data or information processing (collection,
extraction, warehousing, analysis, and statistics)
as well as any application of computer decision
support systems, including artificial intelligence
(e.g., machine learning) and business intelligence.
The book Data Mining: Practical Machine Learning
Tools and Techniques with Java (which covers
mostly machine learning material) was originally
to be named just Practical Machine Learning,
and the term data mining was only added for
marketing reasons. Often the more general
terms (large scale) data analysis and analytics
– or, when referring to actual methods,
artificial intelligence and machine learning
– are more appropriate.
The actual data mining task is the semi-automatic
or automatic analysis of large quantities
of data to extract previously unknown, interesting
patterns such as groups of data records (cluster
analysis), unusual records (anomaly detection),
and dependencies (association rule mining,
sequential pattern mining). This usually involves
using database techniques such as spatial
indices. These patterns can then be seen as
a kind of summary of the input data, and may
be used in further analysis or, for example,
in machine learning and predictive analytics.
For example, the data mining step might identify
multiple groups in the data, which can then
be used to obtain more accurate prediction
results by a decision support system. Neither
data collection, data preparation, nor result
interpretation and reporting is part of the
data mining step; these do, however, belong to
the overall KDD process as additional steps.
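As a minimal sketch of this idea (using scikit-learn
on synthetic data; the data, features, and parameter
choices are illustrative assumptions, not part of any
standard), groups discovered by a clustering step can
be fed to a downstream predictor as an extra feature:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # toy records
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy target

# Data mining step: discover groups in the data (cluster analysis).
clusters = KMeans(n_clusters=3, random_state=0).fit_predict(X)

# Downstream prediction: use the discovered groups as a feature.
X_aug = np.column_stack([X, clusters])
model = LogisticRegression().fit(X_aug, y)
print(model.score(X_aug, y))  # accuracy on the toy training data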
The related terms data dredging, data fishing,
and data snooping refer to the use of data
mining methods to sample parts of a larger
population data set that are (or may be) too
small for reliable statistical inferences
to be made about the validity of any patterns
discovered. These methods can, however, be
used in creating new hypotheses to test against
the larger data populations.
== Etymology ==
In the 1960s, statisticians and economists
used terms like data fishing or data dredging
to refer to what they considered the bad practice
of analyzing data without an a priori hypothesis.
The term "data mining" was used in a similarly
critical way by economist Michael Lovell in
an article published in the Review of Economic
Studies in 1983. Lovell indicates that the practice
"masquerades under a variety of aliases, ranging
from "experimentation" (positive) to "fishing"
or "snooping" (negative)".
The term data mining
appeared around 1990 in the database community,
generally with positive connotations. For
a short time in the 1980s, the phrase "database
mining"™ was used, but because it was trademarked
by HNC, a San Diego-based company, to pitch
their Database Mining Workstation, researchers
consequently turned to data mining. Other
terms used include data archaeology, information
harvesting, information discovery, knowledge
extraction, etc. Gregory Piatetsky-Shapiro
coined the term "knowledge discovery in databases"
for the first workshop on the same topic (KDD-1989),
and this term became more popular in the AI and
machine learning communities. However, the term
data mining became more popular in the business
and press communities. Currently, the terms
data mining and knowledge discovery are used
interchangeably.
In the academic community, the major forums
for research emerged in 1995, when the First
International Conference on Data Mining and
Knowledge Discovery (KDD-95) was held in
Montreal under AAAI sponsorship. It was co-chaired
by Usama Fayyad and Ramasamy Uthurusamy. A
year later, in 1996, Usama Fayyad launched
the Kluwer journal Data Mining and
Knowledge Discovery as its founding editor-in-chief.
Later he started the SIGKDD newsletter SIGKDD
Explorations. The KDD International conference
became the primary high-quality conference
in data mining, with an acceptance rate of
research paper submissions below 18%. The
journal Data Mining and Knowledge Discovery
is the primary research journal of the field.
== Background ==
The manual extraction of patterns from data
has occurred for centuries. Early methods
of identifying patterns in data include Bayes'
theorem (1700s) and regression analysis (1800s).
The proliferation, ubiquity, and increasing
power of computer technology have dramatically
increased the ability to collect, store, and
manipulate data. As data sets have grown in size and
complexity, direct "hands-on" data analysis
has increasingly been augmented with indirect,
automated data processing, aided by other
discoveries in computer science, such as neural
networks, cluster analysis, genetic algorithms
(1950s), decision trees and decision rules
(1960s), and support vector machines (1990s).
Data mining is the process of applying these
methods with the intention of uncovering hidden
patterns in large data sets. It bridges the
gap from applied statistics and artificial
intelligence (which usually provide the mathematical
background) to database management by exploiting
the way data is stored and indexed in databases
to execute the actual learning and discovery
algorithms more efficiently, allowing such
methods to be applied to ever larger data
sets.
== Process ==
The knowledge discovery in databases (KDD)
process is commonly defined with the stages:
Selection
Pre-processing
Transformation
Data mining
Interpretation/evaluation.
Many variations on this theme exist, however,
such as the Cross-Industry Standard Process
for Data Mining (CRISP-DM), which defines six phases:
Business understanding
Data understanding
Data preparation
Modeling
Evaluation
Deployment
There are also simplified processes such as
(1) Pre-processing, (2) Data Mining, and (3)
Results Validation.
Polls conducted in 2002, 2004, 2007 and 2014
show that the CRISP-DM methodology is the
leading methodology used by data miners. The
only other data mining standard named in these
polls was SEMMA. However, 3–4 times as many
people reported using CRISP-DM. Several teams
of researchers have published reviews of data
mining process models, and Azevedo and Santos
conducted a comparison of CRISP-DM and SEMMA
in 2008.
=== Pre-processing ===
Before data mining algorithms can be used,
a target data set must be assembled. As data
mining can only uncover patterns actually
present in the data, the target data set must
be large enough to contain these patterns
while remaining concise enough to be mined
within an acceptable time limit. A common
source for data is a data mart or data warehouse.
Pre-processing is essential for analyzing the
multivariate data sets before data mining.
The target set is then cleaned. Data cleaning
removes the observations containing noise
and those with missing data, as in the sketch below.
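The following minimal sketch uses pandas on toy
records; the column names, values, and the plausible
age range are illustrative assumptions:

import pandas as pd

# Toy records with a missing value and an implausible (noisy) entry.
df = pd.DataFrame({
    "age": [34, 29, None, 41, 999],
    "spend": [120.0, 80.5, 95.0, 60.0, 75.0],
})

df = df.dropna()                    # remove observations with missing data
df = df[df["age"].between(0, 120)]  # remove records containing obvious noise
print(df)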
=== Data mining ===
Data mining involves six common classes of
tasks:
Anomaly detection (outlier/change/deviation
detection) – The identification of unusual
data records that might be interesting, or of
data errors that require further investigation.
Association rule learning (dependency modelling)
– Searches for relationships between variables.
For example, a supermarket might gather data
on customer purchasing habits. Using association
rule learning, the supermarket can determine
which products are frequently bought together
and use this information for marketing purposes.
This is sometimes referred to as market basket
analysis; a minimal sketch appears after this list.
Clustering – is the task of discovering
groups and structures in the data that are
in some way or another "similar", without
using known structures in the data.
Classification – is the task of generalizing
known structure to apply to new data. For
example, an e-mail program might attempt to
classify an e-mail as "legitimate" or as "spam".
Regression – attempts to find a function
that models the data with the least error;
that is, it estimates the relationships
among data or datasets.
Summarization – providing a more compact
representation of the data set, including
visualization and report generation.
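To make the market-basket example concrete, here is a
minimal sketch in plain Python (the basket contents and
the 0.5 support threshold are illustrative assumptions)
that counts item pairs and derives support and
confidence, the basic measures behind association rules:

from itertools import combinations
from collections import Counter

# Toy transactions standing in for supermarket baskets.
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "butter", "eggs"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

n = len(baskets)
for (a, b), count in pair_counts.items():
    support = count / n                                # fraction of baskets containing both
    confidence = count / sum(a in s for s in baskets)  # estimate of P(b | a)
    if support >= 0.5:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")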
=== Results validation ===
Data mining can unintentionally be misused,
producing results that appear significant but
that do not actually predict future behaviour,
cannot be reproduced on a new sample of data,
and bear little use. Often this results from
investigating too many hypotheses without
performing proper statistical hypothesis
testing. A simple version of this problem in
machine learning is known as overfitting, but
the same problem can arise at different phases
of the process, and thus a train/test split
– when applicable at all – may not be
sufficient to prevent it from happening.
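A minimal sketch of why testing many hypotheses
without correction produces apparently significant but
meaningless results (pure noise, using NumPy and SciPy;
the sample sizes and thresholds are illustrative
assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(size=100)            # a target with no real signal
false_hits = 0
for _ in range(200):                # test many unrelated hypotheses
    x = rng.normal(size=100)
    _, p = stats.pearsonr(x, y)     # correlation test against noise
    if p < 0.05:
        false_hits += 1
print(false_hits, "of 200 random features look 'significant'")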
The final step of knowledge discovery from
data is to verify that the patterns produced
by the data mining algorithms occur in the
wider data set. Not all patterns found by
the data mining algorithms are necessarily
valid. It is common for the data mining algorithms
to find patterns in the training set which
are not present in the general data set. This
is called overfitting. To overcome this, the
evaluation uses a test set of data on which
the data mining algorithm was not trained.
The learned patterns are applied to this test
set, and the resulting output is compared
to the desired output. For example, a data
mining algorithm trying to distinguish "spam"
from "legitimate" emails would be trained
on a training set of sample e-mails. Once
trained, the learned patterns would be applied
to the test set of e-mails on which it had
not been trained. The accuracy of the patterns
can then be measured from how many e-mails
they correctly classify. A number of statistical
methods may be used to evaluate the algorithm,
such as ROC curves.
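A minimal sketch of this evaluation protocol (using
scikit-learn on synthetic data; the features, labels,
and split ratio are illustrative assumptions, not a
prescribed recipe):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))   # toy feature vectors (e.g., e-mail features)
y = (X[:, 0] > 0).astype(int)   # toy labels ("spam" vs. "legitimate")

# Hold out a test set that the algorithm never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Evaluate only on the held-out data, e.g. via the area under the ROC curve.
scores = model.predict_proba(X_test)[:, 1]
print("test ROC AUC:", roc_auc_score(y_test, scores))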
If the learned patterns do not meet the desired
standards, it is then necessary to
re-evaluate and change the pre-processing
and data mining steps. If the learned patterns
do meet the desired standards, then the final
step is to interpret the learned patterns
and turn them into knowledge.
== Research ==
The premier professional body in the field
is the Association for Computing Machinery's
(ACM) Special Interest Group (SIG) on Knowledge
Discovery and Data Mining (SIGKDD). Since
1989, this ACM SIG has hosted an annual international
conference and published its proceedings,
and since 1999 it has published a biannual
academic journal titled "SIGKDD Explorations".
Computer science conferences on data mining include:
CIKM Conference – ACM Conference on Information
and Knowledge Management
European Conference on Machine Learning and
Principles and Practice of Knowledge Discovery
in Databases
KDD Conference – ACM SIGKDD Conference on
Knowledge Discovery and Data Mining
Data mining topics are also present in many
data management/database conferences such as
the ICDE Conference, the SIGMOD Conference, and
the International Conference on Very Large Data
Bases (VLDB).
== Standards ==
There have been some efforts to define standards
for the data mining process, for example the
1999 European Cross Industry Standard Process
for Data Mining (CRISP-DM 1.0) and the 2004
Java Data Mining standard (JDM 1.0). Development
on successors to these processes (CRISP-DM
2.0 and JDM 2.0) was active in 2006, but has
stalled since. JDM 2.0 was withdrawn without
reaching a final draft.
For exchanging the extracted models – in
particular for use in predictive analytics
– the key standard is the Predictive Model
Markup Language (PMML), which is an XML-based
language developed by the Data Mining Group
(DMG) and supported as an exchange format by
many data mining applications. As the name
suggests, it only covers prediction models,
a particular data mining task of high importance
to business applications. However, extensions
to cover (for example) subspace clustering
have been proposed independently of the DMG.
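As an illustration of producing a PMML document in
practice, a minimal sketch (assuming the third-party
sklearn2pmml package, one of several tools that can
emit PMML and not part of scikit-learn itself; it also
requires a Java runtime, and the model choice here is
illustrative):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# Fit a model inside a PMML-aware pipeline wrapper.
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier())])
pipeline.fit(X, y)

# Write the fitted model out as a PMML (XML) document.
sklearn2pmml(pipeline, "DecisionTreeIris.pmml")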
== Notable uses ==
Data mining is used wherever there is digital
data available today. Notable examples of
data mining can be found throughout business,
medicine, science, and surveillance.
== Privacy concerns and ethics ==
While the term "data mining" itself may have
no ethical implications, it is often associated
with the mining of information in relation
to people's behavior (ethical and otherwise).
The ways in which data mining can be used can
in some cases and contexts raise questions
regarding privacy, legality, and ethics. In
particular, data mining government or commercial
data sets for national security or law enforcement
purposes, such as in the Total Information
Awareness Program or in ADVISE, has raised
privacy concerns.
Data mining requires data
preparation which can uncover information
or patterns which may compromise confidentiality
and privacy obligations. A common way for
this to occur is through data aggregation.
Data aggregation involves combining data together
(possibly from various sources) in a way that
facilitates analysis (but that also might
make identification of private, individual-level
data deducible or otherwise apparent). This
is not data mining per se, but a result of
the preparation of data before – and for
the purposes of – the analysis. The threat
to an individual's privacy comes into play
when the data, once compiled, cause the data
miner, or anyone who has access to the newly
compiled data set, to be able to identify
specific individuals, especially when the
data were originally anonymous.
It is recommended
that an individual be made aware of the following
before data are collected:
the purpose of the data collection and any
(known) data mining projects;
how the data will be used;
who will be able to mine the data and use
the data and their derivatives;
the status of security surrounding access
to the data;
how collected data can be updated.
Data may
also be modified so as to become anonymous,
so that individuals may not readily be identified.
However, even "de-identified"/"anonymized"
data sets can potentially contain enough information
to allow identification of individuals, as
occurred when journalists were able to find
several individuals based on a set of search
histories that were inadvertently released
by AOL.
The inadvertent revelation of personally
identifiable information by the provider
violates Fair Information Practices. This
indiscretion can cause financial, emotional,
or bodily harm to the affected individual.
the patrons of Walgreens filed a lawsuit against
the company in 2011 for selling
prescription information to data mining companies
who in turn provided the data
to pharmaceutical companies.
=== Situation in Europe ===
Europe has rather strong privacy laws, and
efforts are underway to further strengthen
the rights of the consumers. However, the
U.S.-E.U. Safe Harbor Principles currently
effectively expose European users to privacy
exploitation by U.S. companies. As a consequence
of Edward Snowden's global surveillance disclosure,
there has been increased discussion of revoking
this agreement, in particular because the data
will be fully exposed to the National Security
Agency, and attempts to reach an agreement
have failed.
=== Situation in the United States ===
In the United States, privacy concerns have
been addressed by the US Congress via the
passage of regulatory controls such as the
Health Insurance Portability and Accountability
Act (HIPAA). HIPAA requires individuals
to give their "informed consent" regarding
information they provide and its intended
present and future uses. According to an article
in Biotech Business Week, "'[i]n practice,
HIPAA may not offer any greater protection
than the longstanding regulations in the research
arena,' says the AAHC. More importantly, the
rule's goal of protection through informed
consent is approaching a level of incomprehensibility
to average individuals." This underscores
the necessity for data anonymity in data aggregation
and mining practices.
U.S. information privacy legislation such
as HIPAA and the Family Educational Rights
and Privacy Act (FERPA) applies only to the
specific areas that each such law addresses.
Use of data mining by the majority of businesses
in the U.S. is not controlled by any legislation.
== Copyright law ==
=== Situation in Europe ===
Due to a lack of flexibility in European
copyright and database law, the mining of
in-copyright works (such as web mining) without
the permission of the copyright owner is not
legal. Where a database is pure data in Europe,
there is likely to be no copyright, but database
rights may exist, so data mining becomes subject
to the regulations of the Database Directive.
On the recommendation of the Hargreaves review,
the UK government amended its
copyright law in 2014 to allow content mining
as a limitation and exception, making the UK
only the second country in the world to do so,
after Japan, which introduced an exception for
data mining in 2009. However, due to the restrictions
of the Copyright Directive, the UK exception
only allows content mining for non-commercial
purposes. UK copyright law also does not allow
this provision to be overridden by contractual
terms and conditions. The European Commission
facilitated stakeholder discussion on text
and data mining in 2013, under the title of
Licences for Europe. The focus on licences,
rather than limitations and exceptions, as the
solution to this legal issue led representatives
of universities, researchers, libraries, civil
society groups, and open access publishers
to leave the stakeholder dialogue in May 2013.
=== Situation in the United States ===
In contrast to Europe, the flexible nature
of US copyright law, and in particular fair
use, means that content mining in America,
as well as in other fair-use countries such as
Israel, Taiwan, and South Korea, is viewed as
legal. As content mining is transformative,
that is, it does not supplant the original
work, it is viewed as lawful under fair
use. For example, as part of the Google Book
settlement, the presiding judge on the case
ruled that Google's digitisation project of
in-copyright books was lawful, in part because
of the transformative uses that the digitisation
project displayed – one being text and data
mining.
== Software ==
=== Free open-source data mining software and applications ===
The following applications are available under
free/open source licenses. Public access to
application source code is also available.
Carrot2: Text and search results clustering
framework.
Chemicalize.org: A chemical structure miner
and web search engine.
ELKI: A university research project with advanced
cluster analysis and outlier detection methods
written in the Java language.
GATE: a natural language processing and language
engineering tool.
KNIME: The Konstanz Information Miner, a user
friendly and comprehensive data analytics
framework.
Massive Online Analysis (MOA): a real-time
big data stream mining with concept drift
tool in the Java programming language.
MEPX: A cross-platform tool for regression
and classification problems based on a Genetic
Programming variant.
ML-Flex: A software package that enables users
to integrate with third-party machine-learning
packages written in any programming language,
execute classification analyses in parallel
across multiple computing nodes, and produce
HTML reports of classification results.
mlpack: a collection of ready-to-use machine
learning algorithms written in the C++ language.
NLTK (Natural Language Toolkit): A suite of
libraries and programs for symbolic and statistical
natural language processing (NLP) for the
Python language.
OpenNN: Open neural networks library.
Orange: A component-based data mining and
machine learning software suite written in
the Python language.
R: A programming language and software environment
for statistical computing, data mining, and
graphics. It is part of the GNU Project.
scikit-learn: An open-source machine learning
library for the Python programming language.
Torch: An open-source deep learning library
for the Lua programming language and scientific
computing framework with wide support for
machine learning algorithms.
UIMA: The UIMA (Unstructured Information Management
Architecture) is a component framework for
analyzing unstructured content such as text,
audio and video – originally developed by
IBM.
Weka: A suite of machine learning software
applications written in the Java programming
language.
=== Proprietary data-mining software and applications ===
The following applications are available under
proprietary licenses.
Angoss KnowledgeSTUDIO: data mining tool
Clarabridge: text analytics product.
KXEN Modeler: data mining tool provided by
KXEN Inc.
LIONsolver: an integrated software application
for data mining, business intelligence, and
modeling that implements the Learning and
Intelligent OptimizatioN (LION) approach.
Megaputer Intelligence: data and text mining
software is called PolyAnalyst.
Microsoft Analysis Services: data mining software
provided by Microsoft.
NetOwl: suite of multilingual text and entity
analytics products that enable data mining.
OpenText Big Data Analytics: Visual Data Mining
& Predictive Analysis by Open Text Corporation.
Oracle Data Mining: data mining software by
Oracle Corporation.
PSeven: platform for automation of engineering
simulation and analysis, multidisciplinary
optimization and data mining provided by DATADVANCE.
Qlucore Omics Explorer: data mining software.
RapidMiner: An environment for machine learning
and data mining experiments.
SAS Enterprise Miner: data mining software
provided by the SAS Institute.
SPSS Modeler: data mining software provided
by IBM.
STATISTICA Data Miner: data mining software
provided by StatSoft.
Tanagra: Visualisation-oriented data mining
software, also for teaching.
Vertica: data mining software provided by
Hewlett-Packard.
=== Marketplace surveys ===
Several researchers and organizations have
conducted reviews of data mining tools and
surveys of data miners. These identify some
of the strengths and weaknesses of the software
packages. They also provide an overview of
the behaviors, preferences and views of data
miners. Some of these reports include:
Hurwitz Victory Index: Report for Advanced
Analytics as a market research assessment
tool; it highlights both the diverse uses
for advanced analytics technology and the
vendors who make those applications possible.
Rexer Analytics Data Miner Surveys (2007–2015)
2011 Wiley Interdisciplinary Reviews: Data
Mining and Knowledge Discovery
Forrester Research 2010 Predictive Analytics
and Data Mining Solutions report
Gartner 2008 "Magic Quadrant" report
Robert A. Nisbet's 2006 Three Part Series
of articles "Data Mining Tools: Which One
is Best For CRM?"
Haughton et al.'s 2003 Review of Data Mining
Software Packages in The American Statistician
Goebel & Gruenwald 1999 "A Survey of Data
Mining and Knowledge Discovery Software Tools"
in SIGKDD Explorations
== See also ==
Data mining portal
Methods
Application domains
Application examples
Related topics
Data mining is about analyzing data; for
information about extracting information
out of data, see:
Other resources
International Journal of Data
Warehousing and Mining
