Name

Propositionalization

Intent

Transform relational data, spread across multiple tables or predicates, into a single attribute-value table so that standard propositional learners can be applied.

Motivation

How can we learn from relational data?

Structure

<Diagram>

Discussion

Known Uses

Related Patterns

<Diagram>

References

https://en.wikipedia.org/wiki/Deep_feature_synthesis

http://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_680 (Propositionalization) Propositionalization is the process of explicitly transforming a relational dataset into a propositional dataset.

The input data consists of examples represented by structured terms (cf. Learning from Structured Data), several predicates in First-Order Logic, or several tables in a relational database. We jointly refer to these as relational representations. The output is an Attribute-value representation in a single table, where each example corresponds to one row and is described by its values for a fixed set of attributes. New attributes are often called features to emphasize that they are built from the original attributes. The aim of propositionalization is to pre-process relational data for subsequent analysis by attribute-value learners.
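To make the idea concrete, here is a minimal sketch of propositionalization on a toy two-table relational dataset. The table names (customers, orders) and the chosen aggregates are illustrative assumptions, not taken from the reference: the one-to-many child table is summarized into fixed-length features and joined back onto the parent table, yielding a single attribute-value table. Tools such as the deep feature synthesis implementation linked above automate this kind of feature construction.

    import pandas as pd

    # Toy relational data: a parent table (customers) and a child table (orders).
    customers = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "region": ["north", "south", "north"],
    })
    orders = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 2, 3],
        "amount": [20.0, 35.0, 12.5, 8.0, 40.0, 15.0],
    })

    # Propositionalization step: summarize the one-to-many relation into
    # fixed-length features (count, mean, max of order amounts per customer).
    order_features = (
        orders.groupby("customer_id")["amount"]
        .agg(order_count="count", order_mean="mean", order_max="max")
        .reset_index()
    )

    # Join the aggregated features onto the parent table, producing a single
    # attribute-value table that a standard propositional learner can consume.
    flat = customers.merge(order_features, on="customer_id", how="left")
    print(flat)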

http://arxiv.org/pdf/1607.02399v1.pdf Translating Bayesian Networks into Entity Relationship Models

The main contribution of the paper is a method for translating Bayesian networks, a principal conceptual language for probabilistic graphical models, into usable entity relationship models.

http://arxiv.org/abs/1511.02136v6 Diffusion-Convolutional Neural Networks

DCNNs also share strong ties to probabilistic relational models (PRMs), a family of graphical models that are capable of representing distributions over relational data [16]. In contrast to PRMs, DCNNs are deterministic, which allows them to avoid the exponential blowup in learning and inference that hampers PRMs.
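The sketch below is one simplified reading of the diffusion-convolution operation for node classification, not the authors' exact formulation: node features are propagated over 0..H hops of a degree-normalized transition matrix, each hop/feature channel is weighted element-wise, and a nonlinearity is applied. The function name, shapes, and weight layout are illustrative assumptions.

    import numpy as np

    def diffusion_convolution(A, X, W, hops):
        """Sketch of a diffusion-convolution activation for node classification.

        A: (n, n) adjacency matrix, X: (n, f) node features,
        W: (hops + 1, f) element-wise weights, one row per diffusion hop.
        Returns Z: (n, hops + 1, f) hop-by-feature activations per node.
        """
        # Degree-normalized transition matrix P (row-stochastic).
        deg = A.sum(axis=1, keepdims=True)
        P = A / np.maximum(deg, 1e-12)

        # Stack P^0 X, P^1 X, ..., P^H X: features diffused over each hop count.
        diffused = []
        PX = X.copy()
        for _ in range(hops + 1):
            diffused.append(PX)
            PX = P @ PX
        diffused = np.stack(diffused, axis=1)        # (n, hops + 1, f)

        # Element-wise weighting per hop/feature, then a nonlinearity; the whole
        # computation is deterministic, unlike inference in a PRM.
        return np.tanh(diffused * W[np.newaxis, :, :])

    # Tiny example: a 4-node path graph with 2 features per node.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.random.rand(4, 2)
    W = np.random.rand(3, 2)   # hops = 2 -> 3 diffusion steps (including hop 0)
    Z = diffusion_convolution(A, X, W, hops=2)
    print(Z.shape)             # (4, 3, 2)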

http://arxiv.org/abs/1607.05695v1 FusionNet: 3D Object Classification Using Multiple Data Representations

We use Volumetric CNNs to bridge the gap between the efficiency of the above two representations. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either of the representations in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures.
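As a rough illustration of combining multiple data representations (not the paper's specific V-CNN architectures), the sketch below fuses class scores from two hypothetical classifiers, one trained on a volumetric representation and one on 2D projections, by averaging their softmax outputs. Score averaging is just one simple late-fusion scheme; the function names and the fusion weight are illustrative assumptions.

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over the class axis.
        z = logits - logits.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def late_fusion(volumetric_logits, projection_logits, weight=0.5):
        """Fuse per-class scores from two representation-specific classifiers.

        volumetric_logits, projection_logits: (batch, n_classes) raw scores from
        two hypothetical models, e.g. a voxel-grid CNN and a 2D-projection CNN.
        weight: contribution of the volumetric branch (illustrative choice).
        """
        fused = (weight * softmax(volumetric_logits)
                 + (1.0 - weight) * softmax(projection_logits))
        return fused.argmax(axis=1)

    # Toy example with 4 objects and 10 classes.
    rng = np.random.default_rng(0)
    vol = rng.normal(size=(4, 10))
    proj = rng.normal(size=(4, 10))
    print(late_fusion(vol, proj))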