From c02784378e5ff29f95ea746e293cd2cfc8a83108 Mon Sep 17 00:00:00 2001
From: Ignas Ausiejus
Date: Sat, 27 Oct 2018 01:29:35 +0300
Subject: [PATCH] Update library.md

fix minor typos

Signed-off-by: IgnasAusiejus
---
 library.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/library.md b/library.md
index 9391d5a..778c2f5 100644
--- a/library.md
+++ b/library.md
@@ -82,7 +82,7 @@ Details:
 
 * The authors propose a procedure for (i) determining the node sequences for which neighborhood graphs are created and (ii) computing a normalization of neighborhood graphs.
 * Node sequence selection: sort nodes according to some labeling (e.g. color refinement a.k.a. naive vertex classification), then traverse this sequence with some stride and generate receptive fields for each selected node.
-  * For each selecte node we assemble its neighborhood by BFS.
+  * For each selected node we assemble its neighborhood by BFS.
 * Each neighborhood is normalized to produce a receptieve field: pick neighboring nodes according to the receptive field size and canonize the subgraph for these nodes.
 * We can interpret node and edge features as channels, thus we can feed the generated receptive fields to a CNN.
 
@@ -126,7 +126,7 @@ http://dl.acm.org/citation.cfm?id=2806512
 Details:
 
 * The authors propose the same loss as in skip-gram, but with Noise Contrastive Estimation.
-* Turns out optimizing this loss is equivalent to factorizing PMI for transition probability matrix, thus we ccould use lower dimensional representation of our nodes.
+* Turns out optimizing this loss is equivalent to factorizing PMI for transition probability matrix, thus we could use lower dimensional representation of our nodes.
 * We can generate multiple k-step transition probability matrices (it contains probabilities for reaching other vertices in exactly k steps), and concatenate their respective lower dimensional approximations.
 
 Thoughts: Matrix factorization based methods can't learn complex non-linear interactions, unless it's explicitly encoded in the matrix itself. This method overcomes some of these limitations by utilizing info from many transition probability matrices, but it feels that "Deep Neural Networks for Learning Graph Representations" offers a better way to handle non-linear dependencies in data.
@@ -178,4 +178,4 @@ https://arxiv.org/abs/1702.06921v1
 
 Details:
 
-*
\ No newline at end of file
+*
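
For reference, the receptive-field construction quoted in the first hunk's context (BFS neighborhood assembly, then normalization to a fixed size) can be sketched in a few lines. This is a minimal illustration under assumptions, not code from the paper: `rank` stands in for the labeling/canonization step, and the names `bfs_neighborhood` and `receptive_field` are hypothetical.

```python
from collections import deque

def bfs_neighborhood(adj, root, k):
    """Collect up to k nodes around `root` by breadth-first search.

    adj: dict mapping node -> iterable of neighbors.
    """
    seen = {root}
    order = [root]
    queue = deque([root])
    while queue and len(order) < k:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                order.append(nbr)
                queue.append(nbr)
                if len(order) == k:
                    break
    return order

def receptive_field(adj, root, k, rank):
    """Normalize a BFS neighborhood into a fixed-size receptive field.

    rank: node -> sortable label (e.g. from color refinement); it stands
    in for the canonization step here. Pads with None when fewer than k
    nodes are reachable.
    """
    nodes = sorted(bfs_neighborhood(adj, root, k), key=rank)[:k]
    return nodes + [None] * (k - len(nodes))

# Toy usage: a path graph 0-1-2-3, receptive field of size 3 around node 1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(receptive_field(adj, 1, 3, rank=lambda v: v))  # [0, 1, 2]
```

With per-node features attached, each fixed-size field becomes a tensor slice that a standard CNN can consume, per the last bullet of that hunk.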
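Similarly, the k-step idea in the second hunk's context (factorize a log-probability matrix per step, concatenate the per-step low-dimensional pieces) can be sketched as below. This is an assumption-laden illustration, not the paper's implementation: the exact PMI shift is omitted in favor of a clipped log-probability surrogate, and `kstep_embeddings` is a hypothetical name.

```python
import numpy as np

def kstep_embeddings(adjacency, K, dim):
    A = adjacency / adjacency.sum(axis=1, keepdims=True)  # 1-step transition matrix
    blocks, step = [], np.eye(len(A))
    for _ in range(K):
        step = step @ A                          # exact k-step transition probabilities
        M = np.log(np.maximum(step, 1e-10))      # PMI-like log-probability surrogate
        U, S, _ = np.linalg.svd(M)               # truncated SVD as the factorization
        blocks.append(U[:, :dim] * np.sqrt(S[:dim]))
    return np.hstack(blocks)                     # one row per node, K*dim columns

# Toy usage on a 4-node cycle graph.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(kstep_embeddings(adj, K=2, dim=2).shape)   # (4, 4)
```

Concatenating the per-step blocks is what lets the method see beyond 1-step structure, which is the partial answer to the non-linearity limitation raised in the "Thoughts" line of that hunk.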