Here I analyze the given question from a current (2020) perspective, juxtaposing it against Turing's seminal treatment of the question 'Can machines think?' via the 'imitation game' in 1950. Because the notion 'the way people do' (hereafter, humanlike) is relatively subjective and broad, for this write-up I will presume a setup similar to the imitation game and constrain the discussion accordingly.
Arguments for ‘Yes’: Of the contrary views Turing posited, the one that remains most relevant even 70 years later is the ‘argument from consciousness’ (AoC). It has been reincarnated in various forms in AI research and philosophy by many luminaries following Turing, such as Dreyfus, Searle, Harnad, and Haugeland [2, 6, 4, 5], to name a few. …
Graph representations are increasingly popular in machine learning (ML) and data science research. Skimming recent literature on geometric learning and Graph Neural Network (GNN) techniques like GCN and GAT can be befuddling for fresh eyes. So, in a series of succinct posts, I will elucidate atomic/foundational concepts that will help in grasping the overall picture in the long run.
Before delving into topics like ‘Graph Laplacians’ and subsequent computations, let’s start things off with a very simple atomic concept: row-normalizing the adjacency matrix of a graph.
As a recap, for a graph with n vertices, the entries of the n × n adjacency matrix A are defined…
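As a minimal sketch of the concept (the toy graph and variable names below are my own, not from the post), row normalization divides each row of the adjacency matrix A by that row's sum, i.e. the vertex degree, so every row sums to 1 (equivalently, computing D⁻¹A where D is the degree matrix):

```python
import numpy as np

# Toy undirected graph on 4 vertices with edges 0-1, 0-2, 1-2, 2-3
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Degree of each vertex = sum of its row in A
deg = A.sum(axis=1, keepdims=True)  # shape (4, 1)

# Row-normalize: D^{-1} A. Each row now sums to 1.
# (An isolated vertex would have degree 0; a real implementation
# should guard against the resulting division by zero.)
A_norm = A / deg

print(A_norm.sum(axis=1))  # each row sums to 1.0
```

The resulting matrix can be read as a transition matrix: row i gives a uniform probability distribution over the neighbors of vertex i.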