Semantic Empathy Operator
The installation 'Semantic Empathy Operator' by Hashiba is designed to exploit the workings of human perception in our pre-singularity society: an artificial intelligence system, aided by the visitor's presence and reactions, races against a traditional, highly deterministic system to predict and alter a remote video stream.
'Semantic Empathy Operator' comprises three parts: first, a screen that shows random three-minute video sequences; second, a capture setup based on a Kinect sensor; and third, a wall-mounted display. One side of the display shows the visitor's vitals together with candidate solutions for the formal features of the upcoming frames of the main video sequence; the other side shows a real-time 3D visual representation of a simulated brain, driven by the hybrid input of the visitor's responses and the neurological inference pattern coming straight from the AI system.
Upon entering the exhibition space, the visitor is greeted by a large screen showing a video sequence. The main sensor analyses, records, and stores the visitor's interactions and responses to the images; this happens hundreds of times per second. At the same time, two concurrent systems act in the background. The first encodes the formal compositional characteristics of each video frame (histogram, simplified waveform, location of the brightest point, etc.) and their relationship with the human response as a Raven-like string (1, 11, 111, 2, 22, 222, 3, 333, ...). The second takes part of that string and runs it through a virtual eye (64x64 pixels after compression) powered by a Semantic Pointer Architecture network; the aim of this configuration is to predict, within a realistic framework of human cognition, the next sequence of images, in essence replicating perception. A key aspect of this module is that most of the limitations of the Neural Engineering Framework, especially its short-term memory access constraints, will be implemented.
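As a rough illustration of the first background system, the sketch below extracts a few of the compositional features named above (histogram, brightest-point location, mean brightness) and packs them into a Raven-like run-length string. The function names, feature set, and exact encoding scheme are illustrative assumptions, not the installation's actual code.

```python
import numpy as np

def frame_features(frame):
    """Extract simple compositional features from a grayscale frame.

    `frame` is a 2-D uint8 array. The feature set here is an
    illustrative subset of what the installation might encode.
    """
    hist, _ = np.histogram(frame, bins=8, range=(0, 256))
    peak_y, peak_x = np.unravel_index(np.argmax(frame), frame.shape)
    return {
        "histogram": hist,
        "brightest_point": (int(peak_y), int(peak_x)),
        "mean_brightness": float(frame.mean()),
    }

def encode_raven_like(features, levels=3):
    """Encode features as a Raven-like run-length string: a symbol in
    1..levels, repeated as many times as its own value (e.g. '22')."""
    tercile = int(min(levels, 1 + features["mean_brightness"] // (256 / levels)))
    dominant_bin = int(np.argmax(features["histogram"])) % levels + 1
    return str(tercile) * tercile + "," + str(dominant_bin) * dominant_bin

# A mostly dark 64x64 frame with a single bright pixel.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[10, 20] = 255
feats = frame_features(frame)
code = encode_raven_like(feats)  # e.g. "1,1" for this dark frame
```

Per-frame strings like these would then be concatenated over time to form the sequence the prediction module consumes.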
The combined data from the human response and the simulated brain is displayed as a glowing, growing network that flourishes every time the empirical evidence matches the predicted dataset.
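The "match" that makes the displayed network flourish could be scored in many ways; a minimal sketch, assuming the predicted and empirical responses are reduced to numeric feature vectors (an assumption not specified in the text), is a cosine similarity:

```python
import numpy as np

def match_score(predicted, empirical):
    """Cosine similarity between predicted and empirical feature
    vectors. A score near 1.0 could drive the 'flourishing' of the
    displayed network; the vector contents are an assumption."""
    p = np.asarray(predicted, dtype=float)
    e = np.asarray(empirical, dtype=float)
    denom = np.linalg.norm(p) * np.linalg.norm(e)
    if denom == 0:
        return 0.0  # an empty response matches nothing
    return float(np.dot(p, e) / denom)
```

For example, `match_score([1, 0], [2, 0])` is 1.0 (same direction), while `match_score([1, 0], [0, 1])` is 0.0.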
The intent is to raise questions: What is cognition? How might a symbiotic relationship between two intelligent entities, one biological and one synthetic, give birth to new conceptions of life? The piece proposes a critical view of traditional human-machine interaction paradigms by engaging the audience as a key building block of cognition.
If the experience succeeds, the video-perception module, which allows new perception principles to be integrated before neural compression takes place, could be added to the Nengo Neural and Cognitive Models Repository.
General Technical Description
The project, a neural dynamics experiment, involves a system in which the reactions of an audience to a series of moving images are classified by a Semantic Pointer Architecture network. SPAUN (the Semantic Pointer Architecture Unified Network) is a fixed model that integrates perception, cognition, and action across several different tasks.
The output, a Raven-like string, is used to predict the next images. Updates to the semantics, syntax, control, and learning-and-memory architecture are visualized in real time as a recurrent neural network.
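The representations a Semantic Pointer Architecture network works with are built by binding vectors together with circular convolution (holographic reduced representations). The numpy sketch below shows that core binding and approximate unbinding operation in isolation; it is a conceptual illustration, not the installation's or SPAUN's actual implementation.

```python
import numpy as np

def bind(a, b):
    """Bind two semantic pointers via circular convolution,
    computed efficiently in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    """Approximately recover `a` from c = bind(a, b) by binding
    with the involution (approximate inverse) of b."""
    b_inv = np.concatenate(([b[0]], b[1:][::-1]))
    return bind(c, b_inv)

rng = np.random.default_rng(0)
d = 256  # pointer dimensionality (illustrative choice)
a = rng.normal(0, 1 / np.sqrt(d), d)
b = rng.normal(0, 1 / np.sqrt(d), d)

c = bind(a, b)          # bound pair, dissimilar to both a and b
a_hat = unbind(c, b)    # noisy reconstruction of a
sim = np.dot(a, a_hat) / (np.linalg.norm(a) * np.linalg.norm(a_hat))
```

After unbinding, `a_hat` is a noisy but recognizable copy of `a` (its cosine similarity to `a` is well above chance), which is what lets such networks store and query structured content in a single vector.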
// If you are looking for something specific (benchmarks, images, code, etc.), don't hesitate to contact me.