This visualization shows a body, 3 sensors, and 4 objects.
In the simulation below, the animal moves its sensors over 4 objects, learning each one with 1-shot learning. (1-shot learning isn't necessary; it's just convenient for this demo.)
To learn an object, the animal activates a random "body relative to specific object" representation, then moves its sensors over the object.
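For intuition, here's a rough sketch of that learning procedure in plain Python. The names (random_location_sdr, learn_object) and the data are hypothetical stand-ins for illustration, not the htmresearch code that produced the log below:

import random

def random_location_sdr(num_cells=100, num_active=4):
    # Hypothetical: a random set of active cells standing in for a
    # "body relative to specific object" representation.
    return frozenset(random.sample(range(num_cells), num_active))

def learn_object(features_at_offsets):
    # Hypothetical 1-shot learning sketch:
    # 1. Activate a random location representation for this object.
    # 2. Move the sensor over each feature, pairing the feature with the
    #    location derived from the object representation and the movement.
    object_location = random_location_sdr()
    return [((object_location, offset), feature)
            for offset, feature in features_at_offsets]

# Example object: a square with a feature at each corner.
square = [((0, 0), "A"), ((0, 1), "B"), ((1, 0), "A"), ((1, 1), "C")]
learned_pairs = learn_object(square)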
import htmresearchviz0.IPython_support as viz
viz.init_notebook_mode()
with open("logs/100-cells-learn.log", "r") as fileIn:
viz.printMultiColumnInferenceRecording(fileIn.read())
For every sensation, a feature-location pair is predicted after 1 timestep.
These feature-location pairs are associated with 4 active cells per module. Because these discrete neurons are handling continuous space, the "sensor relative to body" representation is somewhat ambiguous, so the "sensor relative to specific object" representation always contains at least 4 cells.
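To see why a point in continuous space ends up as several cells, here's a toy illustration (my own sketch, not the htmresearch encoding): a 100-cell module tiles space with cell centers, and a position that falls between centers activates every cell whose field it overlaps.

import math

def active_cells_for_position(x, y, cells_per_axis=10, bump_radius=1.0):
    # Hypothetical: the module tiles space with a wrapping 10x10 grid of
    # cells; a continuous position activates every cell whose center is
    # within bump_radius, so a single point maps to a small set of cells.
    active = set()
    for row in range(cells_per_axis):
        for col in range(cells_per_axis):
            dx = abs(x - (col + 0.5))
            dy = abs(y - (row + 0.5))
            dx = min(dx, cells_per_axis - dx)  # wrap around (torus)
            dy = min(dy, cells_per_axis - dy)
            if math.hypot(dx, dy) <= bump_radius:
                active.add(row * cells_per_axis + col)
    return active

# A position between cell centers activates 4 cells in this toy module.
print(active_cells_for_position(3.2, 7.8))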
Now the animal has moved elsewhere in the world. It's going to experience these objects at egocentric locations it has never experienced them at before.
It touches every object twice.
In this visualization, every timestep is divided into two parts: "?" and "!", which you can pronounce as "predict" and "sense".
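As a rough sketch of what those two parts do (hypothetical names and data, not the htmresearch API), the loop below alternates a "?" phase that predicts from the movement with a "!" phase that senses the feature and narrows down the candidate objects:

# Hypothetical learned store: object name -> {location: feature}
learned = {
    "square":   {(0, 0): "A", (0, 1): "B", (1, 0): "A", (1, 1): "C"},
    "triangle": {(0, 0): "A", (0, 1): "C", (1, 0): "B"},
}

def infer(sensations):
    candidates = set(learned)          # start with every known object
    position = (0, 0)
    for movement, feature in sensations:
        # "?" (predict): apply the movement and predict the feature each
        # candidate object would produce at the new location.
        position = (position[0] + movement[0], position[1] + movement[1])
        predictions = {obj: learned[obj].get(position) for obj in candidates}
        # "!" (sense): sense the actual feature and keep only the objects
        # whose prediction matched it.
        candidates = {obj for obj, pred in predictions.items() if pred == feature}
        print(position, feature, "->", sorted(candidates))
    return candidates

# Sensing "B" after moving to (0, 1), then "C" at (1, 1), identifies the square.
infer([((0, 1), "B"), ((1, 0), "C")])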
with open("logs/100-cells-infer.log", "r") as fileIn:
viz.printMultiColumnInferenceRecording(fileIn.read())